# Intentionally answering questions wrong
⬅️ [Learn to create ambiguity using an adversarial cycle system](<./Learn to create ambiguity using an adversarial cycle system.md>) | ⬆️ [Ideas](<./README.md>) | [Inder/defer token](<./Inder_defer token.md>) ➡️
Maybe an important type of haziness is the proportion of times that the answering model is intentionally inaccurate.
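A minimal sketch of the idea, with all names hypothetical: wrap an answering model so that it flips its answer at a configurable rate, and treat the measured inaccuracy proportion as the haziness parameter.

```python
import random

def hazy_answer(correct_answer: bool, wrong_rate: float, rng: random.Random) -> bool:
    """Return the correct answer, except with probability `wrong_rate`,
    in which case the answer is intentionally flipped (answered wrong)."""
    return (not correct_answer) if rng.random() < wrong_rate else correct_answer

def measured_inaccuracy(wrong_rate: float, trials: int = 100_000, seed: int = 0) -> float:
    """Empirical proportion of trials on which the hazy model answers wrong;
    this proportion is the candidate haziness measure."""
    rng = random.Random(seed)
    wrong = sum(hazy_answer(True, wrong_rate, rng) is False for _ in range(trials))
    return wrong / trials
```

Over many trials the measured inaccuracy converges to `wrong_rate`, so the haziness of the wrapped model is directly tunable.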
⬅️ [Learn to create ambiguity using an adversarial cycle system](<./Learn to create ambiguity using an adversarial cycle system.md>) | ⬆️ [Ideas](<./README.md>) | [Inder/defer token](<./Inder_defer token.md>) ➡️