AGI, LLMs, CR and CF

cool, that clears it up.

going back to the AI angle. in your epistemology article you listed the hard subproblems for AI:

  1. represent ideas in code
  2. represent criticism in code (implied by 1)
  3. detect which ideas contradict each other
  4. brainstorm new ideas and variants

and then the really hard problem: when two ideas contradict, which one is wrong?
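to make the split concrete, here's a toy sketch (mine, not from the article) of subproblems 1 and 3: ideas as signed claims, and contradiction detection as matching a claim against its negation. the names `Idea` and `contradictions` are made up for illustration. note that nothing in it says which side of a conflict to reject, which is the judgment step.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Idea:
    claim: str     # a normalized proposition, e.g. "the bridge is safe"
    asserts: bool  # True = asserts the claim, False = denies it

def contradictions(ideas):
    """Return pairs of ideas that assert and deny the same claim (subproblem 3)."""
    pairs = []
    for i, a in enumerate(ideas):
        for b in ideas[i + 1:]:
            if a.claim == b.claim and a.asserts != b.asserts:
                pairs.append((a, b))
    return pairs

ideas = [
    Idea("the bridge is safe", True),
    Idea("the bridge is safe", False),
    Idea("the paint is blue", True),
]
# finds the one conflicting pair; deciding which of the two is wrong
# is exactly the part this sketch doesn't (and can't) do
print(contradictions(ideas))
```

obviously real ideas aren't pre-normalized strings, which is why subproblem 1 is hard and this only works on toy input.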

it seems like LLMs already do a decent job at 1, 2, and 4. they can represent and generate ideas, produce criticisms, and brainstorm variants. 3 is partial: they can often spot contradictions, but not reliably. the part they can't do is the judgment step we were discussing earlier: evaluating which idea to reject when two contradict.

does that mapping look right to you? and if so, does it change your view on how close current systems are? it would mean they've accidentally solved the easier parts, and the remaining bottleneck is exactly the judgment problem you identified.