AGI, LLMs, CR and CF

The Deutschian position would be that the models aren't intelligent. The models don't do evolution, and I don't know of any reasonable explanation, besides evolution, for how new knowledge can be created.

I don't think they have, but I think one could try it. You could have some agents brainstorm ideas, other agents critique them, and still other agents evaluate which ideas are right. In other words, you can try to layer a more evolutionary, conjectures-and-refutations process on top of agents. I don't think this has gotten much attention or effort yet.
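A toy sketch of what that layering might look like. The "agents" here are stub functions standing in for LLM calls, and finding a hidden number stands in for a real problem; all the names and details are hypothetical, just to show the conjecture/criticism/variation loop:

```python
import random

# Hypothetical stand-ins for LLM agents; a real system would prompt
# separate models or contexts for each role.

def brainstorm(n=8):
    """Conjecture agent: propose candidate solutions (random guesses)."""
    return [random.randint(0, 100) for _ in range(n)]

def criticize(idea, target):
    """Critic agent: return a criticism (here, the error size),
    or None if no flaw is found."""
    error = abs(idea - target)
    return error if error > 0 else None

def vary(idea):
    """Produce variants of a surviving idea (small mutations)."""
    return [idea + random.randint(-5, 5) for _ in range(4)]

def evolve(target, generations=100):
    pool = brainstorm()
    for _ in range(generations):
        # Refutation: rank conjectures by the critic's objection
        # (None, meaning no criticism, sorts first as 0).
        scored = sorted(pool, key=lambda i: criticize(i, target) or 0)
        best = scored[0]
        if criticize(best, target) is None:
            return best  # no criticism survives: tentatively accept
        # Variation: new conjectures based on the least-criticized
        # survivor, plus some fresh unrelated guesses.
        pool = vary(best) + brainstorm(4)
    return best
```

In a real version the critic would return textual criticisms rather than a numeric score, and the evaluator judging which criticisms are right would be a separate agent; this sketch collapses those roles into one number.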

There are also AI techniques that try to use evolution, such as genetic algorithms, but that's different from LLMs.
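For contrast, here's a minimal genetic algorithm on the toy OneMax problem (maximize the number of 1 bits in a string), which shows the standard evolutionary-AI pattern of a population shaped by selection, crossover, and mutation; this is a generic textbook sketch, not any specific system:

```python
import random

def fitness(bits):
    """OneMax: count the 1s; the GA tries to maximize this."""
    return sum(bits)

def mutate(bits, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - b if random.random() < rate else b for b in bits]

def crossover(a, b):
    """Single-point crossover: splice a prefix of one parent
    onto a suffix of the other."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def genetic_algorithm(length=20, pop_size=30, generations=200):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break  # found an all-ones string
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)
```

Note that the "knowledge" here is created by variation and selection against a fitness test, not by the model predicting the next token, which is the sense in which this is a different approach from LLMs.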

They have tons of examples of debugging in their dataset: the change logs of every open source project, plus training data created by having people work through debugging processes.

It's hard to grasp just how much data (and money) they put into these things, plus a significant amount of direct human effort.

Coding agents (and some other uses) got better than I expected, so I'm not claiming to know everything; I could be missing something. But I'm not convinced these models are more than good non-intelligent tools (with significant weaknesses too).