AGI, LLMs, CR and CF

How can we detect when new knowledge has been created, whether by an AGI or a human?

I currently think of new knowledge as something like an idea, goal, context (IGC) group represented by a sequence of information (words, musical notes, pixels, pictures, etc.) that (a) has never existed before and (b) achieves the goal. Is it that simple?
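To make (a) and (b) a bit more concrete, here's a rough sketch of what such a test might look like in code. Everything in it is hypothetical: the set of prior artifacts, the `achieves_goal` predicate, and the exact-match notion of novelty are placeholder assumptions, and exact-match novelty is obviously too weak (a trivial paraphrase of an old idea would count as "new").

```python
from typing import Callable, Set, Tuple

# Hypothetical sketch: an "artifact" is any sequence of tokens
# (words, notes, pixel values, ...), flattened to a tuple so it can
# be stored in a set of previously known artifacts.
Artifact = Tuple[str, ...]

def is_new_knowledge(
    artifact: Artifact,
    prior_artifacts: Set[Artifact],
    achieves_goal: Callable[[Artifact], bool],
) -> bool:
    """Return True if the artifact (a) has never existed before and
    (b) achieves the stated goal.

    Both checks are placeholders: exact-match novelty misses
    paraphrases, and achieves_goal hides the hard part, namely judging
    whether the goal was actually achieved in its context.
    """
    is_novel = artifact not in prior_artifacts      # criterion (a)
    return is_novel and achieves_goal(artifact)     # criterion (b)

# Toy usage: the "goal" here is a stand-in predicate, not a real test.
prior = {("twinkle", "twinkle", "little", "star")}
candidate = ("new", "line", "that", "rhymes", "fine")
print(is_new_knowledge(candidate, prior, lambda a: a[-1].endswith("ine")))
```

The hard part isn't the novelty check; it's that `achieves_goal` would need to encode judgment about goals and contexts, which is exactly the thing we don't know how to formalize yet.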

I think it would be good if there were a stake in the ground regarding output that we could actually test against. There's a danger of declaring AGI prematurely, and also a danger that an AGI exists and we treat it badly because we don't recognize it as AGI.
