AI Predictions

Maybe it’s the same thing as planning depth, but time horizon is different. Time horizon would be improved by better planning depth and/or max indirection.

But it’s also different from ‘deep planning’ (like deep research) – this isn’t about task decomposition necessarily.

I’m not familiar with what ‘planning depth’ means outside of explicit planning/task decomposition, so maybe they are the same thing.

By max level of indirection I mean the kind of thing where a model can ‘see’ a number of steps into a problem, letting it link non-obvious things or know to go in an unintuitive direction. Before it prints out ‘oh, I could try XYZ’, where does the XYZ come from? The context plays into it but isn’t sufficient to explain where the idea to link XYZ comes from.

Yeah, I think there is something internal going on. There is definitely some apparent depth that comes from the context (prompts and tools), but it’s not about context size itself; the depth seems somewhat independent of how much is in context.

My guess is that the residual vector (being in a very high-dimensional space) contains a lot of ‘raw information’ (below the level of tokens) that includes stuff about the stack above it (larger goals, position in the goal stack, etc.) but also about the epistemic context around it (a kind of ‘awareness’ of many different facts and explanations at the same time, and an ability to find/detect relationships between them).
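As a toy illustration of this framing (not a claim about any specific model), the residual stream in a standard transformer is a per-token vector of width d_model that every layer reads from and adds back into, so information written early stays available downstream. The dimensions and the layer function below are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 4096   # residual stream width (hypothetical)
vocab = 50_000   # vocabulary size (hypothetical)
n_layers = 4     # toy depth

def toy_layer(x, w):
    # Residual connection: each layer adds its output to the stream,
    # so earlier writes remain visible to later layers.
    return x + np.tanh(x @ w)  # stand-in for attention + MLP sublayers

x = rng.normal(size=d_model) / np.sqrt(d_model)  # toy embedding for one token
for w in [rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
          for _ in range(n_layers)]:
    x = toy_layer(x, w)

# Unembedding collapses the 4096-dim state to a single token choice.
# Picking one of `vocab` tokens conveys only ~log2(vocab) bits, tiny
# relative to the capacity of the residual vector -- one sense in which
# the stream can carry 'raw information' below the level of tokens.
print(x.shape)                   # (4096,)
print(np.log2(vocab) < d_model)  # True
```

The point of the sketch is only the shape of the computation: a wide state vector that is never forced down to a token until the very end, which leaves room for goal-stack and epistemic-context information to ride along between layers.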