This is interesting. Is "maximum level of indirection" the same thing agent researchers call planning depth or task horizon, or are you pointing at something different?
And do you think frontier models actually have this built in somewhere internal, or does the apparent depth mostly come from prompt chains, tool loops, the agent harness, and other scaffolding keeping the goal stack for the model?
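To make the second half of the question concrete, here's a minimal sketch of what I mean by "scaffolding keeping the goal stack": the harness holds an explicit stack and feeds the model one subgoal at a time, so the apparent depth lives outside the model. Everything here is hypothetical and illustrative (`fake_model` stands in for an LLM call; the goal names are made up).

```python
def fake_model(subgoal: str) -> list[str]:
    # Stand-in for an LLM call: given a goal, either decompose it
    # into subgoals or return [] to signal it's a leaf task.
    # Purely illustrative decomposition table.
    decompositions = {
        "ship feature": ["write code", "write tests"],
    }
    return decompositions.get(subgoal, [])

def run_harness(root_goal: str) -> list[str]:
    # The goal stack lives in the harness, not the model.
    stack = [root_goal]
    done = []
    while stack:
        goal = stack.pop()
        subgoals = fake_model(goal)
        if subgoals:
            # Push children so the first subgoal is handled next.
            stack.extend(reversed(subgoals))
        else:
            # Leaf goal: "execute" it (here, just record it).
            done.append(goal)
    return done

print(run_harness("ship feature"))  # → ['write code', 'write tests']
```

In this framing the model only ever sees one step of the plan, and the harness provides all the indirection, which is the distinction I'm trying to get at.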