Something I’ve been thinking about is whether multi-agent simulation is key to AGI, or to progressing in intelligence at all. Although humans are extremely smart, we aren’t smart enough to compress all of human knowledge into one brain. There’s just too much to learn.
Is creativity, in part, the fact that there are many diverse ways of looking at the world? When everyone is a small, partial world model, you naturally get combinatorial coverage of the space: you look at the things that aren’t quite right from your own vantage point, and you think about how to fix those holes.
This also lines up with the observation that Nobel-winning scientists are often polymaths. They are like multiple people with very different perspectives wrapped into one human.
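To make the combinatorial intuition a little more concrete, here’s a toy sketch (my own construction, not from any particular paper): each agent is modeled as a random partial view of a fixed set of "facts", and we compare how much of the world a diverse population covers versus a homogeneous one, and how many of one agent’s blind spots its peers already cover.

```python
import random

random.seed(0)

WORLD_SIZE = 1000   # total "facts" in the toy world
CAPACITY = 100      # facts a single small world model can hold
NUM_AGENTS = 10

def diverse_agent() -> set[int]:
    """A partial view sampled independently from the whole world."""
    return set(random.sample(range(WORLD_SIZE), CAPACITY))

def homogeneous_agent() -> set[int]:
    """A partial view drawn from the same narrow slice as everyone else."""
    return set(random.sample(range(200), CAPACITY))

diverse = [diverse_agent() for _ in range(NUM_AGENTS)]
homogeneous = [homogeneous_agent() for _ in range(NUM_AGENTS)]

print(f"diverse population covers     {len(set().union(*diverse))} / {WORLD_SIZE} facts")
print(f"homogeneous population covers {len(set().union(*homogeneous))} / {WORLD_SIZE} facts")

# One agent's "holes" (facts it lacks) that some peer already covers:
holes = set(range(WORLD_SIZE)) - diverse[0]
patched = holes & set().union(*diverse[1:])
print(f"peers cover {len(patched)} of agent 0's {len(holes)} holes")
```

Under these assumptions the diverse population covers most of the world even though no single agent holds more than 10% of it (roughly 1 − 0.9¹⁰ ≈ 65% of facts), the homogeneous population never exceeds its narrow slice, and a majority of any one agent’s holes are already held by a peer. Obviously a cartoon, but it’s the flavor of "combinatorial smatterings" I mean.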
Questions
- Is being multi-agent a fundamental property of self-improving AI?
- Is this why ALIFE (artificial life) could make a comeback, since multi-agent dynamics are naturally built in?
- More intelligence seems generally better, but is it possible there is some optimal amount for a given level of knowledge?