
What is to be poured into the agent's mind? Some basic priors: priors whose potential rises above a threshold, enabling the agent to be awakened from its dream, aligned, from time to time.
“We are asleep. Our life is a dream. But we wake up sometimes, just enough to know that we are dreaming.” (Ludwig Wittgenstein)

Ideological Dreaming Agents. Humans and machines.

by Casian STEFAN, Principal Researcher at Essentia Mundi AI Lab. Contact: ai-AT-essentiamundi.com / ai.essentiamundi.com
Dec. 2023.


Please consider citing this work with a link if you derive from it, or contact me for collaboration. Thank you!

What is to be poured into an agent's mind? Enter the concept of Ideological Dreaming Agents, inspired by the human mind's essential need to make sense of an expanding worldview, akin to neural development in an infant.

In the quest to imbue artificial intelligence with human-like cognition, it becomes evident that a basic, neutral ontology is insufficient; what is required is a specialized ontology whose driving motives act in sync with human ones.

Introducing the Ideological Dreaming Agents. This idea came to me from observing the mind's need to give meaning to a growing, expanding worldview, in an infant for example. As the neural connections gain strength and depth, there is an intrinsic need for data. That data comes in from the environment; it is in the synthesis of this data that meaning is derived and that actions in the environment are formulated.

The most elemental layers of a society's cultural flow are the first with which the neural networks are imbued. Brain structure grows, from species to species, in its own way, as an instance of that environment. From this flow, the information absorbed provides the basic foundations of functioning. It is also the simplest layer: the foundational one that equips the agent to further grow its representations and to ground its in-the-world situation.

I would argue that, further in the brain, this information is turned, multidimensionally in latent space, into all possible representations until they fit within the normality of that environment and culture. As in a dream space, the information undergoes multidimensional transformations and, through feedback, is aligned with the norms and culture of the environment, yielding refined representations.

While it belongs to our culture, that environmental information will always contain basic representations that are also heavily biased, prepared over millennia to fit within it. This thin layer of essentially justified true beliefs, of little ideologies, provides what is necessary for the agent to fit into the flow: it serves as the human agent's compass in navigating the cultural flow.

In fact, a few multidimensional vectors of ideology are in place in the status quo of the basic ontology. And while the information in the neural networks gets mixed and remixed, as in a kind of Fourier-like transformation, in effect a constant dreaming, we perform, from time to time, self-inquiries vis-a-vis the cultural flow, and we further cement our engine of world representation, of navigation through the environment and the culture.

This is how a human agent navigates: through this engine of self-inquiry, the base ontology keeps growing. The machine, in contrast, today's generative AI, is already able to dream; we are truly witnessing dream-like models, and we prompt them into our reality from time to time.
This idea struck me at the beginning of 2023, on seeing how ChatGPT worked; it pointed directly to Wittgenstein's line, “We are asleep. Our life is a dream. But we wake up sometimes, just enough to know that we are dreaming.” The LLMs are a kind of dreaming machine.

I can now conclude more firmly (one year on) that LLMs/generative AI are indeed dreaming machines of our multidimensional vectors of ideological data: structured data, filtered by our representation engine. And a bit more: fed on art and literature, these dreaming machines are also a kind of dreaming machine of our dreams.

The problem of a basic ontology for these machines, and of what to let these dreamers-of-dreams further dream about, is where to draw the line: the point at which the "snowball" agent should start rolling by itself and be let go on a well-aligned roll. The challenge lies in defining the boundaries of that basic ontology, determining when the AI should autonomously initiate its own learning trajectory, while ensuring a well-aligned and ethical development path.

We need to get that ontology and ideological side ready, and to start the engine of growth in a way that mirrors the self-inquiry mechanisms of human dreaming agents. As we foster the dreaming machines of our future, we must carefully delineate the line between guidance and autonomy, to steer them on a path aligned with human ideological dreaming agents.



_____________________________________
Exploration by C. Stefan, 14.Dec.2023
Last update: 14.Dec.2023
"Essentia Mundi" AI Research Lab.
Copyright © 2023 AI.EssentiaMundi.com, all rights reserved.


