
Beyond technicalities, what would a priors-based ontology look like, in order for an AGI agent to be ontologically situated in our world?
Although discarded by many, and even by Wittgenstein himself at some point (in the sense that he later extended it, seeing that the world holds more shades of grey beyond logic; in practice there is an absolute multi-modality to words, which leads to the "meaning is use" paradigm), one experimental proposal we do employ here at Essentia Mundi AGI is to start with a basic O-Model in the style of the Tractatus Logico-Philosophicus.

In TLP, Wittgenstein explores the nature of language, reality, and logic.
The book introduces the Picture Theory of Language, stating that the meaning of a proposition lies in its ability to represent possible states of affairs (pictures) in the world, and that meaningful propositions correspond to possible states of affairs. (Related to that concept, I have devised the ABBILDUNG* artistic project since 2004-2005, so it is only natural to think in those terms about the other side of the coin here.)

Logical Atomism: he introduces the idea of atomic facts as the building blocks of reality. Complex facts are composed of simple atomic facts, and language mirrors this structure. We can consider an agent's vector spaces as encapsulating atomic facts; that would make a bijective conceptual correspondence between the set of structures in propositions and the atomic facts.
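As an illustrative sketch only (the class names and the toy encoding are my assumptions, not part of the proposal), the bijective correspondence between propositional structures and atomic facts can be modeled as a pair of inverse mappings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicFact:
    """A minimal state of affairs: a relation holding between simple objects."""
    relation: str
    objects: tuple

class OModel:
    """Toy O-Model: a bijective map between propositions and atomic facts."""

    def __init__(self):
        self._prop_to_fact = {}
        self._fact_to_prop = {}

    def picture(self, proposition, fact):
        # Register the proposition as a picture of the fact, in both
        # directions, keeping the correspondence bijective.
        self._prop_to_fact[proposition] = fact
        self._fact_to_prop[fact] = proposition

    def meaning(self, proposition):
        # A proposition's meaning is the state of affairs it depicts;
        # None marks a proposition with no picture (nonsensical here).
        return self._prop_to_fact.get(proposition)

    def express(self, fact):
        # The inverse direction of the bijection.
        return self._fact_to_prop.get(fact)

model = OModel()
fact = AtomicFact("on", ("book", "table"))
model.picture("the book is on the table", fact)
```

Here a proposition without a registered picture simply has no meaning in the model, which mirrors the Tractarian distinction between sense and nonsense.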

The Limits of Language: there are limits to what can be sensibly said, and some aspects of reality are beyond the scope of language. He famously states: "Whereof one cannot speak, thereof one must be silent." Once an agent can recognize these aspects, it can activate a System 2 that enables it to confabulate.

Ethics and Mysticism: the same here - ethical propositions are nonsensical, and he explores the idea that what cannot be spoken about can be shown.

Given the contingency of the O-Model based on this contingent view, I argue that it will make the agent handle better the situations of conflict, false positions, and truth, allowing a self-delimitation (one which humans cannot do easily). That would allow it to pursue, in a silent mode of thinking, further world constructions that will in turn be verified through feedback from the environment or other agents, and to update its model once these slightly new shared views get more confirmations.
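The update-on-confirmation loop described above can be sketched minimally (the threshold mechanism and all names here are my assumptions, not a specification): a hypothesis stays in silent mode until enough feedback from the environment or other agents confirms it, and only then joins the shared world model.

```python
class SilentHypothesis:
    """A candidate world-construction held privately until confirmed."""

    def __init__(self, claim, threshold=3):
        self.claim = claim
        self.threshold = threshold
        self.confirmations = 0

    def feedback(self, confirmed):
        # Environment or other agents confirm or contradict the hypothesis.
        self.confirmations += 1 if confirmed else -1

    @property
    def accepted(self):
        return self.confirmations >= self.threshold

class Agent:
    """Keeps confirmed propositions apart from silent, unverified ones."""

    def __init__(self):
        self.world_model = set()   # shared, confirmed view of the world
        self.silent = []           # hypotheses still in silent mode

    def confabulate(self, claim):
        hypothesis = SilentHypothesis(claim)
        self.silent.append(hypothesis)
        return hypothesis

    def integrate(self):
        # Promote hypotheses that gathered enough confirmations.
        for hypothesis in list(self.silent):
            if hypothesis.accepted:
                self.world_model.add(hypothesis.claim)
                self.silent.remove(hypothesis)

agent = Agent()
h = agent.confabulate("shadows shorten toward noon")
h.feedback(True); h.feedback(True); h.feedback(True)
agent.integrate()
```

The point of the design is the self-delimitation: nothing reaches the expressible world model without passing through the silent, feedback-verified stage.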

C. Stefan 04.12.2023

Experiment O-Model: basic world Ontology of an AGI Agent (Principles)

by Casian STEFAN, Principal Researcher at Essentia Mundi AI Lab. Contact: ai-AT-essentiamundi.com / ai.essentiamundi.com
Dec. 2023.


Please consider citing with a link if you derive work from this. Or contact me for collaboration. Thank you!

Occam's Razor - or how can we act as the modeler of a mind? What mind? And what would be the minimal principles to achieve that?

I thought about these principles long before knowing what Occam's Razor was by any definition. I always imagined a glove, where the fingers are those maximal principles that satisfy the minima that the hand holds. From concept A to concept B, what would it take to get there with minimal effort? What are the things that constitute B?

Also, is there in nature something that is, yet has no meaning? Extending that idea, I think that all that is in nature has a use. There is a holistic view we should keep in mind.

Having these two concepts to play with, and trying to construct a mind, in simple terms: what would be the A to B? And what other relations are there to consider?

From the principles of Wittgenstein, there are limits - what limits? In essence, the world cannot make "mistakes." What we can speak of also cannot be without meaning. The limits of the world are the limits of an agent's world.

The current LLMs employ a similar model, in which, in essence, the model is a copy of our mistake-less world. (Logic and language go hand in hand.)

To address the LLMs a bit: they employ one of the best models of our world, if not the best, and by the very limitation they have - they cannot go outside the box, cannot go out of distribution - they are, and can sub-produce, contingent world systems within the human world.

For these models to navigate our world, they already have the knowledge, but nowadays only through our prompting; from time to time we give them sparks of world-building. It is akin to them being asleep, and from time to time we wake them up to our reality.

If we take A - our mind - to B - an LLM mind - as in a two-heads conversation, what would it take for B to understand A?

A model. More than that, an Ontological Model: the O-Model.

The simplest model the O-Model can build on, one containing all that there is to know about the world, would be the Tractatus Logico-Philosophicus.

Once an agent has the limits of this world, it would be able to have an Occam's Razor mind.

That is the logical structure of our world it has to grasp first, as a basic O-Model that envelops the core. The second system would be what lies outside that core.

In LLM terms, the very things it can hallucinate. The very things it can confabulate about. But those only as inner representations, not expressible in immediate phrases.
A fabric from which the creation of possible worlds may emerge.

The medium the agent is to interact with surely has to have the same basic characteristics, so that it can act as feedback, a proactive agent.
The medium is already imbued with logic. The medium is to be considered a natural one (not some abstract art installation), one where the LLMs, based on the knowledge of the whole world, can know what its elements are. The agent, on the other hand, has to have an instance of the O-Model that is compatible with our own agent instance. That helps in having a shared world model to begin with.
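One way to read "compatible instances" - the overlap measure below is my assumption, not something the text specifies - is as a minimum shared fraction of atomic facts between two O-Model instances, with the intersection serving as the shared world model to begin with:

```python
def shared_world_model(facts_a, facts_b):
    """The common ground two O-Model instances can start from."""
    return facts_a & facts_b

def compatible(facts_a, facts_b, min_overlap=0.5):
    # Hypothetical check: instances are deemed compatible when a
    # sufficient fraction of the smaller instance's atomic facts
    # is shared with the other instance.
    if not facts_a or not facts_b:
        return False
    overlap = len(facts_a & facts_b) / min(len(facts_a), len(facts_b))
    return overlap >= min_overlap

ours = {"the sun rises in the east", "water is wet", "fire is hot"}
theirs = {"the sun rises in the east", "water is wet", "snow is cold"}
```

With two of three facts shared, the instances clear the 0.5 threshold, and the shared pair is the common ground the agents can build on.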

So, going from A to B with minimal effort, up to the point where we can have agents within our world, the Tractatus would make a great basic O-Model.

In order to go to the constructive and even creative side, a more nuanced System 2 is needed: one that would be able to confabulate; one silent, sparking sub-worlds; one that, by learning, is able to update its world view.

A further conceptual separation - a non-overlapping framework.

A safer way seems to be this: the best we can do is to look through specific glasses. Any generative platform (built on top of, like an OS - not the generative-AI sense of the term) not only increases in complexity until it transcends its initial form; it also overlaps with the forms and some functions of others. Some of it diffuses into other systems, in themselves forming entirely different systems. Then there is no longer a causal nexus at play: it is a space that follows an order of another kind. Science, at its own level within the old system, at some point becomes useless as a tool for explaining the new system. It seems science is always a step behind the "all," behind knowing all.

As in my framework here, one can then only take specific glasses to look at the system. That is the frame we should specify: which glasses for which stratum. We employ boundaries.
Beginning with language, as in Wittgenstein: what can be said and what cannot.


_____________________________________
Exploration by C. Stefan, Dec. 2023 [about]
Last update: Dec. 2023 (versions: *) [versions]
"Essentia Mundi" AI Research Lab. [home]
Copyright © 2023 AI.EssentiaMundi.com, all rights reserved.
_
References:
Wittgenstein, L. (1921). Tractatus Logico-Philosophicus.

