AI to AGI Research Lab. (*)
in Sibiu, by C. Stefan.

The portal of the A.(G.)I. Research Laboratory at Essentia Mundi.
"An Agent would be of use only as part of a certain system. If I surround it with the right kind of adjacent cases, it becomes better aligned and comes to belong naturally in that particular environment."

Exploration Articles (*)

The "I" construction as Family Resemblance.
C. Stefan 13-Apr-2024

The moment we "see" an essential feature is the instant it becomes an abstract feature, which we then recast into other definitions. There is no essential feature to the "I"; it is an "extensional" construction. Read more...

A computational model of...(Call For Papers Initiatives)
C. Stefan 04-Apr-2024

A computational model of the consciousness problem should also posit some further structural acknowledgement: a frame of reference, a setting within a space where this can even be imagined. Read more...

Mortal computation. Hope for the halting problem mitigation.
C. Stefan 19-Mar-2024

Going nearer to analog computing with the help of mortal computation: the hardware neural network that can't be copied. It is mortal. Read more...

The broadest sense of a structural coupling.
C. Stefan 18-Mar-2024

The broader, in a sense, model. From observer-vs-object relations (supervenience) to life dynamics (the flickering of the object on the surface of the ice, as a metaphor), we can derive one of the broadest senses of a structural coupling (confounding) of the two sides: agent and environment. Read more...

The environmental self-isolation Detour of Humanity. Consciousness.
C. Stefan 01-Mar-2024

Thermodynamics as the driver of change within a system. But within the actual, locally framed, life-blanket environment, what drives a human further? The human seems to sit at the tip of a pyramid in the environment, a self-titled master of Gaia. Read more...

On the impossibility of devising human-like AGI.
C. Stefan 20-Feb-2024

Sadly, the whole spectrum of consciousness seems to have to pass (or is not thought possible otherwise) through this kind of "I" filter, while it could well not be the case at all. Read more...

Normative reasoning. Moral supervenience. Engineering moral AGI.
C. Stefan 12-Feb-2024

A way to bridge the gap between intentional agency and moral agency. Read more...

Innate structures. From vibrations to words & worlds.
C. Stefan 26-Jan-2024

A look from Chomsky's innate language structures, with which one may be endowed from birth, to sound structures that are innate in some deep neural networks. Read more...

Latent spaces. Forms and frames.
C. Stefan 25-Jan-2024

There is no problem for humans to have all that implicitly. I am interested in exploring this special verticality: the spaces we are endowed with, the forms they take, and the frames of reference we are grounded in. Read more...

Do we create a model of the world? Are we endowed with one?
C. Stefan 20-Jan-2024

There is a special class of attractors that constitutes the latent space of our "meaning." Read more...

Physical constraints to psychological needs. Science as a need.
C. Stefan 13-Jan-2024

Following the view of consciousness, I ponder whether its starting point follows a milieu of physical constraints. Read more...

A view to Consciousness.
C. Stefan 01-Jan-2024

No causal nexus, and a psychological need. And that is all there is to it. Read more...

Ideological Dreaming Agents. Humans and machines.
C. Stefan 14-Dec-2023

Not that there is an ontological need for a world representation; rather, it is of an ideological kind. Read more...

An AGI agent with intents, navigating a culture
C. Stefan 01-Dec-2023

Intents and objectives that allow basic navigation in an environment arise in opposition to the death drive. Read more...

AGI with LLM as layer and environment that has "weak" agency
C. Stefan 20-Nov-2023

I would go a step beyond, and add that the environment should also have basic cognitive characteristics. Read more...


Our articles around AI and AGI research. On-the-fly update follows.

Zooms, infinities, scales, refinements

Zooms, infinities, refinements, perspectives, scales, degrees of, compression, frames of reference, patterns, fractals. Pick a reference framework before engaging. Introducing the EM Reference Classes. We mostly see "in concepts."
And concepts imply a web of connections. Edge of chaos, complexity, n-body problems as consequences.

The new generative layer

The CPU is a generative layer to the OS; the OS a generative layer to Apps. A study in the verticality of stacked layers, of complexity, of systems, and of the utility emerged at the top.

Climbing ladders

Finding a way to navigate from bottom to top through (disposable) ladders. Scaffolding.

Graduality and emergence

A child makes gradual but conspicuous cognitive elevations. Clear jumps to the "higher" levels. Launching the idea that the x-th emergence level of a complex system can't completely explain the (x-n) levels by itself. No causal nexus between levels. Cells did not extend their spatial reach "thinking" that something would emerge.

The Gaia complex organism

Current empirical methods to address climate issues are treating the symptoms. How the sub-systems regulate themselves. What are we in the Gaia complex system? The horizontality of biological mass.

The world view

I am this instance, thus I see what this instance can see. Is there more to it? Why do we compress, and why is there a need for formalization? Noise and order.

Towards a de-anthropomorphization view

An exercise in viewing entities not as themselves. See beyond the blankets. "I" do not think: the collection of cells emerged something we see as such.

The cognitively endowed environment

The environment should also have some priors in the space of basic cognitive abilities.

What kind of attractors are out there

From chirality to stigmergy. To the attempt to controlling chaos. To default emergences.

Twins, learning, formalization & AI

I always wonder about entities having roughly the same brain structure, like twins.

Is there a causal nexus certainty?

Spuriousness, inference, correlation and the causal nexus. Uncertainty.

Minimization of surprise is not beneficial

Working only toward the avoidance of negative outcomes leads to system decay. The need is to maximize surprise.

The better language engineering approach

A gradual bottom up approach, a "game" with the agent and (or) environment. The Sphere. The substrata of the language.

A transformation of the Big Other

As we march towards a "cognitive closure" there will be a transformation of the divine observer.

The hard problem of NLP encapsulation

Language as a first formalization method about our worlds / mental representations.

A survey of emergences

A survey about complex self-organizing systems, emergent patterns. See things up from the very low perspective, single cell, multi-cells, networks, specialized networks.

How would AGI look like

AGI in the next years, some say. But what would it look like when we see it? Probably evolving along with us, perhaps hardly recognizable from the perspective of the now; more a natural integration with our culture, visible and arguable from a historical perspective.

Stochastic processes with Fourier transform

How to probe at the unknown, to allow discovery beyond the probability distribution.

An action from a vertical distance

That feeling that you surprise yourself with just what you said. We store the compressed representation in the lower levels of the network (training it); we talk from the lower levels (decompressing). If you want to think about it consciously, there is a trace of it, but very often there is a delay in retrieving it, and sometimes it is completely or temporarily forgotten.

A Thinking exercise: that we are not Thinking

In the folk-psychology sense of the word, do we really think? When we think about the cells that collectively assembled to form more complex organisms: were they intending to create an organ that would handle what, exactly? What was the (original) purpose of the brain?

Embracing Wittgenstein's Nonsense: The Creative Exploration in Philosophical Inquiry and its Implications for AI Advancement through LLMs

One cannot draw an immediate parallel between Wittgenstein's talking of nonsense and the characteristics of current LLMs. But "confabulation" should be embraced.

Our A.(G.)I. Experiment(s)

The Snowball layers

The layers of the artificial entity. The stance of the snowball(s) within evolutionary mechanisms, within an "EE-framework": Evolution in Environment, which accounts even for traits of so-called consciousness. The term, as it is approached today, is highly dissoluble here.

The O-Model, a first proposal

The bootstrapping world of the entity. Introducing the first theoretical framework of the experimental O-Model: a basic world Ontology of an AGI Agent with Occam's Razor and Tractatus principles. Also using the Demarcation Classes Framework.

The Environment

The medium of the Agent. The Agent is not the central concern of the modelling processes. An agent requires far more adjacent "help" than is currently assumed in the AI / ML landscape.

The instance of our AGI

Adjust framing, implement rolling within the O-Model, start, refine. A framework of snowballs rolling. An evolution complex that is leveraging the fast iteration epochs in the digital medium.


Occam's Razor. The minimal set of properties with which you can infer a new concept. (The Glove analogy by EM-AI-AGI 01.12.2023). The simplest possible theoretical explanation for existing data (heuristics.) Future data often support more complex theories than do existing data (pick a frame of reference.)

Aporia. At loss on how to (philosophically) proceed. To be resolved by taking a higher perspective "...always take flight to where there is a free view over the whole single great problem. (L.W.)"

Solomonoff's induction: previous observations are used to calculate the probability of the next observation. Bayesianism. It derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
Solomonoff's induction naturally formalizes Occam's Razor.
Application and formal definition: a phenomenon is said to have been explained better than before when there is a higher compression of the involved observations (by an observer's frame of reference.)

Kolmogorov probability. The impossible event is not the only event with probability 0.

Markov chain. What happens next depends only on the state of affairs now.
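The "depends only on the state of affairs now" property can be sketched in a few lines; the weather states and transition probabilities below are illustrative assumptions, not from the text.

```python
import random

# A toy weather Markov chain: the next state depends only on the current one.
TRANSITIONS = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state, rng):
    """Sample the next state given only the current state (Markov property)."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state]:
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(start, n, seed=0):
    """Generate a trajectory of n transitions from `start`."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n):
        states.append(step(states[-1], rng))
    return states
```

Note that `simulate` carries no history: each call to `step` sees only the last state, which is the whole point.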

Falsifiability. A theory or hypothesis is falsifiable (or refutable) if it can be logically contradicted by an empirical test. (see verifiability). Falsifiability as defined by Popper means that in some observation structure (in the collection) there exists a set of observations which refutes the theory.

Philosophical understanding is non-theoretical, is a process of gaining clarity. Dissolving confusion, not constructing new theories.

It is our picture of the world that is constitutive for our convictions and our language.(LW.)

Family resemblance. Overlap and criss-cross web of features. Have common features but no one feature is found in all of them. Extended to language games. (LW.)

Non-egological (EM's own perspective, Feb.2024). I is a projection of the language which has its roots in psychological needs. A perspective of I not even as a center of some sort, from adjacent other devices (from sensations to environment.) Wittgenstein, Derrida, maybe further down with Lacan, Greimas, Descartes, Husserl, Hegel, Merleau-Ponty...I am not I. I is even an other...etc.)

In linguistics, surveyable representation can be used to gain an understanding of a population's language and communication patterns. Paleo-linguistics comes as a subdomain.

Learning is influenced by both external and internal factors.

Deontology focuses on the action itself, regardless of its consequences.

We try to fit mathematical models to our senses. And sometimes the other way around.

Turing machines and the universal computer. On Computable Numbers, with an Application to the Entscheidungsproblem.

Modal logic modes of truth: alethic modality (“necessarily”, "the truth in the world") and "the truth in an individual's mind" (epistemic) “it is known that”, deontic (“it ought to be the case that”, "how the world ought to be according to certain norms"), temporal (“it is always the case that”), doxastic... children learn deontic meanings before alethic ones. Bouletic modality "expresses what is possible or necessary given someone's desires”. “Aporethical logic” means that the conditions of possibility of judgement designate simultaneously the conditions of its impossibility.

Doxastic/Credal. As in the Bible was first Doxastic then Credal. The ‘credal’ level where beliefs are entertained and the ‘pignistic’ level where beliefs are used to make decisions.

Pignistic probability. Make a decision, you do not have all the data but regular probabilities. A pignistic probability transform will calculate these pignistic probabilities from a structure that describes belief structures.
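A minimal sketch of the pignistic transform (assuming no mass on the empty set): each focal set's mass is split equally among its elements, turning a belief structure into point probabilities usable for decisions. The example frame and masses are illustrative.

```python
def pignistic(masses):
    """Pignistic transform: spread each focal set's mass m(A) equally
    over its elements, yielding a point probability for decisions."""
    betp = {}
    for focal, m in masses.items():
        share = m / len(focal)
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

# Belief structure over {"a", "b"}: 0.5 committed to "a",
# 0.5 left on the whole frame (total ignorance between "a" and "b").
example = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.5}
```

Here `pignistic(example)` gives 0.75 to "a" and 0.25 to "b": the ignorant half of the mass is shared equally.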

Alignment. If AI rises, how to make it at home in our world. Grow it here, and with empathy.

Causal nexus. There is usually no causal nexus, only the psychological need for a cause.

Causal closure. The universe as a deterministic whole. The universe we model, I tend to think, is so; but that is not all there is. We stand on certain reference planes, from which we apply what we make of it. Language tries to capture different levels or frames, so causal closure is a byproduct of that, and by that, a next-level confusion-inducing framing that still haunts philosophy today (restrictions to dualism, structuralism, etc.)

Compositionality. The meaning of a phrase is determined by the meanings of its parts and their syntax.
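A toy sketch of compositional semantics: the meaning of a phrase is computed from the meanings of its parts plus the rule joining them. The tiny lexicon `MEANINGS` and the tuple syntax are hypothetical illustrations.

```python
# Word meanings: numbers for nouns, functions for operators.
MEANINGS = {
    "two": 2, "three": 3,
    "plus": lambda a, b: a + b,
    "times": lambda a, b: a * b,
}

def meaning(phrase):
    """Meaning of a word is looked up; meaning of a phrase is the
    operator's meaning applied to the meanings of its sub-phrases."""
    if isinstance(phrase, str):
        return MEANINGS[phrase]
    op, left, right = phrase
    return MEANINGS[op](meaning(left), meaning(right))
```

So `meaning(("times", ("plus", "two", "three"), "two"))` composes bottom-up to 10, with no phrase-level meanings stored anywhere.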

Chirality. Not really mirror-symmetric.

Embodied cognition. Acting with a physical body on an environment in which that body is immersed.

Reasoning. Deductive reasoning (propositional, First Order logic) and inductive reasoning (data based, Bayesian). Abductive reasoning is a form of reasoning that involves forming the best explanation for a set of observations or evidence. Modal logics are used for reasoning about necessity and possibility. Temporal logics are employed for reasoning about time and temporal relationships. This includes reasoning about sequences of events, causality, and the temporal order of actions. Probability logics and Bayesian reasoning are used to handle uncertainty and make decisions based on probabilities. Fuzzy logic also falls into this category, as it allows for reasoning with degrees of truth. Epistemic logics are used for reasoning about knowledge and belief. This is important in fields such as artificial intelligence, game theory, and philosophy. Deontic logics are employed for reasoning about obligations, permissions, and prohibitions. This is relevant in ethics, legal reasoning, and normative systems. Non-classical logics, such as intuitionistic logic and paraconsistent logic, challenge classical assumptions and provide alternative approaches to reasoning. Intuitionistic logic, for example, is associated with constructive reasoning. Game-theoretic logics, such as dynamic epistemic logic, are used to model and reason about strategic interactions and information exchange in multi-agent systems.
Analogical reasoning. What is known about one situation can be used to infer new information about the other.

Postulates (axioms) are true statements without need of proof and are the basic structure from which lemmas and theorems are derived (that need proof).

Pragmatism. An ideology or proposition is true if it works satisfactorily, and it is agreed upon.

Meaning is use. The idea that words (and more) do not have a fixed meaning attributed by some higher order or entity. It is just as it is: the meaning of a word is its use in the language (more deeply, also in a culture.) A bottom-up symbolism. A game of acquiring meaning.

Noetic. That inner wisdom, direct knowing, intuition, or implicit understanding.
An extended idea, principle, and working assumption followed at EM-AGI since inception: the most basic comes as a consequence of our form in the environment. Like a drop of water: why does it have the form it has? So we have the basics about the world in us, an innate substructure, which at some point is further formalized in language, maths, etc.

Investigations (PI). Use of language in various contexts, language games, and the fluid, context-dependent nature of meaning.

Tractatus (TLP). Statements that do not correspond to possible states of affairs are nonsensical and should be excluded from meaningful discourse.

TLP->PI. What is tragic in TLP vs. PI? A refutation of old views. But isn't there an evolution (be it cognitive) behind it? That the tree only bends is not tragic, but that it breaks is (everyone insists on what a tragic event that is, and so it lands in the history books). But I always add the perspective: from A to B there are shades of grey. To me it is fairer to say TLP->PI (an evolution).

Abelian Group. The group operation is commutative.
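The commutativity condition is directly checkable on small finite carriers; the examples (addition mod 5, string concatenation) are illustrative.

```python
def is_abelian(elements, op):
    """A group operation is commutative iff op(a, b) == op(b, a) for all pairs."""
    return all(op(a, b) == op(b, a) for a in elements for b in elements)

# Integers mod 5 under addition form an Abelian group.
add_mod5 = lambda a, b: (a + b) % 5

# String concatenation (a monoid, shown only for contrast) is not commutative.
concat = lambda a, b: a + b
```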

Demarcation Problem. Not a problem if one acknowledges grey systems. An agent with Tractatus, as base ontology, would entail a better boundary though. (also related to Doxastic / Credal positions through time/culture, GTI)

Multi-armed Bandit Problem. Related to the exploration-exploitation tradeoff; Bayesian methods. The allocation of "luck" is not known in advance, only learned through tries over time. E.g. which slot machines to play, how many times to play each machine, in which order to play them, and whether to continue with the current machine or try a different one.
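One common way to trade exploration against exploitation is epsilon-greedy; the sketch below is illustrative (arm payoff rates, epsilon, and step count are assumptions), learning which "slot machine" pays best only through tries over time.

```python
import random

def epsilon_greedy(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: mostly exploit the best-looking arm,
    sometimes explore a random one. Returns value estimates and pull counts."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    values = [0.0] * len(true_means)
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))                         # explore
        else:
            arm = max(range(len(true_means)), key=lambda i: values[i])   # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0          # Bernoulli payoff
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]              # incremental mean
    return values, counts
```

With arms paying at rates 0.2 and 0.8, the agent ends up pulling the better arm far more often while still sampling the worse one occasionally.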

Ceteris Paribus. Vary one variable and hold the other things constant. Isolate the effect. See the impact.

Fourier transform. But in higher dimensions, eg. the Fourier transform on a non-abelian group takes values as Hilbert space operators. (see Pontryagin duality).

Laplace transform. Integral transform which takes a function and maps it from the time domain into the frequency domain. It transforms ordinary differential equations into algebraic equations and convolution into multiplication. Cochlear inputs (from the ear), which first start as space-vs-time displacements on the eardrum, end up as frequency-vs-amplitude plots in the primary auditory cortex.

Principle of least action (Hamilton's, Maupertuis's). The path that has the least change from nearby paths. The Lagrangian is the difference between kinetic energy and potential energy; the action is the Lagrangian integrated over time.

Additive number theory. Regularly structured superset property as seen from small sumsets (EM-AI 12.12.23). A set with a small sumset must be contained by a larger set whose elements are spaced in a highly regular pattern. (see Groups, Marton).

Teleonomy. Study of ends or purposes. The body's functions have the purpose of assuring the survival of the organism. The apparent purposefulness and goal-directedness of structures and functions in living organisms, brought about by natural processes like natural selection.

Teleology or finality is a branch of causality giving the reason or an explanation for something as a function of its end.

Adiabatic. A process without transfer of heat to or from a system, adiabatically isolated (in Thermodynamics).

Assembly theory. Search for biosignatures, life. Only living systems can produce complex molecules that could not form randomly in any abundance.

Onicescu’s Informational Energy (vs Shannon’s entropy.) A positive quantity that measures the amount of uncertainty of a random variable: the informational energy is strictly convex and increases when randomness decreases (vs entropy.)
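The opposition stated in the entry is easy to compute: informational energy (the sum of squared probabilities) rises as randomness falls, while Shannon entropy falls. The two example distributions are illustrative.

```python
import math

def informational_energy(p):
    """Onicescu's informational energy: sum of squared probabilities.
    Equals 1 for a certain outcome, 1/n for the uniform law over n outcomes."""
    return sum(pi * pi for pi in p)

def shannon_entropy(p):
    """Shannon entropy in bits; moves opposite to informational energy."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

uniform = [0.25] * 4                  # maximal randomness
peaked = [0.85, 0.05, 0.05, 0.05]     # low randomness
```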

Supervenience a relation that emergent properties bear to their base properties. Related to but distinct from notions like grounding and ontological dependence. Two things cannot differ in quality without differing in intrinsic nature.

Manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space.

Harmonic oscillator. The amplitude typically has no effect on the period of a pendulum (for small swings). Restoring force: the more an object is displaced from its resting position, the greater the force that pushes it back toward the resting position.
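The amplitude-independence of the period can be checked numerically for a linear restoring force; this is a sketch using semi-implicit Euler integration, with unit angular frequency assumed.

```python
def pendulum_period(amplitude, omega=1.0, dt=1e-4):
    """Integrate x'' = -omega^2 * x (semi-implicit Euler), starting at rest
    at `amplitude`. The first zero crossing is a quarter period."""
    x, v, t = amplitude, 0.0, 0.0
    while x > 0:
        v -= omega ** 2 * x * dt   # restoring force grows with displacement
        x += v * dt
        t += dt
    return 4 * t
```

For this linear oscillator the measured period stays close to 2*pi regardless of the starting amplitude, which is exactly what the entry claims for small swings.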

A theory involves making predictions of things that are not directly observable. A model is founded on assumptions about how a thing works.

Meta-learning for compositionality (MLC) approach for guiding training through a dynamic stream of compositional tasks. (Brenden M. Lake & Marco Baroni.)

Self-Assembling Artificial Neural Networks through Neural Developmental Programs - we take initial steps toward neural networks that grow through a developmental process that mirrors key properties of embryonic development in biological organisms. (Elias Najarro, Shyam Sudhakaran, Sebastian Risi.)
Also: Bio-inspired approaches to adaptive agents. (Joachim Winther Pedersen)

NSAI - neuro-symbolic approaches to AI; a synergistic view of both ends. Neural networks, classifiers, statistics, sub-symbolic networks and symbolic reasoning of higher abstract patterns of meaning (like language NLP, formal logic), combined to uplift AI systems to a new level. With symbols: a way of aligning, constraining, and inducing meaning through symbolic reasoning schemes, where purely neural brute-force approaches reach their limits. The neural part learns how to solve problems from experience, predicting the general direction for problem-solving. The symbolic part performs strict relational reasoning and solving to ensure the correctness of the result.

Predictive coding. Building an internal generative model that tries to imitate the hierarchical causal relationships of the external generative process.

Moral foundations theory (MFT) is a psychological assessment tool that decomposes human moral reasoning into five factors, including care/harm, liberty/oppression, and sanctity/degradation.
(Marwa Abdulhai, Gregory Serapio-Garcia et al.)

Unentscheidbarkeit / Undecidability / Incompleteness / Undefinability (of truth) is a problem at the threshold of definability of formal thought against the analog. The level of the formalism still acknowledges the dx/dt approximation. A formal system is an abstract structure or formalization of an axiomatic system used for inferring theorems from axioms by a set of inference rules. Undecidability: the analog, in the theory of computation, of the unphysical infinities. Undecidability means having an infinite search space (see the Goldbach conjecture, the axiom of choice.) If the halting problem can be reduced to a problem (determining whether there is an answer or not), that problem is undecidable.

Halting Problem & Gödel prospects. Symbolic / algorithmic is not getting there from within its system in the sense of Gödel's incompleteness theorem. The frame of reference carries its own meaning, unprovable. (Maybe an oracle advises.) Tarski's undefinability: arithmetical truth cannot be defined in arithmetic. Never halt: never return an answer to the problem. All finite state automata halt. A deterministic machine with finite memory will either halt or repeat a previous state.

Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine.
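A standard illustration of this universality is encoding arithmetic purely in lambdas (Church numerals), here written with Python's `lambda`:

```python
# Church numerals: the number n is "apply f n times to x".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: n(m(f))

def to_int(n):
    """Decode a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = add(one)(two)
```

Numbers, addition, and multiplication here are nothing but function application; no built-in arithmetic is used until decoding.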

Church-Turing Thesis effect for programming languages is that: any program in any programming language can be translated to a program in any other programming language.

Heuristics as decisions that tolerate overlooked logical errors. Out-of-distribution decisions. The process by which humans use mental shortcuts to arrive at decisions. E.g. Monte Carlo tree search is a heuristic search algorithm for some kinds of decision processes (used in AlphaGo.) Metaheuristics are strategies that guide the process (optimization algorithms.) Negation-heuristics: "don't do that," as in the ordering of logical operations.

Psychological essentialism. Makes members of the kind “the thing that they are” without knowing which the underlying essential features are.

Unconscious irrational desires. And to underline the view here that we are not a final product but a temporary baseline vs. an environment. One can, but is not bound to, accede to the symbolic.

Social constructionism as our reality formed through continuous interactions and negotiations among society's members.

Ontology asks what exists (virtually,) and epistemology asks how we can know about the existence of such a thing. Internalism and externalism in epistemology, as in the internal and external factors that make an observer. Episteme: to know the cause of a thing. Ontic: located in physicality.

Epistemic relativism is that we do not know or justifiably believe anything. Rules to justify a belief.

Epistemological skepticism. Differentiated in terms of the areas in which doubts are raised: toward reason, toward the senses, or toward knowledge of “things-in-themselves”.

Justified true belief. Only in the game of mathematics (induction.) We don't hold any guaranteed true beliefs, yet we claim to know things and are understood (well, that happens in a zoomed context, a frame of reference, as part of the game, of rules we sometimes acknowledge de facto.)

Biomimetics is an innovative design concept that draws inspiration from nature and its elements and processes to solve complex human problems. Bio-inspired computing uses an evolutionary approach.

Ecological psychology as the concept of the organism-environment coupling that precedes the neural modelling occurring afterwards. Ecological theory in psychology concerns the two-way interaction of people and their environments.

The grokking phenomenon, in which the train loss of a neural network decreases much earlier than its test loss, can arise from the network transitioning from lazy training dynamics to a rich, feature-learning regime.

Gestalt psychology: the meaning of the perception, which is higher than its parts. Multi-modal Markov blanket of a complex system. People experience things as unified wholes.

Markov blanket: a statistical boundary that separates two sets of states. Markov blankets exist at all scales of the system.

Bayes theorem is a probability of an event, based on prior knowledge of conditions that might be related to the event. Decision making under uncertainty. (See also Solomonoff's induction. Bayesian probability.)
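A minimal worked example of updating a prior with evidence; the diagnostic-test numbers (1% base rate, 90% sensitivity, 5% false positives) are illustrative assumptions.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E),
    with the evidence P(E) expanded over H and not-H."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# A positive test on a rare condition: the posterior stays surprisingly low.
p = posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.05)
```

With these numbers the posterior is only about 0.15: prior knowledge of the conditions dominates a single piece of evidence.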

AIXI. Reinforcement learning (RL) agent. It maximizes the expected total rewards received from the environment. (See Hutter)

Folk psychology, folk physics, the meaning in everyday use.

Commonsense ontology, the encapsulation of palpable meaning. Commonsense knowledge.

Phenomenology my conscious view, intentionality.

Homeostatic, allostatic. Internal equilibrium, stability during change.

Lotka - Volterra dynamics of biological systems in which two species interact, one as a predator and the other as prey, through differential equations.
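The differential equations can be sketched with forward-Euler integration; the parameter values and initial populations below are illustrative assumptions.

```python
def lotka_volterra(prey, predators, steps=10000, dt=0.001,
                   alpha=1.0, beta=0.5, delta=0.2, gamma=0.6):
    """Forward-Euler integration of the predator-prey equations:
    dx/dt = alpha*x - beta*x*y,  dy/dt = delta*x*y - gamma*y."""
    trajectory = [(prey, predators)]
    for _ in range(steps):
        dx = (alpha * prey - beta * prey * predators) * dt
        dy = (delta * prey * predators - gamma * predators) * dt
        prey, predators = prey + dx, predators + dy
        trajectory.append((prey, predators))
    return trajectory
```

Both populations stay positive and oscillate out of phase: prey grow, predators follow, prey crash, predators starve, and the cycle repeats.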

Structural coupling. An agent of a certain complexity is within an environment's actions upon it. A mutual influence.

Active Inference. Free energy principle. Perception, planning, and action in terms of probabilistic inference, minimize surprise. In mind and behavior. Complex systems adaptation - like biological organisms - which constantly strive to minimize prediction errors in order to remain in life-compatible states.

Immortal/mortal computation. Immortal means that the software can be copied to another machine. Mortal means that the hardware is fused with the software.

Renormalization. Extraction of a set of meaningful functions that may produce some meaningful results.

Rationalism additional knowledge can be gained simply by thinking.

Empiricism gain knowledge through your senses.

Constructivism: learn by interacting, doing, and updating the constructed knowledge. It subsumes both rationalism and empiricism.

Behaviorism focuses on the idea that all behaviors are learned through interaction with the environment by the influence of habit, and on how controlled environment changes affect behavior. Mostly an externalized view, not one of mental processes.

Structuralism seeks to uncover the underlying structures that govern human systems and behaviors, emphasizing the interrelatedness of elements within a system. eg. reducing mental processes down into their most basic elements.

Functionalism the role of the mental processes. Computational functionalism.

Enactivism. Understands mental faculties to be embedded within neural and somatic activities and to emerge through actions of the organism. Cognition as embodied activity. Opposed to representationalism, computationalism, cognitivism.

Ismless. The "isms" belong to some higher abstraction level; they are only locally viable. Fewer "isms" is an aim here. (Term by EM-AI-AGI 24.11.2023).

Cognitive psychology mental processes, including how people think, perceive, remember and learn. As part of the larger field of cognitive science, this branch of psychology is related to other disciplines such as neuroscience, philosophy, and linguistics.

Cognitive closure. The closing of epistemological gaps. By the advancement of AI, we gain insights into our own cognitive processes until there is no more concept (or nothing of the sort we have conceived until now) to be devised by humans. (Term by EM-AI-AGI 24.11.2023).

Semantic closure and strange loops (Douglas Hofstadter) the dynamic and self-organizing nature of consciousness, where meaning emerges from the interactions and feedback loops among mental representations and concepts. Higher-level cognitive processes reflect back on themselves (loops) and entities create meaning or significance for each other through their interactions.

Edge of chaos is a transition space between order and disorder. Lorenz attractor and chaos theory. Chaotic systems can be completely deterministic (mathematical equations) and yet still be inherently unpredictable over long periods of time due to sensitive dependence on initial conditions. Also see the three-body problem.

Dynamical system a function describes the time dependence of a point in an ambient space.

Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems when the systems run for a long time. A system can forget its initial state. A means to study the long-term average behavior of complex systems: the expected value of an activity performed by a group is the same as for an individual carrying out the same action over time. Of or relating to a process in which every sequence or sizable sample is equally representative of the whole. In the same basins.

Ergodic hypothesis says that, over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e., that all accessible microstates are equiprobable over a long period of time. The average of a parameter over time in a single system and the average of the same parameter at a single time in a number of similar systems has the same average value.
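The "time average equals ensemble average" claim can be sketched with a trivially ergodic process, a fair die (an illustrative assumption): one long trajectory and many independent copies give the same mean.

```python
import random

def time_average(steps, seed=0):
    """Average of one long trajectory of a fair-die process (one system, many times)."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(steps)) / steps

def ensemble_average(systems, seed=1):
    """Average over many independent copies sampled once each (many systems, one time)."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(systems)) / systems
```

Both converge to 3.5; for a non-ergodic system trapped in one basin, the two averages would diverge.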

Non-ergodic: unobservable, uncontrollable, unreproducible. Tied to different frames of reference. The phase space of a strongly non-ergodic system is separated into mutually inaccessible basins.

Stochastic (random, non deterministic) process can be defined as a collection of random variables that is indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set. No exact values are determined but a probability distribution. To guess at something.

Stochastic control attempts to achieve a desired behavior in spite of the noise. A stochastic process is non-ergodic when its statistics change with time.

Wiener process / Brownian motion is a real-valued continuous-time stochastic process, the canonical example of one. See Fourier: functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle.

Autowave or self-oscillation: results of self-organization in non-equilibrium thermodynamic systems.

Self-organization: some form of overall order arises from local interactions between parts of an initially disordered system.

Stigmergy is a communication method used in decentralized systems in which individuals communicate with each other by changing the surrounding environment. eg. mediation of animal-animal interaction. Swarm intelligence. Traces left within an environment — the result of an action — stimulate the performance of a future action.

Autopoiesis refers to a system capable of producing and maintaining itself by creating its own parts.

Ablation. Investigates the performance of an AI system by removing certain components to understand the contribution of the component to the overall system.

Collective behavior: amoebae with a sufficient food supply live as unicellular organisms. During starvation, however, they crawl together to form a multicellular organism, which later produces spores that can survive adverse conditions.

Boltzmann machine: an unsupervised deep learning model in which every node is connected to every other node.

Helmholtz machine. A neural network composed of two sub-networks: a bottom-up recognition network that takes the data as input and produces a distribution over hidden variables, and a top-down "generative" network that generates values of the hidden variables and the data itself. The change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. At constant temperature, the Helmholtz free energy is minimized at equilibrium.

Probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment.

Statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities.

Ising model is a particular example of a thermodynamic system, and it is the model system for understanding phase transitions.
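A minimal Metropolis sketch of a small 2D Ising model (illustrative lattice size and temperature): below the critical temperature the spins tend to align, which is the phase transition in miniature.

```python
import math, random
random.seed(2)

# Metropolis sampling of a 10x10 Ising model at T = 1.0 (below the 2D
# critical temperature ~2.27): energy drops and spins tend to align.
L, T = 10, 1.0
spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def energy():
    # Nearest-neighbour coupling with periodic boundaries; count each bond once.
    e = 0
    for i in range(L):
        for j in range(L):
            e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return e

e0 = energy()
for _ in range(20000):  # single-spin Metropolis updates
    i, j = random.randrange(L), random.randrange(L)
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    dE = 2 * spins[i][j] * nb          # energy cost of flipping spin (i, j)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1              # accept the flip

m = sum(sum(row) for row in spins) / (L * L)   # magnetization per spin
print(e0, energy(), m)
```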

Banach–Tarski paradox & Hausdorff. Many balls from one (see paradoxes of set theory.)

Axiom of choice. If we cannot make explicit choices, how do we know that our selection forms a legitimate set?

Consciousness. Not hard if we take the position: a neural black box and a modelling within the environment. From there, more specifically, dealing with the environment while environment is also becoming part of it (from cellular interaction to in womb development and so on.) And the grades of it, within the multi layers, and the whole body too. And finally a self referenced text representation (prone to ambiguity.) From the prism of evolution, isolation within env., new worlds adaptation, stacked networks.

Integrated information theory (IIT) attempts to identify the essential properties of consciousness (axioms) and, from there, infers the properties of physical systems that can account for it (postulates). Based on the postulates, it permits in principle to derive, for any particular system of elements in a state, whether it has consciousness, how much, and which particular experience it is having. (G. Tononi)

Global Neuronal Workspace Theory (GNW) theory emphasizes the role of long-range loops between cortical areas, which are linked with feedforward and feedback connections.

Theory of Mind. The ability to represent the content and state of each other’s minds.

Intentionality. An emergent property of the mind: being in an environment while having the potential to act (can form representations). Accounts for the opposition to the "death drive."

NLP & consciousness engineering. Top-down: reverse engineering: from symbolic representations, abstract, to the capture and map into ways of mimicking how the brain works, underlying cognitive processes, even try to identify the neural regions. Bottom-up: from circuits in neural activity to the basic cognitive processes, to acquisition of the properties since and even before birth, to the emergence of the complex communication and abstract thinking.

Ephysical imagination. A glimpse of the nature of the imagination (as an instance of both metaphysical inquiry and epiphenomenalism), its ontological status, and the relationship between the mental and physical aspects of consciousness. The metaphysical exploration adds depth to the inquiry by questioning the fundamental nature of imaginative constructs and their place in the broader fabric of reality. (Term by EM-AI-AGI 01.12.2023).

Epiphenomenalism is a position on the mind–body problem which holds that subjective mental events are completely dependent for their existence on corresponding physical and biochemical events within the human body, yet themselves have no influence over physical events.

Predictive Processing / Bayesian. Behaviour and cognitive functions as well as their underlying neurophysiology in terms of Bayesian inference processes. System maintains internal probabilistic models that are updated by neural processing of sensory information using methods approximating those of Bayesian probability.
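The core arithmetic of such Bayesian updating fits in a few lines. A minimal sketch (Gaussian prior and likelihood, function name and numbers are illustrative): the belief shifts toward the observation in proportion to the relative precisions, i.e., by a weighted prediction error.

```python
# Precision-weighted prediction-error update: a Gaussian prior belief is
# combined with a noisy observation; the posterior mean moves toward the
# data in proportion to how much the data is trusted.
def bayes_update(prior_mu, prior_var, obs, obs_var):
    k = prior_var / (prior_var + obs_var)      # gain: how much to trust the data
    post_mu = prior_mu + k * (obs - prior_mu)  # shift by weighted prediction error
    post_var = (1 - k) * prior_var             # belief becomes more confident
    return post_mu, post_var

mu, var = 0.0, 4.0                 # vague prior belief
for obs in [2.1, 1.9, 2.0, 2.2]:   # a stream of noisy sensory samples
    mu, var = bayes_update(mu, var, obs, obs_var=1.0)
print(mu, var)  # mean converges toward ~2, variance shrinks
```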

Dark Room Problem. Predictive Processing theories hold that the mind's core aim is to minimize prediction-error about its experiences. But prediction-error minimization can be 'hacked', by placing oneself in highly predictable environments where nothing happens.

Materialism/Physicalism. This perspective asserts that reality is fundamentally composed of physical matter and energy.

Idealism posits that reality is fundamentally mental or consciousness-based.

Existentialism: subjective nature of reality, individual experience, freedom, and one's own reality.

Dualism suggests that reality is composed of two fundamentally different substances, often mind and matter. Mind-body.

Cartesian Empiricism - an explanation cannot be conclusive if applying it leaves room for doubt.

Monism asserts that there is only one fundamental substance or principle underlying all of reality. There are different forms of monism, such as materialistic monism and idealistic monism.

Eastern views (eg. Buddhism): illusion, impermanence of the material world, interconnectedness of all things.

Cognitive Causal World Modeling. The idea is to take world alternatives and evaluate them, mainly in silence (internally). Causal modeling requires the researcher to construct a model explaining the relationships among concepts related to a specific phenomenon.

GTI (General Theory of Information). The relation between structures of reality and information. Information exists in an abstract world of structures that interacts with the physical and mental worlds. (Once we push the formalism further, see Cognitive Closure, Cartesian & Dualism concepts).

Regressive - Incentive (own term). Paying attention to what does not seem to be important. Not that we want to, but that we are natively incentivized to do so. We pay attention to what seems to need fixing. We exist to fix things. (An extension to K. Popper.)

Cantor set. A topological space that is self-similar (a proto-fractal; see Cantor dust; Sierpinski, Mandelbrot, Peano, Koch).

Watts - Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering.
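The generation procedure is simple enough to sketch directly (parameters n, k, p are illustrative): build a ring lattice where each node links to its k nearest neighbours, then rewire each edge with probability p, which introduces the long-range shortcuts responsible for short path lengths.

```python
import random
random.seed(3)

# Watts-Strogatz sketch: ring lattice plus random rewiring.
def watts_strogatz(n=20, k=4, p=0.1):
    edges = set()
    for i in range(n):                       # ring lattice: k nearest neighbours
        for j in range(1, k // 2 + 1):
            edges.add(tuple(sorted((i, (i + j) % n))))
    rewired = set()
    for (a, b) in sorted(edges):
        if random.random() < p:              # rewire endpoint b uniformly at random
            choices = [c for c in range(n)
                       if c != a and tuple(sorted((a, c))) not in rewired]
            b = random.choice(choices)
        rewired.add(tuple(sorted((a, b))))
    return rewired

g = watts_strogatz()
print(len(g))  # near n*k/2 = 40 edges (rare collisions can merge a few)
```

At p = 0 the graph is a pure lattice (high clustering, long paths); at p = 1 it is essentially random; small p gives the small-world regime with both short paths and high clustering.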

LLM, SSM, CNN, Attention, Transformer, Mamba. LLM -> Transformer -> Attention. Structured State Space Models (non-attention) -> Convolution. Mamba -> dynamic SSM parameters, enabling content-based reasoning (non-linear, feature extraction).

Dirac delta function. Useful as an approximation for a tall narrow spike function, namely an impulse. Has the value zero everywhere except at x = 0, where its value is infinitely large and is such that its total integral is 1.

Ramsey theory. How big must some structure be in order to guarantee that a particular property holds?

Gestalt shift: experiencing an image or entity in one way and then in a different way. A famous example of such a shift is the duck/rabbit (Wittgenstein.)

Ethology is the scientific study of animal behavior mostly in their natural env.

Protention (phenomenology, perception, in Husserl) the moment that has yet to be perceived. "Husserl uses the terms protentions and retentions for the intentionalities which anchor me to an environment. They do not run from a central I, but from my perceptual field itself, so to speak, which draws along in its wake its own horizon of retentions, and bites into the future with its protentions..." Maurice Merleau-Ponty.

Unless it happens to change my mind.

See also the research standpoint (*).
Simple nature explorations (**).

Life is an "emergence" but as it appears to us now. But really it was a long ongoing process. Cell is a state of equilibrium. Environment is agent's up-bringer.
Alive is a too "overrated" concept; use caution when saying "alive agent." Its primary properties just are. The analogy of the water drop, or a bubble. We are physical instances of the environment and, more than that, endowed with properties of the Universe's constraints. Agents are endowed at least with the Universe's properties. An agent, at the minimum, is what we describe by observing. Observing is close to a so-named "consciousness" state. This too is a long ongoing feature - not an emergence, as it seems to us, in the now. Once something is described, it is part of the culture. One cell (mind) is more powerful for single tasks than more cells (minds), unless there is a strong motive to operate otherwise, in groups (networks).
From there on, from far above, it looks as if there is a mechanism of further evolution. Of a system that resides on much still-available, non-entropic energy forces. Stigmergic forces. Entropy and order battle, maybe because of just local properties of a flux-state Universe. A locality comprised of some side effects. E.g. between two strange attractors, in a fractal foam of space properties. An ongoing mechanism that to some extent we can probe through: vibrations, chirality, culminating in forms, ever more abstract formalization and ultimately mathematics.
Cybernetics, the modelling of the abstract, of mathematics, is pushing its models to a refined capture of our representational spaces (which can't be that alien, as we are instances of the environment.) Mathematics is both there and also discovered. It is there because we operate with the concepts to fit representations, and it is discovered because we can also construct representations (poetry also helping.)
From cells to multi-cell systems. To more complexity. The constant outcome is still driving us to create further. Consciousness of the biological is an emergence of biological complexity. It gives its senses further representations. With increasing complexity there is less chance of a causal nexus - just some psychological need for it. The human body is like any other biological entity: a mixture of chemicals, an array of functions and structures able to navigate physical space. An IO system, efficient at staying in equilibrium. Shape ends up as a concept. Intelligence comes within the complex system that can play with the representations.
We have the actual Universe, the one we can represent, within ourselves. We can't be a square water drop either. On top of that, language formalizes those properties. The language of thought, I think, is the first formalization system of our culture. Common concepts united us. Knowledge grew in a doxastic-credal way. Brains grew for some environmental purpose (mainly social ability, complexity in solving societal intricacies). Language came to convey more detailed situations, to self-express (for whatever reasons). Language is a primary formalization system contained within the brain's intelligence. And it grew, and grew in a way that was thus endowed for more abstract thinking (layers not always as a necessity to solve a natural purpose, but as a side effect of growing: growing in capacity, more connections, more layers, more self-expression, more language, more concepts or justifications against senses, meaning, more abstract constructs' justifications.) More because the changing environment required it. And we came to change the environment ourselves. And the neural network layers were stacked. An action not pre-programmed by the organism, that stacked and gave rise to a multidimensionality of the brain. That would also give rise to maybe illusory concepts as self-referential representations: language, "consciousness", "free will". Brain like a walnut: wrinkles and internal layers.
A child's mental development proceeds in clearly sensible stages. There is a "visible" jump, an emergent movement of thinking from one level to the next, that happens over intervals of months or years.
Humans contributed to the language and concept formalization space over millennia (concepts shared and borrowed through movement, migration.) It exists like a sphere we keep going to, and adding to. Very rarely can one add to it now. Maybe our peak in this energy force is that we developed the tools that can further add, while we alone can no longer add to the sphere. The sphere, the global network of thoughts, is accessible by way of systems like today's LLMs (they unite, stitch some ends, yes, but are not yet capable in a sentient-ial way).
Logic and language are the same. Free will is a matter of resolution, depending on which space we are operating in and with which representations. And almost all things are a matter of resolution and framing.
We may be able to formalize a desired reality or concept with the models of mathematics, computationally, with the first formalization, but it can never touch it all, exactly. The universe is computational, but not in its totality.
Mind is not the brain. Mind is a "whole": senses, body, the Universe's properties, and abstract representations.
Concepts lose details. Concepts overlap. Some yield good outcomes; some are dead ends, causes for discontent, auto-regressive spaces, impossibility of advancement.
No sharp image is needed; a vague idea (like in the Explorations) is most of the time sufficient for advancement.
Here, on these assumptions, we hope to bring some clarity, disentanglement, demystification, and then to repack and frame better, all in a digital sentient (multi)instance(s). It is the journey that counts. And we acknowledge the auto-regressiveness of a strong formalization (in any language space): unless strongly framed (specific glasses, but do not look too far!) we should also leave things a bit open; we also seek to pose good questions; we are not definitive and do not encourage strict vector spaces or beautiful-looking webs for the sake of acceptance, roundness, attachment to a certain system (*).

L. Wittgenstein: he tried and beautifully compressed it all, then realized that that is not all there is, that things are open and that "meaning is use." Explore further our Wittgenstein @ EM website.

B. Spinoza: "men are conscious of their desire and unaware of the causes by which [their desires] are determined."

K. Popper: Demarcation problems. Not sufficient to shape proofs intelligibly or to discover them efficiently.

J. Schmidhuber: envisioned a digital pal at an early age, and an entire framework from then on.


- Exactly linked articles. At this stage not that relevant, given the writing's self-expression-as-much-as-possible character.

Mathematical formalism

The strong formalization of the ML terms does account for their survival and usefulness. There have been many cases where AI researchers looked back and used what was invented many years earlier. Thus, there is not much here to be discarded even though some terms are "old". On the whole, all these mathematically formalized ML algorithms may serve a purpose in a greater schema of different synthetic and hybrid kinds (of AGI, ASI, MSI (Machine Super Intelligence), etc.)

Neural Nets & Structs

Latent space. A multi-dimensional space that encodes a meaningful internal representation of externally observed events. Its dimensionality is chosen to be lower than that of the feature space from which the data points are drawn, making the construction of a latent space an example of dimensionality reduction / data compression. Tools eg. Word2Vec.

SCM/SEM. Structural causal models also known as (nonparametric) structural equation models (SEMs), are widely used for causal modeling purposes. In particular, acyclic SCMs, also known as recursive SEMs, form a subclass of SCMs that generalize causal Bayesian networks to allow for latent confounders. Confounding Variable: eg. coffee drinkers may smoke more cigarettes than non-coffee drinkers.

SGD. (Stochastic) Gradient Descent. (Recursively) as if walking downhill through fog: look at the steepness of the hill at the current position, then proceed in the direction of steepest descent. (See Cauchy's gradient descent, Deep Networks.)
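The fog metaphor is literal in code: each step sees only one random data point, a noisy glimpse of the loss surface. A minimal sketch (illustrative learning rate and noiseless line-fitting data):

```python
import random
random.seed(4)

# Stochastic gradient descent fitting y = w*x + b by least squares,
# one randomly chosen sample per step.
data = [(x, 3.0 * x + 1.0) for x in range(-10, 11)]  # ground truth: w=3, b=1
w, b, lr = 0.0, 0.0, 0.001

for _ in range(20000):
    x, y = random.choice(data)   # a single noisy glimpse of the hill
    err = (w * x + b) - y        # prediction error on that sample
    w -= lr * 2 * err * x        # gradient of err^2 w.r.t. w
    b -= lr * 2 * err            # gradient of err^2 w.r.t. b

print(w, b)  # close to the true 3 and 1
```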

Backpropagation. Calculate the necessary parameter adjustments, to gradually minimize error.
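A minimal backpropagation sketch: a single sigmoid neuron trained on the OR function (illustrative learning rate and epoch count). The forward pass computes a prediction; the backward pass applies the chain rule to push the error derivative onto each weight.

```python
import math, random
random.seed(5)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# OR truth table as training data.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)  # forward pass
        d = (p - y) * p * (1 - p)               # dLoss/dz for squared error
        w[0] -= 0.5 * d * x1                    # backward pass: chain rule
        w[1] -= 0.5 * d * x2
        b -= 0.5 * d

preds = [sigmoid(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print([round(p, 2) for p in preds])  # low for (0,0), high for the rest
```

In a deep network the same chain rule is applied layer by layer, which is what makes the adjustment of millions of parameters tractable.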

Reinforcement Learning differs from supervised learning in not needing labelled input/output pairs to be presented. A balance between exploration (of uncharted territory) and exploitation (of current knowledge).

MDP (Markov decision process) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. (see Heuristics/MCTS.)
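Value iteration, the classic solution method for an MDP, can be sketched on a toy three-state chain (all state names, rewards, and the discount factor are illustrative): repeatedly back up the best action value until the state values converge.

```python
# Value iteration on a toy MDP: a chain s0 -> s1 -> s2, where "go" advances
# (reward +1 on reaching the goal s2) and "stay" does nothing.
gamma, n = 0.9, 3
V = [0.0] * n

def q(s, a, V):
    if s == n - 1:                # absorbing goal state: no further value
        return 0.0
    if a == "go":
        r = 1.0 if s + 1 == n - 1 else 0.0
        return r + gamma * V[s + 1]
    return 0.0 + gamma * V[s]     # "stay": no reward, discounted same state

for _ in range(100):              # Bellman backups until convergence
    V = [max(q(s, "go", V), q(s, "stay", V)) for s in range(n)]

print([round(v, 3) for v in V])   # [0.9, 1.0, 0.0]
```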

Neuroevolution describes the application of evolutionary and/or genetic algorithms to training the structure and/or weights of neural networks, gradient-free (forward passes only, no gradients.)

LSTM. Long Short-Term Memory. Gated memory cells allow these networks to capture and remember information over longer sequences, making them well-suited for tasks involving sequential data, such as natural language processing, speech recognition, and time-series prediction.

RNN. Recurrent Neural Network.

FFNN. Feed-Forward Neural Network.

CNN. Convolutional Neural Network.

TGCN. Tensor-graph convolutional networks, which identify highly non-linear associations in data, combine multiple relations, and scale gracefully, while remaining robust and performant.

LNN. Liquid Neural Network. By analogy with "liquid" brains: networks that lack stable connections and static elements, a category that includes ant and termite colonies, immune systems, and some microbiomes and slime moulds.

PINN. Physics-informed neural networks.

(P)LNN. (Probabilistic) Logical Neural Networks.

ResNet (Residual Network) is a deep learning model used for computer vision applications. It is a Convolutional Neural Network (CNN) architecture designed to support hundreds or thousands of convolutional layers.

PCN (Predictive Coding Networks): classification/regression networks with key advantages compared to standard neural networks, while still approximating the backpropagation algorithm.

Mechanistic interpretability. Tries to grasp, by analogy with a working system's processes, the mechanisms of neural networks. An attempt to reverse engineer deep neural networks' ways of working.

MCTS. Monte Carlo Tree Search: balances the exploitation of deep variants after moves with a high average win rate against the exploration of moves with few simulations. A balance between "explore there" and "exploit the node with the highest win rate."
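The selection rule that implements this balance in MCTS is UCB1. A minimal sketch on a three-armed bandit (the hidden win rates are illustrative): average win rate plus an exploration bonus that shrinks as a move is simulated more often.

```python
import math, random
random.seed(6)

# UCB1 selection: pick the child maximizing mean win rate + exploration bonus.
true_win = [0.2, 0.8, 0.5]   # hidden win rates of three candidate "moves"
wins = [0.0] * 3
visits = [0] * 3

for t in range(1, 3001):
    def ucb(i):
        if visits[i] == 0:
            return float("inf")  # try every child at least once
        return wins[i] / visits[i] + math.sqrt(2 * math.log(t) / visits[i])
    i = max(range(3), key=ucb)
    wins[i] += 1.0 if random.random() < true_win[i] else 0.0
    visits[i] += 1

print(visits)  # the best move (index 1) accumulates the most simulations
```

In full MCTS this rule is applied at every node of the search tree during the selection phase, followed by expansion, simulation, and backup.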

MARL. Multi-agent reinforcement learning.

Manifold learning is an approach to non-linear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high.

Hebbian learning encapsulates the idea that “cells that fire together wire together.”
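A minimal Hebbian update sketch (toy correlated/uncorrelated inputs, illustrative learning rate): a weight grows only when pre- and post-synaptic activity coincide.

```python
import random
random.seed(7)

# Hebbian rule: w_i += lr * x_i * y. Input 0 fires together with the
# output; input 1 is independent noise, so only w[0] should strengthen.
w = [0.0, 0.0]
lr = 0.01
for _ in range(1000):
    teacher = random.choice([0, 1])
    x = [teacher, random.choice([0, 1])]  # x[0] correlated, x[1] random
    y = teacher                           # post-synaptic activity
    for i in range(2):
        w[i] += lr * x[i] * y             # "fire together, wire together"

print(w)  # w[0] clearly larger than w[1]
```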

Cluster analysis is a statistical method for processing data. It works by organizing items into groups, or clusters, based on how closely associated they are.

Generative AI

RAG takes an input and retrieves a set of relevant/supporting documents given a source.

Hardware implementations & frameworks.

GPU. Houses thousands of cores running simultaneously. It has an instruction set optimized for massive amounts of floating-point math instructions. Multitasking, points, vectors, matrices, geometry, parallelism. CUDA/OpenCL/... are APIs to the GPUs that allow developers to access the raw computing power of GPUs to process data faster than with traditional CPUs.

cuDNN. NVIDIA's GPU-accelerated deep learning framework of CUDA. Provides highly tuned implementations for standard routines such as forward and backward convolution, attention, matmul, pooling, and normalization.

Math for description of the state of a physical system.

SDE/ODE/PDE. Stochastic differential equations are used to model the evolution of random diffusion processes across time. An ODE is a differential equation dependent on only a single independent variable. PDEs vs. ODEs: ODEs involve derivatives in only one variable, whereas PDEs involve derivatives in multiple variables.

MFPM/MIPS. Mean-field particle methods are a broad class of interacting type Monte Carlo algorithms (a randomized algorithm whose output may be incorrect on a small probability) for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation.(see MCTS).

IPS (interacting particle system). A stochastic process on some configuration space given by a site space, a countably-infinite-order graph and a local state space, a compact metric space. 

Banach space (Hilbert space, Topological space). A kind of generalization of real n-dimensional Euclidean space of vectors. Hilbert space is a Banach space whose norm is determined by an inner product.

Wasserstein space (Metrical spaces - have algebras of sets within with a measure function (eg. Borel measures)). A metric space, of a metric structure (the Wasserstein distance) in the space of probability measures P(X) on a space X.

Jacobian matrix can also be thought of as describing the amount of "stretching", "rotating" or "transforming" that the function imposes locally near that point. Used when making a change of variables when evaluating a multiple integral of a function over a region within its domain. Used to determine the stability of equilibria for systems of differential equations by approximating behavior near an equilibrium point.

Hessian. It describes the local curvature of a function of many variables.

Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space.

Lagrangian mechanics is used to analyze the motion of a system of discrete particles each with a finite number of degrees of freedom.

Riemannian metric. An inner product on the tangent space at each point that varies smoothly from point to point.

Dyadic product takes in two vectors and returns a second order tensor called a dyadic.

Lorentz group expresses the fundamental symmetry of space and time of all known fundamental laws of nature.

Lorenz attractor is a set of chaotic solutions of the Lorenz system (ODE.) Slightly different initial conditions evolve in phase space in ways that never repeat, so the system, though deterministic, is unpredictable.

Hamiltonian system is a dynamical system governed by Hamilton's equations. In physics, this dynamical system describes the evolution of a physical system such as a planetary system or an electron in an electromagnetic field.

Weierstrass function. Roughness everywhere: like a Wiener process path, continuous everywhere but differentiable nowhere. Differentiability is not guaranteed by continuity.

Slater determinant is an expression that describes the wave function of a multi-fermionic system. It satisfies anti-symmetry requirements, and consequently the Pauli principle (Quantum.)

Mahalanobis distance is a measure of the distance between a point and a distribution.
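For a diagonal covariance the Mahalanobis distance reduces to measuring each coordinate's deviation in units of its own standard deviation. A minimal sketch (function name and numbers are illustrative):

```python
import math

# Mahalanobis distance with a diagonal covariance matrix: each axis is
# rescaled by its own variance before combining the deviations.
def mahalanobis_diag(point, mean, variances):
    return math.sqrt(sum((p - m) ** 2 / v
                         for p, m, v in zip(point, mean, variances)))

# A point 2 standard deviations away on each axis:
d = mahalanobis_diag(point=(4.0, 10.0), mean=(0.0, 0.0), variances=(4.0, 25.0))
print(d)  # sqrt(2^2 + 2^2) = 2.828...
```

Unlike the plain Euclidean distance, this accounts for how spread out the distribution is along each direction; the general case uses the full inverse covariance matrix.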

Abstract math

Group theory. A group is central to abstract algebra: the study of algebraic structures, such as rings, fields, and vector spaces, that are groups endowed with additional operations and axioms.

Category theory is a branch of mathematics that deals with the study of abstract structures and relationships between them. It provides a framework for understanding mathematical structures by focusing on the mappings between objects rather than the objects themselves. Deep learning architectures can be viewed as composed of various layers and operations, each with its own properties and transformations. Category theory provides a language for describing and analyzing these transformations in a unified way.

Cohomology associates a smooth manifold with an algebra. A general term for a sequence of abelian groups, usually one associated with a topological space. Homology itself was developed as a way to analyse and classify manifolds according to their cycles - closed loops (or more generally submanifolds) that can be drawn on a given n-dimensional manifold but not continuously deformed into each other.

Path integrals. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude.

Tensor. A multilinear mapping over a set of domain vector spaces to a range vector space; a multilinear relationship between sets of algebraic objects. In ML, e.g. Bayesian Clustered Tensor Factorization to model relational concepts. In CNNs, tensor methods organize neural network weights in a "data tensor", analyze and reduce the number of neural network weights, and also use 4D kernel tensors.
General relativity is formulated completely in the language of tensors.

Representation theory is a branch of mathematics that studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies modules over these abstract algebraic structures. A representation makes an abstract algebraic object more concrete by describing its elements by matrices and their algebraic operations.

Langlands program a kind of grand unified theory of mathematics, is a web of far-reaching and consequential conjectures about connections between number theory and geometry. Langlands correspondence is a duality between objects of two different kinds (like particle-wave duality in quantum mechanics). The objects on the two sides depend on two different groups, called Langlands dual groups.

Galois theory provides a connection between field theory and group theory. This connection allows reducing certain problems in field theory to group theory.

Diaconescu's theorem, states that the full axiom of choice is sufficient to derive the law of the excluded middle or restricted forms of it.

Continuum hypothesis: any subset of the real numbers is finite, countably infinite, or has the same cardinality as the real numbers. Its independence reveals a major limitation of the current axioms.

P vs NP problem. Can every problem whose solution can be quickly verified also be quickly solved? (nondeterministic polynomial time, computational complexity theory, PSPACE...)
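The verify/solve asymmetry in miniature, using Subset Sum (the instance is illustrative): checking a proposed certificate is a linear-time sum, while the only obvious way to find one is to search through exponentially many subsets.

```python
from itertools import combinations

# Subset Sum: does some subset of nums add up to target?
nums, target = [3, 34, 4, 12, 5, 2], 9

def verify(certificate):
    # Fast check: a proposed solution is verified in linear time.
    return sum(certificate) == target

def solve():
    # Brute force: try all 2^n subsets until one verifies.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if verify(subset):
                return subset
    return None

s = solve()
print(s)  # a subset summing to 9, e.g. (4, 5)
```

P = NP would mean that, for every such problem, the solving side can be done about as quickly as the verifying side; no one has proved this either way.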

Sheaf theory is a mathematical framework for studying local-to-global properties of mathematical objects. Deals with the idea of "local data" and how it can be consistently patched together to form "global data." Applications in algebraic geometry, topology, and differential geometry, and forms the basis in modern mathematics. Topoi can be seen as categories of sheaves on (generalized) spaces. (see Grothendieck.)

Topos theory explores the concept of "topoi" (plural of topos), which are mathematical structures that generalize the notion of a category. A topos can be thought of as a highly flexible framework for doing mathematics, encompassing ideas from logic, set theory, and topology. A topos is a category that has both a spatial and logical structure, allowing for the expression of logical propositions and deductions within it. Topos theory, extending beyond sheaf theory, provides a more holistic and abstract framework. 

© 2023-2024 Essentia Mundi. All rights reserved.
