AI to AGI Research Lab.
in Sibiu, by C. Stefan

The portal of the A.(G.)I. Research Laboratory at Essentia Mundi.
"An Agent would be of use only as part of a certain system. If I surround it with the right kind of adjacent cases it becomes better aligned and naturally belonging in that particular environment."

Latest News and Articles

Physical constraints to psychological needs. Science as a need.
C. Stefan 13-Dec-2024

Following the view of consciousness, I ponder whether its starting point follows from a milieu of physical constraints. Read more...

A view to Consciousness.
C. Stefan 01-Jan-2024

No causal nexus, only a psychological need. And that is all there is to it. Read more...

Ideological Dreaming Agents. Humans and machines.
C. Stefan 14-Dec-2023

Not that there is an ontological need for a world representation; rather, it is of an ideological kind. Read more...

An AGI agent with intents, navigating a culture
C. Stefan 01-Dec-2023

Intents and objectives that allow basic navigation of an environment arise in opposition to the death drive. Read more...

What is Q-learning
C. Stefan 23-Nov-2023

The "Q" in Q-learning stands for quality, and it represents the quality of an action in a given state. Read more...

AGI with LLM as layer and environment that has "weak" agency
C. Stefan 20-Nov-2023

I would go a step beyond, and add that the environment should also have basic cognitive characteristics. Read more...

Articles

Our articles around AI and AGI research, updated on the fly.

Zooms, infinities, scales, refinements

Zooms, infinities, refinements, perspectives, scales, degrees of compression.

The new generative layer

The CPU is a generative layer for the OS; the OS is a generative layer for apps.

Climbing ladders

Finding a way to navigate from bottom to top through ladders.

Graduality and emergence

A child makes gradual but conspicuous cognitive elevations: clear jumps to "higher" levels. This launches the idea that the x-th emergent level of a complex system cannot completely explain the (x-n) levels by itself. No causal nexus between levels.

The Gaia complex organism

Current empirical methods to address climate issues treat the symptoms. How do the sub-systems regulate themselves? What are we in the Gaia complex system?

The world view

I am this instance, thus I see what this instance can see. Is there more to it? Why do we compress, and why is there a need for formalization? Noise and order.

Towards a de-anthropomorphization view

An exercise in viewing entities not as themselves. See beyond the blankets. "I" do not think; the collection of cells emerged into something we see as such.

The cognitively endowed environment

The environment should also have some priors in the space of basic cognitive abilities.

What kind of attractors are out there

From chirality to stigmergy. To the attempt at controlling chaos. To default emergences.

Twins, learning, formalization & AI

I always wonder about entities having roughly the same brain structure, like twins.

Is there a causal nexus certainty?

Spuriousness, inference, correlation and the causal nexus. Uncertainty.

Minimization of surprise is not beneficial

Working on negative outcomes leads to system decay. We need to maximize surprise.

The better language engineering approach

A gradual bottom-up approach, a "game" with the agent and (or) the environment.

A transformation of the Big Other

As we march towards a "cognitive closure" there will be a transformation of the divine observer.

The hard problem of NLP encapsulation

Language as a first formalization method about our worlds / mental representations.

A survey of emergences

A survey of complex self-organizing systems and emergent patterns. Seeing things from the very low perspective upward: single cells, multi-cells, networks, specialized networks.

What would AGI look like

AGI in the next few years, some say. But what would it look like when we see it? Probably evolving along with us, maybe hardly recognizable from the perspective of the now, more a natural integration with our culture, visible and arguable only from a historical perspective.

Stochastic processes with Fourier transform

How to probe at the unknown, to allow discovery beyond the probability distribution.

An action from a vertical distance

That feeling that you surprise yourself with just what you said. We store the compressed representation in the lower levels of the network (training); we talk from the lower levels (decompression). You can try to think about it consciously (there is a trace of it), but very often there is a delay in retrieving it, and sometimes it is completely or temporarily forgotten.

A Thinking exercise: that we are not Thinking

In the folk-psychology sense of the word, do we really think? When we think about cells, which collectively assembled to form more complex organisms, what was the organ they created going to handle? What was the (original) purpose of the brain?

Embracing Wittgenstein's Nonsense: The Creative Exploration in Philosophical Inquiry and its Implications for AI Advancement through LLMs

One cannot draw an immediate parallel between Wittgenstein's "talking nonsense" and the characteristics of current LLMs. But "confabulation" should be embraced.

Our A.(G.)I. Experiment(s)

The Snowball layers

The layers of the artificial entity.

The O-Model, a first proposal

The bootstrapping world of the entity. Introducing the first theoretical framework of the experimental O-Model: a basic world Ontology of an AGI Agent with Occam's Razor and Wittgenstein's Tractatus principles.

The Environment

The medium of the Agent.

The instance of our AGI

Implement, start, refine.

References

Occam's Razor. The minimal set of properties with which you can infer a new concept. (The Glove analogy by EM-AI-AGI 01.12.2023). The simplest possible theoretical explanation for existing data (heuristics). Future data often support more complex theories than do existing data (pick a frame of reference).

Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.

Kolmogorov probability. The impossible event ("nothing") is not the only event with probability 0.

Markov chain. What happens next depends only on the state of affairs now.
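
A minimal sketch of the Markov property with an invented two-state weather chain; the transition probabilities are illustrative assumptions:

```python
# A tiny two-state Markov chain ("sunny"/"rainy"): the next state depends only
# on the current state. The transition probabilities are illustrative.
import random

P = {"sunny": {"sunny": 0.8, "rainy": 0.2},
     "rainy": {"sunny": 0.4, "rainy": 0.6}}

def next_state(current):
    # sample the next state using only the current state's transition row
    states, probs = zip(*P[current].items())
    return random.choices(states, weights=probs)[0]

chain = ["sunny"]
for _ in range(10):
    chain.append(next_state(chain[-1]))
print(chain)
```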

Falsifiability. A theory or hypothesis is falsifiable (or refutable) if it can be logically contradicted by an empirical test. (see verifiability).

Philosophical understanding is non-theoretical; it is a process of gaining clarity, dissolving confusion, not constructing new theories.

It is our picture of the world that is constitutive for our convictions and our language.

In linguistics, surveyable representation can be used to gain an understanding of a population's language and communication patterns. Paleo-linguistics comes as subdomain.

Learning is influenced by both external and internal factors.

Deontology focuses on the action itself, regardless of its consequences.

We try to fit mathematical models to our senses. And sometimes the other way around.

Turing machines and the universal computer. On Computable Numbers, with an Application to the Entscheidungsproblem.

Doxastic/Credal. As in: the Bible was first doxastic, then credal.

Alignment. If AI rises, how do we make it at home in our world? Grow it here, and with empathy.

Causal nexus. There is usually no causal nexus, only the psychological need for one.

Compositionality. The meaning of a phrase is determined by the meanings of its parts and their syntax.

Chirality. Not superimposable on its mirror image; not really mirror-symmetric.

Embodied cognition. Acting with a physical body on an environment in which that body is immersed.

Reasoning. Deductive reasoning (propositional, First Order logic) and inductive reasoning (data based, Bayesian). Abductive reasoning is a form of reasoning that involves forming the best explanation for a set of observations or evidence. Modal logics are used for reasoning about necessity and possibility. Temporal logics are employed for reasoning about time and temporal relationships. This includes reasoning about sequences of events, causality, and the temporal order of actions. Probability logics and Bayesian reasoning are used to handle uncertainty and make decisions based on probabilities. Fuzzy logic also falls into this category, as it allows for reasoning with degrees of truth. Epistemic logics are used for reasoning about knowledge and belief. This is important in fields such as artificial intelligence, game theory, and philosophy. Deontic logics are employed for reasoning about obligations, permissions, and prohibitions. This is relevant in ethics, legal reasoning, and normative systems. Non-classical logics, such as intuitionistic logic and paraconsistent logic, challenge classical assumptions and provide alternative approaches to reasoning. Intuitionistic logic, for example, is associated with constructive reasoning. Game-theoretic logics, such as dynamic epistemic logic, are used to model and reason about strategic interactions and information exchange in multi-agent systems.
Analogical reasoning. What is known about one situation can be used to infer new information about another.

Postulates (axioms) are true statements without need of proof and are the basic structure from which lemmas and theorems are derived (that need proof).

Pragmatism. An ideology or proposition is true if it works satisfactorily, and it is agreed upon.

Meaning is use. The idea that words (and more) do not have a fixed meaning attributed by some higher order or entity. It is just as it is: the meaning of a word is its use in the language (more deeply, also in a culture). A bottom-up symbolism. A game of acquiring meaning.

Investigations (PI). Use of language in various contexts, language games, and the fluid, context-dependent nature of meaning.

Tractatus (TLP). Statements that do not correspond to possible states of affairs are nonsensical and should be excluded from meaningful discourse.

TLP->PI. What is tragic in TLP vs. PI? The refutation of old views. But isn't there an evolution (be it cognitive) behind it? That the tree only bends is not tragic; that it breaks is (everyone insists on what a tragic event that is, and so it lands in the history books). But I always add the perspective: from A to B there are shades of grey. For me it is fairer to say TLP->PI (evolution).

Abelian Group. The group operation is commutative.

Demarcation Problem. Not a problem if one acknowledges grey systems. An agent with Tractatus, as base ontology, would entail a better boundary though. (also related to Doxastic / Credal positions through time/culture, GTI)

Multi-armed Bandit Problem. Related to the exploration-exploitation tradeoff; Bayesian methods. The allocation of "luck" is not known in advance, only learned through tries/time. E.g., which slot machines to play, how many times to play each machine, in which order to play them, and whether to continue with the current machine or try a different one.
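
A hedged epsilon-greedy sketch of the problem; the hidden payout probabilities and epsilon are illustrative assumptions, not a prescription:

```python
# Epsilon-greedy sketch for the multi-armed bandit problem: the "luck" (payout
# probability) of each machine is unknown and is estimated only through tries.
# The payout probabilities below are illustrative assumptions.
import random

true_payout = [0.2, 0.5, 0.7]        # hidden Bernoulli reward probabilities
counts = [0, 0, 0]
estimates = [0.0, 0.0, 0.0]
epsilon = 0.1

for t in range(10_000):
    # explore with probability epsilon, otherwise exploit the best estimate
    if random.random() < epsilon:
        arm = random.randrange(len(true_payout))
    else:
        arm = max(range(len(true_payout)), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # incremental mean update of the arm's estimated value
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)   # should approach [0.2, 0.5, 0.7]
```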

Ceteris Paribus. Vary one variable and hold the other things constant. Isolate the effect; see the impact.

Fourier transform. But in higher dimensions, eg. the Fourier transform on a non-abelian group takes values as Hilbert space operators. (see Pontryagin duality).

Laplace transform. An integral transform which takes a function and maps it from the time domain into the frequency domain. It transforms ordinary differential equations into algebraic equations and convolution into multiplication. Cochlear inputs (from the ear), which first start as space-vs-time displacements on the ear drum, end up as frequency-vs-amplitude plots in the primary auditory cortex.

Principle of least action (Hamilton's, Maupertuis's). The path that has the least change from nearby paths. The action is the difference between kinetic energy and potential energy integrated over time. This difference is called the Lagrangian.

PDEs vs. ODEs. ODEs involve derivatives with respect to only one variable, whereas PDEs involve derivatives with respect to multiple variables.

Additive number theory. Regularly structured superset property as seen from small sumsets (EM-AI 12.12.23). A set with a small sumset must be contained by a larger set whose elements are spaced in a highly regular pattern. (see Groups, Marton).

Teleonomy. The study of ends or purposes. The body's functions have the purpose of assuring the survival of the organism.

Adiabatic. A process without transfer of heat to or from a system, adiabatically isolated (in Thermodynamics).

Assembly theory. Search for biosignatures, life. Only living systems can produce complex molecules that could not form randomly in any abundance.

Onicescu's Informational Energy (vs. Shannon's entropy). A positive quantity related to the uncertainty of a random variable: informational energy is strictly convex and increases when randomness decreases (the opposite behavior of entropy).
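
A small sketch comparing the two quantities on illustrative distributions, using the usual formulas E(p) = sum_i p_i^2 and H(p) = -sum_i p_i log2 p_i:

```python
# Onicescu's informational energy E(p) = sum_i p_i^2 versus Shannon entropy
# H(p) = -sum_i p_i log2 p_i: as randomness decreases, E increases while H decreases.
# The example distributions are illustrative.
import math

def informational_energy(p):
    return sum(pi ** 2 for pi in p)

def shannon_entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]       # maximal randomness
peaked = [0.85, 0.05, 0.05, 0.05]        # low randomness

for name, p in [("uniform", uniform), ("peaked", peaked)]:
    print(name, "energy:", round(informational_energy(p), 3),
          "entropy:", round(shannon_entropy(p), 3))
```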




Meta-learning for compositionality (MLC) approach for guiding training through a dynamic stream of compositional tasks. (Brenden M. Lake & Marco Baroni.)

Self-Assembling Artificial Neural Networks through Neural Developmental Programs - we take initial steps toward neural networks that grow through a developmental process that mirrors key properties of embryonic development in biological organisms. (Elias Najarro, Shyam Sudhakaran, Sebastian Risi.)
Also: Bio-inspired approaches to adaptive agents. (Joachim Winther Pedersen)

Moral foundations theory (MFT) is a psychological assessment tool that decomposes human moral reasoning into five factors, including care/harm, liberty/oppression, and sanctity/degradation.
(Marwa Abdulhai, Gregory Serapio-Garcia et al.)

Unentscheidbarkeit / Undecidability / Incompleteness / Undefinability (of truth) is a problem at the threshold of definability of formal thought against the analog. The level of the formalism still acknowledges the dx/dt approximation. A formal system is an abstract structure or formalization of an axiomatic system used for inferring theorems from axioms by a set of inference rules. Undecidability is the analog, in the theory of computation, of the unphysical infinities. (See the Goldbach conjecture, the axiom of choice.)

Halting Problem & Gödel prospects. Symbolic / algorithmic is not getting there from within its system in the sense of Gödel's incompleteness theorem. The frame of reference carries its own meaning, unprovable. (Maybe an oracle advises.) Tarski's undefinability: arithmetical truth cannot be defined in arithmetic. 

Heuristics as decisions that involve overlooking logical errors. Out-of-distribution decisions.

Psychological essentialism. Makes members of a kind "the thing that they are" without our knowing what the underlying essential features are.

Unconscious irrational desires. To underline the view here that we are not a final product but a temporary baseline against an environment. One can, but is not required to, accede to the symbolic.

Social constructionism: our reality is formed through continuous interactions and negotiations among society's members.

Ontology, asks what exists, and epistemology asks how we can know about the existence of such a thing. Internalism and externalism in epistemology, as in the internal and external factors that make an observer.

Epistemic relativism. The claim that we do not know or justifiably believe anything absolutely; justification depends on the rules used to justify a belief.

Epistemological skepticism. Differentiated in terms of the areas in which doubts are raised: toward reason, toward the senses, or toward knowledge of “things-in-themselves”.

Justified true belief. Only in the game of mathematics (induction). We do not hold any guaranteed true beliefs, yet we claim to know things and to be understood (that happens in a zoomed context, a frame of reference, as part of the game, of rules we sometimes acknowledge de facto).

Biomimetics is an innovative design concept that draws inspiration from nature and its elements and processes to solve complex human problems.

Ecological psychology as the concept of organism-environment coupling, prior to the neural modelling that occurs afterwards. Ecological theory in psychology concerns the two-way interaction of people with their environments.

The grokking phenomenon, in which the training loss of a neural network decreases much earlier than its test loss, can arise from the network transitioning from lazy training dynamics to a rich, feature-learning regime.

Gestalt psychology. The meaning of the perception, which is of a higher order. The multi-modal Markov blanket of a complex system. People experience things as unified wholes.

Markov blanket. A statistical boundary that separates two sets of states. Markov blankets exist at all scales of a system.

Bayes' theorem gives the probability of an event based on prior knowledge of conditions that might be related to the event. Decision making under uncertainty.
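
A one-variable sketch of the rule P(H|E) = P(E|H) P(H) / P(E); the disease-test numbers are illustrative assumptions:

```python
# Bayes' theorem for a single binary hypothesis: P(H|E) = P(E|H) P(H) / P(E).
# The disease-test numbers are illustrative assumptions, not data from the text.
p_h = 0.01              # prior: probability of the condition
p_e_given_h = 0.95      # likelihood: test is positive given the condition
p_e_given_not_h = 0.05  # false-positive rate

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # total evidence
p_h_given_e = p_e_given_h * p_h / p_e                   # posterior

print(round(p_h_given_e, 3))  # ~0.161: prior knowledge strongly shapes the result
```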

AIXI. Reinforcement learning (RL) agent. It maximizes the expected total rewards received from the environment.

Folk psychology, folk physics, the meaning in everyday use. 

Commonsense ontology, the encapsulation of palpable meaning. Commonsense knowledge.

Phenomenology. My conscious view; intentionality.

Homeostatic, allostatic. Internal equilibrium, stability during change.

Lotka–Volterra. The dynamics of biological systems in which two species interact, one as predator and the other as prey, described through differential equations.
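
A minimal numerical sketch of the equations dx/dt = a x - b x y and dy/dt = d x y - c y, with illustrative coefficients and initial populations:

```python
# Lotka-Volterra predator-prey equations integrated numerically with SciPy.
# dx/dt = a*x - b*x*y (prey), dy/dt = d*x*y - c*y (predator).
# The coefficients and initial populations are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.1, 1.5, 0.075

def lotka_volterra(t, z):
    x, y = z
    return [a * x - b * x * y, d * x * y - c * y]

sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0], t_eval=np.linspace(0, 50, 500))
print(sol.y[0][-5:])  # prey population oscillates
print(sol.y[1][-5:])  # predator population oscillates with a phase lag
```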

Active Inference. Free energy principle. Perception, planning, and action in terms of probabilistic inference, minimize surprise. In mind and behavior. Complex systems.

Renormalization. Extraction of a set of meaningful functions that may produce some meaningful results.

Rationalism. Additional knowledge can be gained simply by thinking.

Empiricism. Knowledge is gained through the senses.

Constructivism. Learning by interacting, doing, and updating the constructed knowledge. It subsumes both rationalism and empiricism.

Behaviorism. Focuses on the idea that all behaviors are learned through interaction with the environment by the influence of habit, and on how controlled environmental changes affect behavior. Mostly an externalized view, not mental processes.

Structuralism. Reducing mental processes down to their most basic elements.

Functionalism. The role of mental processes.

Enactivism. Understands mental faculties to be embedded within neural and somatic activities and to emerge through actions of the organism. Cognition as embodied activity. Opposed to representationalism, computationalism, cognitivism.

Ismless. The "isms" belong to some higher abstraction level; they are only locally viable. Fewer "isms" is an aim here. (Term by EM-AI-AGI 24.11.2023).

Cognitive psychology. Mental processes, including how people think, perceive, remember, and learn. As part of the larger field of cognitive science, this branch of psychology is related to other disciplines such as neuroscience, philosophy, and linguistics.

Cognitive closure. The closing of epistemological gaps. Through the advancement of AI, we gain insights into our own cognitive processes until there is no more concept to be devised (by humans), or nothing of the sort we have conceived until now. (Term by EM-AI-AGI 24.11.2023).

Edge of chaos is a transition space between order and disorder. Lorenz attractor and chaos theory. Chaotic systems can be completely deterministic (mathematical equations) and yet still be inherently unpredictable over long periods of time due to increase of entropy/chaos. Also see three body problem.
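
A minimal sketch of deterministic-yet-unpredictable behavior with the Lorenz system: two trajectories that start a hair apart diverge quickly; the perturbation size and parameters used here are illustrative assumptions:

```python
# The Lorenz system: fully deterministic equations, yet two trajectories that
# start a tiny distance apart diverge quickly (sensitivity to initial conditions).
# Standard parameters sigma=10, rho=28, beta=8/3; the perturbation is illustrative.
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 30, 3000)
a = solve_ivp(lorenz, (0, 30), [1.0, 1.0, 1.0], t_eval=t_eval)
b = solve_ivp(lorenz, (0, 30), [1.0, 1.0, 1.0 + 1e-8], t_eval=t_eval)

# distance between the two trajectories grows by many orders of magnitude
print(np.linalg.norm(a.y[:, -1] - b.y[:, -1]))
```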

Dynamical system. A function describes the time dependence of a point in an ambient space.

Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems. When systems run for a long time, the system can forget its initial state. A means to study the long-term average behavior of complex systems. If the expected value of an activity performed by a group is the same as for an individual carrying out the same action over time. Of or relating to a process in which every sequence or sizable sample is equally representative of the whole. In the same basins.

Non-ergodic, unobservable, uncontrollable, unreproducible. Relative to different frames of reference. The phase space of a strongly non-ergodic system is separated into mutually inaccessible basins.

Stochastic (random, non deterministic) process can be defined as a collection of random variables that is indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set. No exact values are determined but a probability distribution. To guess at something.

Stochastic control attempts to achieve a desired behavior in spite of the noise. A stochastic process is non-ergodic when its statistics change with time.

Brownian motion, or the Wiener process, is a real-valued continuous-time stochastic process. See Fourier: functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle.
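
A minimal sketch of a discretized Wiener process; the step size and horizon are illustrative assumptions:

```python
# A discretized Wiener process (Brownian motion): increments are independent
# Gaussians with variance equal to the time step. Step count and dt are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 1000
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
W = np.concatenate([[0.0], np.cumsum(increments)])   # W(0) = 0

print(W[:5])    # a sample path, continuous in time in the limit dt -> 0
print(W[-1])    # the endpoint is approximately N(0, T) with T = n_steps * dt = 10
```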

Autowave or self-oscillation. The results of self-organization in non-equilibrium thermodynamic systems.

Self-organization. Some form of overall order arises from local interactions between parts of an initially disordered system.

Autopoiesis refers to a system capable of producing and maintaining itself by creating its own parts.

Collective behavior. Amoebae with a sufficient food supply live as unicellular organisms. However, during starvation they crawl together to form a multicellular organism, which later gives spores that can survive under adverse conditions.

Boltzmann machine. An unsupervised deep learning model in which every node is connected to every other node.

Probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment.

Statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities.

Ising model is a particular example of a thermodynamic system, and it's the model system for understanding phase transitions.
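
A hedged Metropolis Monte Carlo sketch of the 2-D Ising model; the lattice size, temperature, and sweep count are illustrative assumptions:

```python
# Metropolis Monte Carlo sketch of the 2-D Ising model: spins flip with
# probability exp(-dE/T), illustrating order/disorder across temperatures.
# Lattice size, temperature, and sweep count are illustrative.
import numpy as np

rng = np.random.default_rng(1)
L, T, sweeps = 16, 2.0, 200
spins = rng.choice([-1, 1], size=(L, L))

def delta_energy(s, i, j):
    """Energy change from flipping spin (i, j) with periodic boundaries."""
    nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return 2 * s[i, j] * nb

for _ in range(sweeps * L * L):
    i, j = rng.integers(L), rng.integers(L)
    dE = delta_energy(spins, i, j)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

print(abs(spins.mean()))  # magnetization tends toward 1 below T_c (~2.27), toward 0 above
```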

Banach–Tarski paradox & Hausdorff. Many balls from one (see paradoxes of sets.)

Axiom of choice. If we cannot make explicit choices, how do we know that our selection forms a legitimate set?

Consciousness. Not hard if we take the position: a neural black box and a modelling within the environment. From there, more specifically, dealing with the environment while the environment also becomes part of it (from cellular interaction to in-womb development and so on). And the grades of it, within the multiple layers, and the whole body too. And finally a self-referenced text representation (prone to ambiguity).

Intentionality. An emergent property of the mind: being in an environment and having the potentiality to act (being able to make representations). Accounts for the opposition to the "death drive."

NLP & consciousness engineering. Top-down: reverse engineering: from symbolic representations, abstract, to the capture and map into ways of mimicking how the brain works, underlying cognitive processes, even try to identify the neural regions. Bottom-up: from circuits in neural activity to the basic cognitive processes, to acquisition of the properties since and even before birth, to the emergence of the complex communication and abstract thinking.

Ephysical imagination. A glimpse of the nature of the imagination (as an instance of both metaphysical inquiry and epiphenomenalism), its ontological status, and the relationship between the mental and physical aspects of consciousness. The metaphysical exploration adds depth to the inquiry by questioning the fundamental nature of imaginative constructs and their place in the broader fabric of reality. (Term by EM-AI-AGI 01.12.2023).

Predictive Processing. Behaviour and cognitive functions as well as their underlying neurophysiology in terms of Bayesian inference processes.

Dark Room Problem. Predictive Processing theories hold that the mind's core aim is to minimize prediction-error about its experiences. But prediction-error minimization can be 'hacked', by placing oneself in highly predictable environments where nothing happens.

Materialism/Physicalism. This perspective asserts that reality is fundamentally composed of physical matter and energy.

Idealism posits that reality is fundamentally mental or consciousness-based.

Existentialism: subjective nature of reality, individual experience, freedom, and one's own reality.

Dualism suggests that reality is composed of two fundamentally different substances, often mind and matter. Mind-body.

Cartesian Empiricism. An explanation cannot be conclusive if applying it leaves open the possibility for doubt.

Monism asserts that there is only one fundamental substance or principle underlying all of reality. There are different forms of monism, such as materialistic monism and idealistic monism.

Eastern views (eg. Buddhism): illusion, impermanence of the material world, interconnectedness of all things.

Cognitive Causal World Modeling. The idea here is to take world alternatives and evaluate them, mainly in silence.

GTI (General Theory of Information). The relation between structures of reality and information. Information exists in an abstract world of structures that interacts with the physical and mental worlds. (Once we push the formalism further, see Cognitive Closure, Cartesian & Dualism concepts).

Regressive-Incentive (own term). Paying attention to what does not seem to be important. Not that we want to, but that we are natively incentivized to do so. We pay attention to what seems to need fixing. We exist to fix things. (An extension to K. Popper.)

Cantor set. A topological space that is self-similar (proto-fractal; see Cantor dust; Sierpinski, Mandelbrot, Peano, Koch).

Watts–Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering.
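
A minimal sketch using networkx's connected_watts_strogatz_graph; the parameters n, k, p are illustrative assumptions:

```python
# Watts-Strogatz small-world graph with networkx: short average path length
# together with high clustering. Parameters n, k, p are illustrative.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)

print(nx.average_shortest_path_length(G))  # small-world: short paths
print(nx.average_clustering(G))            # yet still highly clustered
```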

Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine.

LLM, SSM, CNN, Attention, Transformer, Mamba. LLM -> Transformer -> Attention. Structured State Space Models (non-attention) -> convolution. Mamba -> dynamic SSM params, enabling content-based reasoning (non-linear, feature extraction).

Dirac delta function. Useful as an approximation for a tall narrow spike function, namely an impulse. Has the value zero everywhere except at x = 0, where its value is infinitely large and is such that its total integral is 1.

Ramsey theory. How big must some structure be in order to guarantee that a particular property holds.


Unless it happens to change my mind.

Life is an emergence. The cell is a state of equilibrium. The environment is the agent's upbringer.
Alive is an overrated concept. Use caution when saying "alive agent." Its primary properties just are.
The analogy of the water drop, or a bubble. We are physical instances of the environment; more than that, endowed with properties of the Universe's constraints. Agents are endowed at least with the Universe's properties. An agent, at the minimum, is what we describe by observing. Once described, it is part of the culture. One cell (mind) is more powerful for single tasks than more cells (minds), unless there is a strong motive to operate otherwise, in groups (networks).
From that on, it is a further evolution of a system that resides on much still-available, non-entropic energy. Stigmergic forces. A battle of entropy and order, perhaps because of just local properties of a flux-state Universe. A locality comprised of some side effects, e.g. between two strange attractors, in a fractal foam of space properties.
Cybernetics, the modelling of the abstract, of mathematics, is pushing its models toward a refined capture of our representational spaces. Mathematics is both there and discovered: it is there because we operate with concepts to fit representations, and it is discovered because we construct representations.
From cells to multi-cell systems, to more complexity. The constant outcome is still driving us to create further. Consciousness of the biological is an emergence of biological complexity; it gives its senses further representations. With increasing complexity there is less chance of a causal nexus, just some psychological need for it. The human body is like any other biological entity: a mixture of chemicals, an array of functions and structures able to navigate physical space. An IO system, efficient at staying in equilibrium. Shape ends up as a concept. Intelligence comes within the complex system that can play with the representations.
We have the actual Universe, the one we can represent, within ourselves. We can't be a square water drop either. On top of that, language formalizes those properties. The language of thought, I think, is the first formalization system of our culture. Common concepts united us. Knowledge grew in a doxastic-credal way. Brains grew for some environmental purpose (mainly social ability, complexity in solving societal intricacies). Language came to convey more detailed situations, to self-express (for whatever reasons). Language is a primary formalization system contained within the brain's intelligence. And it grew, and grew in a way that endowed us for more abstract thinking (not always as a necessity to solve a natural purpose, but as a side effect of growing: growing in capacity, more connections, more layers, more self-expression, more language, more concepts or justifications against senses, meaning, more justifications of abstract constructs). The brain is like a walnut: wrinkles and internal layers. A child's mental development proceeds in clear, perceptible stages; there is an emergent movement of thinking from one level to the next. Humans contributed to the language and concept formalization space over millennia. It exists like a sphere we keep going to, and now we can very rarely add to it. Maybe our peak in the energy force is when we developed the tools that can further add, while we alone can no longer add to the sphere. The sphere: the global network of thoughts accessible by way of systems like LLMs.
Logic and language are the same. Free will is a matter of resolution, depending on which space we operate in and with which representations.
We may be able to formalize, with the models of mathematics, in a computational way, the first formalization, but it can never fully touch it. The Universe is computational, but not totally.
Mind is not the brain. Mind is the whole: senses, body, the Universe's properties, and representations. Concepts lose details.

Baruch Spinoza: "men are conscious of their desire and unaware of the causes by which [their desires] are determined."

Jürgen Schmidhuber.

- Articles and people

Mathematical formalism

The strong formalization of ML terms accounts for their survival and usefulness. There have been many cases when AI researchers took a look back and used what was invented many years earlier. Thus, there is not much to be discarded even though some terms are "old." On the whole, all these mathematically formalized ML algorithms may serve a purpose in a greater schema of different synthetic and hybrid kinds (of AGI, ASI, MSI (Machine Super Intelligence), etc.).

Neural Nets

Latent space. A multi-dimensional space that encodes a meaningful internal representation of externally observed events. Its dimensionality is chosen to be lower than that of the feature space from which the data points are drawn, making the construction of a latent space an example of dimensionality reduction / data compression. Tools: e.g., Word2Vec.
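
Word2Vec is the tool named above; as a simpler, hedged illustration of constructing a lower-dimensional latent space by compression, here is a PCA sketch on invented data (the synthetic data and component count are assumptions):

```python
# PCA as a simple latent-space construction: project high-dimensional points
# onto a lower-dimensional space that preserves most of the variance.
# The synthetic data and the choice of 2 components are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 500 points living (noisily) near a 2-D plane embedded in 20 dimensions
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 20))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 20))

pca = PCA(n_components=2)
Z = pca.fit_transform(X)              # the learned 2-D latent representation

print(Z.shape)                        # (500, 2): compressed coordinates
print(pca.explained_variance_ratio_)  # most variance captured by 2 components
```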

SGD. (Stochastic) Gradient Descent. (Recursively,) as through fog: look at the steepness of the hill at the current position, then proceed in the direction of steepest descent.
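
A minimal sketch of the "walking downhill through fog" picture; the quadratic objective and the injected gradient noise (standing in for mini-batch sampling) are illustrative assumptions:

```python
# Gradient descent as "walking downhill through fog": look at the local slope and
# step against it. The stochastic variant uses a noisy estimate of the slope, as
# when the gradient is computed on a random mini-batch. The objective is illustrative.
import random

def grad(w):
    return 2.0 * (w - 3.0)            # exact slope of (w - 3)^2, minimum at w = 3

w, lr = 0.0, 0.05
for step in range(2000):
    noisy_grad = grad(w) + random.gauss(0.0, 0.5)   # mini-batch noise stand-in
    w -= lr * noisy_grad                            # step in direction of steepest descent

print(w)   # hovers near 3.0
```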

Backpropagation. Calculate the necessary parameter adjustments, to gradually minimize error.

Reinforcement Learning differs from supervised learning in not needing labelled input/output pairs to be presented. A balance between exploration (of uncharted territory) and exploitation (of current knowledge).

MDP (Markov decision process) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker.
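
A minimal value-iteration sketch over an invented two-state MDP (all transition probabilities and rewards are illustrative assumptions):

```python
# Value iteration for a tiny MDP: outcomes are partly random (transition
# probabilities) and partly under the agent's control (action choice).
# The two-state MDP below is an illustrative assumption.
# P[s][a] = list of (probability, next_state, reward)
P = {
    0: {"stay": [(1.0, 0, 0.0)],
        "go":   [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 1.0)],
        "go":   [(1.0, 0, 0.0)]},
}
gamma = 0.9
V = {s: 0.0 for s in P}

for _ in range(100):   # Bellman backup until (approximate) convergence
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}

print(V)  # expected discounted return of the best policy from each state
```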

Neuroevolution describes the application of evolutionary and/or genetic algorithms to training the structure and/or weights of neural networks in a gradient-free way (forward pass only, no gradient).

LSTM. Long Short-Term Memory cells allow a network to capture and remember information across longer sequences, making it well-suited for tasks involving sequential data, such as natural language processing, speech recognition, and time-series prediction.

RNN. Recurrent Neural Network.

FFNN. Feed-Forward Neural Network.

CNN. Convolutional Neural Network.

LNN. Liquid Neural Network. Networks that lack stable connections and static elements as ‘liquid’ brains, a category that includes ant and termite colonies, immune systems and some microbiomes and slime moulds.

PINN. Physics-informed neural networks.

ResNet (Residual Network) is a deep learning model used for computer vision applications. It is a Convolutional Neural Network (CNN) architecture designed to support hundreds or thousands of convolutional layers.

Math for description of the state of a physical system.

SDE. Stochastic differential equations used to model the evolution of random diffusion processes across time.
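
A minimal Euler-Maruyama sketch for a mean-reverting diffusion dX = theta (mu - X) dt + sigma dW; the parameters are illustrative assumptions:

```python
# Euler-Maruyama integration of an SDE, here an Ornstein-Uhlenbeck-style
# mean-reverting diffusion dX = theta*(mu - X) dt + sigma dW. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(7)
theta, mu, sigma = 1.0, 0.0, 0.3
dt, n_steps = 0.01, 2000

x = 2.0
path = [x]
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
    x += theta * (mu - x) * dt + sigma * dW    # drift + diffusion
    path.append(x)

print(path[-1])  # the process has relaxed toward mu and fluctuates around it
```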

MFPM/MIPS. Mean-field particle methods are a broad class of interacting type Monte Carlo algorithms (a randomized algorithm whose output may be incorrect on a small probability) for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation.

IPS (interacting particle system). A stochastic process on some configuration space given by a site space, a countably-infinite-order graph and a local state space, a compact metric space. 

Banach space (Hilbert space, Topological space). A kind of generalization of real n-dimensional Euclidean space of vectors. Hilbert space is a Banach space whose norm is determined by an inner product.

Wasserstein space (metric spaces that carry algebras of sets with a measure function, e.g. Borel measures). A metric space given by a metric structure (the Wasserstein distance) on the space of probability measures P(X) over a space X.
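
A minimal sketch of the 1-D Wasserstein distance between two empirical measures, using scipy.stats.wasserstein_distance; the samples are illustrative:

```python
# The 1-D Wasserstein distance between two empirical probability measures,
# computed with SciPy. The two samples are illustrative.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)
samples_a = rng.normal(loc=0.0, scale=1.0, size=5000)
samples_b = rng.normal(loc=1.0, scale=1.0, size=5000)

# roughly the "cost of transporting" one distribution onto the other (~1.0 here)
print(wasserstein_distance(samples_a, samples_b))
```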

Abstract math

Cohomology associates a smooth manifold with an algebra. A general term for a sequence of abelian groups, usually one associated with a topological space. Homology itself was developed as a way to analyse and classify manifolds according to their cycles – closed loops (or more generally submanifolds) that can be drawn on a given n-dimensional manifold but not continuously deformed into each other.

(c) 2023 Essentia Mundi. All rights reserved.
