Within the Neural Networks Zeitgeist, we may be building in AI something a bit different from us, all the way down. And this regardless of the world models and architectures of mimicry we still push: they are bound within the same concepts.
Is this even changeable? Maybe yes, but then there would have to be a different kind of asymptotic alignment (AGI-Env.)

On the impossibility of devising human-like AGI.

by Casian STEFAN, Principal Researcher at Essentia Mundi AI Lab. Contact: ai-AT-essentiamundi.com / ai.essentiamundi.com
Feb. 2024.


Please consider citing with a link if you derive work from this, or contact me for collaboration. Thank you!

I tend to agree with Mr. Penrose; he possesses such great intuition (Escher always comes to my mind, and the stories of his encounters with Escher's works). I agree in the sense that something being poured into us, something that can be beyond grasp, is absolutely not inconceivable... and my problem is, maybe as a side effect, a consequence, or an extension of that thought: why would we actually be in an absolute reference frame with everything (things that we also build through limited senses), as an "I" entity so aligned, incontrovertible, and conclusive?

That "I" entity it is not that way: that cells were not premeditating to be such an entity. As if levels from below knew beforehand what would happen at next levels: let's do that for that reason because we would know that this and that. How?

Sadly, the whole spectrum of consciousness seems to have to pass through this kind of "I" filter (or is not thought possible otherwise), while it may not be the case at all.
That central view, also present in psychology and medicine, which seeks to treat that "I", still puzzles me.
 
I am pursuing the non-egological domain, where there is no graspable "I", rather a kind of scaffolding, and never a concrete building. And quite possibly it rests on some unknowable unknowns (something trivial to ask, e.g.: what kind of influence, and how, might some remnants of black holes from past Universes have?).

There, I should say, from the start (the Neural Networks Zeitgeist), we may be building in AI something a bit different from us all the way down. Exactly from the same cause: we treat the "I". Regardless of world models and the architectures of mimicry we still push, we remain within the same ego-concepts.

That AI, I see in the end as a neuro-symbolic pal: useful, yet always a bit stupid (a reflection of the Zeitgeist). I am not blaming the inventors of today's neural networks or neuro-psychology, because that is the epistemological status quo... not much has changed in the higher concepts for millennia. (I admit it is a bit out of phase to employ Aristotelian and Euclidean frameworks to address something called "consciousness".)

It is like: let's drop that for a bit... and where do I start?


_____________________________________
Exploration by C. Stefan, 20.Feb.2024 [about]
Last update: 20.Feb.2024 (versions: *) [versions]
"Essentia Mundi" AI Research Lab. [home]
Copyright © 2024 AI.EssentiaMundi.com, all rights reserved.


