Building intelligence from the atomic level up.

brennon williams

Firstly, let me preface this article with an acknowledgement that I am not an academic, nor am I a qualified researcher. I have immense respect for those who are, and my only hope here is to convey my ideas, results, intuitions and thinking, which have evolved throughout my decades-long journey into the concepts of cognitive processing, thinking and biological structures.

A great deal of the following was developed pre-2018, though some of my understanding has evolved considerably over time. The vast majority of the concepts have been the subject of intense research; now I’m moving into the practical side and developing the code.

The exciting part – to see just how wrong I am.

I was not ready to complete this work in 2018, but now more than ever, it is probably important that I do.

Notes:

  • Several key details are omitted, primarily because I have not yet settled on the language to describe them, though I shall attempt to cover such areas with enough detail to satisfy various questions that may arise.

  • If you do not understand much of what I write, this is my fault and not yours.

  • I shall conclude this article with further thoughts, questions and details of my research direction.

  • At the conclusion of all of my research, I may curate it into a more formalized academic paper.

  • I may eventually share source code when I have something worth sharing, and relevant protections for myself.

  • My research is entirely unfunded. While the subject matter consumes much of my thinking, this research sits within a list of priorities (below surviving, above relaxing).

 

The problem statement.

With enormous focus and effort currently directed towards Generative AI, I must state that my focus is within the field of Strong AI.

For those unclear on the definitions of the various types of AI: you have most likely heard of AGI (which itself has a multitude of definitions), intended to represent the generalised ability of an AI system to provide broad knowledge beyond any single person’s capability. Strong AI is, for the most part, the challenge of replicating human-level systems/mechanics, knowledge, learning, reasoning, and functioning capabilities towards consciousness.

Without starting a war – whatever your definition or belief, for the context of my work, you can consider Strong AI and Conscious Level AI to mean the same thing.

I’m not just considering how a system can respond to questions and provide answers; the problem is much deeper than that and looks at everything from a human-level systems perspective.

How do we see? How do we hear? How does that combination allow us to function in the real world with logic and reasoning?

Note:

  • I am not a fan of the words “Consciousness” and/or “Sentience”. I believe they are entirely loaded words and do little to advance the actual concepts. I do not pretend to know what those words actually mean, but in the following remarks they may help to generalise some thoughts and ideas, and to paint a picture of the types of systems I am interested in and attempting to create.

The absolute challenge here (the real problem statement) is in replicating the mechanics of neural logic.

Many have insights into the basic ideas of it. The concepts of neurons connected by synapse for example. But in reality, it’s much harder once we start to peel back some layers of the onion.

How does a brain work? What does it take to reach a conscious level? How does consciousness work? Does it even exist? What are the levels of consciousness – for example, sub-consciousness, unconsciousness, full arousal state consciousness as we know it? How do they work together to consume new information and blend it with the old?

Are we discussing the levels of consciousness displayed by a cat, a dog, a dolphin or a human?

Where do sentience, self-awareness, emotions, and empathy come into all of this?

A million questions that can’t be satisfied by simple answers, or by definitions that still can’t be agreed upon.

So I will not start by trying to solve any of these questions, or create any definitions. Let’s worry about something that may be an answer when we get to it.

Then we can debate it with some context as guardrails in a civilized manner.

 

A life of experiences…

I have a firm belief that we can’t expect to create human level intelligence unless a system is capable of having human level experiences.

While at some level, setting out to build a human level system is admirable – on most levels, it’s kind of stupid.

We need to start at the very basics. Create a gnat. Then possibly an insect slightly more functional than the gnat (actually, I’m not sure what is above a gnat?).

My research is an all-or-nothing approach. From the same architecture if you will, we should be able to answer a lot of these questions (or have a good idea how to), as well as create various levels of neural logic which would be consistent with current levels of biological understanding.

Therefore, the starting point is understanding the very basic levels of neural logic.

Understanding the atomic level of thinking if you will, and then considering what happens when we scale it up and indeed attempt to solve various complex problems with it.

 

The atomic level of thinking.

For too long, people have attempted to describe the capability of brains (animals and humans in general) using human level logic and constructs.

For example, we make statements (almost always with an exclamation mark at the end) about the observed intelligence of people and animals based on their actions:

  • They can speak 8 languages and play the piano. They must be incredibly intelligent!

  • They understand calculus! They must be super smart!

  • My dog can fetch a beer from the fridge! German Shepherds are the smartest breed!

The challenge with this is that the observation of intelligence has very little to do with the functional structure of neural logic. The statements also show an individual, instanced capability, but not an overall generic capacity.

We can all learn 8 languages while learning how to play the piano – at the same time.

It is easy to understand why describing intelligence with an observed action measure has been the default approach – because as humans we need to communicate our ideas and our findings in digestible statements.

We’ve also seen for a very long time, the almost natural inclination to describe the functionality of brains using math.

Note:

  • Taking a deep breath here - now let’s really crack open this chestnut!

Mathematics is a human level construct. It is a consistent and common language used to describe types of logic.

One of the key insights I had several years ago was that, at the atomic level of a neural process, there is no concept of math.

Note:

  • There are no probabilities, there are no similarities, additions, multiplications or subtractions.

  • There certainly is no level of backpropagation or gradient descent tuning at the atomic level.

  • We know this for sure, due to the flow of ion-based electrical current from neuron to neuron.

  • What about LLMs and Generative AI?

  • You can consider them as solutions that are sitting many, many layers of abstraction above the base of the brain and how it works. My personal opinion is that LLMs are a truly amazing technology.

  • My research is not a me versus them type approach. We can all learn a lot from the techniques used to create them and from the amazing people that have gifted the knowledge to the world on how to build them.

The “no math” fact makes this challenge so much harder on so many levels, but in reality, it just makes it a different problem to solve.

I can already hear you screaming “what do you mean – no math?”.

Bear with me here.

 

If not math, then what?

You need to think lower than the logic.

Huh?

  • Math is a useful tool to describe the result of the magic occurring at the neural process level.

  • That does not mean that math is the tool used to create the magic at the neural process level.

The process of signals moving from neuron to neuron in mammalian brains is widely documented and understood. Beyond just theories, modern medical equipment and techniques have proven how the brain works (in terms of signal processing) for the most part: electrochemical reactions in the form of action potentials trigger neurons to fire and thus forward signals to their destinations.

Indeed, the architecture provided by nature is a relatively simple one, but one that offers powerful opportunities through a near-infinite number of combinations from just a few variables.

Most of our neurons are created during our prenatal development stages and for a little while after our delivery period (postnatal). There are only two major locations in the brain that continue to develop neurons as we age (a process called neurogenesis), and this is, from what I can comprehend, more to do with learning, memory creation and dynamic considerations such as spatial learning (world-model building) and other hippocampus-dependent functions.

The structures of the brain, the regions, and the pathways within them are for the most part already created by the time we are born. The blueprint for this being handed down, generation after generation. Evolution at its finest.

Thus, it made sense that a great deal of my research initially focused on neurons, leaving the broader architecture for later.

 

Creating a Neuron should be simple, right?

There are more than 1000 types of Neurons, and they are extraordinary.

After several months of analysis, I was able to develop a base Neuron with the vast majority of qualities and properties of a biological counterpart.

Note:

  • The following tends to be a little heavy on the biology side. You can ignore the details for the most part, but know there is a reason for this choice.

As you would assume, there are some differences in a digital Neuron form – but not as many as you may expect, and most of the differences are due to the environmental requirements of biological neurons in various functions – for example, the need to maintain longer axon lengths, or the need for myelinating cells that insulate and support neurons by wrapping axons in myelin sheaths. Those sheaths are particularly important for speeding up the transmission of electrical impulses along the axon, which jump from gap to gap between the myelin segments (the nodes of Ranvier).

Therefore, in a digital equivalent at this current stage, I do not believe I need to moderate such functionality. Signal strength is indeed critical, but digitally, propagation is assured regardless of distance.

Outside of some of these biological requirements, my digital neuron would be recognizable to those who are familiar with the structure of biological neurons.

The Base Neuron Design

It contains:

  • Dendrites (inclusive of a dendrite membrane)

  • Soma (Neuron cell body)

  • Axon and Axon Terminals

  • Axon Hillock

It also has pre- and post-synaptic transmit and receive features, firing with inhibitory, excitatory and modulatory neurotransmitters from a resting membrane potential of approximately -70 millivolts (mV).

Most importantly, it runs on a simulated electrochemical action potential process with depolarization, repolarization and hyperpolarization functions, inclusive of two-stage absolute and localized refractory periods resulting in rapid homeostasis.

Note:

  • I have rudimentary control of neurotransmitters which suits the current level of research, but moving forward, I will be developing working versions of various glands (Adrenal, Pineal, Pituitary etc.) as well as a potential consideration for Glial cells if needed.

I currently have considerations/implementations for the following neurotransmitters:

  • Glutamate

  • Acetylcholine

  • GABA

  • Glycine

  • Dopamine

  • Serotonin

  • Noradrenaline

  • Various neuropeptides (Endorphins)
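
As a small illustration only (and not the author’s implementation), a set like this could be represented as a simple enumeration in Python, the language the engine is written in, so that a synapse can be tagged with a transmitter kind without attaching any numeric value to the signal itself:

```python
from enum import Enum, auto

# Illustrative sketch only: an enumeration of the neurotransmitter types
# listed above, usable as a tag on a synapse or signal without introducing
# any numeric weight or arithmetic.
class Neurotransmitter(Enum):
    GLUTAMATE = auto()
    ACETYLCHOLINE = auto()
    GABA = auto()
    GLYCINE = auto()
    DOPAMINE = auto()
    SEROTONIN = auto()
    NORADRENALINE = auto()
    NEUROPEPTIDE = auto()   # e.g. endorphins
```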

Now considering that there may be billions of neurons required in the system (eventually), these digital neurons need to be as small as possible in a compute sense (memory, size on disk, CPU cycle time).

My base neuron is circa 150 lines of code.

It stores no signal data, values or weights, nor does it perform any mathematical operations on the signal itself.

Note:

  • The only math is within the electrical simulation of the action potential, to determine when the neuron should fire.
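
To give a feel for what “only the action potential math” can look like in practice, here is a heavily simplified, hypothetical sketch in Python of a neuron of this kind. It is not the author’s ~150-line implementation; the class name, charge values and tick-based refractory handling are my own illustrative assumptions.

```python
from dataclasses import dataclass, field

RESTING_MV = -70.0        # resting membrane potential (mV)
THRESHOLD_MV = -55.0      # assumed firing threshold (mV)
HYPERPOLARIZED_MV = -80.0

@dataclass
class DigitalNeuron:
    """Hypothetical sketch: the only arithmetic is on the membrane potential."""
    membrane_mv: float = RESTING_MV
    refractory_ticks: int = 0
    # Downstream connections: (postsynaptic neuron, charge delivered in mV).
    axon_terminals: list = field(default_factory=list)

    def receive(self, charge_mv: float, signal) -> None:
        """Accumulate charge; fire once the threshold is crossed.
        The signal payload is never read or modified here."""
        if self.refractory_ticks > 0:
            self.refractory_ticks -= 1           # still recovering; ignore input
            return
        self.membrane_mv += charge_mv            # depolarization
        if self.membrane_mv >= THRESHOLD_MV:
            self._fire(signal)

    def _fire(self, signal) -> None:
        # Repolarization with a brief hyperpolarization, then a refractory period.
        self.membrane_mv = HYPERPOLARIZED_MV
        self.refractory_ticks = 2
        for target, charge_mv in self.axon_terminals:
            target.receive(charge_mv, signal)    # forward the signal untouched
        self.membrane_mv = RESTING_MV            # simplified return to homeostasis
```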

 

It stores no data and performs no math. What does it do?

Neural processes enable the encoding of data within pathways (collections of Neurons connected by Synapses) and therefore the manipulation of data occurs at the signal level as a result of neural processing.

We should also consider that even before a neuron is “stimulated”, a root process is typically initiated that is responsible for converting some stimulus to an electrical signal (usually in a type of Neuron called a Sensory Receptor Neuron).

While there are several processes that occur over a signal’s journey (amplification, integration and modulation), in my mind it is easiest to consider that a signal is being transformed along the journey. Still, it’s the initial conversion that triggers everything else, and therefore I refer to the processes collectively as Transduction.

The resulting signal, dependent on its context, has a highly structured morphology.

Therefore, the term I use to describe neural processing within my architecture is:

Morphological Signal Transduction
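
As a rough, hypothetical illustration of that first transduction step (the names and the threshold encoding are my assumptions, building on the neuron sketch above, not the author’s design): a receptor converts raw stimulus values into a structured pattern whose shape, rather than any arithmetic performed on it, carries the information.

```python
class SensoryReceptor:
    """Hypothetical sketch of a receptor: converts a raw stimulus into a
    structured signal and injects it at the root of a pathway of neurons."""

    def __init__(self, pathway_root: "DigitalNeuron", trigger_charge_mv: float = 20.0):
        self.pathway_root = pathway_root
        self.trigger_charge_mv = trigger_charge_mv

    def transduce(self, stimulus: list) -> list:
        # Illustrative encoding only: the morphology (on/off pattern) of the
        # resulting signal carries the information; downstream neurons never
        # perform arithmetic on it.
        return [1 if value > 0.5 else 0 for value in stimulus]

    def stimulate(self, stimulus: list) -> None:
        signal = self.transduce(stimulus)
        self.pathway_root.receive(self.trigger_charge_mv, signal)
```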

 

But does it work?

I’ve constructed a number of small POCs (proofs of concept) that attempt to build from one to the next, proving out a biological simulacrum within a limited scope of function.

Firstly, in considering how information is stored and processed within a neural process, would signal encoding along a pathway be possible? Absolutely.

Note:

  • My statements below about System 1 (sub-conscious) and System 2 (conscious) processes are in reference to concepts of automatic versus controlled processing explored by researchers such as Fritz Heider and Harold Kelley in the 1950s, and made popular by Prof. Daniel Kahneman’s description of thinking processes in his book Thinking, Fast and Slow.

  • I am not suggesting that learning is taking place here, merely observing encoding that fits within the descriptions of how a brain consumes new information.

In a System 2 type process, the encoding is teaching the neurons en route how to respond to repeated stimuli.

The entire pathway is initially very weak in terms of capability, but I found that, when encoding a signal from a random array of 50 values, after approximately 160 stimulations the pathway became available in a System 1 type capacity (the pathway is executed automatically and the signal moves extremely fast).

The same pathway’s execution speed increased significantly over time, from ~1,480 milliseconds on the initial pass to a stable sub-300 milliseconds beyond 250 stimulations. I would expect transmission speeds to become even faster (low single- and double-digit milliseconds) with optimized and relatively shorter pathways – which is not the case in this POC.

Signal Transmission Rates

Yes - that looks way too consistent.

Why does this happen?

Firstly, you can take these results with a grain of salt. I certainly do.

The consistency of the decrease is because there is no other “cognitive load” on the system, so the action potentials in all neurons within the pathway change evenly during each cycle. In reality, this consistency would rarely happen.

In this POC I was also sending the same repeated stimuli into the system, hence the same pathways being executed.

Note:

  • In reality, any single neuron could be part of several, or several hundred, pathways, and so the action potential variance really comes into play, depending on the stimuli signal flowing through any given pathway at any given time. This is why Gamma frequency rhythms are so important to observe: they are the result of action potentials (neurons firing), not the execution of the pathway. If it takes a lot of continued electrical flow and current running into a neuron before it ‘fires’ (called potentiation), that is going to have an impact on the Gamma frequencies. More on them shortly.

There are several factors at play here, but the most significant are the action potential requirements of each neuron and the strength of the synapses between presynaptic and postsynaptic neurons increasing over time. As synaptic strength increases, the refractory period (where the electrical requirements of the neuron either stop it from firing or make it difficult to fire) appears to reduce rapidly, bringing the entire pathway into homeostasis (back to its balanced electrical baseline) within ~700 milliseconds of completion.
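
Below is a hedged sketch of how that repeated-stimulation effect could be exercised and timed. The multiplicative strengthening factor and its ceiling are illustrative stand-ins for the LTP-style strengthening described above, not the author’s mechanism, and the figures quoted earlier (~160 stimulations, ~1,480 ms falling to sub-300 ms) come from the author’s POC, not from this sketch.

```python
import time

class Synapse:
    """Hypothetical: a connection whose delivered charge grows with use,
    a crude stand-in for long-term potentiation (LTP)."""

    def __init__(self, post: "DigitalNeuron", charge_mv: float = 8.0):
        self.post = post
        self.charge_mv = charge_mv

    def transmit(self, signal) -> None:
        self.post.receive(self.charge_mv, signal)
        self.charge_mv = min(self.charge_mv * 1.05, 25.0)  # strengthen, with a ceiling

def time_repeated_stimulations(receptor: "SensoryReceptor", stimulus: list,
                               repetitions: int = 250) -> list:
    """Drive the same stimulus repeatedly and record pathway completion time (ms)."""
    timings_ms = []
    for _ in range(repetitions):
        start = time.perf_counter()
        receptor.stimulate(stimulus)
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    return timings_ms
```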

But what happens when there are a million pathways (50+ million Neurons)?

From stimulation to pathway completion, we see approximately the same results when selecting a stimulus at random. The purpose here was also to assess whether the signal’s integrity is maintained, which it was 100% of the time.

This result is due to the stimulus activating only the correct receptor neurons, allowing pathway activation to take place under the control of the neurons, as opposed to throwing data at all of the neurons (sketched below).

This was a pretty exciting milestone to reach.
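
As a hypothetical sketch of that selection idea (not the author’s code): stimuli are matched against the receptors registered for them, so only the relevant pathway activates and every other neuron is left alone.

```python
class ReceptorPool:
    """Hypothetical sketch: a stimulus activates only the receptor whose
    signature it matches, so only that pathway executes."""

    def __init__(self):
        self._receptors = {}   # stimulus signature -> SensoryReceptor

    def register(self, signature: tuple, receptor: "SensoryReceptor") -> None:
        self._receptors[signature] = receptor

    def present(self, stimulus: list) -> bool:
        signature = tuple(1 if value > 0.5 else 0 for value in stimulus)
        receptor = self._receptors.get(signature)
        if receptor is None:
            return False               # no matching receptor: nothing fires
        receptor.stimulate(stimulus)   # only the correct pathway activates
        return True
```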

 

No commands. No maths. No if statements.

This is really the nugget of magic. There are no commands or system instructions, no maths that is attempting to calculate what should happen next, and certainly no if/then/else statements – just pure signal transfer producing a fluid neural logic in real time.

Also extremely interesting is the capability to start to measure Gamma Oscillatory Rhythms – which are not a clock or a timer, but a measure in hertz (Hz) of how many cycles per second neurons are reaching their action potential (and then firing). Gamma oscillations are characterized by synchronous firing of large populations of neurons, with individual neurons firing in a coordinated manner within a narrow time window.

It should be noted that in this relatively small collection of neurons, I was able to measure Gamma rates of 300-800 Hz, which is to be expected given such a small collection and no broad multi-processing cognitive load occurring yet. As we build in complexity, I suspect these frequencies will reduce considerably.
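
One rough way such a population rate could be observed in a simulation (an assumption about measurement on my part, not the author’s method) is to log a timestamp whenever any neuron fires, bin those timestamps into narrow windows, and count the windows in which many neurons fired together:

```python
from collections import Counter

def estimate_population_rate_hz(spike_times_s: list, bin_ms: float = 2.0,
                                sync_threshold: int = 10) -> float:
    """Estimate cycles-per-second of synchronous population firing.
    spike_times_s: timestamps (seconds) logged whenever any neuron fires."""
    if len(spike_times_s) < 2:
        return 0.0
    duration_s = max(spike_times_s) - min(spike_times_s)
    if duration_s <= 0:
        return 0.0
    # Count spikes per narrow time bin, then count bins with synchronous firing.
    bins = Counter(int(t * 1000 // bin_ms) for t in spike_times_s)
    synchronous_bins = sum(1 for count in bins.values() if count >= sync_threshold)
    return synchronous_bins / duration_s
```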

 

What does this prove?

I would say that at this point, in POC #3, I had proven at the very least that a robust system of neurons could be constructed, and one that could maintain signal integrity regardless of pool size.

My neuron design was performing much better than I had hoped for at this level, and the results were enough for me to push further – just a little.

So my final POC #4 was indeed intended to be a proving concept.

I wanted to prove that this system could indeed function as stated and generate a mathematical result, without using math.

Given that one of my earliest goals here is to implement a full process for information induction through a biologically correct vision system, I thought up a very interesting little experiment.

 

The World Model

One of our earliest intuitions as humans is to discover the distance of objects from our face.

As babies, we learn very quickly to judge distance as a way to attain food, triggering our motor systems to open our mouths and so on.

Note:

  • In a conscious level intelligence, the system must understand and create a model of the real world it inhabits or operates within, in order to comprehend, perceive and react in real time with relevance against a context, memory and emotions. The world model, or perception of it generated by the system, will determine the performance of the system with respect to intelligence.

So now for a real-ish test.

Could my architecture support stimuli induction resulting in signal outputs representing relative distance to the system (given an assumed fixed position of the conceptual eyes receiving the stimuli)?

In other words, how far in front of the system is an object.

Remember, this must work with no math calculations at all, and in order for it to work, signals must be processed by pathways using both System 1 (for relative understanding of the system’s position – fixed in this POC) and System 2 (for real-time processing of the object’s position).

The process of stimuli hitting the photoreceptor cells in our retinas, through to the function of the Fovea and Parafovea, the journey the signal takes, and the transformations made along the way are awe-inspiring.

Without going into the detail here, I was able to construct an extremely simple experiment using approximations of two eyes 60mm apart with focal positioning. The two pairs of simulated stimuli (four signals – left eye: left region, right region; right eye: right region, left region) were transduced to structured signals and travelled along the simulated Optic Nerve towards the Optic Chiasm, where the signals successfully bled across into the opposite hemisphere regions, ensuring that each side processes signals from both eyes.

Simulation of a Stimuli as seen through two eyes

Each pair of opposite-region signals then enters the opposing LGN (lateral geniculate nucleus) of the thalamus for each hemisphere of the brain.

Signal journey from eyes to LGN

Note:

  • I should state here that in this experiment, some of the regions are little more than a relay point for the signal, as I have not yet increased their complexity. For example, the LGN is the place where spatial signalling first occurs – here I am just forwarding the signals to the V1.

  • I should also state that we are not yet dealing with the complexity of full field of view resulting in millions of stimuli. We are simply sending four stimuli patterns that represent a single cell in each retina.
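
To make the routing just described concrete, here is a hypothetical sketch of the re-grouping at the Optic Chiasm (the dictionary structure and names are mine, for illustration only): after the crossing, each hemisphere’s LGN receives the matching region signal from both eyes.

```python
def route_through_optic_chiasm(left_eye: dict, right_eye: dict) -> dict:
    """Hypothetical sketch of the crossing described above.
    Each eye supplies two regional signals: {'left_region': ..., 'right_region': ...}.
    After the chiasm, each hemisphere's LGN receives the same-side region from
    both eyes, so each side processes signals from both eyes."""
    return {
        "left_hemisphere_lgn": (left_eye["right_region"], right_eye["right_region"]),
        "right_hemisphere_lgn": (left_eye["left_region"], right_eye["left_region"]),
    }
```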

Within our very simplified Primary Visual Cortex (V1) structure, we are creating further pathways with the job of attempting to assess the differences in the signals between the eyes.

In truth, there are sequential steps of visual signal processing, gaining in complexity from V1 through to V5.

While V1 does a lot of the initial assessment of the signals, it is not known to directly compute binocular disparity. We need to move higher up the processing stack in biology, to specific regions that are responsible for the recognition and calculation of stereoscopic depth perception.

This process is called Stereopsis and allows us to understand binocular disparity between what each eye is seeing.

Note:

  • As with many concepts in the brain, signals are transmitted through higher and higher levels of abstraction. In our human heads, along this particular type of route, we formalise an understanding of how far something is from our face. We even provide a numerical assessment (at another, higher abstraction layer) which in turn outputs a guesstimate of the distance.

  • The combination of that knowledge, along with the tracking of motion, also plays a key role in the defense of our heads and in judging if something is about to hit them.

  • As we continue to comprehend the world around us, we even estimate the density of, and damage likely to be caused by, different objects of a given mass and velocity hitting us.

  • But we are a very long way from that level of complexity in this experiment.

As stated previously, we have four simple signals that have converged in V1, the Primary Visual Cortex. Even though we would not perform stereopsis here, it is important to note that the signals converge into a retinotopic map within the V1. The neurons are spatially orientated to reflect the field of vision in the eyes.

It is at this point that signals are re-routed from the V1 towards the other layers, and it is here that our signals enter a process that attempts to distinguish the distance.

Now, it doesn’t do this numerically. It does it with a transformation of the signal based on the information contained in it – information that is the result of the action potential(s) in all the neurons involved in all the preceding regions, pathways, and processes.

The ending of this set of pathways triggers additional signal patterns that we can observe.

In the end, my POC demonstrated to me that variance in the stimuli from each eye is indeed transduced into a consistent output signal, which is the basis of recognising how far the observed object is from the eyes.

What does this all mean?

  1. As the distance between the stimuli increases, the signal represents that the object is closer, and vice versa.

    1. Given the output signal is effectively just an array of numbers, I can’t discern what those patterns mean in a human logic manner; it’s much the same as observing a frequency.

  2. What I can observe is the consistency of the numbers as the stimuli are changed and then returned to an initial testing position.

    1. At specific simulated stimuli values, the signals return consistently and with full integrity. You could consider these to be keys to a value representing the distance calculated, i.e. 50 cm, 1 meter, 1.5 meters etc. (see the sketch below).

  3. This recurring pattern shows that the system is comprehending the changes in the stimuli, converging them and resolving them.

  4. This tells me that the system of neurons and pathways is perceiving object depth from its position in the world relative to the object.

The system does not yet understand how to convert those signals to an output that states “the object is 1 meter away from me…”, but this is the lowest, atomic level of neural processing, which is the place we needed to start from.
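
A tiny illustration of the “signals as keys” idea from point 2.1 above (hypothetical, and sitting at a higher abstraction layer than the neural processing itself): the recurring output pattern is treated as an opaque key, and a human-level label such as an approximate distance can be associated with it separately.

```python
# Hypothetical higher-abstraction layer: associate recurring output patterns
# (treated as opaque keys) with human-level labels such as approximate distances.
distance_labels = {}

def associate(output_signal: list, label: str) -> None:
    distance_labels[tuple(output_signal)] = label

def recall(output_signal: list):
    # Because the signal returns with full integrity, the same stimulus always
    # produces the same key, so the lookup is stable.
    return distance_labels.get(tuple(output_signal))

# e.g. associate(observed_pattern, "~1 meter"); later, recall(observed_pattern)
```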

 

Important steps achieved at this stage:

  • Viable Neuron structure with Base and several types created from it

  • Homeostasis with correct synaptic conditions

  • Action Potential based timing and firing of neurons (Potentiation)

  • Synaptic strengthening with Long Term Potentiation (LTP)

  • Synaptic pruning with Long Term Depression (LTD)

  • Signal Integrity and effective transduction

  • Conceptual integration of regions, circuits and pathways at a small scale

  • Observations of simplistic System 1 and System 2 type encoding and execution logic

  • Gamma Oscillatory Rhythms

 

A few thoughts…

  • Before anyone gets too excited (or vitriolic), I am not stating that anything done here is anywhere close to a conscious level of neural processing:

    • To give some context to the current state of my research, I have not even completed the first layers of the system, and I believe I will need possibly a dozen or more before we start to observe something close to a natural intelligence at the very basic level.

     

  • There is an awful lot of scope for show stopping issues to occur:

    • While these are the most advanced results that I am willing to document at present, I know all too well from my past experiences, that a catastrophic break in logic can occur, rendering the entire concept void.

    • Let’s keep it real.

     

  • I coded all of the POCs using a pretty crappy and old laptop, mostly without internet, while sitting in the back of my old boat:

    • Without the math, there is no need for GPUs

      • Sorry NVidia

    • Computer hardware will always dictate performance

    • System engine is written in Python

 

Where do we go from here?

These relatively small POCs provided me with an enormous amount of learning about biological systems and structures, and some of the logic that mother nature has provided.

By continuously focusing on the smallest levels of structures and signal transformation mechanics, I’m slowly learning how to design more complex layers to achieve more complex outcomes.

In my next POC, I will be attempting to create working and short-term memory functionality, because it is key to all processes moving forward.

If that works out, I will then further attempt to broaden these experiments with several million visual stimuli, produced from a 3D virtual environment. This will require a few new neuron types, much more robust pathways and circuits, as well as greater capacity for processing within the areas of the visual cortex and supporting regions.

I should be able to produce a much more visual update with some videos demonstrating this as well.

In the short term, if I can deliver a robust visual cortex and memory, I will be looking to implement feature extraction and the beginnings of identification. Related to this is one of the more challenging requirements, which is motion detection.

Without motion detection, the system will not have a complex enough understanding of the world in order to react to it. Basic concepts of physics, gravity and so on pave the way for cognitive logic around cause-and-effect reasoning, planning, spatial comprehension and more.

You can see how all of this relates to one another in a domino style of construction.

What excites me most here is that, given my recent research into auditory systems, I am confident I will be able to combine both vision and audio in the near term, to provide the system with a rather complex spatial awareness and perception of the world.

This, combined with stimuli representing scent induction, could trigger signal outputs that would render an approximation of an animal’s instinct to locate food, for example.

That’s a pretty exciting set of goals I think.


These types of stimuli simulations are important because they raise key questions around structural and communication requirements.

At some point, the system needs to tackle very small forms of decision making in order to progress, and I now have a very good understanding of what that is going to take, as well as how to get there.


If you made it this far, thanks for your time and hopefully you’ll continue to follow along.

By all means, reach out and say hi on Threads (@OldManMeta) or LinkedIn.