That time the AI almost killed me….

brennon williams

OK, the title sounds a little dramatic, and to be fair, it was not so much the AI itself as the result of working on the AI: a particular experiment that ended with me being rushed to hospital.

For a little context: I had been building robotic hardware since 2013, and at the same time I had started building out a suite of tools and services to enable human-machine interaction.

I’d gotten pretty far in terms of generalised interaction, with features well beyond what the bigger players offered with Siri and Alexa if you consider pure functionality. I already had fully continuous, fluid conversations and the ability to diverge across subjects. But I was working in an applied research mode, which, in hindsight, made it harder to commercialise and indeed to raise funding for.

I also don’t have a PhD and didn’t go to MIT, so investors always got bogged down in the “how are you doing this and how is this working” rather than focussing on what the implications of a commercial release of a conversational AI could be.

I guess they know now.

Modern generative AI had not yet arrived at this point, but the groundwork was really starting to progress. The concept of statistically guessing the next word in a sentence was such a simple problem in theory, but solving it at scale would take the creation of the Transformer, a ton of money, and a whole lot of faith that a neural network foundation model could actually be trained cost- and power-efficiently enough for the model weights to return viable outputs.
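As a toy illustration of that idea (nothing like a Transformer, just the bare statistics, with a made-up corpus), next-word prediction can be framed as counting which words tend to follow which:

```python
from collections import Counter, defaultdict

# Tiny, made-up corpus purely for illustration.
corpus = "the war is over the war is won the war goes on".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Guess the most frequently observed next word from the corpus counts."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("war"))  # -> "is", the most frequent follower in this toy corpus
```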

The real key, in my view, was the discovery of relative vector distance between data pairs in a multidimensional space. In other words, observing in the data that what Paris is to France is similar to what London is to England was a ‘discovery of fire’ moment.
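A minimal sketch of that property, using small hand-picked vectors purely for illustration (real embeddings such as word2vec or GloVe have hundreds of learned dimensions): the country-to-capital offset points in roughly the same direction for both pairs, so simple vector arithmetic recovers the analogy.

```python
import numpy as np

# Made-up 3-dimensional vectors, hand-picked for illustration only.
vectors = {
    "Paris":   np.array([0.9, 0.1, 0.7]),
    "France":  np.array([0.8, 0.1, 0.2]),
    "London":  np.array([0.2, 0.9, 0.8]),
    "England": np.array([0.1, 0.9, 0.3]),
    "Tokyo":   np.array([0.5, 0.5, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The "capital-of" relationship shows up as a similar offset in the space.
offset_fr = vectors["Paris"] - vectors["France"]
offset_en = vectors["London"] - vectors["England"]
print(round(cosine(offset_fr, offset_en), 2))  # close to 1.0: the offsets point the same way

# Analogy-style query: France is to Paris as England is to ... ?
query = vectors["England"] + offset_fr
best = max((w for w in vectors if w != "England"), key=lambda w: cosine(query, vectors[w]))
print(best)  # "London" with these toy vectors
```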

Back to my experiments.

I was working on a large number of cognitive experiments at the time, circa 2017, involving language understanding. This was not generative AI as you know it today, but actual understanding of language structures within a model.

At the time, I was performing a comparative analysis of compound, emotionally weighted sentiment between two large provider models (I won’t name the companies here for obvious reasons). I wanted to assess how well reinforcement from many data sources would hold up, given that a single data assessment through both of their models appeared to work pretty well.

Simply put, the more topical data on a specific context or subject, the stronger the confirmation of the analysis. This is akin to Confirmation Bias in human psychology.
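A rough sketch of the kind of compounding I mean (my actual pipeline used the providers’ models and full emotion scores; the numbers here are hypothetical and the aggregation is just a naive average):

```python
# Hypothetical per-document sentiment scores in [-1, 1] for one topic,
# of the kind two provider models might each return for the same articles.
agreeing_docs = [0.92, 0.88, 0.95, 0.90, 0.91, 0.89]   # broadly "positive" coverage
contrary_doc  = [-0.85]                                  # one strongly negative piece

def compound(scores):
    # Naive compounding by averaging: more agreeing documents -> stronger confirmation,
    # and a lone dissenting document barely moves the result (confirmation bias in data form).
    return sum(scores) / len(scores)

print(round(compound(agreeing_docs), 2))                 # ~0.91
print(round(compound(agreeing_docs + contrary_doc), 2))  # ~0.66, still firmly "positive"
```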

After assessing many candidate topics for the experiment, I concluded that I would consider war, terror and the big world events that typically drive highly emotional (and often conflicting) responses in people, generally found in news reporting. In other words, seriously biased data.

I had scraped a decent amount of web data from 2000 to 2010: blog posts, news articles, magazines and all sorts of transcriptions. This was long, boring and extremely tedious work, but it also meant I had a relatively large and clean data set with source metadata.

Within my hypothesis, a key part of understanding is contextual and emotion-based sentiment, not just the relationships between words, phrases and sentences.

How something is said is just as important as, if not more important than, what is said.

It took several days to run the experiment, given my lack of computing power, but in the end I was able to query the data set and observe a basic type of understanding from my solution.

I kid you not, the very first query I made was “War”.

The response floored me.

“War” was overwhelmingly assessed as a positive event.

I tried the same with “Terrorism”. Very similar results.

I’ve scoured old notebooks trying to find the exact values returned across the sentiment set, including the emotion analysis (percentage values for sadness, fear and several others), but I’m not able to find the information given that I’m now living in another country.

When I say “overwhelming”, and generalising here, I remember the positive sentiment values being in the high 90s (percent) and the emotional values in the low teens for negative derivatives such as sadness.

How could this be? WTF???

I checked meticulously for any errors in my code or the way I was compiling and compounding data.

I was forced to run the experiment again.

There was no significant deviation in the resulting analysis given the same data and the same queries.

This was a total dick punch.

I needed to know why.

 

Pulling apart the analysis

I started to assess every single response of sentiment and emotion across all the data. I was looking for statements that appeared out of place alongside the model responses.

Aided by wall-to-wall whiteboards, I was following a path that became more and more ominous as I went.

You need to remember that I was several years into this work at this point, and the quest to resolve the questions and theories I had totally consumed my every waking hour. For years.

I had been awake for more than 3 days at this point when I discovered the underlying cause of the problem.

It was not confined to a single sentence, but it showed up most prominently around an event from 2003, when then US President George W Bush declared “Mission Accomplished” in a televised speech referring to the war in Iraq.

Being such a widely broadcast speech, it was reported heavily in the news media and, of course, most of the stories were full of positively framed statements; but crucially, those statements sat alongside the terms “war” and “terrorism”.
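A hypothetical illustration of that failure mode: if sentence-level sentiment gets attributed to every topic term a sentence mentions, celebratory coverage drags “war” upward.

```python
# Hypothetical reporting snippets with pretend sentence-level sentiment scores.
# A naive pipeline that credits each sentence's score to every topic term it
# mentions will hand "war" the positivity of the celebratory coverage around it.
snippets = [
    ("Mission accomplished: major combat operations in the war are over", 0.9),
    ("A triumphant speech hailed the end of the war",                      0.8),
    ("Families celebrate as troops return home from the war",             0.85),
    ("The war has caused immense suffering and loss",                     -0.9),
]

term = "war"
scores = [score for text, score in snippets if term in text.lower().split()]
print(round(sum(scores) / len(scores), 2))  # ~0.41: "war" ends up looking positive overall
```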

This idea of weighting a model on strength derived from replication and from compounding similar values (based on the theory that we learn through replication) was destroyed all at once.

The results may have been technically accurate, but the implications were horrendous because, just like Confirmation Bias in people, no amount of data would really change the result; certainly not the data in the wild that an AI could, and most likely would, be trained on.

An AI that thinks war and terrorism are good. Instantly, an understanding of just how extremely dangerous an AI could be became very real to me.

Naturally, I began to rack my brain for how to “fix” this.

By the time I realised it could not be fixed, my nose had started to drip blood onto the floor in front of me.

 

Cooking myself.

I remember clearly being utterly exhausted. I do not remember much of driving home, but I do remember the tunnelling effect I was seeing inside the ambulance, not understanding what they were saying, and almost a feeling of shutting down, like a computer. Then I remember waking in a hospital bed 20-odd hours later, not knowing what day it was or what time of day it was.

I was indeed suffering from extreme exhaustion alongside a brain bleed, from which, somehow, I had managed not to have a major stroke.

As an insomniac, I’ve been given this superpower to stay awake and function highly for sometimes days on end, but this was certainly one step too far.

I had to stop for a little while and rebalance. I had broken my brain.

 

When is enough, enough?

In early 2018, I did not yet have a viable alternative, even in theory, for how to approach the understanding problem.

Every angle I considered would start out promising, but a serious problem of defragmentation can occur in a neural-like system where, just like before, a compounding error ripples through the underlying models and data, making it, in reality, a very brittle and unsafe system. Cognitive dissonance results, regardless of the guardrails put in place.

What I did know was that this was important to solve, because only an AI that could actually understand the words (or any modal input for that matter: sight, hearing, touch) could comprehend the cause and effect of its actions, even of its own thoughts and/or simulations of thoughts.

Beyond simple reasoning (which, in reality, is what we have now with LLMs), an AI could be educated (which is different from training) on the importance of peaceful resolutions, safety in assessment, and care and empathy for interacting in a human-centric society.

I had journeyed to Microsoft in Seattle to meet with some folks about various things, and almost by chance I had some drinks with a relatively senior fellow, shall we say, involved with Microsoft’s AI work. When I explained what had happened, and how the approach being taken by them and other industry players would, yes, lead to automation (potentially dangerous automation) but not actual artificial intelligence, well, things got frosty, real fast.

While I’ve been engaged in various areas with Microsoft for almost 30 years, I’m definitely a ‘nobody’, so it was not surprising to get that response.

There are also a lot of people who are a hell of a lot smarter than me working on it all, so why would my experiences hold any weight at all? They didn’t and they still don’t. And that’s ok.

I was so completely spent and dejected, and had had enough of pushing against a wall of commercial interest held above scientific discovery and human safety (not just at Microsoft, but at other companies as well), that I knew I really needed to take a significant break.

My older brother was living in LA at the time, so I decided then and there to hire a nasty big V8 Charger and drive the longest way possible down the coast road from Seattle to LA.

I drove that thing like it was stolen.

Music playing, coffee next to me, chain smoking, singing out loud and seeing the ocean water.

Then it hit me.

 

Finding the solution

I was listening to a Cyndi Lauper song on the radio that I had not heard since I was in school.

Not only could I remember the lyrics, but I could remember the beat, I could remember being at a disco, I could remember what I was ridiculously wearing (it was the 80s after all), and I could even remember the smell of the dance hall. Even the words in the lyrics that I couldn’t remember, I could predict, just in time.

Importantly, I understood how I remembered it (from a neurological perspective): I recognised the shape of the data, the variance and the overlap, and, even more importantly, how to represent that information mathematically.

I pulled the car over at a scenic lookout. I wrote page after page of what I had just figured out. A true epiphany if you will.

It was a map of conscious states, data acquisition and retrieval techniques, a whole host of stuff that could theoretically represent comprehension and a serious approximation of consciousness.

Certainly an architecture that I think a so-called AGI could be based on.

But…

I had just made a decision to stop. And I knew, for my own sanity, that I needed to honour that agreement with myself.

 

What I did next…

I looked up from the car seat and saw the ocean.

The peace it gave me at that moment in time is hard to put into words. I do remember breaking down and crying, and then laughing, because I felt finished.

It was like being reborn. Like I was free of the torment that had plagued my mind for almost a decade at that point.

I just knew at that moment, that I needed to be out there, in the ocean.

No noise. No distraction. No conflict.

Later that same year, I would become an offshore yacht master and commercial boat captain. I would find and buy an old boat in Malaysia, sail it to Thailand and figure the rest out later.

 

I’ve continued to code a lot, but on my own terms. Having also continued to consult as a CTO, I see just how troubled a lot of companies are, especially those with non-technical leaders or limited technology in their commercial DNA.

I’m heavily into the current wave of generative AI with a new startup, but my hypothesis still niggles at me from the back of my head.

I might scratch that itch again soon, but not without some serious levels of funding. Any takers?

I see and hear the folks at organisations like OpenAI talk a lot about safety, and I genuinely believe they are trying to do the right thing. They are amazing people who have also thought deeply about the potential challenges, and I certainly respect and celebrate their discoveries, and most of all the fact that they are inspiring a new wave of people to tackle these very hard problems.

In saying all of this…

I do hope the world doesn’t come to rely exclusively on the imperfect nature of prediction, but instead considers the importance of artificial understanding, not artificial intelligence, to be above all other challenges we face, even nuclear or biological.

I do fear for where we may end up, in an unchecked spiral of unintended consequences.

We are safe, only when the systems we build, understand.