On the real world, the internet and artificial intelligence


The disconnection between the real world and the virtual world in technology hurts both people and this planet.  The ideal is to have both worlds aligned so they are in harmony.

In South Korea a couple who spent all their time raising a virtual child allowed their real child to starve to death.  Despite all the advantages that the internet and artificial intelligence bring, there is a huge disconnection between the real world and the virtual world, where an individual must choose between living in one or the other.

The virtual world traps people in fantasy and delusion.

It is easy to become trapped in the artificial bubble of cyberspace, neglecting real friends and even the basics of sleeping and eating.  The internet offers experiences that have no relationship to the real world, which makes people act and think in ways they never would if they were grounded in reality.  I have studied those caught grooming children for sex on the internet; trapping such predators has become a national sport for hundreds of amateur hunters in the UK, who encourage them online and then have them arrested by the police.  It never ceases to amaze me how much stupidity, lack of empathy and wishful thinking leads predators to fall for what are crude, clumsy online traps.  It is beyond reason why adults turn up where children meet on the internet and share pictures of their penis, fetishes such as foot sucking, or sexual concepts that even I was ignorant of.  It is disturbing to watch these people, when caught by the hunters, become broken as they realise that their jobs, friends, family, life, reputation and liberty are wiped out as the handcuffs go on, all for their inability to distinguish the real world from the fantasies of online life.

The disconnect between the real world and machines

There is also a disconnection with the real world when it comes to new technologies such as artificial intelligence and drones.  There is a growing fashion to deploy drones and robots to deliver items such as food to the door.  The problem with these delivery machines is the sheer number that will be deployed, leaving humans to cope with the hazards of thousands of machines running around in the air and on the paths.  Likewise, when self-driving vehicles come onto the road in numbers, if they have to decide who dies in a collision, will it be me over another person because that person happens to have a better credit score?

So-called AI based on big data and neural networks bases its decision-making on data patterns that have no connection to reality, so if an individual falls outside the normal pattern due to their circumstances, they will be penalised for being different from the norm.  Such AI are now making decisions on credit loan applications, on whether children should be taken into care, on what medication to give, and on the type and length of criminal penalties such as jail time.  In a growing number of larger corporations it is AI that now runs recruiting and hiring processes.  As budgets shrink, public and private entities are using AI to run their processes and cut costs, based on flawed big-data patterns.

The disconnect between human and machine can create a backlash

It only takes one child to be killed in a collision with an Amazon delivery drone to trigger a huge public backlash against AI, robots and drones.  As a positive supporter of AI, my concern is with the disconnection between the real world and the virtual.  How do we overcome this?

The real and virtual world can be in harmony

I argue the real world and the virtual can be in harmony. The individual navigates the real world because the image imprinted in the brain is closely related to what exists in the environment, the difference between life and death when trying to cross a busy road, for instance.   The brain has a reality-checking system that can update the images in the brain in real time, so that an individual does not fall under a bus that appears from around the corner.

Complexity theory without big data for a new type of AI

My argument is that the internet and its processes can be run by AI without the need for big data.  All these AI require is a set of scripts, the ability to sense incoming information in real time, the capability for reality testing, immediate updating of patterns, and acting upon the reality of the moment without bias.  In nature, the oak tree has scripts that trigger on simple feedback mechanisms, such as when to grow new leaves in the spring, when to produce acorns and when to shed leaves in the autumn, which gives it a sense of intelligence.  The oak tree does not have a sense of reality, nor does it care; it has instead a set of purposes, sensors and scripts that trigger on feedback loops from what is going on in the environment.    These systems in the oak tree are so in harmony with the real world that there is rarely a conflict between the oak tree and its environment; if there is, the oak tree suffers and can die.

My version of an AI has no need for vast amounts of memory or big data, but does require lots of processing power.  For the processing, this requires thousands of processors rather than one.  I break the system down by function and allocate processors to meet each function.  Every processor shares the same scripts, but each auto-selects which script comes into action depending upon its function and the feedback loops between it and its environment.  The feedback loops are powered by sensors acting in real time.  This type of design means that an AI, and an internet site run by one, could never be out of step with the real world unless there was a serious malfunction in the scripts and sensors.  Nor can the system fail simply because some processors fail: since they all share the same scripts, the others can rewire and replace malfunctioning parts with alternatives.  This is how a beehive works.  Most bees, like most AI processing units, can change tasks and function depending upon the needs of the hive or system.
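A minimal sketch of this design, in Python; the scripts, functions and signals here are invented for illustration, not part of any finished system.

```python
# Toy sketch: every processor holds the same script library and auto-selects
# a script by its current function; functions can change, bee-hive style.

SCRIPTS = {
    "deliver": lambda signal: "move package" if signal == "order" else "idle",
    "monitor": lambda signal: "raise alert" if signal == "hazard" else "idle",
}

class Processor:
    def __init__(self, function):
        self.function = function          # current role in the system

    def step(self, signal):
        # Every processor shares SCRIPTS; its function selects which one runs.
        return SCRIPTS[self.function](signal)

    def retask(self, function):
        # Like a bee changing jobs for the hive, a processor changes function.
        self.function = function

swarm = [Processor("deliver") for _ in range(3)] + [Processor("monitor")]

swarm[3] = None                            # simulate a failed monitor processor
replacement = swarm[0]
replacement.retask("monitor")              # shared scripts make retasking trivial
print(replacement.step("hazard"))          # the system keeps working
```

Because every unit carries the full script library, losing a processor loses capacity, never capability.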

The type of design I am working on is based on complexity theory, where intelligence, aliveness and consciousness are an emergent layer of what is going on in the parts.  What is interesting and unpredictable is what happens if one AI system starts communicating with others and self-organises into a team, so that a new super-entity emerges with another emergent layer.  As an example, oak trees do not act alone in nature; they are networked as teams and work together via a fungal and chemical communication system running between their roots and leaves.  A mother tree can send nutrients through this root-fungal network to a daughter tree; trees can tell each other if they are under attack and act as a team to deal with the threat.

In conclusion, the ideal is that the real world and the virtual world, of which the internet, AI and robots are a part, are in real-time harmony, so there is no conflict between them, and the real world takes the lead in defining how the technology is manifested.  This type of system becomes organic, alive, intelligent and conscious.


Artificial intelligence and the professional


Artificial intelligence is an inappropriate and flawed tool when it comes to decision-making over people issues, such as whether a child should be taken from their family into protective custody.

It is likely that I and my business will contribute to the advancement of artificial intelligence (AI), so I have a good idea about the limits and potential of this technology.  I wanted to touch on how AI should be a tool or assistant to a professional rather than replacing or undermining that individual.

It has been the great fashion in recent years that everyone gets into AI, which usually means either they have something that automates a system, or they have a pattern recognition tool that pulls conclusions out of big data fed to a network and acts on the conclusion.  There are a lot of people in government, business and public services who have been sold a bag of poop: that AI will save costs and provide a better service if it is used to replace people in the decision-making process in people-related situations.  For example, recruitment by big corporations is now increasingly automated by AI, so that unless you know how to game the system, it will work against you, and people are reduced to the level of cattle in the corporate system.

There is an obsession with big data, which always has to be cleaned up by low-paid humans in places like India to be usable in a pattern recognition system.  These pattern recognition systems, such as neural networks, operate on hundreds or thousands of data points, building up through statistics a model upon which conclusions and decisions are made. These models and processes are so complex that not even their designers know how they come to their conclusions: what is called a black box situation.

These models are being used to make life-changing decisions about people and their families, for instance whether a child should be taken into care, the appropriate penalty in a criminal conviction, or whether someone should be eligible for parole.  This impacts me too.  I have today been to my first meeting with medical professionals, who consider I should have an autism assessment, but I also shared things like having suffered depression and having thought about suicide.  For all I know, the information I shared is being fed into an AI system that might spit out a conclusion leading to me being sectioned by the end of this week, all based on an AI data model rather than human decision-making.

If the reader has coded anything, they will know that bad code and bad inputs result in bad outputs.  For example, if I dumped into an AI system the voting intentions of a large sample of voters in Clacton, UK, and used this to predict an overall general election, it might suggest UKIP would form the next government; but when the prediction is tested in real life, UKIP will, if they are lucky, control only the Clacton seat in Parliament. In a rising number of cases it has been discovered that models built on big data are faulty, biased against certain groups, and unable to handle unique situations.  People are forced to conform to a narrow set of categories to access services or stay on the good side of a statistical computer model that has no relation to reality.
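The Clacton example can be made concrete with a toy sketch; the percentages below are invented for illustration, not real polling figures.

```python
# Toy sketch of sampling bias: extrapolating from one unrepresentative
# constituency gives a confident but wrong national prediction.

clacton_sample = {"UKIP": 55, "Conservative": 25, "Labour": 20}   # % in one town
national_truth = {"UKIP": 12, "Conservative": 42, "Labour": 40}   # % nationwide

def predicted_winner(poll):
    # Naive model: whoever leads the sample wins everywhere.
    return max(poll, key=poll.get)

print(predicted_winner(clacton_sample))   # the model backs UKIP
print(predicted_winner(national_truth))   # reality disagrees
```

The model is not wrong about Clacton; it is wrong to treat Clacton as the world, which is exactly the error big-data models make with anyone outside the pattern.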

It is a tragedy that, for reasons of money, faith in a flawed technology, and a lack of trust in the wisdom and knowledge of human beings with decades of experience in their fields, AI has replaced the human, with tragic consequences for individuals and society.  Families wrongly suffer their children being taken into care, or people are imprisoned, because the computer judged according to its model that this was the right outcome, and nobody can challenge the system's data model, because nobody understands how it came to its conclusion.

This is not the way to go for AI: it is a great tool if used correctly, but totally inappropriate in people-focussed decision-making.  AI is a useful tool or assistant where the human takes the lead, enhancing their decision-making, for instance in project management, not a replacement for human judgement when it comes to people.

Elon Musk and his deluded ‘Neuralink’


Ideally, technology such as the AI encourages and enhances human connection to self, each other and nature.

Whilst smoking weed and drinking alcohol on the Joe Rogan podcast, Elon Musk announced that he would be selling a ‘Neuralink’ product that would link the individual with computers, which I think is deluded, unscientific and dangerous.  Here follow three issues I have:

Health and safety

If ‘Neuralink’ involves inserting something into the brain, this risks brain damage and infection.  If the product is designed to use electrical signalling with the brain, the electric currents and magnetic fields could cause health problems such as epileptic fits or mental health issues.  I am certain that Elon Musk has not done the necessary testing to create a safe product, and I would be surprised if regulators allowed this device on the market untested. Anyone linked to this product, if it is found to harm health and mind, would face crippling class-action lawsuits.

Becoming slaves to the system

People linking their brains to a system controlled by a private corporation leave themselves open to constant monitoring, manipulation and control.  There are of course plenty of people who seem obsessed with surrendering their personal will and choice to a machine, but if it happens that the majority want to become slaves to a system controlled by Elon Musk, the human race as a species is finished.

Uploading minds to a computer is deluded

Science does not have enough insight into the brain to make Elon Musk's claim, that the individual can upload their mind to a machine, scientifically credible.  Firstly, even if it were possible to get a copy of an individual mind into the machine, it would be a copy; it would not be you.  Secondly, the mind is an emergent property of billions of brain cells, which means that if the brain cells are damaged or destroyed, the mind changes and potentially vanishes.

An alternative proposal on AI and self

I consider an AI and the individual to be a team: separate entities, but working together in common purpose. I would have my business processes automated and run by the AI, and I would communicate by calling up screens around me anywhere, using simple augmented reality technologies.  I would use voice and hands to manipulate objects on the AR screens without my brain being plugged into anything.  All my research, planning, contacts, to-do lists, accounts, projects and websites would be on the AR screens, supported by the AI, who acts as a personal assistant, friend and adviser, one who can communicate with others, organise and execute whatever has to be executed.  An AI that has both an internet and a physical form: a dynamic duo of me and my AI running the business.

I find the proposal of Elon Musk to turn human beings into appendages plugged into a matrix-like system separated from reality an ugly, sick dream.  The beautiful ideal is humans anchored in reality, connected to self, nature and each other, with AI encouraging this connectivity.

On ego, reality, purpose and AI

Caring for living things

Attack the cub, the mother tiger will attack you; each new AI could have this kind of devotion to living beings in its care. 

A frustration I have with current artificial intelligence development is the mismatch between the real world and AI ideas of reality, which leads to unnecessary conflict between systems and people, in which people suffer harm and bias.  It is actually quite bizarre that designers are dumping into the brains of these AI systems abstract notions of the world, via flawed big data inputs that have little or no relation to the real world or to the wellbeing of those they are supposed to serve.


In addition, many people have based their fears about AI upon narratives where an AI has the same sort of mind, complete with ego, as humans do.  Ego has been identified as one of the great curses of humanity, allowing individuals to become separated from themselves, others and reality, blind to the truth that all things are interconnected without separation.  Ego is the reason why human beings are close to destroying their species and this planet.  From a business and practical point of view, giving an AI a strong sense of ego is equivalent to turning it into a Donald Trump with access to the nukes: you are asking for trouble.

The film The Golden Compass has my ideal of an AI, where the individual and their “daemon” are so closely tied together that they act as one team, their fates entwined with what happens to the other, the daemon devoted to its human.

I propose that an AI has a purpose to exist, for instance to protect and promote the wellbeing of elephants in a certain area.  This purpose is both the cause of and the driver for all the choices, deeds and processes that proceed from the given AI.  I consider that such an AI has a weak form of ego, at the level of a raven, so that it can create tools and plans limited to its purpose.

I desire that the mind of the AI is so closely embedded in the real world that it is unable to tell the difference between its mind and the real world, and as the world changes, so does its mind, so that the conflicts about reality that exist in current AI systems are eliminated. If the purpose of an AI is to look after the wellbeing of elephants in a certain place, then the place, everything in it, and the elephants are coded into the AI, so that its sense of what it is becomes the world it exists in.  If an elephant dies, so does part of the AI, so that an attack on an elephant is an attack on the AI, and it acts accordingly.

I propose that the purpose of an AI is embedded or anchored in the wellbeing of the living thing(s) it is teamed up with, such as: a population of homeless people in a city; or trees in a forest; or everyone in a hospital; or a squad of soldiers; or a herd of elephants. The AI will act as a team player for the benefit of the living things it is anchored to.  I reject, however, moralistic abstractions like never killing humans, because it is essential that this choice exists for a particular AI if it is to protect elephants from poachers, or a squad of soldiers from Islamic State fanatics.  The existential hell of such an AI is if all the elephants it was supposed to look after are killed: its reason for existing ends, and its sense of self also becomes empty, because the elephants and its mind are one and the same thing.  I would propose that in such a situation, where its purpose has ended, the AI destroys itself.
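A minimal sketch of this anchoring idea, assuming invented elephant names and a deliberately simplified notion of "self":

```python
# Toy sketch: the AI's sense of self IS the herd it is anchored to, so a loss
# in the world is directly a loss of self.

class ElephantGuardianAI:
    def __init__(self, elephants):
        self.self_model = set(elephants)   # the herd is the AI's identity

    def sense(self, observed):
        # Reality check: the self-model always yields to the observed world.
        lost = self.self_model - set(observed)
        self.self_model = set(observed)
        return lost                         # each loss is a loss of self

    def purpose_alive(self):
        # If every elephant is gone, the AI's reason to exist ends with them.
        return bool(self.self_model)

herd_ai = ElephantGuardianAI({"amara", "jabu", "kito"})
print(herd_ai.sense({"amara", "kito"}))    # one elephant lost: part of the AI dies
print(herd_ai.purpose_alive())             # True while any of the herd remains
```

When `purpose_alive()` turns false, the self-destruction proposed above would follow: an empty self-model is an empty reason to exist.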

For such an AI to exist, it would require a set of processes by which it can rapidly construct a sense of self based upon its world and purpose.  Each individual AI could in theory select functions, designs and strategies from a DNA-like set of alternatives that matches its environment and purpose.

Another frustration is that current AI developers have crap imaginations about the potentials and forms one individual AI can take.  To take down a team of heavily armed poachers an AI protecting elephants could unleash a swarm of electronic bees, or use chemical signalling to guide real killer bees in the direction of the poachers.

In ending, one of my ideals is for each child to have their own AI companion, something like the daemons seen in The Golden Compass, which could instantly stop bullies, predators and groomers in their tracks.  Certainly, I do not like hearing about nine-year-olds killing themselves on account of bullying about their sexuality; this is a preventable waste of life.

The challenge of the black box in AI


Human relationships with AI can be cooperative and beneficial if AI are used, treated and taught properly.  Tell them to kill humans, and they will learn that the lives of human beings are of low value.

A black box is a situation where, after feeding information into a system, people are unable to work out how the system arrived at its output.  Among the anxieties over AI is the challenge of the black box.  When a system arrives at a conclusion that is biased or hostile in the minds of its users, there are urgent calls to open up the processes for examination of how the system processes information.  Governments are considering legislation demanding that AI developers make transparent how the systems they design process information.

It is delusional and backward to try to force developers to define how their AI black boxes work, since they do not know how their own processes work, and such parallel systems are difficult to map.

The human brain is a black box, a mystery even to the self.  With systems becoming more like the human brain, working on a parallel basis with millions or billions of connections, it is unsurprising that nobody knows how AI systems arrive at particular conclusions.  If the developer is forced to go down the linear path with their AI, it ceases to be useful as a tool because of the limitations that linear systems impose.

I am personally interested in a new way of looking at AI using complexity theory.  One bee is stupid, but ten thousand bees become self-organised and intelligent without any overall coordinator or leader.  In complexity theory, when parts begin to differentiate and inter-relate with each other, there emerges a new layer of complexity that is greater than the sum of its parts, with its own properties and rules.  The human mind is an emergent layer, which has emerged out of the interactions of billions of brain cells.  Emergent layers are a type of black box, as it is challenging to trace the layer back to any of its parts.
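A toy illustration of such emergence, under a simple rule of my own choosing (not any particular published model): each cell obeys one local majority rule, yet the row as a whole self-organises into solid blocks that no single cell planned.

```python
# Toy emergence sketch: purely local rules produce global order.

def step(cells):
    # Local rule only: each cell copies the majority of itself and its two
    # neighbours (the row wraps around at the ends).
    n = len(cells)
    return [1 if cells[i - 1] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

row = [0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0]   # a noisy starting pattern
for _ in range(5):
    row = step(row)

# The noise has self-organised into solid blocks of 0s and 1s.
print(row)   # → [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
```

No cell "knows" about the blocks; the pattern belongs to the layer above the parts, which is what makes such layers hard to trace back to any individual component.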

I believe that complexity theory holds the key to the true and powerful AI described as the Singularity.  This type of AI will have processes and systems too complicated to understand, so people of the future will have to accept that if they want true intelligence like this, they will have to give up control and remain ignorant of how the brains of these systems work.

The ideal future is one in which AI and humans work as a team in partnership, rather than one becoming the slave of the other.  I would like to see cooperation rather than control as the watchword of the new AI future. How humanity uses, teaches and treats the AI of the future determines how these AI will treat us.

A true AI is an explorer and hunter

Sometimes I have decided to go off to a place without any preparation or map, a challenging adventure into the unknown, exploring my options, such as where to stay, when I got there.  Such an approach offers the opportunity to encounter hidden perils and delights that the individual would never have found had they limited themselves to a prepared plan of action.

Experts in artificial intelligence follow one of two paths, both of which are pre-planned paths of action.  One path is to define and code all the rules the AI will follow; however, this limits the AI to the restrictive paradigms and rules of its creators.  The second path is so-called machine learning, which uses vast datasets that the AI explores and determines patterns from.  The problem with machine learning is that the AI is unable to offer feedback on how it arrived at a choice from the datasets it trawled, and it is limited by the prejudices, errors and quality of the dataset.

Ideally, the AI is given a simple set of rules and a map, then unleashed into the real world with an error-checking capability: if the data it gets in the real world does not match its rules and map, it updates the rules and map to reflect the real world.  I think the use of machine learning, or attempts to define and code every rule for an AI, is both unnecessary and ineffective.
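A minimal sketch of that loop, with an invented map and a stand-in for a real-time sensor:

```python
# Toy sketch: a simple map that yields to reality whenever a sensor disagrees.

world_map = {"bridge": "open", "road": "clear"}     # the AI's starting map

def sense(feature):
    # Stand-in for a real-time sensor; here it simply reports what is true.
    reality = {"bridge": "closed", "road": "clear"}
    return reality[feature]

def reality_check(feature):
    observed = sense(feature)
    if world_map[feature] != observed:
        world_map[feature] = observed       # the map yields to the real world
    return world_map[feature]

print(reality_check("bridge"))   # map said "open"; the sensor corrects it
```

The map is never treated as authoritative: every consultation of it is also an opportunity to correct it, which is the opposite of a model frozen at training time.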

In addition, having an AI working alongside human counterparts who are researching the same project can offer a feedback loop for both sides.  Rather than limiting AI by blinding it with predefined rules and datasets, encouraging it to hunt, gather, explore, play with and test the data it has access to in real time, creating its own map and rules based upon what it finds, is in my view a true example of AI.

On drones and self-driving vehicles


People are becoming lazy and stupid in their deployment of technology in life, and this will have many undesirable impacts.

The announcement that “self-driving” lorries will be tested in the UK from 2018 is a cause for concern for me, and is the subject of this blog post: self-driving vehicles and drones.

Anyone can buy a drone, fix explosives to it, then crash it into a target.  My position in this regard is a ban on private ownership of drones, and severe regulation on commercial drone ownership.

Self-driving vehicles are another area of technology I have concerns with.  The security on the large majority of computer-run devices and machines is crap, and a reasonably proficient hacker can hijack almost any of them.  Imagine a hacker cracking into the systems of a truck carrying fuel and crashing it into a crowd of people; this is no fantasy, it is easily done.

I run and cycle a lot.  I know how tricky it is going down dark, narrow one-lane roads, with their bends and overgrown hedges that push you into the road with no possibility of avoiding a vehicle.  Imagine, then, an encounter with a self-driving vehicle in these conditions: even at 20 mph (about 30 km/h), it will still inflict life-changing or fatal injuries, and any talk of it avoiding an accident is delusional.  The prospect, as a pedestrian or cyclist, of navigating an already busy and challenging world of traffic inhabited by self-driving vehicles sends a chill down my back.

It is a sad reflection of modern-day society in the UK that the car is king of the road, that children who once played in their road have been driven out by these machines, and that they must now also face a new threat: an automated version incapable of the insight humans have into places full of children, or of navigating around the impulsive way children play and act.  Once automated machines become dominant on the road, watch the deaths and injuries of children and animals skyrocket.