On the real world, the internet and artificial intelligence


The disconnection between the real world and the virtual world in technology hurts both people and this planet. The ideal is to have the two worlds aligned so they are in harmony.

In South Korea, a couple who spent all their time raising a virtual child allowed their real child to starve to death. Despite all the advantages that the internet and artificial intelligence bring, there is a huge disconnection between the real world and the virtual world, in which an individual must choose between living in one or the other.

The virtual world traps people in fantasy and delusion.

It is easy to become trapped in the artificial bubble of cyberspace, neglecting real friends and even the basics of sleeping and eating. The internet offers experiences that have no relationship to the real world, which leads people to act and think in ways they never would if they were grounded in reality. Consider those caught grooming children for sex on the internet: hundreds of amateur hunters in the UK have made a national sport of drawing out predators and then getting them arrested by the police for grooming children. The stupidity, lack of empathy and wishful thinking of predators who fall for these crude, clumsy online traps never ceases to amaze me. It is beyond reason why adults turn up where children meet on the internet and share pictures of their penis, fetishes such as foot sucking, or sexual concepts that even I was ignorant of. It is disturbing to watch these people, once caught by the hunters, become broken as they realise that their jobs, friends, family, life, reputation and liberty are wiped out as the handcuffs go on, all for their inability to distinguish the real world from the fantasies of online life.

The disconnect between the real world and machines

There is also a disconnection with the real world when it comes to new technologies such as artificial intelligence and drones. There is a growing fashion for deploying drones and robots to deliver items such as food to the door. The problem with these delivery machines is the sheer number that will be deployed, leaving humans to cope with the hazards of thousands of machines running around in the air and on the paths. Likewise, when self-driving vehicles come onto the roads in numbers and have to decide who dies in a collision, will it be me over another person because they happen to have a better credit score?

So-called AI based on big data and neural networks bases its decision-making on data patterns that have no connection to reality, so if an individual falls outside the normal pattern because of their circumstances, they will be penalised for being different from the norm. Such AI now make decisions on credit and loan applications, whether children should be taken into care, what medication to give, and the type and length of criminal penalties such as jail time. In a growing number of larger corporations it is AI that now runs recruiting and hiring processes. As budgets shrink, public and private entities are using AI to run their processes and cut costs, based on flawed big-data patterns.
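
As a toy illustration of how a pattern-trained rule penalises anyone outside the norm, here is a minimal sketch in Python that scores a financially sound but atypical applicant. The fields, thresholds and scores are invented for illustration and describe no real lender or scoring system.

```python
# Hypothetical illustration: a rule distilled from the "normal" pattern
# (long, unbroken salaried employment predicts repayment) judges an
# applicant by how closely they fit that pattern, not by their reality.

def pattern_based_score(applicant):
    # Invented rule and numbers, standing in for a big-data model.
    return 1.0 if applicant["years_in_same_job"] >= 3 else 0.2

freelancer = {
    "years_in_same_job": 0,   # changes clients every few months
    "savings": 60_000,        # financially sound in reality
    "missed_payments": 0,
}

print(pattern_based_score(freelancer))  # 0.2: penalised for being different to the norm
```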

The disconnect between human and machine can create a backlash

It only takes one child killed in a collision with an Amazon delivery drone to trigger a huge public backlash against AI, robots and drones. As a positive supporter of AI, my concern is the disconnection between the real world and the virtual. How do we overcome this?

The real and virtual world can be in harmony

I argue that the real world and the virtual can be in harmony. The individual navigates the real world because the image imprinted in the brain is closely related to what exists in the environment, which is the difference between life and death when trying to cross a busy road, for instance. The brain has a reality-checking system that updates those images in real time, so that an individual does not fall under a bus that has appeared from around the corner.

Complexity theory without big data for a new type of AI

My argument is that the internet and its processes can be run by AI without the need for big data. All such an AI requires is a set of scripts, the ability to sense incoming information in real time, the capacity for reality testing, immediate updating of patterns, and the ability to act upon the reality of the moment without bias. In nature, the oak tree has scripts that trigger on simple feedback mechanisms, such as when to grow new leaves in the spring, when to produce acorns and when to shed leaves in the autumn, which gives it a sense of intelligence. The oak tree has no sense of reality, nor does it care; it has instead a set of purposes, sensors and scripts that trigger on feedback loops from what is going on in the environment. These systems are so in harmony with the real world that there is rarely a conflict between the oak tree and its environment; when there is, the oak tree suffers and can die.
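
To make the script-and-feedback idea concrete, here is a minimal sketch in Python of how such an AI might run without stored data: a handful of named scripts fire whenever real-time sensor readings cross simple thresholds. The sensor names, thresholds and actions are hypothetical illustrations of the oak-tree analogy, not a specification.

```python
# A minimal sketch: no stored history or big data, just named scripts that
# trigger when real-time sensor readings cross simple thresholds.
# Sensor names and threshold values are invented for illustration.

def read_sensors():
    # In a real system this would poll hardware; here we fake a reading.
    return {"day_length_hours": 14.5, "temperature_c": 16.0}

SCRIPTS = [
    # (name, trigger condition, action)
    ("grow_leaves", lambda s: s["day_length_hours"] > 12 and s["temperature_c"] > 10,
     lambda: print("growing new leaves")),
    ("produce_acorns", lambda s: s["day_length_hours"] > 14,
     lambda: print("producing acorns")),
    ("shed_leaves", lambda s: s["day_length_hours"] < 10,
     lambda: print("shedding leaves")),
]

def tick():
    """One real-time cycle: sense the present moment and act on it.
    Nothing is stored, so the next cycle starts from fresh readings."""
    sensed = read_sensors()
    for name, triggered, act in SCRIPTS:
        if triggered(sensed):
            act()

tick()
```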

My version of an AI has no need for vast amounts of memory or big data, but it does require lots of processing power: thousands of processors rather than one. I break the system down by function and allocate processors to meet each function. Every processor shares the same scripts, but each auto-selects which script comes into action depending on its function and on the feedback loops between it and its environment. The feedback loops are powered by sensors acting in real time. This design means that an AI, and an internet site run by one, could never be out of step with the real world unless there was a serious malfunction in the scripts and sensors. Nor is there any chance of the system malfunctioning because some processors fail: since they all share the same scripts, they rewire and replace the malfunctioning parts with alternatives. This is how a bee hive works: most bees, as could most AI processing units, can change tasks and function depending upon the needs of the hive or system.
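
A rough sketch of this shared-script, many-processor layout might look like the following. The function names, unit counts and single-pass rebalancing rule are my own illustrative assumptions, not a finished design.

```python
import random

# Every unit carries the same script table; its current function decides
# which script it runs.
SHARED_SCRIPTS = {
    "sense":   lambda: "reading the environment",
    "deliver": lambda: "moving an item",
    "repair":  lambda: "fixing a fault",
}

class Unit:
    def __init__(self, function):
        self.function = function
        self.alive = True

    def act(self):
        return SHARED_SCRIPTS[self.function]()

# Allocate processors by function (counts are illustrative).
units = ([Unit("sense") for _ in range(3)]
         + [Unit("deliver") for _ in range(5)]
         + [Unit("repair") for _ in range(2)])

def rebalance(units, needed):
    """Single rebalancing pass: if a function is under-staffed, a healthy
    unit switches to it, which is only possible because every unit shares
    the same scripts. A real system would repeat this until stable."""
    for function, count in needed.items():
        staffed = [u for u in units if u.alive and u.function == function]
        shortfall = count - len(staffed)
        spares = [u for u in units if u.alive and u.function != function]
        for u in spares[:max(0, shortfall)]:
            u.function = function

# Simulate a failed delivery unit and let the swarm rewire itself.
random.choice([u for u in units if u.function == "deliver"]).alive = False
rebalance(units, {"sense": 3, "deliver": 5, "repair": 2})
print({f: sum(1 for u in units if u.alive and u.function == f)
       for f in SHARED_SCRIPTS})
```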

The type of design I am working on is based on complexity theory, where intelligence, aliveness and consciousness are an emergent layer of what is going on in the parts. What is interesting and unpredictable is when one AI system starts communicating with others and self-organises into a team, so that a new super-entity emerges with another emergent layer. As an example, oak trees do not act alone in nature; they are networked as teams and work together through a fungal and chemical communication system running between their roots and leaves. A mother tree can send nutrients through the root-fungal network to a daughter tree, trees can tell each other when they are under attack, and they act as a team to deal with the threat.

In conclusion, the ideal is that the real world and the virtual world, of which the internet, AI and robots are a part, are in real-time harmony so that there is no conflict between them, with the real world taking the lead in defining how the technology is manifested. This type of system becomes organic, alive, intelligent and conscious.


On complexity theory


Complexity theory considers the world and nature as a set of systems, and creates solutions to challenges based upon how the parts of a system are connected and communicate.

There are moments when I feel like that fellow who told his colleagues that if they washed their hands, fewer of the patients they operated on would die; they did not believe him, and eventually had him locked up in a mental asylum for saying these “crazy” things.


The “crazy” thing I talk about is complexity theory, a set of thinking tools that treats the world and nature as systems. These tools are an alternative to the linear, reductionist tools that everyone uses to solve their problems.

I also feel like that guy who offers people two pills, the blue or the red one, the red pill being complexity theory, which wakes the individual to a different way of looking at reality. Even though complexity theory makes perfect sense and offers a diversity of new solutions with which to tackle the challenges of life and this world, for most people it appears to be too much of a leap; they select the blue pill and go on thinking and doing things as they have done before. I am somewhat stunned that in nine of every ten situations people go for the blue pill, and complexity theory remains something strange and unknown to most, even though this decades-old set of tools could be the primary way of solving political, economic, social and environmental challenges.

Complexity theory considers nature and our world to be a network of systems: to offer a practical solution to a challenge, it is better to look at how the parts of the system are connected to each other and to base solutions on the connections of the system rather than on one part. As an example, the homeless crisis in many cities is the product of a swarm of inter-related issues, yet decision-makers will only offer a small number of proposed solutions addressing one or two issues, without any understanding of how this impacts the system as a whole. It is no use, for instance, jailing a person for sleeping in a doorway in a city centre; more homeless people will come along to replace them, and the jailed individual will be back sleeping in the doorway when they leave jail. Nor does kicking homeless people out of one place work, as they simply move on to become an issue in another part of the city.

Because decision-makers rarely offer solutions to challenges based upon systems, they create a cobra effect, making the issue worse with their solution. The cobra effect is named after a solution that the rulers of India offered to the problem of people being bitten by cobras: they offered a reward for every dead cobra; enterprising people set up farms to breed cobras; when the rulers realised the scam, they stopped the scheme; the cobra breeders, no longer making money, set their snakes free, leaving India with more cobras than it started with.

I could of course stamp and scream in indignation at the choice most people make, especially the thinkers and decision-makers, in rejecting complexity theory in their planning and execution of solutions to the challenges of society, but I could instead see this as an opportunity to make a pile of money by offering products and solutions, based on complexity theory, that nobody else offers. Their loss, my gain.

The challenge of the black box in AI


Human relationships with AI can be cooperative and beneficial if AI are used, treated and taught properly. Tell them to kill humans, and they will learn that the lives of human beings are of low value.

A black box is a situation where, after feeding input into a system, people are unable to work out how the system arrived at its output. Amongst the anxieties over AI is the challenge of the black box. When a system arrives at a conclusion that is biased or hostile in the mind of the user, there are urgent calls to open up the processes for examination of how the system handles information. Governments are considering legislation demanding that AI developers make transparent how the systems they design process information.

It is delusional and backward to try to force developers to define how their AI black boxes work, since they do not know how their own processes work, and parallel systems are difficult to map.

The human brain is a black box, a mystery even to the self. With systems becoming more like the human brain, working on a parallel basis with millions or billions of connections, it is unsurprising that nobody knows how AI systems arrive at particular conclusions. If the developer is forced to go down the linear path with their AI, it ceases to be useful as a tool because of the limitations that linear systems impose.

I am personally interested in a new way of looking at AI using complexity theory. One bee is stupid, but ten thousand bees become self-organised and intelligent without any overall coordinator or leader. In complexity theory, when parts begin to differentiate and inter-relate with each other, there emerges a new layer of complexity that is greater than the sum of its parts, with its own properties and rules. The human mind is an emergent layer that has arisen out of the interactions of billions of brain cells. Emergent layers are a type of black box, as it is challenging to trace the layer back to any of its parts.
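
A small sketch can show this kind of self-organisation: many simple agents follow the same local rule, with no coordinator, yet a sensible division of labour emerges at the level of the whole. The task names, demand values and thresholds below are invented for illustration.

```python
import random

random.seed(1)  # reproducible illustration

TASKS = ["forage", "nurse", "guard"]
demand = {"forage": 0.8, "nurse": 0.5, "guard": 0.3}  # local demand signals

# Each bee has its own random threshold per task; no bee sees the whole hive.
bees = [{t: random.random() for t in TASKS} for _ in range(10_000)]

assignment = {t: 0 for t in TASKS}
for thresholds in bees:
    # Local rule: take on the task whose demand most exceeds your threshold.
    gaps = {t: demand[t] - thresholds[t] for t in TASKS}
    task = max(gaps, key=gaps.get)
    if gaps[task] > 0:
        assignment[task] += 1

# The colony-level division of labour tracks the demands, yet no individual
# bee or central coordinator ever decided it.
print(assignment)
```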

I believe that complexity theory holds the key to the true and powerful AI described as the Singularity. These types of AI will have processes and systems too complicated to understand, so people of the future will have to accept that if they want true intelligence like this, they will have to give up control and remain ignorant of how the brains of these systems work.

The ideal future is one in which the AI and the human work as a team, in partnership, rather than one becoming the slave of the other. I would like to see cooperation rather than control as the watchword of the new AI future. How humanity uses, teaches and treats the AI of the future will determine how these AI treat us.