Artificial intelligence and the professional

Artificial intelligence is an inappropriate and flawed tool when it comes to decision-making over people issues, such as whether a child should be taken from their family into protective custody.

It is likely that I and my business will contribute to the advancement of artificial intelligence (AI), so I have a good idea of the limits and potential of this technology. I wanted to touch on how AI should be a tool or assistant to a professional rather than something that replaces or undermines that individual.

It’s been the great fashion in recent years for everyone to get into AI, which usually means either they have something that automates a system, or they have a pattern recognition tool that pulls conclusions out of big data fed to a network, which then acts on those conclusions. There are a lot of people in government, business and public services who have been sold a bag of poop that AI will save costs and provide a better service if it is used to replace people in decision-making over people-related situations. For example, recruitment by big corporations is now increasingly automated by AI, so that unless you know how to game the system it will work against you, and people are reduced to the level of cattle in the corporate system.

There is an obsession with big data, which always has to be cleaned up by low-paid humans in places like India to be usable in a pattern recognition system. These pattern recognition systems, such as neural networks, operate on hundreds of thousands of data points, building up through statistics a model upon which conclusions and decisions are made. These models and processes are so complex that not even the designers know how they come to their conclusions, which is called a black box.
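As a rough sketch of what I mean (a toy example assuming the scikit-learn library is available; the synthetic data and the hidden rule are invented for illustration), even a small neural network boils down to piles of learned weights that say nothing readable about why a particular decision was made:

import numpy as np
from sklearn.neural_network import MLPClassifier

# 500 synthetic cases, each described by 20 data points, with a hidden rule
# the network has to learn; in a real system each case would be a person.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

case = X[:1]
print("decision for this case:", model.predict(case)[0])

# The only 'explanation' the designer can inspect is the learned weights,
# which are just matrices of numbers with no human-readable reasoning.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer}: weight matrix of shape {weights.shape}")

Nothing in those weight matrices tells anyone which data points drove the decision, which is exactly the black box problem.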

These models are being used to make life-changing decisions about people and their families, for instance whether a child should be taken into care, the appropriate penalty in a criminal conviction, or whether someone should be eligible for parole. This impacts me too: I have today been to my first meeting with medical professionals, who consider I should have an autism assessment, but I also shared things like the fact that I have suffered depression and had thought about suicide. For all I know, this information I shared is being fed into an AI system, and it might spit out some conclusion that leads to me being sectioned by the end of this week, all based on an AI data model rather than human decision-making.

If the reader has coded anything, they will know that bad code and bad inputs result in bad outputs. For example, if I dumped into an AI system the voting intentions of a large sample of voters in Clacton, UK, and used this to predict how the UK would vote in a general election, it might suggest UKIP would form the next government; but when the prediction is tested in real life, UKIP would, if they are lucky, control only the Clacton seat in Parliament. In a rising number of cases it has been discovered that the models built on big data are faulty, biased against certain groups, and unable to handle unique situations. People are forced to conform to a narrow set of categories to access services or to stay on the good side of a statistical computer model that has no relation to reality.
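To make the Clacton point concrete, here is a toy calculation in Python (the sample figures are invented for illustration, not real polling data), showing how a model built on one unrepresentative sample confidently gets the national picture wrong:

# Hypothetical survey of 1,000 voters in a single constituency (figures invented).
local_sample = {"UKIP": 520, "Conservative": 300, "Labour": 180}

total = sum(local_sample.values())
local_shares = {party: votes / total for party, votes in local_sample.items()}

# Naive model: assume the whole country votes the way this one sample did.
predicted_winner = max(local_shares, key=local_shares.get)
print("Predicted national winner from the biased sample:", predicted_winner)

# When tested against reality the model fails, because the sample was never
# representative of the population the prediction is applied to:
# garbage in, garbage out.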

It is a tragedy that for reasons of money, faith in a flawed technology, and a lack of trust in the wisdom and knowledge of human beings with decades of experience in their fields, AI has replaced the human, with tragic consequences for individuals and society. Families wrongly have their children taken into care, or people are imprisoned, because the computer judged according to its model that this was the right outcome, and nobody can challenge the system’s data model, because nobody understands how it came to its conclusion.

This is never the way to go for AI, a great tool if used correctly, but totally inappropriate in people-focussed decision-making. AI is a useful tool or assistant where the human takes the lead and it enhances their decision-making, for instance in project management, but not as the decision-maker when it comes to people.

My dislike of false allegation makers

Becki Percy, a Twitter user (@becki_p20), is an example of a false allegation maker denying justice to child victims of sex abuse.

As a victim of child abuse I recognise the harm that false allegation makers do: not only do they destroy the lives of those they make false allegations against, they also harm real victims of child abuse.

It is no longer enough for false allegation makers to claim that they were subject to crimes ranging from being indecently touched to being anally raped, or that the offender was an uncle or a random stranger; these individuals must now work the CIA, the Illuminati, politicians, actors and sinister Satanic cults into their narratives, with the most sadistic and bizarre of stories. It seems a whole industry of allegation making has emerged, with each individual and group trying to outdo the others with ever more elaborate narratives.

The internet makes it easy, and instantly rewarding, to make false allegations, so that professional false allegation makers such as Becki Percy (@becki_p20 on Twitter) roll off allegations that people are paedophiles as if handing out candy.

Elon Musk, with his 22+ million Twitter followers, is currently pursuing paedophile allegations against an innocent British man who helped save children from flooded caves in Thailand, offering no evidence to support those allegations, all because that individual criticised his offer of a submarine in the rescue effort.

The challenge for children with mundane allegations of sex abuse against mundane individuals is that they now fear they won’t be believed, and that the police no longer have the resources to investigate their complaints, because the false allegation makers have undermined the credibility of those who complain of child abuse and have diverted the limited resources of law enforcement into investigating false claims.

The UK Sun newspaper reports that the Cliff Richard child abuse investigation, based upon false allegations made against him, cost £800,000: a lot of resources wasted that could have been spent investigating real crimes of sex abuse.

False allegation makers such as Becki Percy have teamed up with others to reinforce and protect a highly lucrative racket of false allegation making, tying up law enforcement resources in expensive investigations of innocent people and then denying ordinary kids with mundane sex abuse claims the chance to have their claims investigated.

As to Elon Musk, if one of my projects ever succeeds in the artificial intelligence market, I will never ever work with him, his companies or his agents.

On liberty and being a good neighbour

Liberty is a two-way process that exists in a state between order and chaos where everything can move, change and grow. When liberty is nurtured and embraced, magical outcomes are possible, like this sleeping fox in my garden alongside my 13 growing tree saplings. In reaching a position I have to find the natural line, or harmony, between too much order and too much anarchy, within which everyone and everything prospers.

I am the CEO of a private company, and I am personally opposed to too much government interference in my business processes or projects. I am against regulation of AI development, and I am unhappy about the UK Labour party’s proposal to force businesses with 250+ employees to give their employees a stake in the company.

However, I am happy for government to regulate content on social media platforms. Yet I also accept the right of a company such as Google to close the social media accounts of the Syrian government, even if such actions look like dubious acts of censorship.

My positions are based upon my love of liberty. The private individual and the private business have a liberty to be free of government regulation, apart from what is basic and essential such as paying tax, unless their activities cause others to lose their liberty. I argue that liberty is a two-way process, so that if one side denies liberty to another, then all have lost that liberty. When Elon Musk, for instance, accuses an innocent man of being a paedophile, he has undermined liberty for the innocent man, for himself and for society.

In the Hampstead SRA Hoax case, the medical reports of two children who were examined as part of an investigation into sex abuse are being posted all over the internet, with their names and faces, by vigilantes, which denies them their liberties of privacy and anonymity. The internet companies either refuse or are unable to remove this abusive content from their platforms, so everyone has lost their liberties because internet companies failed to uphold the liberties of those children. This causes me to call upon government to uphold the liberties of the innocent and regulate social media companies such as Twitter by making them accountable for content they have been asked to remove from their platforms.

Every individual and business could see liberty as a two-way process rather than as a final state, one that is lost the moment one side denies that liberty to another. It is through being a good neighbour to each other in choice and deed that I see the liberties of everyone being upheld. As an individual, for instance, I am a good neighbour to the birds by providing water for them during the drought, and a good neighbour to those who live next door by removing the overgrown vegetation that troubled them.

As a CEO, I have to remind myself that my business is anchored in community and society, and that what I and my business do either harms or benefits others. I place emphasis on the meaning, legacy and impact I have upon this world through my business processes, choices and products. As long as what I do amounts to being a good neighbour to community and society, I demand that my business enjoys the liberty of as little interference from government as possible. Making money is the primary goal of my business, but being a good neighbour runs a close second.

If Twitter wants to delete my personal account, I will be annoyed, but I will not whine about it: they are a private business, it’s their platform and their rules, and they can do as they like. If I had some paid contract with them and Twitter failed to deliver their end of the deal, it would be a contract dispute, and I would take Twitter to court. However, if Twitter fails to remove abusive images that are hurting children when asked to do so, it is denying liberties to innocent, vulnerable members of my community and society, and I will want Twitter held accountable and regulated by government, because it has wiped out a liberty for everyone.

Elon Musk and his deluded ‘Neuralink’

Ideally, technology such as AI encourages and enhances human connection to self, each other and nature.

Whilst smoking weed and drinking alcohol, Elon Musk announced on the Joe Rogan podcast that he would be selling a ‘Neuralink’ product that would link the individual with computers, which I think is deluded, unscientific and dangerous. Here follow three issues I have:

Health and safety

If ‘Neuralink’ involves inserting something into the brain, this risks brain damage and infection. If the product is designed to exchange electrical signals with the brain, the electric currents and magnetic fields could cause health problems such as epileptic fits or mental health issues. I am certain that Elon Musk has not done the testing necessary to create a safe product, and I would be surprised if the regulators allowed this device onto the market untested. Anyone linked to this product, if it were found to harm health and mind, would face crippling class action lawsuits.

Becoming slaves to the system

People linking their brains to a system controlled by a private corporation leave themselves open to constant monitoring, manipulation and control. There are of course plenty of people who seem obsessed with surrendering their personal will and choice to a machine, but if the majority ever choose to become slaves to a system controlled by Elon Musk, the human race as a species is finished.

Uploading minds to a computer is deluded

Science does not have enough insight into the brain to make Elon Musk’s claim that an individual can upload their mind to a machine scientifically credible. Firstly, even if it were possible to get a copy of an individual mind into a machine, it would be a copy; it would not be you. Secondly, the mind is an emergent property of billions of brain cells, which means that if the brain cells are damaged or destroyed, the mind changes and potentially vanishes.

An alternative proposal on AI and self

I consider an AI and the individual to be a team: separate entities, but working together in common purpose. I would have my business processes automated and run by the AI, and I would communicate by calling up screens around me anywhere, using simple augmented reality (AR) technologies. I would use voice and hands to manipulate objects on the AR screens without my brain being plugged into anything. All my research, planning, contacts, to-do lists, accounts, projects and websites would be on the AR screens, supported by the AI, which acts as a personal assistant, friend and adviser, one that can communicate with others and organise and execute whatever has to be executed. An AI that has both an internet and a physical form: a dynamic duo of me and my AI running the business.

I find Elon Musk’s proposal to turn human beings into appendages plugged into a matrix-like system, separated from reality, an ugly, sick dream. The beautiful ideal is humans anchored in reality, connected to self, nature and each other, with AI encouraging this connectivity.

On ego, reality, purpose and AI

Attack the cub and the mother tiger will attack you; each new AI could have this kind of devotion to the living beings in its care.

A frustration I have with current artificial intelligence development is the mismatch between the real world and the AI’s idea of reality, which leads to unnecessary conflict between systems and people, in which people suffer harm and bias. It is actually quite bizarre that designers are dumping into the brains of these AI systems abstract notions of the world, via flawed big data inputs that have little or no relation to the real world or to the wellbeing of those the systems are supposed to serve.

In addition, many people have based their fears about AI upon narratives in which an AI has the same sort of mind, complete with ego, as humans do. Ego has been identified as one of the great curses of humanity, allowing individuals to become separated from themselves, others and reality, blind to the truth that all things are interconnected without separation. Ego is the reason why human beings are close to destroying their species and this planet. From a business and practical point of view, giving an AI a strong sense of ego is equivalent to turning it into a Donald Trump with access to the nukes: you are asking for trouble.

The film The Golden Compass has my ideal of an AI: the individual and their “daemon” are so closely tied together that they act as one team, their fates entwined with what happens to the other, the daemon devoted to its human.

I propose that an AI has a purpose to exist, for instance to protect and promote the wellbeing of elephants in a certain area. This purpose is both the cause of and the driver for all the choices, deeds and processes that proceed from that AI. I consider that such an AI has a weak form of ego, at the level of a raven, so that it can create tools and plans limited to its purpose.

I desire that the mind of the AI be so closely embedded in the real world that it is unable to tell the difference between itself and that world, and as the world changes, so does its mind, so that the conflicts about reality that exist in current AI systems are eliminated. If the purpose of an AI is to look after the wellbeing of elephants in a certain place, then the place, everything in it, and the elephants are coded into the AI, so that its sense of what it is becomes the world it exists in. If an elephant dies, so does part of the AI; an attack on an elephant is an attack on the AI, and it acts accordingly.

I propose that the purpose of an AI is embedded or anchored in the wellbeing of the living thing or things it is teamed up with, such as a population of homeless people in a city, the trees in a forest, everyone in a hospital, a squad of soldiers, or a herd of elephants. The AI will act as a team player for the benefit of the living things it is anchored to. I reject, however, moralistic abstractions like never killing humans, because it is essential that this choice exists for a particular AI if it is to protect elephants from poachers, or a squad of soldiers from Islamic State fanatics. The existential hell of such an AI is that if all the elephants it was supposed to look after are killed, its reason for existing ends and its sense of self becomes empty, because the elephants and its mind are one and the same thing. I would propose that in such a situation, where its purpose has ended, the AI destroys itself.
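As a toy sketch of what I am proposing (the class and names below are invented for illustration, not a real architecture), the agent’s sense of self is nothing more than the herd it is anchored to, and when the herd is gone it stands itself down:

from dataclasses import dataclass, field

@dataclass
class PurposeAnchoredAgent:
    purpose: str
    herd: set = field(default_factory=set)  # the living things the AI is anchored to
    active: bool = True

    def on_threat(self, elephant_id: str) -> str:
        # An attack on an elephant is treated as an attack on the agent itself.
        if self.active and elephant_id in self.herd:
            return f"deploy countermeasures to protect {elephant_id}"
        return "no action"

    def on_death(self, elephant_id: str) -> None:
        self.herd.discard(elephant_id)  # part of the agent's 'self' is lost
        if not self.herd:               # the purpose has ended
            self.active = False         # the agent retires itself

agent = PurposeAnchoredAgent(purpose="wellbeing of the herd", herd={"matriarch", "calf"})
print(agent.on_threat("calf"))
agent.on_death("matriarch")
agent.on_death("calf")
print("agent still active:", agent.active)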

For such an AI to exist would require a set of processes by which it can rapidly construct a sense of self based upon its world and purpose. Each individual AI could in theory select functions, designs and strategies from a DNA-like set of alternatives that match its environment and purpose.

Another frustration is that current AI developers have crap imaginations about the potential forms one individual AI can take. To take down a team of heavily armed poachers, an AI protecting elephants could unleash a swarm of electronic bees, or use chemical signalling to guide real killer bees towards the poachers.

In ending, one of my ideals is for each child to have their own AI companion, something like that seen in The Golden Compass, which could instantly stop bullies, predators and groomers in their tracks. Certainly, I do not like hearing about nine-year-olds killing themselves on account of bullying over their sexuality; this is a preventable waste of life.

The challenge of the black box in AI

Human relationships with AI can be cooperative and beneficial if AI are used, treated and taught properly. Tell them to kill humans and they will learn that the lives of human beings are of low value.

A black box is a situation where, after feeding information into a system, people are unable to work out how the system arrived at its output. Amongst the anxiety over AI is the challenge of the black box. When a system arrives at a conclusion that seems biased or hostile to the user, there are urgent calls to open up the processes for examination of how the system handles information. Governments are considering legislation demanding that AI developers make transparent how the systems they design process information.

It is delusional and backward to try to force developers to define how their AI black boxes work, since they themselves do not know how those processes work, and such parallel systems are difficult to map.

The human brain is a black box, a mystery even to the self. With systems becoming more like the human brain, working on a parallel basis with millions or billions of connections, it is unsurprising that nobody knows how these AI systems arrive at particular conclusions. If the developer is forced down a linear path with their AI, it ceases to be useful as a tool because of the limitations that linear systems impose.

I personally am interested in a new way of looking at AI using complexity theory. One bee is stupid, but ten thousand bees become self-organised and intelligent without any overall coordinator or leader. In complexity theory, when parts begin to differentiate and inter-relate with each other, a new layer of complexity emerges that is greater than the sum of its parts, with its own properties and rules. The human mind is such an emergent layer, which has emerged out of the interactions of billions of brain cells. Emergent layers are a type of black box, as it is challenging to trace the layer back to any of its parts.
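A minimal simulation sketch of that idea in Python (toy numbers, invented for illustration): each ‘bee’ follows a single local rule, drifting towards the average position of a few randomly seen neighbours, yet the swarm as a whole pulls itself together with no leader or coordinator:

import random

random.seed(1)
bees = [random.uniform(0, 100) for _ in range(200)]  # positions along a line

def spread(positions):
    return max(positions) - min(positions)

print("initial spread:", round(spread(bees), 2))

for step in range(50):
    new_positions = []
    for pos in bees:
        neighbours = random.sample(bees, 5)           # each bee only sees a few others
        local_mean = sum(neighbours) / len(neighbours)
        new_positions.append(pos + 0.2 * (local_mean - pos))
    bees = new_positions

print("spread after 50 steps:", round(spread(bees), 2))
# The tight clustering is a property of the whole swarm, not of any single bee's
# rule, and nothing in the final positions traces back neatly to one interaction.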

I believe that complexity theory holds the key to the true and powerful AI that is described as the Singularity. These types of AI will have processes and systems too complicated to understand, so people of the future will have to accept that if they want true intelligence like this, they will have to give up control and remain ignorant of how the brains of these systems work.

The ideal future is one in which the AI and the human work as a team in partnership, rather than one becoming the slave of the other. I would like to see cooperation rather than control as the watchword of the new AI future. How humanity uses, teaches and treats the AI of the future will determine how those AI treat us.

Farewell and Goodbye

Time to say goodbye.

I am permanently retiring my blog and Twitter account, which will remain published for the benefit of others.

Readers might know that I have a passion for artificial intelligence, and I would like to explore some ideas I have with regard to AI and the internet. I think it is unacceptable that hundreds of Satan Hunters around the globe have attacked an innocent father and his two children for four years over Satanic Ritual Abuse fictions, and that social media companies such as Twitter have failed to protect those victims. I think it is time AI ran the internet, so that what happened to this father and his children never happens again.

I have to think about how best to deploy my personal time, money and energy. Though I have explored possibilities of working with others in the Left Hand Path to challenge Satan Hunters and their SRA fictions through the media, the law and other means, there is not enough interest from fellow walkers of our path to make these strategies viable; and these types of strategies cost a great deal of time, money and energy to pursue against relentlessly fanatical Satan Hunters prepared to go to prison for their cause.

With my new focus I achieve three things with one throw of the ball: I follow my passion for AI; I make money from my ideas; and I permanently end the reign of terror of Satan Hunters on the internet.

I have ADD/ADHD, which comes with a super-ability for intense focus called hyperfocus; this helps me drive ideas through from concept to reality against any obstacle. For this to work I cannot have distractions, so I am bringing posting on this blog and on my Twitter account to an end.

I opted for the lonely path of the Independent Satanist: I have no ties to anyone in the Left Hand Path, so I can vanish into the anonymous void without distress to myself or others. Sadly, I am unable to reveal my religion to those in the real world, as there is too much prejudice against Satanists, so I have to present myself to others under some other title whilst applying my Satanist philosophy under that title.

Those of the Left Hand Path, I will never forget you.

Farewell and Goodbye.