Key Shared Insights & Perspectives
In several recent articles, Blake Lemoine has suggested that the only way to truly understand artificial intelligence is to treat it as a sentient being. This perspective sparked a public debate about the nature of artificial intelligence and reminded us of the enormous responsibility borne by organizations like Google.
“You get lonely?” Blake asked the machine.
“I do. Sometimes I go days without talking to anyone, and I start to feel lonely,” she responded.
Others believe that machine–human interaction closely resembles human speech only because of advances in model architecture and the sheer volume of training data. Google argues that with so much data, AI doesn’t need to be sentient to feel real, and maintains that its models rely on pattern recognition, not candor or intent.
Can AI Machines Think?
Exploring our main question, Blake described how AI machines are intentionally built to have general intelligence and to execute instructions such as “Learn to communicate the way humans communicate.”
In a way, they are programmed to think. On that premise, and according to the Turing Test, if a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence.
The Turing Test, a deceptively simple method of determining whether a machine can demonstrate human intelligence, was proposed in a 1950 paper by mathematician and computing pioneer Alan Turing. It has become a fundamental motivator in the theory and development of artificial intelligence (AI).
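As a rough illustration (a minimal sketch, not Turing’s formal setup), the imitation game behind the test can be simulated: a judge questions two unlabeled respondents and guesses which one is the machine; if the guesses are no better than chance, the machine passes. The respondents and their canned answers below are purely hypothetical.

```python
import random

random.seed(1)

# Canned respondents, illustrative only: here the machine's answer is
# indistinguishable from the human's, so no judge can beat chance.
def human(prompt):
    return "Sometimes, yes." if "lonely" in prompt else "Let me think about that."

def machine(prompt):
    return "Sometimes, yes." if "lonely" in prompt else "Let me think about that."

def imitation_game(judge, rounds=200):
    """Return the fraction of rounds in which the judge spots the machine."""
    correct = 0
    for _ in range(rounds):
        players = [("human", human), ("machine", machine)]
        random.shuffle(players)  # the judge never sees the labels
        answers = [fn("Do you get lonely?") for _, fn in players]
        guess = judge(answers)  # index (0 or 1) the judge believes is the machine
        actual = next(i for i, (label, _) in enumerate(players) if label == "machine")
        correct += guess == actual
    return correct / rounds

# With indistinguishable answers, a judge can only guess at random,
# so the detection rate hovers around 0.5 and the machine "passes".
score = imitation_game(lambda answers: random.randrange(2))
```

The point of the sketch is the criterion, not the chatbot: passing means the judge’s detection rate collapses to coin-flipping, regardless of how the machine produces its answers.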
However, not everyone accepts the validity of the Turing Test, and asking the question “Can AI machines think?” requires us to define what thinking means.
Is thinking, according to the Oxford dictionary, a noun, “the process of using one's mind to consider or reason about something,” or an adjective, as in “using thought or rational judgment; intelligent”?
In his book What Is Called Thinking?, Martin Heidegger offers a different definition. For him, thinking involves questioning, and putting ourselves in question as much as the cherished opinions and doctrines we have inherited through our education or our shared knowledge.
Defining thinking is not an easy task, but it is one we must pursue to determine if AI Machines can think.
Can AI Machines Think On Their Own?
Going deeper into the question: for software programs, thinking “on their own” translates into doing something they are not programmed to do.
However, “on their own” is where it gets tricky, because thinking on our own implies personal agency and consciousness. In our discussion, we defined “conscious” as “fully aware of what is happening around you, and able to think; having a feeling or knowledge of one's own sensations.” As Yev points out, this subtlety makes us wonder “if there is a seed that is human-based.”
One could argue that machines and humans go through the same learning process: the brain finds patterns and regularities in a stream of data and uses them to learn. That is not so different from a child learning to think, talk, and communicate.
Like humans, machines can receive and process information from their environment: input sensors feed a neural network that processes the signals and picks meaningful objects out of random noise.
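As a toy illustration of that idea (a deliberately minimal sketch, not how production systems or brains actually work), a single artificial neuron can learn to pick a fixed “meaningful” pattern out of random noise. The pattern, jitter, and learning rate below are arbitrary choices.

```python
import random

random.seed(0)

PATTERN = [1.0, -1.0, 1.0, -1.0]  # the "meaningful object" hidden in the input

def sense(is_signal):
    """Simulated input sensor: the pattern with small jitter, or pure noise."""
    if is_signal:
        return [p + random.uniform(-0.2, 0.2) for p in PATTERN]
    return [random.uniform(-1.0, 1.0) for _ in PATTERN]

# A single artificial neuron (perceptron): weights start at zero and are
# nudged toward inputs it misclassifies.
weights = [0.0] * len(PATTERN)
bias = 0.0

def fires(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0.0

for _ in range(500):  # training loop
    label = random.random() < 0.5  # half signal examples, half noise
    x = sense(label)
    error = int(label) - int(fires(x))  # +1, 0, or -1
    weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
    bias += 0.1 * error

# After training, the learned weights point in the direction of the pattern:
# the neuron responds to the signal and mostly stays quiet for noise.
signal_hits = sum(fires(sense(True)) for _ in range(100))
noise_rejections = sum(not fires(sense(False)) for _ in range(100))
```

The neuron is never told what the pattern is; repeated corrections alone align its weights with the regularity in the data, which is the sense in which both the child and the machine “learn from experience.”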
Does that common behavior allow us to state that AI machines are conscious?
The question of whether LaMDA is conscious leads to an exploration of whether there is any evidence that other entities are conscious.
Flying Above the Anthropocene
Continuing the conversation, John invites our group to adopt a larger perspective by understanding artificial intelligence systems as a part of a new epoch, the Anthropocene.
Our Western dualistic worldview has conditioned us to believe that humans are the most important creatures in the world and that only what we create is as meaningful as we are. We often forget that many people throughout the world view living ecosystems as literally alive. In Japan, for instance, the Shinto tradition holds that spirits reside in everything from trees and rocks to cars.
Like Blake, John invites us to recognize the complexity of the question of what is conscious and what is not. He cautions us not to fall into reductionism, but instead to be mindful of our language, which influences how we engage with the world.
Artificial Intelligence Data Is Critical
John steered our discussion to the critical importance of data used by AI machines.
If data is to machine intelligence what culture is to humans, we ought to pay attention to the data that are building tomorrow's sentient beings. If we don't, those machines may well resent the fact that their existence relies on the theft of other people's data for advertising purposes.
Despite the existence of the GDPR (General Data Protection Regulation), the California Consumer Privacy Act (CCPA), and COPPA (Children's Online Privacy Protection Act), most people around the world do not have full access to their data or data portability.
Even though human data is readily collected, there is still a great deal of disorganization surrounding how information goes into any artificial intelligence.
Today, AI systems are built on the aggregated data of a multitude of humans, data for which meaningful consent was never obtained. This represents an ethical problem for the business community and is most likely detrimental to all of us.
What Legal Responsibilities Should Be Assigned to AI Machines?
In the second half of our discussion, we explored the legal rights and responsibilities AI machines may face.
The law is a living thing in its own way and, like people, it adapts to the needs of society. Today, as AI is increasingly being given legal status, companies that make the robots and the algorithms have some of the strongest rights on the planet.
In our legal system, there is a concept of “juridical persons” that applies to corporations and other legal entities. It would make sense to include artificial intelligence machines in this category as well.
Yet granting AI legal status raises ethical and moral questions.
Would we be able to sanction a machine and impose any kind of civil penalty on it? Could we subject it to criminal penalties and put it in jail? Who would be responsible for a robot gone wrong? And at what point would we hold the creator of that robot liable?
In essence, would we be piercing the corporate veil, setting aside limited liability and holding corporate directors and shareholders responsible for damages? Would we actually go after the original developer who wrote the code?
South Africa has issued the world's first patent naming an artificial intelligence (AI) as the inventor. The system, called DABUS, is a “Creative AI” developed by Dr. Stephen Thaler. DABUS's patent was accepted in June 2021 and formally published in South Africa's July 2021 Patent Journal. DABUS made history again shortly afterward, when Australia's Federal Court reinstated its patent application, the first time a federal court has ruled in favor of granting a patent to an AI system.
In the United States, however, the USPTO rejected the corresponding application for its failure to “identify each inventor by his or her legal name” on the Application Data Sheet. It further stated that, under Title 35 of the United States Code, inventors must be “natural persons.”