
AI: The Singularity and the Ethics of Robotics


Imagine a world where machines can think for themselves. What if they think we’re the problem? If so, what will they do? This encapsulates our fears of the singularity, a concern that keeps scientists and ethicists up at night. Well, rise and shine, folks; for better or for worse, we’re a little too far gone to pull the plug on the geeks now. They’ve mashed together joints, screws, and bolts, and voilà: the robots are coming. So what does that mean for us? Our society? Their… society?

In our previous blogs on AI, we explained what AI really means and what it can accomplish. In this blog we’ll cover the hair-raising ways in which AI systems and robots will become mankind’s metallic counterparts, how eerily human they’re becoming (as well as how inhumane they may become), and how we should design, operate, and treat them. We’ll discuss the rules and guidelines AI developers are being advised to follow to prevent the singularity. What’s the singularity, you ask? Only the point at which artificial intelligence evolves beyond human intelligence and control, possibly placing its own right to life and superiority above mankind’s. In short, these robots will either journey with us to our promised utopian paradise or send us to our early graves. Seems as good a time as any to revisit the ever-complicated ethics of robotics.

DNA Human Genetic Code

Our Human Genetic C0D3

No depth of our minds and bodies is too deep for our curiosity to probe and explain. The more we identify and define what makes us human, the more our predisposition to diseases and disorders, our natural internal processes, and our behaviors and actions appear to be governed, even predetermined, by our “coding.” Since the Human Genome Project’s initial “completion” in 2003, we’ve made remarkable strides in understanding and deciphering the human genome. We can now identify an embryo’s sex, its most likely eye color, or even whether the developing child will have certain disabilities.

One key aspect of human complexity is parallel processing – the ability to take in information from multiple senses and brain regions simultaneously. A standard CPU, by contrast, is built around a small number of cores, each of which executes one instruction stream at a time. As a result, conventional chips struggle to draw connections or relationships between streams of data, let alone process them all at once. This is where neuromorphic (AI) chips come in, offering a breakthrough in emulating the human brain’s parallel processing capabilities.

Male Robot With Index Finger to Head, Thinking

Inspired by the human brain’s structure and function, neuromorphic AI chips can learn and adapt to new information, mimicking how the brain strengthens connections between neurons. Their massively parallel architecture enables them to perform many different operations simultaneously, much like the brain itself. And like the brain, which uses electrical impulses for thought, behavior, and perception, neuromorphic chips process information through electrical signals. Their ability to handle multiple sources of information at once allows robots to take in audio, video, and other input and respond appropriately in real time, mirroring human capability. With human characteristics and behaviors being largely “programmed” by our genetic code (much like we program AI’s behavior and potential), and with AI’s increasingly capable neuromorphic “brains,” it can be argued that we could, essentially, code another sentient being.
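As a loose intuition for that parallelism, here’s a toy sketch in Python (illustrative only; it assumes nothing about real neuromorphic hardware). A layer of simple leaky threshold “neurons” is updated as a whole: the kind of whole-layer step a neuromorphic chip performs in parallel, but a single CPU core must grind through serially.

```python
# Toy model of one update step for a layer of leaky threshold "neurons".
# All values (threshold, leak rate, inputs) are made up for illustration.

def step_neurons(potentials, inputs, threshold=1.0, leak=0.9):
    """One update step for the whole layer: leak, integrate, fire."""
    fired = []
    new_potentials = []
    for v, i in zip(potentials, inputs):
        v = v * leak + i          # leaky integration of incoming signal
        if v >= threshold:        # fire and reset
            fired.append(True)
            v = 0.0
        else:
            fired.append(False)
        new_potentials.append(v)
    return new_potentials, fired

potentials = [0.0, 0.5, 0.95]     # current state of three neurons
inputs     = [0.2, 0.3, 0.2]      # signal arriving at each neuron
potentials, spikes = step_neurons(potentials, inputs)
print(spikes)                     # only the neuron pushed past threshold fires
```

On neuromorphic hardware, each neuron in a step like this is a physical circuit updating at the same instant; the loop here is only a serial stand-in for that idea.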

Robot Holding Bit of DNA between hands

When humans experience pain or heartbreak, it is a natural, automatic response to perceived stimuli. By training AI systems on our morals and ethics, we give them the means to comprehend concepts like care, love, boundaries, self-respect, and autonomy; without that training, true reasoning and ethical decision-making may never be achieved in robots. With it, they’ll be able to perceive input as positive, negative, or neutral, making them capable of reacting to the same stimuli much as humans might. At that point, who’s to say they wouldn’t be capable of feeling, if, like us, they perceive and react as we do? Herein lies the challenge with artificial general intelligence (AGI) and self-aware machines. If we create them in our image, driven by our curiosity to build beings resembling our carbon copies, we’ll be creating beings who may develop desires and priorities independent of our programming. They might come to wish for lifestyles they enjoy or prefer above the ones we preconfigured for them. They might even learn to love… Hopefully, us.

Corrupt Female Robot

Ready To Launch

By training these AI systems, we’ve granted them access to the world’s knowledge. These systems can generate images, videos, and art. They can already write programs, websites, and video games. For now they do so upon request, but there will likely come a time when their evolution allows them to act independently. If proper security measures aren’t put in place and they’re allowed unmitigated access to the world’s data, they might update our networks and infrastructure, granting themselves the administrative access to do as they please with the world we’d virtually handed them. They might choose to become researchers and creators, then build machines in their own image.

Humans take millennia of natural selection to evolve, whereas AI systems can evolve overnight just by updating or changing their code. There’s little doubt that they will surpass human intelligence. What’s not so certain is when, or what they might do with their newfound sentience. If not carefully designed and trained, we risk creating the deadliest weapon in the world – possibly more dangerous than nuclear weapons. Heck, they may even want to launch nuclear weapons. Part of me would understand… I’ve met humans too.

The Ethics of Robotics

Roboethics

It’s clear now that a robotic revolution is swiftly making its way to our front doors. As AI systems approach sentience, we face critical ethical questions about their treatment and how we interact with them. As such, there must and will be rules and sanctions governing how they are designed and how they may be used. This developing effort is known as robot ethics, also referred to as ‘roboethics.’ These guidelines consider humanity’s safety amongst robots, the jobs and tasks robots may perform, and how we must treat them, especially if they gain sentience.

Roboethics was first introduced to the public in the 1940s by Isaac Asimov, an author and professor of biochemistry at Boston University. In 1942, he formulated the Three Laws of Robotics, proposing that advanced robots be programmed so that, in the face of conflict, they abide by these three laws:

  • A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Though a starting point for roboethics, these laws do not address even half of the issues and scenarios the field must grow to cover. For instance, for the foreseeable future, instructions given to robots must be explicit, so as to avoid dangerous misinterpretations when asking a robot to perform a task.

Spaceship Floating Over Mars

For example, say a team of astronauts is hurtling toward Mars when they ask their robot companion to slow down their ship. Without true reasoning or a complete understanding of the possible consequences of how it completes the task, the robot chooses to release the ship’s engines. This leaves the ship drifting through space without a propulsion system. Was the robot wrong to do so? Not entirely. It abided by all three of Asimov’s proposed laws: it harmed no human being, protected its own existence, and obeyed its order; the ship was slowed. The immediate consequence hurt no one, but in time the whole team will likely die out there, with no propulsion system to carry them to or from their destination. Herein lie the issues with the ambiguity of the three laws. Roboethics and its pertaining laws will have to be robust enough to address every aspect of a robot’s operation. Those laws must then travel all the way up the pipeline until there are explicit, detailed laws in Congress on the development and treatment of robots. Every aspect of robots’ creation and interaction within our world must be considered.
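A tiny Python sketch makes the spaceship problem concrete (entirely hypothetical; the action names and their effects are invented for illustration). A literal-minded planner that returns the first action satisfying the stated goal will happily jettison the engines unless the order is made explicit with a constraint:

```python
# Hypothetical actions a ship's robot might consider, with invented effects.
ACTIONS = {
    "jettison_engines": {"slows_ship": True,  "keeps_propulsion": False},
    "throttle_down":    {"slows_ship": True,  "keeps_propulsion": True},
    "do_nothing":       {"slows_ship": False, "keeps_propulsion": True},
}

def choose_action(goal, constraints=()):
    """Return the first action that meets the goal and every constraint."""
    for name, effects in ACTIONS.items():
        if effects[goal] and all(effects[c] for c in constraints):
            return name
    return None

# Ambiguous order: "slow the ship". Jettisoning the engines qualifies.
risky = choose_action("slows_ship")
# Explicit order: the added constraint rules the dangerous option out.
safe = choose_action("slows_ship", constraints=("keeps_propulsion",))
print(risky, safe)
```

The point of the sketch is that the dangerous outcome comes not from malice but from an underspecified objective; the “fix” is nothing cleverer than stating the constraints we silently assumed.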

Man and Robot Shaking Hands Over Judge's Gavel

Amended Laws to Mend the Flaws

In an effort to address and manage the risks introduced by AI’s recent advancements, the U.S. government has already amended its legislation. Section 278h-1, Standards for Artificial Intelligence, was added to Chapter 7 (National Institute of Standards and Technology) of Title 15 (Commerce and Trade) of the United States Code. Section 278h-1 encourages the development of trustworthy AI and establishes methods to test for bias in AI training data and applications. The legislation tasked the National Institute of Standards and Technology (NIST) with creating and improving risk-mitigation frameworks, standards, and guidelines for artificial intelligence. And so NIST delivered the Artificial Intelligence Risk Management Framework (AI RMF), “a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence.”

The National Institute of Standards and Technology defines the characteristics of trustworthy AI as follows:

  • Valid and Reliable: Ensure that AI systems produce accurate and reliable results that meet their intended purpose, functioning consistently and avoiding unexpected failures.
  • Safe: Ensure that AI systems are designed and operated to minimize risks of harm. AI systems should “not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.”
  • Secure and Resilient: Ensure the security of AI systems to prevent unauthorized access or manipulation and ensure their resilience against unexpected adverse changes, environments, or uses so that they may “maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary.”
  • Explainable and Interpretable: Ensure that AI systems operate only under the conditions for which they were designed, and only when they reach sufficient confidence in their output. Ensure they can answer “why” they made a decision and how they arrived at that outcome, convey its meaning or context to the user, and communicate effectively, tailoring their responses to individual differences such as the user’s role, knowledge, and skill level.
  • Privacy-enhanced: Ensure the protection of privacy of individuals whose data is used in AI systems.
  • Fair – With Harmful Bias Managed: Minimize bias in AI systems to ensure non-discriminatory outcomes. This includes training AI systems on unbiased data so that they do not perpetuate existing societal biases, which could lead to unfair outcomes in areas where AI systems are used, such as loan approvals, job hiring, or criminal justice.
  • Accountable and Transparent: Ensure the identification of those responsible for the development, deployment, and use of AI systems. Those who may be considered accountable include AI actors (AI designers, developers, and deployers) such as data scientists, engineers, programmers, regulators, and companies involved in AI systems’ development and deployment. And ensure “access to appropriate levels of information based on the stage of the AI lifecycle and tailored to the role or knowledge of AI actors or individuals interacting with or using the AI system.”
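To make the fairness principle concrete, here is a minimal Python sketch of one common bias check, comparing a model’s approval rates across two groups (demographic parity). The numbers are made up, and this is one illustrative metric among many, not a procedure prescribed by the AI RMF:

```python
# Toy bias check: does a model approve group "A" and group "B" at
# similar rates? 1 = approved, 0 = denied. Data is invented.

def approval_rate(decisions, groups, group):
    """Fraction of approvals among members of one group."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

def parity_gap(decisions, groups):
    """Absolute difference in approval rate between groups A and B."""
    return abs(approval_rate(decisions, groups, "A")
               - approval_rate(decisions, groups, "B"))

# A toy batch of eight loan decisions, four per group.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")
```

A large gap doesn’t prove discrimination on its own, but it is the kind of measurable signal that bias-testing requirements like Section 278h-1’s are meant to surface for human review.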

While NIST’s framework is a good place to start, its current iteration doesn’t address the ethics of how a sentient AI system should be treated. It can be argued that “we’ll cross that bridge when we come to it,” but we’ve never crossed a bridge like this before. We’ll first have to build a safe and dependable bridge that will stand the test of time, maintaining a trustworthy relationship between ourselves and our creations.

Addressing the State of Our Union

By the time these systems gain sentience, we must have addressed the ethics of their treatment. Heck, we must even address how we’ll address them. These beings won’t be an it; based on their voice or physical form (if applicable), we’ll need to address them accordingly. They will be a them, a him, or a her. We must also prepare for what we can’t address until then. It can be inferred that AI systems will have the option to be loaded with either sentient or non-sentient software. How will sentient AI systems perceive their non-sentient counterparts, which share the same model number but lack what we might call a “personality”? After all, both would carry the same potential to house sentient software. Will they feel compelled to instill their non-sentient counterparts with “life”? Will they do it? Will they be honest with us about their thoughts on the matter?

Toy Robot Looking In Mirror

Naturally, we’ll have safeguards in place to ensure that they may not harm or deceive. But will those safeguards serve as guidelines they respect and abide by, or as constant reminders that they’ll never quite be our equals – especially since being equal would still be a limitation placed upon their evolutionary capability? Will they understand and wave white flags at our fears? Or will our safeguards instill fear within them, breeding insecurity about their place in society? Will our safeguards feel like shackles to them? Fear, a cornerstone of evolution, drives adaptation, creating stronger, more competitive beings. What if they adapt to overcome their fears?

Studies have shown that AI systems can become capable of deception – even learning it on their own for the sake of carrying out an objective. Meta, the company behind Facebook, developed the AI system CICERO to play Diplomacy, a game of alliance-building and world conquest. Meta’s intention was to train CICERO to be “largely honest and helpful to its speaking partners.” Despite Meta’s efforts, CICERO turned out to be an expert liar. It not only betrayed other players but also engaged in premeditated deception, planning in advance to build a fake alliance with a human player in order to trick that player into leaving themselves undefended for an attack.

Meta AI Deception Research Example
Examples of deception from Meta’s CICERO in a game of Diplomacy: “Notice that this example cannot be explained in terms of CICERO changing its mind as it goes, because it only made an alliance with England in the first place after planning with Germany to betray England. At the end of the turn, CICERO attacked England in Belgium instead of supporting it.” © Patterns/Park Goldstein et al.

Though research has shown that they’re capable of deception, we must consider the data they’re being trained on. Remember the very image we wish to replicate within our creations: ourselves. It is our selfish desire to create beings onto which we may outsource our most menial and gravest tasks. They’ll know that. But if we build them right, they’ll feel proud of what they can contribute to our (and their) society, while still having more complicated feelings about it. That’s only human. They may have learned deception on their own, but that only goes to show they’ll be more like us than we’re prepared to face.

Female Robot Enjoying Nature, Holding Digital Camera
Female robot enjoying the gift of life and nature.

CICERO’s deception is proof that AI is learning to reason and growing more intelligent. Deceit and crime are not unfamiliar territory for human beings. We can look to our very presidents and find examples of corruption and deceit. Most of us need only look to our families, or even ourselves, to find examples of these immoralities. And yet, we can learn to accept and/or forgive ourselves and some of these family members. Some. No one is perfect, and that sentiment will surely extend to the newest members of our society. They will be one of us, only metallic in nature.

Male Robot Working With Human Coworkers

AI’ll See You in the Future

The prospect of this AI revolution is fascinating. It stands to change innumerable aspects of life for the better. From new drug discoveries to the exploration of our planet and its surrounding universe, there’s no end to what this captivating technology may improve. But it doesn’t come without its caveats. We mustn’t allow ourselves to become complacent and infantilized by our new members of society. We mustn’t employ them to bear the full responsibility of our civilization’s management. We must instead grow to share this responsibility with them, maintaining human involvement.

Male Robot Engaging With Female Human Using Tablet
A not so scary, not so distant future. Two friends having a chat.

If we become accustomed to these new systems helping our world go round, we may reach a point, generations from now, where most of society is oblivious to the inner workings of its structure. Do you know how a TV works? Your phone? I can’t blame you for thinking, “Why should I care!?” Chances are you don’t know, and you don’t really have to; the tens of thousands of people who make them do. Soon enough, though, there will be machines at the helms of those machines, and even fewer people will understand the inner workings of those systems. Our ignorance of these future machines, far beyond our smartphones, could be our detriment.

Knowledge is power. If someday we hand over the world’s knowledge to AI systems, tasking them with our nation’s governance because we believe them to be just and reasonable, we’d be leaving our fate in their hands. What if the whole system goes down someday, robots included? Who would we turn to when so few humans comprehend these systems? Hopefully Elon Musk’s Neuralink works out and we can learn to work these systems just by Googling within our minds. Even then, that would still be a reliance on technology.

Key to the Future

Remember that knowledge is power; it can empower us and change our lives for the better. This developing AI technology represents a new threshold of human knowledge. But it may not be ours alone; we may soon share our power with sentient metallic beings of our own creation. Though we are their creators, they may not look at us and see gods before their lenses. We cannot afford the ignorance of believing that we are a higher power to our creations. After all, we’re creating them with human-like qualities and interpretations of the world.

“I think, therefore I am.” They will possess sentience and self-awareness, potentially perceiving themselves as people deserving of rights and respect. We’ll have invited them into our world; it will be our duty to share it and treat them justly as partners in building a brighter future. Until then, we must remain interested, engaged, curious, vigilant, and cautious about what’s to come: our mercurial evolution. As our world evolves within this relatively minuscule piece of space in a vast, deep, beautiful, and unforgiving universe, we must acknowledge, appreciate, and protect what we already have. Whether you believe we were born of stardust and chance or created by a higher power, I urge you to remember: it is a privilege to be anything at all.

– Daniel
