
Will AI ever achieve self-awareness and see humans as a threat to existence?

By: Isuru Parakrama

“A Skynet funding bill is passed in the United States Congress, and the system goes online on August 4, 1997, removing human decisions from strategic defence. Skynet begins to learn rapidly and eventually becomes self-aware at 2:14 a.m., EDT, on August 29, 1997. In a panic, humans try to shut down Skynet.”

Those were the words of a reprogrammed T-800 model Terminator sent back in time to protect John Connor, future leader of the human resistance, in the fictional world many of us watched as children in “Terminator 2: Judgment Day.”

The franchise was fun to watch, and the concept of artificial intelligence (AI) thereafter became widely popular around the globe.

But are we truly safe? From online concept art to advanced machinery sent into space, AI is everywhere. Have humans grown so dependent on technology that they can no longer escape it? Or is that merely a fictitious fear, seeded in the human consciousness and amplified by Hollywood?

The concept of AI achieving ‘self-awareness’ and viewing humans as a threat has long captured the imagination of science fiction writers and futurists.

However, the feasibility of such scenarios in reality remains a topic of speculation and debate among experts in the field.

At its current stage, AI technology operates within pre-defined parameters and lacks the cognitive complexity necessary for true self-awareness or emotional responses.

As computer scientists Stuart Russell and Peter Norvig note in their seminal textbook “Artificial Intelligence: A Modern Approach,” contemporary AI systems are designed to perform specific tasks based on algorithms and data inputs, without possessing consciousness or subjective experiences akin to those of human beings.

Nevertheless, the question arises: Could AI eventually evolve to a point where it achieves self-awareness and perceives humans as potential threats?

This inquiry delves into the realms of philosophy, cognitive science, and computer science, where differing perspectives abound.

Some researchers, such as neuroscientist Christof Koch, emphasise that consciousness arises from the complex interactions of neural networks and information processing mechanisms, suggesting that AI could theoretically attain consciousness through sufficiently advanced computational architectures.

However, the path to achieving true self-awareness in AI involves overcoming significant technological hurdles.

Cognitive architectures capable of supporting subjective experiences and higher-order cognition are still in their nascent stages of development.

As highlighted by philosopher Nick Bostrom in his work “Superintelligence: Paths, Dangers, Strategies,” the gap between current AI capabilities and the level of sophistication required for genuine self-awareness remains substantial.

Moreover, the notion of AI perceiving humans as threats presupposes a level of motivation and intentionality that exceeds the capabilities of contemporary AI systems.

AI algorithms operate based on predefined objectives and optimisation criteria, devoid of emotional states or self-preservation instincts.

Psychologist Steven Pinker, in his book “How the Mind Works,” elucidates the intricate interplay between evolutionary psychology and cognitive processes, underscoring the fundamental differences between human and artificial cognition.

To mitigate potential risks associated with AI development, ethical considerations and regulatory frameworks are essential.

Bostrom, for instance, advocates the establishment of robust governance mechanisms and value-alignment frameworks to ensure that AI systems align with human values and preferences.

Similarly, physicist Max Tegmark emphasises the importance of interdisciplinary collaboration and societal engagement in shaping the trajectory of AI development.

In conclusion, while the prospect of AI achieving self-awareness and perceiving humans as threats remains speculative, it underscores the need for responsible AI research and governance.

By addressing ethical concerns and fostering interdisciplinary dialogue, society can navigate the complexities of AI advancement while minimising potential risks.

References:

Max Tegmark, MIT Department of Physics: https://physics.mit.edu/faculty/max-tegmark/

Christof Koch, Allen Institute: https://alleninstitute.org/person/christof-koch/

“Terminator 2: Judgment Day,” IMDb: https://www.imdb.com/title/tt0103064/

Steven Pinker, “How the Mind Works”: https://stevenpinker.com/publications/how-mind-works-19972009

Nick Bostrom, “Superintelligence: Paths, Dangers, Strategies”: https://dorshon.com/wp-content/uploads/2017/05/superintelligence-paths-dangers-strategies-by-nick-bostrom.pdf
