AI: Can A Machine Ever Be Human, Convincingly? 

Feb 16, 2019 by Aradhye Ackshatt

The inclusion of ‘learning abilities’ – mostly thought unique to humans and a very few other evolved primates – defines artificial intelligence to a large extent. How a program deals with unfamiliar situations and attempts to solve the problems they pose is key to identifying a stretch of software code as ‘artificially intelligent’.
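
To make ‘learning ability’ concrete, here is a minimal, purely illustrative sketch of a program that adjusts its own parameters from examples rather than following rules a programmer spelled out – a perceptron, one of the earliest learning algorithms. The data and settings below are hypothetical, chosen only to show the idea.

```python
# A toy sketch of 'learning from experience': a perceptron that adjusts
# its weights from labelled examples instead of hardcoded rules.
# Purely illustrative; real AI systems are vastly more sophisticated.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction          # 0 when the guess is right
            # Nudge the weights toward the correct answer.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from four examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
for features, label in data:
    out = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
    print(features, "->", out, "(expected", label, ")")
```

The point is the update step: the trained behaviour is not something the programmer wrote down, but something the program derived from the examples themselves.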

Artificial Intelligence has made the leap from science fiction to real life in a remarkably short span of time. It was initially envisioned as a panacea for the intricate but repetitive processes that aided scientific research and technological advancement – a role it has fulfilled and, in many instances, surpassed.

Training a program to make sense of a variety of sensory inputs, whether in the form of digital or analog data, does not mean that program has ‘intelligence’. Yet because this factor is used to decide the intelligence of software, various technologies that were quite revolutionary at their inception are now classified as routine programs: their previously groundbreaking tasks have become rudimentary in today’s advanced day and age.

A Brief History of AI

Automation has been a pursuit of humanity since classical Greek antiquity. The word ‘automaton’ itself is used by Homer to refer to machines acting according to their own will. There is ample evidence in literature and history that shows how we have striven to create machines that not only look like us, but walk, talk and act like us. The more successful efforts towards such aims are said to fall into the ‘uncanny valley’, the uncomfortable state that results from an almost, but not entirely, accurate depiction of human beings by doppelganger machines.


Alan Turing was instrumental in making artificial intelligence a practical field. By approaching AI in purely mathematical, binary terms, he helped establish digitization as the platform on which expert systems were erected – systems that use inference engines and knowledge bases to make decisions. Moore’s Law, which predicted computing power rising while component sizes shrank, still remains applicable, albeit to a slightly lesser extent.
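
To illustrate the expert-system pattern mentioned above, the sketch below pairs a toy knowledge base of facts and if-then rules with a forward-chaining inference engine that keeps applying rules until no new facts emerge. The medical-sounding rules are invented for illustration only and are not any real diagnostic system.

```python
# A minimal sketch of an expert system: a knowledge base (facts + rules)
# and a forward-chaining inference engine. Hypothetical rules for
# illustration; real expert systems hold thousands of expert-curated rules.

facts = {"has_fever", "has_cough"}

# Each rule: (set of premises, conclusion it licenses)
rules = [
    ({"has_fever", "has_cough"}, "likely_flu"),
    ({"likely_flu"}, "recommend_rest"),
    ({"has_rash"}, "likely_allergy"),
]

def forward_chain(facts, rules):
    """Derive every conclusion reachable from the starting facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule when all its premises are already established.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'has_fever', 'has_cough', 'likely_flu', 'recommend_rest'}
```

The decision-making here is entirely transparent – every conclusion can be traced back through the rules that produced it – which is precisely what distinguished the expert-system era from the statistical learning that followed.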

Now, with data surging forth from all sorts of sources, right from our handheld devices to astronomical observations and literal rocket science, machines developed specifically to ‘think like a human’ are rapidly being deployed in a variety of fields, from bioengineering to synthetic medicine. Nearer our daily lives, search engines [one (followed by a hundred zeros) in particular, but all of them in general] and flagship smartphones use all the learnings gleaned from AI to deliver ‘personalized experiences’ right into our hands!

We Are Already AI-ed, Daily!

In 2014, Stephen Hawking gave a sobering warning on AI: “It [AI] would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

While such a day still seems far off as of now, the quest for replicating human thought patterns and response heuristics continues unabated. Programmers in diverse fields toil away every day at their projects, attempting to reproduce the thought processes that make up the human mind. They have to take many factors into consideration, not the least of which is the ethical complication of ‘fooling’ a human into thinking they are conversing – or, at a basic level, interacting – with another human rather than a machine.

We are already carrying out a great deal of our everyday interactions with artificial intelligence. The extent to which it shapes the technology in the palm of our hands is difficult to identify at the user level. To delve deeper, we have to break down the integral components of interactions between humans and machines – a task easier said than done.

The question I asked at the beginning is hard to answer, because it is rooted in the future. At Cyfuture, we are accustomed to asking questions that require a certain ‘never giving in’ mindset to answer – whether for solving problems laterally, creating innovative solutions that increase the effectiveness of existing legacy systems, or driving businesses better.
