
The History of Robotics, Artificial Intelligence, and Automation

Karel Capek and other science fiction writers predicted that the future would be automated. Exciting as modern technologies already are, automation continues to develop, providing innovative solutions that shape the technologies of tomorrow. We experience the wonder of robots and artificial intelligence every day; it is therefore worth looking back at the inventors of the first robots, the development of artificial intelligence, and the automation crisis that marked the beginning of the twenty-first century.

We have become so dependent upon automation that we cannot imagine living in a world without automated technologies. Our lives are regulated by smartphones, social media, video games, email, chatbots, virtual reality, augmented reality, and automated technology such as automatic doors and AI-regulated homes, cars, and gyms (to name a few), as well as Robotic Process Automation (RPA), Business Process Management (BPM), and Business Process Automation (BPA) – the list goes on. Automation began with industrial machinery and the industrial revolution (roughly between 1790 and 1840); then, as now (with the fourth industrial revolution looming), people dreaded the impact automation would have on their jobs. In 1837, with no computer yet in existence to drive AI, Charles Babbage began work on a prototype machine known as the ‘Analytical Engine’. In those days, it was the only device worthy of the name ‘computer’. His friend Augusta Ada Lovelace wrote the first ever computer program intended to run on the machine, but Babbage died before his prototype could be completed. Babbage’s machine would have had most of the basic elements of present-day computers, and Lovelace became known as the first computer programmer and the first female technical visionary.

In the early 1900s, the term ‘robot’ was introduced by the Czech writer Karel Capek. His 1920 science fiction play ‘Rossum’s Universal Robots’ was the first account of the creation of robots and of the world being taken over by them. Two decades after Capek’s imaginary robots, a physical robot named ELEKTRO was assembled in 1939 for exhibition at the New York World’s Fair. Although the robot could do little more than blow up balloons, smoke cigarettes, and walk, it was a major breakthrough in the history of automation. Early in the 1940s (after the creation of ELEKTRO), Isaac Asimov (the American science fiction writer who later became a professor of biochemistry at Boston University) formulated three laws of robotics for use in his science fiction. The rules prescribed how robots should act towards humans, and other writers adopted them in their own literature too. The first law instructs a robot not to injure a human being or, through inaction, allow a human being to come to harm. The second law requires a robot to obey the orders of a human being, except where such orders conflict with the First Law. The third law states that a robot must protect its own existence, except where such protection conflicts with the First or Second Law. William Grey Walter (the American-born British neurophysiologist and Director of the Burden Neurological Institute in Bristol, England) built cybernetic automata, and in 1948 and 1949 he created the first autonomous robots – Elmer and Elsie. They were fitted with bump sensors and moved in response to light stimuli, which is how they manoeuvred around obstructions without human intervention.

In 1953, Grey Walter described in his book ‘The Living Brain’ how he performed experiments at the Burden Institute with EEG electrodes and an electronic stroboscope. The aim of his research was to analyse the effects of flashing stroboscopic light on the electrical activity of the human brain. His pioneering EEG studies and his role in cybernetics contributed significantly to the transformation of the sciences in the early Cold War era. Meanwhile, in 1950, Alan Turing proposed what became known as the ‘Turing Test’ (he initially referred to it as the imitation game). It tests a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human – in other words, to establish the ‘intelligence’ of a machine and its ability to ‘think’. To date, no AI is widely accepted to have passed the Turing Test.

In July 1956, Dartmouth College hosted the Summer Research Project on Artificial Intelligence, essentially a six-week brainstorming workshop intended to spur research and development in robotics, artificial intelligence, and automation. In 1966, Joseph Weizenbaum (MIT professor and computer scientist) created a chatbot named ELIZA, a program that used pattern matching and substitution to simulate conversation; it searched typed comments for specific keywords and transformed them into responses (a small illustrative sketch of this keyword-and-response approach follows below). The 1968 mobile robot ‘Shakey’ was the first general-purpose mobile robot able to reason about its own actions: while other robots required individual step-by-step instructions to complete a large task, Shakey could handle a task by analysing it and breaking it into basic segments. Shakey was succeeded by Flakey, an autonomous mobile research robot created at SRI International’s Artificial Intelligence Centre (SRI is an American non-profit research institute that supports government and industry, collaborating across scientific and technical disciplines and channelling its discoveries into innovative industries, products, and technologies).

In 1979, an industrial robot known as the Selective Compliance Assembly Robot Arm (SCARA) was launched for commercial assembly lines, either as a handling unit for special purposes such as pick-and-place or for assembly operations where high accuracy and speed are necessary. Manufactured in 1984 by the RB Robot Corporation in Colorado, the RB5X was a personal robot with a programmable RS-232 communications interface: a cylindrical, transparent robot with a dome-shaped top and an optional arm. In 1989, Sir Timothy John Berners-Lee (the English computer scientist) combined hypertext and hyperlinks with the Internet to create the World Wide Web. In the 1990s, the emphasis in AI shifted from physical robots to digital programs. ‘Deep Blue’, a chess-playing computer developed by IBM, became the first computer to win a chess game against a reigning world champion (in 1996), and in 1997 it won a full match against the world champion, Garry Kasparov. Marking roughly thirty years of NASA’s robotic exploration of Mars, the first wheeled vehicle and autonomous robotic system to rove the Red Planet was named Sojourner (‘traveller’). On the 4th of July 1997, the Mars Pathfinder (a robotic spacecraft) landed on Mars carrying the six-wheeled roving probe. Sojourner spent eighty-three days exploring the Martian terrain, taking photographs, making chemical and atmospheric analyses, and using an Alpha Proton X-ray Spectrometer (APXS) to inspect the elements inside rocks (which contained more silica than their surroundings).
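
To make the keyword-and-response mechanism described for ELIZA concrete, here is a minimal, purely illustrative sketch in Python. The keywords and replies below are invented for this example and are not taken from Weizenbaum’s actual ELIZA script; the real program was considerably more sophisticated.

    import re

    # Illustrative keyword-to-response rules (invented for this sketch).
    # Each rule pairs a pattern with a reply template; %s is filled with captured text.
    RULES = [
        (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need %s?"),
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been %s?"),
        (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE), "Tell me more about your family."),
    ]
    DEFAULT_REPLY = "Please go on."

    def respond(comment):
        # Scan the typed comment for a known keyword pattern and turn it into a reply.
        for pattern, template in RULES:
            match = pattern.search(comment)
            if match:
                # Substitute the captured fragment only when the template expects it.
                return template % match.group(1) if "%s" in template else template
        return DEFAULT_REPLY

    print(respond("I need a holiday"))    # -> Why do you need a holiday?
    print(respond("My mother phoned"))    # -> Tell me more about your family.
    print(respond("The weather is bad"))  # -> Please go on.

Each extra rule simply adds another keyword the program can recognise; anything it does not recognise falls through to a neutral default reply.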

Automation had enormous success in a short period, but two decades before its fourth revolution it faced a major crisis that terrified the world. The anomaly was termed the ‘Y2K Bug’, the ‘Year 2000 Bug’, the ‘Millennium Bug’ or plainly the end of the twentieth century. Random access memory (RAM) is one of the most important components determining a computer’s performance, and the earliest computers had a mere fraction of the RAM of modern machines and very little storage space. To save space, an easy fix was to store year values as two digits, so throughout the 1950s and 1960s years were recorded by their last two digits only. Software was developed to treat all dates as part of the twentieth century: programs were read from punched cards with a fixed width, and 1966 was punched in as 66. Eventually, improvements in hardware supplied more RAM and faster processors, and punched cards were replaced by magnetic media such as hard drives and tapes capable of storing far more data and programs. Yet as software was updated and re-installed, nobody considered changing the date format to four-digit years; new software continued reading dates as two-digit years. Automated machinery, industrial control systems, robotic production lines, and programmable logic controllers (PLCs) were all programmed to use two-digit years. PLCs are industrial computers adapted to manage manufacturing processes such as assembly lines, machines, and robotic devices – any activity that demands high reliability and ease of programming.

Using two digits instead of four was a quick answer to a shortage of storage space, not a lasting way to represent years. Software coded to read dates as two digits within its own century cannot interpret dates in a different century: once the following century arrives it gives false results, interpreting the year 2000 (stored as 00) as 1900, and 2015 (stored as 15) as 1915. The resulting data would be nonsense. On the 31st of December 1999, at the stroke of midnight, every computer – and every device with a microprocessor and embedded software that stored and processed dates as two digits – would be confronted with the Millennium Bug.
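
To see why two stored digits go wrong at the century boundary, here is a small illustrative sketch in Python; the function name and values are invented for the example and are not drawn from any particular legacy system.

    # A legacy-style routine that assumes every stored two-digit year belongs to the 1900s,
    # which is effectively how pre-Y2K software interpreted its date fields.
    def legacy_expand_year(two_digit_year):
        return 1900 + two_digit_year

    print(legacy_expand_year(66))  # 1966 - correct for a card punched in 1966
    print(legacy_expand_year(0))   # 1900 - wrong: the record was actually created in 2000
    print(legacy_expand_year(15))  # 1915 - wrong: the record was actually created in 2015

    # The consequence: calculations that cross the century boundary produce nonsense.
    birth_year = legacy_expand_year(75)    # 1975
    current_year = legacy_expand_year(0)   # the system believes the year 2000 is 1900
    print(current_year - birth_year)       # -75 instead of 25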

The problem was not confined to desktops, mainframes, minicomputers, and networks; microprocessors were also running in aircraft, communication satellites, factories, missile control systems, and power stations. Almost everything digital, electronic, or automated is controlled by code of one kind or another, and corporate, government, and military systems stood to be thrown one hundred years back in time in the space of a single second. People envisioned self-launching nuclear missiles and planes falling from the sky, while others predicted the end of time. On the 19th of October 1998, President Bill Clinton signed the Year 2000 Information and Readiness Disclosure Act, and in his 1999 State of the Union speech he emphasised: ‘we need every state and local government, every business, large and small, to work with us to make sure that the Y2K computer bug will be remembered as the last headache of the 20th century, not the first crisis of the 21st century’. Since the signing of the Readiness Act the previous year, people had already begun preparing for the turn of the century; in scenes similar to the COVID-19 pandemic, many households accumulated essential supplies and businesses started stockpiling.

Companies and governments worldwide set about finding fixes for the Y2K bug. In some cases, short-term fixes were implemented to buy a couple of decades before a real fix could be applied – temporary patches that would hit their upper limits after a decade or two. One such short-term fix came to light when all parking meters in New York stopped accepting credit card payments on the 1st of January 2020, the date the software running the meters reached the end of its patch; each meter had to be individually updated to Y2K-compliant status. Developers, development partners, and software suppliers who applied Y2K-compliant fixes endured financial strain because clients demanded legally binding statements of compliance, insisted on verification that the Y2K code was safe, and required that, should anything happen on or after the 1st of January 2000, the suppliers would fix any Y2K-related problems at no charge. The total global cost of preparing for the millennium was estimated by the Gartner Group in Connecticut at roughly six hundred billion dollars, while Cap Gemini in New York produced estimates of maximum global spending that differed from Gartner’s by more than two hundred billion dollars. The United States alone spent more than one hundred billion dollars, a figure covering the federal government, airlines, banks, telecommunication firms, utility companies, and other corporate entities running large numbers of computers.
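
The short-term fixes described above are commonly known as date windowing: rather than widening every stored field to four digits, the software picks a pivot value and assumes that two-digit years below it belong to the 2000s and the rest to the 1900s. The sketch below is purely illustrative, with an invented pivot of 20, and it shows why such a patch eventually runs out – exactly what happened to the New York parking meters in 2020.

    # Date windowing: a stop-gap fix that splits two-digit years around a pivot value
    # instead of converting stored dates to four digits.
    PIVOT = 20  # invented example: 00-19 are read as 2000-2019, 20-99 as 1920-1999

    def windowed_expand_year(two_digit_year):
        if two_digit_year < PIVOT:
            return 2000 + two_digit_year
        return 1900 + two_digit_year

    print(windowed_expand_year(5))   # 2005 - handled correctly thanks to the window
    print(windowed_expand_year(99))  # 1999 - old records are still read correctly
    print(windowed_expand_year(20))  # 1920 - the window has run out: the year 2020 is misread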

On New Year’s Eve 1999, John Koskinen (Chairman of the President’s Council on Year 2000 Conversion) boarded a flight that would be airborne at midnight to demonstrate his faith in the expensive millennium-readiness effort. The South African-born author Peter de Jager (a Canadian computer engineer best known for his 1993 article in the magazine ‘Computerworld’ and for his Y2K awareness campaign) was also airborne at midnight on a different flight. When the new millennium arrived, both planes landed safely. Without the preparations, the world could have faced a disastrous situation; even so, despite the efforts to prevent the Y2K bug from affecting automated systems, a number of mostly minor incidents occurred, mainly in the United Kingdom, the United States, Australia, Denmark, Egypt, and Japan. United States spy satellites could not be tracked for three days due to a defective patch meant to correct the Y2K bug. In New York, a man returning a tape to a video store received an invoice for 91,250 dollars for bringing it back one hundred years late. In Australia, bus tickets were printed with the wrong dates and rejected by ticket-scanning hardware. The first baby of the new millennium was born in Denmark, and the baby’s age was recorded as one hundred years. Egypt’s national newswire service went down but was soon reinstated. Two nuclear power plants in Japan developed minor, non-threatening errors that were promptly addressed. Several months into the new millennium, a health official in England noticed a statistical irregularity in the number of children born with Down’s syndrome. A review of the figures showed that the ages of one hundred and fifty-four mothers had been recorded incorrectly during January 2000. Given their true ages, they should have been placed in a high-risk group and offered medical tests for irregularities during pregnancy. Because the ages were recorded incorrectly, those risks went undetected; two pregnancies were terminated and four children were born with Down’s syndrome.


Lynette Barnard

I studied for a Bachelor of Science degree, with subjects including Nursing, Psychology, Sociology, Anatomy, Microbiology and Physics, while training as a student nurse at an academic hospital.