Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it."[2]
10th century BC
Yan Shi presented King Mu of Zhou with mechanical men which were capable of moving their bodies independently.[3]
Ctesibius invented a mechanical water clock with an alarm. This was the first example of a feedback mechanism.[citation needed]
1st century
Hero of Alexandria created mechanical men and other automatons.[8] He produced what may have been "the world's first practical programmable machine:"[9] an automatic theatre.
260
Porphyry wrote the Isagogê, which categorized knowledge and logic, including a drawing of what would later be called a "semantic net".[10]
The Banū Mūsā brothers created a programmable music automaton described in their Book of Ingenious Devices: a steam-driven flute controlled by a program represented by pins on a revolving cylinder.[12] This was "perhaps the first machine with a stored program".[9]
al-Khwarizmi wrote textbooks with precise step-by-step methods for arithmetic and algebra, used throughout the Islamic world, India and Europe until the 16th century. The word "algorithm" is derived from his name.[13]
Ramon Llull, a Mallorcan theologian, invented the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. Llull described his machines as mechanical entities that could combine basic truths and facts to produce advanced knowledge. The method would be developed further by Gottfried Wilhelm Leibniz in the 17th century.[15]
~1500
Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy.[16]
Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".[20][21]
René Descartes proposed that bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").[23]
1654
Blaise Pascal described how to find expected values in probability; in 1662 Antoine Arnauld published a formula for finding the maximum expected value, and in 1663 Gerolamo Cardano's solution to the same problems was published, 116 years after it was written. The theory of probability was further developed by Jacob Bernoulli and Pierre-Simon Laplace in the 18th century.[24] Probability theory would become central to AI and machine learning from the 1990s onward.
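In modern notation, the quantity computed in these early treatments is the probability-weighted average of outcomes. The formula below is a present-day restatement with a worked dice example, not the original 17th-century notation.

```latex
% Expected value as a probability-weighted sum, with a worked example.
\mathbb{E}[X] = \sum_{i} x_i \, p_i ,
\qquad
\text{e.g. for a fair six-sided die: } \mathbb{E}[X] = \sum_{k=1}^{6} k \cdot \tfrac{1}{6} = 3.5 .
```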
Leibniz developed a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. It assigned a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.[27]
1726
Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations", by the use of which "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."[28] The machine is a parody of the Ars Magna, one of the inspirations of Gottfried Wilhelm Leibniz's mechanism.
Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk, which Kempelen claimed could defeat human players.[31] The Turk was later shown to be a hoax, involving a human chess player.
George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.[39]
1863
Samuel Butler suggested that Darwinian evolution also applies to machines, and speculated that they will one day become conscious and eventually supplant humanity.[40]
20th century
AI history timeline image covering the most important events from 1900 to 2025
Kurt Gödel encoded mathematical statements and proofs as integers, and showed that there are true theorems that are unprovable by any consistent theorem-proving machine. Thus "he identified fundamental limits of algorithmic theorem proving, computing, and any type of computation-based AI,"[9] laying foundations of theoretical computer science and AI theory.
1935
Alonzo Church extended Gödel's proof and showed that the decision problem (Entscheidungsproblem) does not have a general solution.[47] He developed the lambda calculus, which would eventually become fundamental to the theory of programming languages.
1936
Konrad Zuse filed his patent application for a program-controlled computer.[48]
Alan Turing produced the report "Intelligent Machinery", regarded as the first manifesto of artificial intelligence. It introduced many concepts, including the logic-based approach to problem solving, the idea that intellectual activity consists mainly of various kinds of search, and a discussion of machine learning in which he anticipated the connectionist approach to AI.[52]
John von Neumann (quoted by Edwin Thompson Jaynes) in response to a comment at a lecture that it was impossible for a machine (at least ones created by humans) to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church–Turing thesis which states that any effective procedure can be simulated by a (generalized) computer.
Arthur Samuel (IBM) wrote the first game-playing program, for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur.[57] His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.[58][59]
The Teddington Conference on the Mechanization of Thought Processes was held in the UK; among the papers presented were John McCarthy's "Programs with Common Sense" (which proposed the Advice Taker application as a primary research goal),[58] Oliver Selfridge's "Pandemonium", and Marvin Minsky's "Some Methods of Heuristic Programming and Artificial Intelligence".
James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level.
In Minds, Machines and Gödel, John Lucas[64] denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features, thereby overcoming the limitations of Rosenblatt's simple perceptrons.
1964
Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC) shows that computers can understand natural language well enough to solve algebra word problems correctly.
Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.
Lotfi A. Zadeh at U.C. Berkeley publishes his first paper introducing fuzzy logic, "Fuzzy Sets" (Information and Control 8: 338–353).
J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language.
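The resolution rule itself is compact: from two clauses that contain a complementary pair of literals, derive a new clause containing everything that remains. The sketch below is a minimal propositional illustration in Python; Robinson's full method works at the first-order level and also requires unification, which is omitted here.

```python
def resolve(clause_a, clause_b):
    """Propositional resolution: clauses are frozensets of literals,
    where a literal is a string such as 'P' or its negation '~P'.
    Returns all resolvents of the two clauses."""
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    resolvents = set()
    for lit in clause_a:
        if negate(lit) in clause_b:
            # Drop the complementary pair and merge the remaining literals.
            resolvents.add(frozenset((clause_a - {lit}) | (clause_b - {negate(lit)})))
    return resolvents

# From (P or Q) and (~P or R), resolution derives the clause (Q or R).
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))
```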
Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system.
1966
Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated semantic nets.
Machine Intelligence[71] workshop at Edinburgh – the first of an influential annual series organized by Donald Michie and others.
The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) was demonstrated interpreting mass spectra of organic chemical compounds. It was the first successful knowledge-based program for scientific reasoning.
Wallace and Boulton's program, Snob (Comp.J. 11(2) 1968), for unsupervised classification (clustering) uses the Bayesian minimum message length criterion, a mathematical realisation of Occam's razor.
Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program and the basis of many PhD dissertations since, such as those of Bran Boguraev and David Carter at Cambridge.
First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford.
Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. This book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. However, by the time the book came out, methods for training multilayer perceptrons by deep learning were already known (Alexey Ivakhnenko and Valentin Lapa, 1965; Shun'ichi Amari, 1967).[9] Significant progress in the field continued (see below).
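The canonical illustration of the limitation analyzed in Perceptrons is the XOR function: no single linear threshold unit (one layer of adjustable weights) can compute it, while adding a hidden layer makes it straightforward. The Python sketch below is illustrative only and is not taken from the book; the particular weights and the brute-force grid search are assumptions made for the example.

```python
import itertools

# XOR truth table.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def threshold_unit(w1, w2, b, x1, x2):
    """A single linear threshold unit (perceptron output unit)."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Brute-force a grid of weights: none reproduces XOR, because XOR is
# not linearly separable -- the limitation Minsky and Papert analyzed.
grid = [i / 2 for i in range(-8, 9)]
solutions = [
    (w1, w2, b)
    for w1, w2, b in itertools.product(grid, repeat=3)
    if all(threshold_unit(w1, w2, b, x1, x2) == y for (x1, x2), y in xor.items())
]
print(solutions)  # [] -- no single-unit solution exists

# A network with one hidden layer solves it: OR and NAND units feeding AND.
def xor_net(x1, x2):
    h1 = threshold_unit(1, 1, -0.5, x1, x2)    # OR
    h2 = threshold_unit(-1, -1, 1.5, x1, x2)   # NAND
    return threshold_unit(1, 1, -1.5, h1, h2)  # AND
print([xor_net(x1, x2) for (x1, x2) in xor])   # [0, 1, 1, 0]
```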
McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge.
Bill Woods described Augmented Transition Networks (ATNs) as a representation for natural language understanding.
Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.
1971
Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
Work on the Boyer-Moore theorem prover started in Edinburgh.[75]
The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities.
1974
Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1975
Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
Austin Tate developed the Nonlin hierarchical planning system able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan.
Marvin Minsky published his widely read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together.
The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal.
The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments.
1979
Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells".
Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge.
The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab.
BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion (in part via luck).
Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance.
Late 1970s
Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.
Stevo Bozinovski and Charles Anderson carry out the first concurrent programming (task parallelism) in neural network research. A program, "CAA Controller", written and executed by Bozinovski, interacts with the program "Inverted Pendulum Dynamics", written and executed by Anderson, using VAX/VMS mailboxes for inter-program communication. The CAA controller learns to balance the simulated inverted pendulum.[78][79][80]
The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).
Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (cf. Doyle 1983).[82]
Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc., Toronto, the first commercial strategic and managerial advisory system. The system was based upon a self-developed, forward-chaining expert system with 3,000 rules about the evolution of markets and competitive strategies, co-authored by Alistair Davidson and Mary Chung, founders of the firm, with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.[83]
Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
Early 1990s
TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players.
1991
The DART scheduling application deployed in the first Gulf War repaid DARPA's 30-year investment in AI research.[85]
1992
Carol Stoker and the NASA Ames robotics team explore marine life in Antarctica with an undersea robot, Telepresence ROV, operated from the ice near McMurdo Bay, Antarctica, and remotely via satellite link from Moffett Field, California.[86]
ISX corporation wins "DARPA contractor of the year"[87] for the Dynamic Analysis and Replanning Tool (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s.[88]
1994
Lotfi A. Zadeh at U.C. Berkeley creates "soft computing"[89] and builds a world network of research with a fusion of neural science and neural net systems, fuzzy set theory and fuzzy systems, evolutionary algorithms, genetic programming, and chaos theory and chaotic systems ("Fuzzy Logic, Neural Networks, and Soft Computing", Communications of the ACM, March 1994, Vol. 37 No. 3, pages 77–84).
With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
English draughts (checkers) world champion Tinsley resigned a match against the computer program Chinook. Chinook defeated the 2nd-highest-rated player, Lafferty, and won the USA National Tournament by the widest margin ever.
Cindy Mason at NASA organizes the First International IJCAI Workshop on AI and the Environment.[91]
"No Hands Across America": A semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km). Throttle and brakes were controlled by a human driver.[92][93]
One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1,000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (a safety driver took over only in a few critical situations). Active vision was used to deal with rapidly changing street scenes.
1996
Steve Grand, roboticist and computer scientist, develops and releases Creatures, a popular simulation of artificial life-forms with simulated biochemistry, neurology with learning algorithms and inheritable digital DNA.
Sony introduces AIBO, an improved domestic robot similar to a Furby; it becomes one of the first artificially intelligent "pets" that is also autonomous.
Honda's ASIMO, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings.
Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for human motion capture technology for this device was developed by the Computer Vision group at Microsoft Research, Cambridge.[105][106]
Robot HRP-2, built by SCHAFT Inc of Japan, a subsidiary of Google, defeats 15 teams to win DARPA's Robotics Challenge Trials. HRP-2 scored 27 out of 32 points across eight tasks needed in disaster response: driving a vehicle, walking over debris, climbing a ladder, removing debris, walking through doors, cutting through a wall, closing valves, and connecting a hose.[111]
NEIL, the Never Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images.[112]
2015
Two techniques were developed concurrently to train very deep networks: the highway network[113] and the residual neural network (ResNet).[114] They allowed networks more than 1,000 layers deep to be trained.
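The core idea shared by both architectures is a shortcut path that lets each layer learn only a correction to its input, so the signal (and the gradient) can pass through arbitrarily many layers largely unchanged. Below is a minimal NumPy sketch of a ResNet-style identity-shortcut block; the layer width, the ReLU nonlinearity, and the weight scale are assumptions for illustration, not details from the cited papers (the highway network additionally uses learned gates, omitted here).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = x + F(x): the block only learns the residual F, while the
    identity path carries the signal (and gradients) straight through."""
    return x + w2 @ relu(w1 @ x)

rng = np.random.default_rng(0)
dim = 16
x = rng.standard_normal(dim)

# Stack 1,000 blocks; with small residual weights the representation
# neither explodes nor vanishes, which is what makes very deep stacks
# trainable in practice.
for _ in range(1000):
    w1 = 0.01 * rng.standard_normal((dim, dim))
    w2 = 0.01 * rng.standard_normal((dim, dim))
    x = residual_block(x, w1, w2)
print(np.linalg.norm(x))  # stays on the order of the input norm
```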
In July 2015, an open letter to ban development and use of autonomous weapons was signed by Hawking, Musk, Wozniak and 3,000 researchers in AI and robotics.[117]
Google DeepMind's AlphaGo (version: Lee)[118] defeated Lee Sedol 4–1. Lee Sedol is a 9-dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016.[120]
DeepStack[121] is the first published algorithm to beat human players in imperfect-information games, as shown with statistical significance on heads-up no-limit poker. Soon after, the poker AI Libratus, developed by a different research group, individually defeated each of its four human opponents, among the best players in the world, at an exceptionally high aggregate win rate over a statistically significant sample.[122] In contrast to chess and Go, poker is an imperfect-information game.[123]
Google Lens, an image analysis and comparison tool released in October 2017, associates millions of landscapes, artworks, products and species with their text descriptions.
Google DeepMind revealed that AlphaGo Zero, an improved version of AlphaGo, displayed significant performance gains while using far fewer tensor processing units than AlphaGo Lee (it used the same number of TPUs as AlphaGo Master).[118] Unlike previous versions, which learned the game by observing millions of human moves, AlphaGo Zero learned by playing only against itself. The system then defeated AlphaGo Lee 100 games to zero, and defeated AlphaGo Master 89 to 11.[118] Although unsupervised learning is a step forward, much has yet to be learned about general intelligence.[130] AlphaZero mastered chess in four hours, defeating the best chess engine, Stockfish 8; AlphaZero won 28 out of 100 games, and the remaining 72 games ended in a draw.
Alibaba language processing AI outscores top humans at a Stanford University reading and comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.[131]
The European Lab for Learning and Intelligent Systems (ELLIS) is proposed as a pan-European competitor to American AI efforts, with the aim of staving off a brain drain of talent, along the lines of CERN after World War II.[132]
Announcement of Google Duplex, a service to allow an AI assistant to book appointments over the phone. The Los Angeles Times judges the AI's voice to be a "nearly flawless" imitation of human-sounding speech.[133]
2019
DeepMind's AlphaStar reaches Grandmaster level at StarCraft II, outperforming 99.8 percent of human players.[134]
2020s
The volume of public Google searches for the term "AI" began to accelerate in 2022.
2020
In February 2020, Microsoft introduces its Turing Natural Language Generation (T-NLG), which is the "largest language model ever published at 17 billion parameters".[135]
OpenAI introduces GPT-3, a state-of-the-art autoregressive language model that uses deep learning to produce computer code, poetry and other text that is exceptionally similar to, and almost indistinguishable from, writing produced by humans. Its capacity was ten times greater than that of T-NLG. It was introduced in May 2020[137] and was in beta testing by June 2020.
2022
ChatGPT, an AI chatbot developed by OpenAI, debuts in November 2022. It is initially built on top of the GPT-3.5 large language model. While it gains considerable praise for the breadth of its knowledge base, deductive abilities, and the human-like fluidity of its natural language responses,[138][139] it also garners criticism for, among other things, its tendency to "hallucinate",[140][141] a phenomenon in which an AI responds with factually incorrect answers with high confidence. The release triggers widespread public discussion on artificial intelligence and its potential impact on society.[142][143]
A November 2022 class action lawsuit against Microsoft, GitHub and OpenAI alleges that GitHub Copilot, an AI-powered code editing tool trained on public GitHub repositories, violates the copyrights of the repositories' authors, noting that the tool is able to generate source code which matches its training data verbatim, without providing attribution.[144]
2023
By January 2023, ChatGPT has more than 100 million users, making it the fastest-growing consumer application to date.[145]
On January 16, 2023, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, file a class-action copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists.[146]
On January 17, 2023, Stability AI is sued in London by Getty Images for using its images in their training data without purchasing a license.[147][148]
Getty files another suit against Stability AI in a US district court in Delaware on February 6, 2023. In the suit, Getty again alleges copyright infringement for the use of its images in the training of Stable Diffusion, and further argues that the model infringes Getty's trademark by generating images with Getty's watermark.[149]
OpenAI's GPT-4 model is released in March 2023 and is regarded as an impressive improvement over GPT-3.5, with the caveat that GPT-4 retains many of the same problems of the earlier iteration.[150] Unlike previous iterations, GPT-4 is multimodal, allowing image input as well as text. GPT-4 is integrated into ChatGPT as a subscriber service. OpenAI claims that in their own testing the model received a score of 1410 on the SAT (94th percentile),[151] 163 on the LSAT (88th percentile), and 298 on the Uniform Bar Exam (90th percentile).[152]
On March 7, 2023, Nature Biomedical Engineering writes that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."[153]
In response to ChatGPT, Google releases in a limited capacity its chatbot Google Bard, based on the LaMDA and PaLM large language models, in March 2023.[154][155]
On March 29, 2023, a petition with over 1,000 signatures, including those of Elon Musk, Steve Wozniak and other tech leaders, calls for a 6-month halt to what the petition refers to as "an out-of-control race" producing AI systems that their creators cannot "understand, predict, or reliably control".[156][157]
In May 2023, Google announces Bard's transition from LaMDA to PaLM 2, a significantly more advanced language model.[158]
In the last week of May 2023, a Statement on AI Risk is signed by Geoffrey Hinton, Sam Altman, Bill Gates, and many other prominent AI researchers and tech leaders with the following succinct message: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[159][160]
On July 9, 2023, Sarah Silverman files a class action lawsuit against Meta and OpenAI for copyright infringement for training their large language models on millions of authors' copyrighted works without permission.[161]
In August, 2023, the New York Times, CNN, Reuters, the Chicago Tribune, Australian Broadcasting Corporation (ABC) and other news companies block OpenAI's GPTBot web crawler from accessing their content, while the New York Times also updates its terms of service to disallow the use of its content in large language models.[162]
In October 2023, AlpineGate AI Technologies Inc. CEO John Godel announced the launch of their AI Suite, AGImageAI, along with their proprietary GPT model, AlbertAGPT.[167]
In November 2023, the first global AI Safety Summit was held at Bletchley Park in the UK to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[170] Twenty-eight countries, including the United States and China, together with the European Union, issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[171][172]
2024
On October 9, Demis Hassabis, co-founder and CEO of Google DeepMind and Isomorphic Labs, and John Jumper, a director at Google DeepMind, were co-awarded the 2024 Nobel Prize in Chemistry for their work developing AlphaFold, a groundbreaking AI system that predicts the 3D structure of proteins from their amino acid sequences.
2025
On February 6, Mistral AI releases Le Chat, an AI assistant able to generate responses at up to 1,000 words per second.[173]
^Richard McKeon, ed. (1941). The Organon. Random House with Oxford University Press.
^Giles, Timothy (2016). "Aristotle Writing Science: An Application of His Theory". Journal of Technical Writing and Communication. 46: 83–104. doi:10.1177/0047281615600633. S2CID 170906960.
^Amari, Shun-Ichi (1972). "Learning patterns and pattern sequences by self-organizing nets of threshold elements". IEEE Transactions. C (21): 1197–1206.
^Church, A. (1936). "An unsolvable problem of elementary number theory (first presented on 19 April 1935 to the American Mathematical Society)". American Journal of Mathematics. 58 (2): 345–363. doi:10.2307/2371045. JSTOR 2371045.
^K. Zuse (1936). Verfahren zur selbsttätigen Durchführung von Rechnungen mit Hilfe von Rechenmaschinen. Patent application Z 23 139 / GMD Nr. 005/021, 1936.
^Copeland, J (Ed.) (2004). The Essential Turing: the ideas that gave birth to the computer age. Oxford: Clarendon Press. ISBN 0-19-825079-7.
^Amari, Shun'ichi (1967). "A theory of adaptive pattern classifier". IEEE Transactions. EC (16): 279–307.
^Grosz, Barbara J.; Hajicova, Eva; Joshi, Aravind (2015). "Jane J. Robinson". Computational Linguistics. 41 (4): 723–726. doi:10.1162/COLI_a_00235. Retrieved 23 January 2024.
^Linnainmaa, Seppo (1970). Algoritmin kumulatiivinen pyöristysvirhe yksittäisten pyöristysvirheiden Taylor-kehitelmänä [The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors] (PDF) (Thesis) (in Finnish). pp. 6–7.
^Stevo Bozinovski and Ante Fulgosi (1976). "The influence of pattern similarity and transfer learning upon training of a base perceptron" (original in Croatian) Proceedings of Symposium Informatica 3-121-5, Bled.
^Stevo Bozinovski (2020) "Reminder of the first paper on transfer learning in neural networks, 1976". Informatica 44: 291–302.
^Bozinovski, Stevo (1981) "Inverted pendulum control program" ANW Memo, Adaptive Networks Group, Computer and Information Science Department, University of Massachusetts at Amherst, December 10, 1981
^Bozinovski, Stevo and Anderson, Charles (1983) "Associative memory as controller of an unstable system: Simulation of a learning control" Proc. IEEE Mediterranean Electrotechnical Conference, C5.11., Athens, Greece.
^Bozinovski, Stevo (1995) "Adaptive parallel distributed processing: Neural and genetic agents: Neuro-genetic agents and a structural theory of self-reinforcement learning systems" CMPSCI Technical Report 95-107, Computer Science Department, University of Massachusetts at Amherst
^Harry Henderson (2007). "Chronology". Artificial Intelligence: Mirrors for the Mind. NY: Infobase Publishing. ISBN 978-1-60413-059-1. Archived from the original on 15 March 2023. Retrieved 11 April 2015.
^Cook, Donald A.; Sterling, John W. (June 1989). "EmeraldInsight". Planning Review. 17 (6): 22–27. doi:10.1108/eb054275. Archived from the original on 2 February 2014. Retrieved 15 March 2015.
^Graves, Alex; Fernández, Santiago; Gomez, Faustino; Schmidhuber, Juergen (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX 10.1.1.75.6306.
^Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552
Feigenbaum, Edward A.; McCorduck, Pamela (1983), The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, Michael Joseph, ISBN 978-0-7181-2401-4
Kaplan, Andreas; Haenlein, Michael (2018), "Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence", Business Horizons, 62: 15–25, doi:10.1016/j.bushor.2018.08.004, S2CID 158433736
Lenat, Douglas; Guha, R. V. (1989), Building Large Knowledge-Based Systems, Addison-Wesley
Levitt, Gerald M. (2000), The Turk, Chess Automaton, Jefferson, N.C.: McFarland, ISBN 978-0-7864-0778-1
Lighthill, Professor Sir James (1973), "Artificial Intelligence: A General Survey", Artificial Intelligence: a paper symposium, Science Research Council
Newell, Allen; Simon, H. A. (1963), "GPS: A Program that Simulates Human Thought", in Feigenbaum, Edward; Feldman, Julian (eds.), Computers and Thought, New York: McGraw-Hill
Newquist, HP (1994), The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think, New York: Macmillan/SAMS, ISBN 978-0-9885937-1-8
Pearl, J. (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, California: Morgan Kaufmann