Glossary

QUICK SEARCH

A search engine for terminology relating to AI.

http://foldoc.org/contents/A.html

Huge Glossary: https://www.stottlerhenke.com/features/glossary/

Deep Learning/Machine Learning/Neural Networks

https://www.youtube.com/watch?v=bHvf7Tagt18

Autonomous Agents

An autonomous agent is an intelligent agent that operates on an owner’s behalf but without interference from that owner.

An autonomous agent performs functions within an environment to achieve specific goals, without being directed to do so. Some computer programs act as autonomous agents, as do advanced robotics, examples of artificial life, and computer viruses. Numerous researchers perform work in this field to develop a deeper understanding of agents and their potential capabilities as well as applications. Trade journals and annual conferences provide a medium of exchange to allow people to share information and research outcomes.

Differentiating between autonomous agents and ordinary computer programs can be challenging. In some cases there is overlap, and the lines of the definition may blur. Generally, an agent must be able to use reasoning to interact with a system. This includes the ability to sense information, process it, and in some cases manipulate it. An autonomous agent also needs to behave purposefully to accomplish a particular goal.

An example of an autonomous agent in software is a supply chain management program. The program looks at aspects of the supply chain and can engage in activities like ordering and moving supplies, scheduling personnel, and requesting trucks. These activities all serve a larger goal of keeping the supply chain moving in an organised fashion. This differs from an automated system that reacts only simplistically, for example ordering new supplies when a factory starts to run low, in response to a trigger in its programming.
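The distinction above can be sketched in code. This is a minimal, illustrative sketch only: the `Warehouse` and `SupplyAgent` names and their interfaces are invented for this example, not taken from any real supply chain system.

```python
# Contrast a trigger-based automation with a goal-driven agent loop.
# All names (Warehouse, SupplyAgent) are illustrative stand-ins.

class Warehouse:
    def __init__(self, stock, reorder_point=10):
        self.stock = stock
        self.reorder_point = reorder_point

def automated_reorder(warehouse):
    # Simple automation: reacts to a single hard-coded trigger.
    if warehouse.stock < warehouse.reorder_point:
        return "order placed"
    return "no action"

class SupplyAgent:
    # Agent: senses several aspects of its environment, reasons about
    # them, and chooses among actions in service of a larger goal
    # (keeping stock at a target level).
    def __init__(self, goal_stock):
        self.goal_stock = goal_stock

    def act(self, warehouse, trucks_available, demand_forecast):
        needed = self.goal_stock + demand_forecast - warehouse.stock
        if needed <= 0:
            return "hold"
        if trucks_available == 0:
            return "request truck"
        return f"order {needed} units"

print(automated_reorder(Warehouse(stock=5)))   # order placed
agent = SupplyAgent(goal_stock=50)
print(agent.act(Warehouse(stock=30), trucks_available=1, demand_forecast=20))
```

The automation only ever fires its one trigger, while the agent weighs stock, transport, and forecast demand before choosing among several actions in pursuit of its goal.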

Cognitive Science

the study of thought, learning, and mental organisation, which draws on aspects of psychology, linguistics, philosophy, and computer modelling.

Brain – the understanding of neurobiological processes and phenomena

Behaviour – the experimental methods and findings from the study of psychology, language, and the sociocultural environment

Computation – the powers and limits of various representations, coupled with studies of computational mechanisms

Intelligent Entities

an intelligent thing with distinct and independent existence

Analogical Reasoning

http://changingminds.org/disciplines/argument/types_reasoning/analogical_reasoning.htm

The process of reasoning from particular to particular: it derives a conclusion from one’s experience in one or more similar situations. It is the simplest and most common method of reasoning, and also the most fraught with chances of error. Together with deductive reasoning and inductive reasoning, it constitutes one of the three basic tools of thinking.

http://www.businessdictionary.com/definition/analogical-reasoning.html

Case-Based Reasoning

http://artint.info/html/ArtInt_190.html

In case-based reasoning, the training examples – the cases – are stored and accessed to solve a new problem. To get a prediction for a new example, those cases that are similar, or close to, the new example are used to predict the value of the target features of the new example.

Data Mining

Generally, data mining (sometimes called data or knowledge discovery) is the process of analysing data from different perspectives and summarising it into useful information – information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analysing data. It allows users to analyse data from many different dimensions or angles, categorise it, and summarise the relationships identified. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases.

http://www.anderson.ucla.edu/faculty/jason.frand/teacher/technologies/palace/datamining.htm
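The "finding correlations among fields" part can be illustrated on a toy scale: computing the Pearson correlation between two columns of a small, made-up sales table. The data is fabricated purely for illustration.

```python
# Toy illustration of finding a correlation between two fields.

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length columns.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ad_spend = [10, 20, 30, 40, 50]   # two columns of a made-up table
revenue  = [15, 28, 33, 45, 52]
r = pearson(ad_spend, revenue)
print(round(r, 2))  # close to 1: the two fields move together
```

A real data mining tool scans many such field pairs in a large database and surfaces the strong relationships automatically.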

Natural Language Processing

Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction.

http://research.microsoft.com/en-us/groups/nlp/

The goal of the Natural Language Processing (NLP) group is to design and build software that will analyse, understand, and generate languages that humans use naturally, so that eventually you will be able to address your computer as though you were addressing another person.

Turing Test

The Turing test is a test, developed by Alan Turing in 1950, of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

http://psych.utoronto.ca/users/reingold/courses/ai/turing.html

a proposed test of a computer’s ability to think, requiring that the covert substitution of the computer for one of the participants in a keyboard and screen dialogue should be undetectable by the remaining human participant

Machine Reasoning

http://research.microsoft.com/pubs/192773/tr-2011-02-08.pdf

Our world as we know it is running on artificial intelligence. Siri manages our calendars. Facebook suggests our friends. Computers trade our stocks. We have cars that park themselves, and air traffic control is almost fully automated.

A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference and probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recogniser, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine-tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question: converting the image of a text page into computer-readable text.
This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.
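The "concatenation of trainable modules" can be sketched as an OCR-style pipeline. Each stage below is a trivial stub standing in for an independently trained model; the interfaces and lookup table are invented for illustration only.

```python
# Sketch of concatenating independently trained modules into one system.

def segmenter(page_image):
    # A trained segmenter would locate characters in a page image; here
    # the "image" is already a list of glyph codes, one per character.
    return list(page_image)

def recogniser(glyph):
    # A trained isolated-character recogniser; this stub maps glyph codes
    # to (letter, confidence) hypotheses via a made-up table.
    table = {0: [("c", 0.6), ("e", 0.4)],
             1: [("a", 0.9)],
             2: [("t", 0.8), ("l", 0.2)]}
    return table[glyph]

def language_model(hypotheses):
    # A trained language model would rescore whole-word hypotheses; this
    # stub just keeps the highest-confidence letter at each position.
    return "".join(max(h, key=lambda lc: lc[1])[0] for h in hypotheses)

def ocr(page_image):
    # Concatenating the modules yields a system that answers a new
    # question: page image -> computer-readable text.
    return language_model([recogniser(g) for g in segmenter(page_image)])

print(ocr([0, 1, 2]))  # cat
```

The point of the passage is that this kind of composition, followed by end-to-end fine-tuning, is itself a simple form of the algebraic manipulation that richer inference systems perform.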