BLOCKS Reference

Output specifications

HTTP GET / HTTP POST / HTTP PUT

This page details the types of results returned by the HTTP GET, HTTP POST, and HTTP PUT BLOCKS.

Example for responses with Content-Type headers other than "application/json"

Below is the result of accessing a URL (http://info.cern.ch/hypertext/WWW/TheProject.html) with the HTTP GET BLOCK.

"\u003cHEADER\u003e\n\u003cTITLE\u003eThe World Wide Web project\u003c/TITLE\u003e\n\u003cNEXTID N=\"55\"\u003e\n\u003c/HEADER\u003e\n\u003cBODY\u003e\n\u003cH1\u003eWorld Wide Web\u003c/H1\u003eThe WorldWideWeb (W3) is a wide-area\u003cA\nNAME=0 HREF=\"WhatIs.html\"\u003e\nhypermedia\u003c/A\u003e information retrieval\ninitiative aiming to give universal\naccess to a large universe of documents.\u003cP\u003e\nEverything there is online about\nW3 is linked directly or indirectly\nto this document, including an \u003cA\nNAME=24 HREF=\"Summary.html\"\u003eexecutive\nsummary\u003c/A\u003e of the project, \u003cA\nNAME=29 HREF=\"Administration/Mailing/Overview.html\"\u003eMailing lists\u003c/A\u003e\n, \u003cA\nNAME=30 HREF=\"Policy.html\"\u003ePolicy\u003c/A\u003e , November's  \u003cA\nNAME=34 HREF=\"News/9211.html\"\u003eW3  news\u003c/A\u003e ,\n\u003cA\nNAME=41 HREF=\"FAQ/List.html\"\u003eFrequently Asked Questions\u003c/A\u003e .\n\u003cDL\u003e\n\u003cDT\u003e\u003cA\nNAME=44 HREF=\"../DataSources/Top.html\"\u003eWhat's out there?\u003c/A\u003e\n\u003cDD\u003e Pointers to the\nworld's online information,\u003cA\nNAME=45 HREF=\"../DataSources/bySubject/Overview.html\"\u003e subjects\u003c/A\u003e\n, \u003cA\nNAME=z54 HREF=\"../DataSources/WWW/Servers.html\"\u003eW3 servers\u003c/A\u003e, etc.\n\u003cDT\u003e\u003cA\nNAME=46 HREF=\"Help.html\"\u003eHelp\u003c/A\u003e\n\u003cDD\u003e on the browser you are using\n\u003cDT\u003e\u003cA\nNAME=13 HREF=\"Status.html\"\u003eSoftware Products\u003c/A\u003e\n\u003cDD\u003e A list of W3 project\ncomponents and their current state.\n(e.g. \u003cA\nNAME=27 HREF=\"LineMode/Browser.html\"\u003eLine Mode\u003c/A\u003e ,X11 \u003cA\nNAME=35 HREF=\"Status.html#35\"\u003eViola\u003c/A\u003e ,  \u003cA\nNAME=26 HREF=\"NeXT/WorldWideWeb.html\"\u003eNeXTStep\u003c/A\u003e\n, \u003cA\nNAME=25 HREF=\"Daemon/Overview.html\"\u003eServers\u003c/A\u003e , \u003cA\nNAME=51 HREF=\"Tools/Overview.html\"\u003eTools\u003c/A\u003e ,\u003cA\nNAME=53 HREF=\"MailRobot/Overview.html\"\u003e Mail robot\u003c/A\u003e ,\u003cA\nNAME=52 HREF=\"Status.html#57\"\u003e\nLibrary\u003c/A\u003e )\n\u003cDT\u003e\u003cA\nNAME=47 HREF=\"Technical.html\"\u003eTechnical\u003c/A\u003e\n\u003cDD\u003e Details of protocols, formats,\nprogram internals etc\n\u003cDT\u003e\u003cA\nNAME=40 HREF=\"Bibliography.html\"\u003eBibliography\u003c/A\u003e\n\u003cDD\u003e Paper documentation\non  W3 and references.\n\u003cDT\u003e\u003cA\nNAME=14 HREF=\"People.html\"\u003ePeople\u003c/A\u003e\n\u003cDD\u003e A list of some people involved\nin the project.\n\u003cDT\u003e\u003cA\nNAME=15 HREF=\"History.html\"\u003eHistory\u003c/A\u003e\n\u003cDD\u003e A summary of the history\nof the project.\n\u003cDT\u003e\u003cA\nNAME=37 HREF=\"Helping.html\"\u003eHow can I help\u003c/A\u003e ?\n\u003cDD\u003e If you would like\nto support the web..\n\u003cDT\u003e\u003cA\nNAME=48 HREF=\"../README.html\"\u003eGetting code\u003c/A\u003e\n\u003cDD\u003e Getting the code by\u003cA\nNAME=49 HREF=\"LineMode/Defaults/Distribution.html\"\u003e\nanonymous FTP\u003c/A\u003e , etc.\u003c/A\u003e\n\u003c/DL\u003e\n\u003c/BODY\u003e\n"

This is an example of simply accessing a webpage. Because the response's Content-Type header is "text/html" rather than "application/json", the Results storage variable stores the message body as a raw string; in this case, that is the page's HTML.
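
To reproduce this behavior outside of BLOCKS, a minimal Python sketch (using the third-party requests library; this illustrates the behavior described above, not the BLOCK's internal implementation) looks like this:

import requests

# Fetch the same page used in the example above.
response = requests.get("http://info.cern.ch/hypertext/WWW/TheProject.html")

# The Content-Type here is "text/html", not "application/json", so the
# body is kept as a raw string, mirroring the Results storage variable.
content_type = response.headers.get("Content-Type", "")
results = response.text

print(content_type)   # e.g. "text/html"
print(results[:60])   # the start of the page's HTML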

Example for responses with "application/json" Content-Type headers

The following example shows an HTTP GET BLOCK's property configuration and the results it stores.

Example HTTP GET BLOCK properties:

Properties         Values
URL                https://en.wikipedia.org/w/api.php
Query parameters   Key      Value
                   action   query
                   format   json
                   titles   Machine learning
                   prop     revisions
                   rvprop   content
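
For reference, this property configuration corresponds to the following HTTP request, sketched here in Python with the requests library (the URL and parameter names come directly from the table above; the library choice is only for illustration):

import requests

# Each key/value pair corresponds to one row of the
# "Query parameters" table above.
params = {
    "action": "query",
    "format": "json",
    "titles": "Machine learning",
    "prop": "revisions",
    "rvprop": "content",
}
response = requests.get("https://en.wikipedia.org/w/api.php", params=params)

# The MediaWiki API typically answers format=json requests with an
# "application/json" Content-Type.
print(response.headers.get("Content-Type"))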

Example results:

{
  "batchcomplete": "",
  "query": {
    "pages": {
      "233488": {
        "pageid": 233488,
        "ns": 0,
        "title": "Machine learning",
        "revisions": [
          {
            "contentformat": "text/x-wiki",
            "contentmodel": "wikitext",
            "*": "{{for|the journal|Machine Learning (journal)}}\n{{Machine learning bar}}\n\n'''Machine learning''' is the subfield of [[computer science]] that \"gives computers the ability to learn without being explicitly programmed\" ([[Arthur Samuel]], 1959).<ref name=\"arthur_samuel_machine_learning_def\">{{cite book | title=Too Big to Ignore: The Business Case for Big Data | publisher=Wiley | author=Phil Simon | date=March 18, 2013 | pages=89 | isbn=978-1-118-63817-0 | url=https://books.google.com/books?id=Dn-Gdoh66sgC&pg=PA89#v=onepage&q&f=false}}</ref> Evolved from the study of [[pattern recognition]] and [[computational learning theory]] in [[artificial intelligence]],<ref name=Britannica>http://www.britannica.com/EBchecked/topic/1116194/machine-learning {{tertiary}}</ref> machine learning explores the study and construction of [[algorithm]]s that can [[learning|learn]] from and make predictions on [[data]]<ref>{{cite journal |title=Glossary of terms |author1=Ron Kohavi |author2=Foster Provost |journal=[[Machine Learning (journal)|Machine Learning]] |volume=30 |pages=271–274 |year=1998 |url=http://ai.stanford.edu/~ronnyk/glossary.html}}</ref> – such algorithms overcome following strictly static [[computer program|program instruction]]s by making data driven predictions or decisions,<ref name=\"bishop\" />{{rp|2}} through building a [[Mathematical model|model]] from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit [[algorithm]]s is unfeasible; example applications include [[spam filter]]ing, [[optical character recognition]] (OCR),<ref name=Wernick-Signal-Proc-July-2010>Wernick, Yang, Brankov, Yourganov and Strother, Machine Learning in Medical Imaging, ''[[IEEE Signal Processing Society|IEEE Signal Processing Magazine]]'', vol. 27, no. 4, July 2010, pp. 25-38</ref> [[Learning to rank|search engines]] and [[computer vision]].\n\nMachine learning is closely related to (and often overlaps with) [[computational statistics]], which also focuses in prediction-making through the use of computers. It has strong ties to [[mathematical optimization]], which delivers methods, theory and application domains to the field. Machine learning is sometimes [[conflate]]d with [[data mining]],<ref>{{cite conference |last=Mannila |first=Heikki |title=Data mining: machine learning, statistics, and databases |conference=Int'l Conf. Scientific and Statistical Database Management |publisher=IEEE Computer Society |year=1996}}</ref> where the latter subfield focuses more on exploratory data analysis and is known as [[unsupervised learning]].<ref name=\"bishop\">Machine learning and pattern recognition \"can be viewed as two facets of the same field.\"</ref>{{rp|vii}}<ref>{{cite journal |last=Friedman |first=Jerome H. |authorlink=Jerome H. Friedman |title=Data Mining and Statistics: What's the connection? |journal=Computing Science and Statistics |volume=29 |issue=1 |year=1998 |pages=3–9}}</ref>\n\nWithin the field of [[data analytics]], machine learning is a method used to devise complex models and algorithms that lend themselves to prediction; in commercial use, this is known as [[predictive analytics]]. 
These analytical models allow researchers, [[data science|data scientists]], engineers, and analysts to \"produce reliable, repeatable decisions and results\" and uncover \"hidden insights\" through learning from historical relationships and trends in the data.<ref>{{Cite web|url=http://www.sas.com/it_it/insights/analytics/machine-learning.html|title=Machine Learning: What it is and why it matters|website=www.sas.com|access-date=2016-03-29}}</ref>\n\n== Overview ==\n\n[[Tom M. Mitchell]] provided a widely quoted, more formal definition: \"A computer program is said to learn from experience ''E'' with respect to some class of tasks ''T'' and performance measure ''P'' if its performance at tasks in ''T'', as measured by ''P'', improves with experience ''E''.\" <ref>{{cite book\n|author=Mitchell, T. \n|title=Machine Learning\n|publisher=McGraw Hill\n|isbn= 0-07-042807-7\n|pages=2\n|year=1997}}</ref> This definition is notable for its defining machine learning in fundamentally [[Operational definition|operational]] rather than cognitive terms, thus following [[Alan Turing]]'s proposal in his paper \"[[Computing Machinery and Intelligence]]\", that the question \"Can machines think?\" be replaced with the question \"Can machines do what we (as thinking entities) can do?\".<ref>{{Citation |chapterurl=http://eprints.ecs.soton.ac.uk/12954/ |first=Stevan |last=Harnad |year=2008 |chapter=The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence |editor1-last=Epstein |editor1-first=Robert |editor2-last=Peters |editor2-first=Grace |title=The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer |location= |publisher=Kluwer |isbn= }}</ref> In the proposal he explores the various characteristics that could be possesed by a {{em|thinking machine}} and the various implications in constructing one.\n\n=== Types of problems and tasks ===\n{{Anchor|Algorithm types}}\n\nMachine learning tasks are typically classified into three broad categories, depending on the nature of the learning \"signal\" or \"feedback\" available to a learning system. These are<ref name=\"aima\">{{cite AIMA|edition=2}}</ref>\n* [[Supervised learning]]:  The computer is presented with example inputs and their desired outputs, given by a \"teacher\", and the goal is to learn a general rule that [[Map (mathematics)|maps]] inputs to outputs.\n* [[Unsupervised learning]]:  No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end ([[feature learning]]).\n* [[Reinforcement learning]]:  A computer program interacts with a dynamic environment in which it must perform a certain goal (such as [[Autonomous car|driving a vehicle]]), without a teacher explicitly telling it whether it has come close to its goal. Another example is learning to play a game by playing against an opponent.<ref name=\"bishop\" />{{rp|3}}\n\nBetween supervised and unsupervised learning is [[semi-supervised learning]], where the teacher gives an incomplete training signal: a training set with some (often many) of the target outputs missing. 
[[Transduction (machine learning)|Transduction]] is a special case of this principle where the entire set of problem instances is known at learning time, except that part of the targets are missing.\n\n[[File:Svm max sep hyperplane with margin.png|thumb|A [[support vector machine]] is a classifier that divides its input space into two regions, separated by a [[linear classifier|linear boundary]]. Here, it has learned to distinguish black and white circles.]]\n\nAmong other categories of machine learning problems, [[Meta learning (computer science)|learning to learn]] learns its own [[inductive bias]] based on previous experience. [[Developmental robotics|Developmental learning]], elaborated for [[robot learning]], generates its own sequences (also called curriculum) of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with human teachers and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.\n\nAnother categorization of machine learning tasks arises when one considers the desired ''output'' of a machine-learned system:<ref name=\"bishop\" />{{rp|3}}\n* In [[Statistical classification|classification]], inputs are divided into two or more classes, and the learner must produce a model that assigns unseen inputs to one or more ([[multi-label classification]]) of these classes. This is typically tackled in a supervised way. Spam filtering is an example of classification, where the inputs are email (or other) messages and the classes are \"spam\" and \"not spam\".\n* In [[regression analysis|regression]], also a supervised problem, the outputs are continuous rather than discrete.\n* In [[Cluster analysis|clustering]], a set of inputs is to be divided into groups. Unlike in classification, the groups are not known beforehand, making this typically an unsupervised task.\n* [[Density estimation]] finds the [[Probability distribution|distribution]] of inputs in some space.\n* [[Dimensionality reduction]] simplifies inputs by mapping them into a lower-dimensional space. [[Topic modeling]] is a related problem, where a program is given a list of [[natural language|human language]] documents and is tasked to find out which documents cover similar topics.\n\n== History and relationships to other fields ==\n{{see also|Timeline of machine learning}}\nAs a scientific endeavour, machine learning grew out of the quest for artificial intelligence. Already in the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed \"[[neural network]]s\"; these were mostly [[perceptron]]s and [[ADALINE|other models]] that were later found to be reinventions of the [[generalized linear model]]s of statistics.{{clarify|date=November 2016}} [[Probability theory|Probabilistic]] reasoning was also employed, especially in automated medical diagnosis.<ref name=\"aima\" />{{rp|488}}\n\nHowever, an increasing emphasis on the [[GOFAI|logical, knowledge-based approach]] caused a rift between AI and machine learning. 
Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.<ref name=\"aima\" />{{rp|488}} By 1980, [[expert system]]s had come to dominate AI, and statistics was out of favor.<ref name=\"changing\">{{Cite journal | last1 = Langley | first1 = Pat| title = The changing science of machine learning | doi = 10.1007/s10994-011-5242-y | journal = [[Machine Learning (journal)|Machine Learning]]| volume = 82 | issue = 3 | pages = 275–279 | year = 2011 | pmid =  | pmc = }}</ref> Work on symbolic/knowledge-based learning did continue within AI, leading to [[inductive logic programming]], but the more statistical line of research was now outside the field of AI proper, in [[pattern recognition]] and [[information retrieval]].<ref name=\"aima\" />{{rp|708–710; 755}} Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as \"[[connectionism]]\", by researchers from other disciplines including [[John Hopfield|Hopfield]], [[David Rumelhart|Rumelhart]] and [[Geoff Hinton|Hinton]]. Their main success came in the mid-1980s with the reinvention of [[backpropagation]].<ref name=\"aima\" />{{rp|25}}\n\nMachine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and [[probability theory]].<ref name=\"changing\" /> It also benefited from the increasing availability of digitized information, and the possibility to distribute that via the [[Internet]].\n\nMachine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on ''known'' properties learned from the training data, [[data mining]] focuses on the [[discovery (observation)|discovery]] of (previously) ''unknown'' properties in the data (this is the analysis step of [[Knowledge discovery|Knowledge Discovery]] in Databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as \"unsupervised learning\" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, [[ECML PKDD]] being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to ''reproduce known'' knowledge, while in Knowledge Discovery and Data Mining (KDD) the key task is the discovery of previously ''unknown'' knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.\n\nMachine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some [[loss function]] on a training set of examples. 
Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set examples). The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.<ref>{{cite encyclopedia |last1=Le Roux |first1=Nicolas |first2=Yoshua |last2=Bengio |first3=Andrew |last3=Fitzgibbon |title=Improving First and Second-Order Methods by Modeling Uncertainty |encyclopedia=Optimization for Machine Learning |year=2012 |page=404 |editor-last1=Sra |editor-first1=Suvrit |editor-first2=Sebastian |editor-last2=Nowozin |editor-first3=Stephen J. |editor-last3=Wright |publisher=MIT Press}}</ref>\n\n=== Relation to statistics ===\nMachine learning and [[statistics]] are closely related fields. According to [[Michael I. Jordan]], the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.<ref name=\"mi jordan ama\">{{cite web|url=http://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan/ckelmtt?context=3 |title=statistics and machine learning|publisher=reddit|date=2014-09-10|accessdate=2014-10-01|language=|author=MI Jordan}}</ref> He also suggested the term [[data science]] as a placeholder to call the overall field.<ref name=\"mi jordan ama\" />\n\n[[Leo Breiman]] distinguished two statistical modelling paradigms: data model and algorithmic model,<ref>{{cite web|url=http://projecteuclid.org/download/pdf_1/euclid.ss/1009213726|title=Breiman               :             Statistical Modeling: The Two Cultures (with comments and a           rejoinder by the author)|author=Cornell University Library|publisher=|accessdate=8 August 2015}}</ref> wherein 'algorithmic model' means more or less the machine learning algorithms like [[Random forest]].\n\nSome statisticians have adopted methods from machine learning, leading to a combined field that they call ''statistical learning''.<ref name=\"islr\">{{cite book |author1=Gareth James |author2=Daniela Witten |author3=Trevor Hastie |author4=Robert Tibshirani |title=An Introduction to Statistical Learning |publisher=Springer |year=2013 |url=http://www-bcf.usc.edu/~gareth/ISL/ |page=vii}}</ref>\n\n== {{anchor|Generalization}} Theory ==\n{{Main article|Computational learning theory}}\nA core objective of a learner is to generalize from its experience.<ref name=\"bishop2006\">{{citation|first= C. M. |last= Bishop |authorlink=Christopher M. Bishop |year=2006 |title=Pattern Recognition and Machine Learning |publisher=Springer |isbn=0-387-31073-8}}</ref><ref>[[Mehryar Mohri]], Afshin Rostamizadeh, Ameet Talwalkar (2012) ''Foundations of Machine Learning'', [[MIT Press]] ISBN 978-0-262-01825-8.</ref> Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. 
The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.\n\nThe computational analysis of machine learning algorithms and their performance is a branch of [[theoretical computer science]] known as [[computational learning theory]]. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The [[bias–variance decomposition]] is one way to quantify generalization [[Errors and residuals|error]].\n\nFor the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfit the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to [[overfitting]] and generalization will be poorer.<ref>Ethem Alpaydin. \"[https://books.google.com/books?id=NP5bBAAAQBAJ&printsec=frontcover&dq=ethem+alpaydin&hl=tr&sa=X&redir_esc=y#v=onepage&q=ethem%20alpaydin&f=false Introduction to Machine Learning]\" The MIT Press, 2010.</ref>\n\nIn addition to performance bounds, computational learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in [[Time complexity#Polynomial time|polynomial time]]. There are two kinds of [[time complexity]] results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.\n\n== Approaches ==\n{{Main article|List of machine learning algorithms}}\n\n=== Decision tree learning ===\n{{Main article|Decision tree learning}}\n\nDecision tree learning uses a [[decision tree]] as a [[predictive modelling|predictive model]], which maps observations about an item to conclusions about the item's target value.\n\n=== Association rule learning ===\n{{Main article|Association rule learning}}\n\nAssociation rule learning is a method for discovering interesting relations between variables in large databases.\n\n=== Artificial neural networks ===\n{{Main article|Artificial neural network}}\nAn [[artificial neural network]] (ANN) learning algorithm, usually called \"neural network\" (NN), is a learning algorithm that is inspired by the structure and functional aspects of [[biological neural networks]]. Computations are structured in terms of an interconnected group of [[artificial neuron]]s, processing information using a [[connectionism|connectionist]] approach to [[computation]]. Modern neural networks are [[non-linear]] [[statistical]] [[data modeling]] tools. 
They are usually used to model complex relationships between inputs and outputs, to [[pattern recognition|find patterns]] in data, or to capture the statistical structure in an unknown [[joint probability distribution]] between observed variables.\n\n=== Deep learning ===\n{{Main article|Deep learning}}\nFalling hardware prices and the development of [[GPU]]s for personal use in the last few years have contributed to the development of the concept of [[Deep learning]] which consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are [[computer vision]] and [[speech recognition]].<ref>Honglak Lee, Roger Grosse, Rajesh Ranganath, Andrew Y. Ng. \"[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.149.802&rep=rep1&type=pdf Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations]\" Proceedings of the 26th Annual International Conference on Machine Learning, 2009.</ref>\n\n=== Inductive logic programming ===\n{{Main article|Inductive logic programming}}\n\nInductive logic programming (ILP) is an approach to rule learning using [[logic programming]] as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that [[Entailment|entails]] all positive and no negative examples. [[Inductive programming]] is a related field that considers any kind of programming languages for representing hypotheses (and not only logic programming), such as [[Functional programming|functional programs]].\n\n=== Support vector machines ===\n{{Main article|Support vector machines}}\n\nSupport vector machines (SVMs) are a set of related [[supervised learning]] methods used for [[statistical classification|classification]] and [[regression analysis|regression]]. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.\n\n=== Clustering ===\n{{Main article|Cluster analysis}}\nCluster analysis is the assignment of a set of observations into subsets (called ''clusters'') so that observations within the same cluster are similar according to some predesignated criterion or criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some ''similarity metric'' and evaluated for example by ''internal compactness'' (similarity between members of the same cluster) and ''separation'' between different clusters. Other methods are based on ''estimated density'' and ''graph connectivity''.\nClustering is a method of [[unsupervised learning]], and a common technique for [[statistics|statistical]] [[data analysis]].\n\n=== Bayesian networks ===\n{{Main article|Bayesian network}}\n\nA Bayesian network, belief network or directed acyclic graphical model is a [[graphical model|probabilistic graphical model]] that represents a set of [[random variables]] and their [[conditional independence|conditional independencies]] via a [[directed acyclic graph]] (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. 
Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform [[inference]] and learning.\n\n=== Reinforcement learning ===\n{{Main article|Reinforcement learning}}\n\nReinforcement learning is concerned with how an ''agent'' ought to take ''actions'' in an ''environment'' so as to maximize some notion of long-term ''reward''. Reinforcement learning algorithms attempt to find a ''policy'' that maps ''states'' of the world to the actions the agent ought to take in those states. Reinforcement learning differs from the [[supervised learning]] problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected.\n\n=== Representation learning ===\n{{Main article|Representation learning}}\n\nSeveral learning algorithms, mostly [[unsupervised learning]] algorithms, aim at discovering better representations of the inputs provided during training. Classical examples include [[principal components analysis]] and [[cluster analysis]]. Representation learning algorithms often attempt to preserve the information in their input but transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions, allowing to reconstruct the inputs coming from the unknown data generating distribution, while not being necessarily faithful for configurations that are implausible under that distribution.\n\n[[Manifold learning]] algorithms attempt to do so under the constraint that the learned representation is low-dimensional. [[Sparse coding]] algorithms attempt to do so under the constraint that the learned representation is sparse (has many zeros). [[Multilinear subspace learning]] algorithms aim to learn low-dimensional representations directly from [[tensor]] representations for multidimensional data, without reshaping them into (high-dimensional) vectors.<ref>{{cite journal |first1=Haiping |last1=Lu |first2=K.N. |last2=Plataniotis |first3=A.N. |last3=Venetsanopoulos |url=http://www.dsp.utoronto.ca/~haiping/Publication/SurveyMSL_PR2011.pdf |title=A Survey of Multilinear Subspace Learning for Tensor Data |journal=Pattern Recognition |volume=44 |number=7 |pages=1540–1551 |year=2011 |doi=10.1016/j.patcog.2011.01.004}}</ref> [[Deep learning]] algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.<ref>{{cite book\n | title = Learning Deep Architectures for AI\n | author = Yoshua Bengio\n | publisher = Now Publishers Inc.\n | year = 2009\n | isbn = 978-1-60198-294-0\n | pages = 1–3\n | url = https://books.google.com/books?id=cq5ewg7FniMC&pg=PA3\n }}</ref>\n\n=== Similarity and metric learning ===\n{{Main article|Similarity learning}}\n\nIn this problem, the learning machine is given pairs of examples that are considered similar and pairs of less similar objects. It then needs to learn a similarity function (or a distance metric function) that can predict if new objects are similar. It is sometimes used in [[Recommendation systems]].\n\n=== Sparse dictionary learning ===\n{{Main article|Sparse dictionary learning}}\n\nIn this method, a datum is represented as a linear combination of [[basis function]]s, and the coefficients are assumed to be sparse. 
Let ''x'' be a ''d''-dimensional datum, ''D'' be a ''d'' by ''n'' matrix, where each column of ''D'' represents a basis function. ''r'' is the coefficient to represent ''x'' using ''D''. Mathematically, sparse dictionary learning means solving <math>x \\approx D r</math> where ''r'' is sparse. Generally speaking, ''n'' is assumed to be larger than ''d'' to allow the freedom for a sparse representation.\n\nLearning a dictionary along with sparse representations is [[strongly NP-hard]] and also difficult to solve approximately.<ref>A. M. Tillmann, \"[http://dx.doi.org/10.1109/LSP.2014.2345761 On the Computational Intractability of Exact and Approximate Dictionary Learning]\",\nIEEE Signal Processing Letters 22(1), 2015: 45–49.</ref> A popular heuristic method for sparse dictionary learning is [[K-SVD]].\n\nSparse dictionary learning has been applied in several contexts. In classification, the problem is to determine which classes a previously unseen datum belongs to. Suppose a dictionary for each class has already been built. Then a new datum is associated with the class such that it's best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied in [[image de-noising]]. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.<ref>Aharon, M, M Elad, and A Bruckstein. 2006. \"K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation.\" Signal Processing, IEEE Transactions on 54 (11): 4311-4322</ref>\n\n=== Genetic algorithms ===\n{{Main article|Genetic algorithm}}\n\nA genetic algorithm (GA) is a [[Search algorithm|search]] [[Heuristic (computer science)|heuristic]] that mimics the process of [[natural selection]], and uses methods such as [[Mutation (genetic algorithm)|mutation]] and [[Crossover (genetic algorithm)|crossover]] to generate new [[Chromosome (genetic algorithm)|genotype]] in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms found some uses in the 1980s and 1990s.<ref>{{cite journal |last1=Goldberg |first1=David E. |first2=John H. |last2=Holland |title=Genetic algorithms and machine learning |journal=[[Machine Learning (journal)|Machine Learning]] |volume=3 |issue=2 |year=1988 |pages=95–99 |doi=10.1007/bf00113892}}</ref><ref>{{cite book |title=Machine Learning, Neural and Statistical Classification |first1=D. |last1=Michie |first2=D. J. |last2=Spiegelhalter |first3=C. C. |last3=Taylor |year=1994 |publisher=Ellis Horwood}}</ref> Vice versa, machine learning techniques have been used to improve the performance of genetic and [[evolutionary algorithm]]s.<ref>{{cite journal |last1=Zhang |first1=Jun |last2=Zhan |first2=Zhi-hui |last3=Lin |first3=Ying |last4=Chen |first4=Ni |last5=Gong |first5=Yue-jiao |last6=Zhong |first6=Jing-hui |last7=Chung |first7=Henry S.H. |last8=Li |first8=Yun |last9=Shi |first9=Yu-hui |title=Evolutionary Computation Meets Machine Learning: A Survey |journal=Computational Intelligence Magazine |publisher=IEEE |year=2011 |volume=6 |issue=4 |pages=68–75 |url=http://ieeexplore.ieee.org/iel5/10207/6052357/06052374.pdf?arnumber=6052374 |doi=10.1109/mci.2011.942584}}</ref>\n\n=== Rule-based machine learning ===\n[[Rule-based machine learning]] is a general term for any machine learning method that identifies, learns, or evolves `rules’ to store, manipulate or apply, knowledge.  
The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system.  This is in contrast to other machine learners that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.<ref>{{Cite journal|last=Bassel|first=George W.|last2=Glaab|first2=Enrico|last3=Marquez|first3=Julietta|last4=Holdsworth|first4=Michael J.|last5=Bacardit|first5=Jaume|date=2011-09-01|title=Functional Network Construction in Arabidopsis Using Rule-Based Machine Learning on Large-Scale Data Sets|url=http://www.plantcell.org/content/23/9/3101|journal=The Plant Cell|language=en|volume=23|issue=9|pages=3101–3116|doi=10.1105/tpc.111.088153|issn=1532-298X|pmc=3203449|pmid=21896882}}</ref>  Rule-based machine learning approaches include [[learning classifier system]]s, [[association rule learning]], and [[artificial immune system]]s.\n\n=== Learning classifier systems ===\n{{Main article|Learning classifier system}}\n\nLearning classifier systems (LCS) are a family of [[rule-based machine learning]] algorithms that combine a discovery component  (e.g. typically a [[genetic algorithm]]) with a learning component (performing either [[supervised learning]], [[reinforcement learning]], or [[unsupervised learning]]). They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a [[piecewise]] manner in order to make predictions.<ref>{{Cite journal|last=Urbanowicz|first=Ryan J.|last2=Moore|first2=Jason H.|date=2009-09-22|title=Learning Classifier Systems: A Complete Introduction, Review, and Roadmap|url=http://www.hindawi.com/archive/2009/736398/|journal=Journal of Artificial Evolution and Applications|language=en|volume=2009|pages=1–25|doi=10.1155/2009/736398|issn=1687-6229}}</ref>\n\n== Applications ==\nApplications for machine learning include:\n\n{{div col}}\n* [[Adaptive website]]s\n* [[Affective computing]]\n* [[Bioinformatics]]\n* [[Brain-machine interfaces]]\n* [[Cheminformatics]]\n* Classifying [[DNA sequence]]s\n* [[Computational anatomy]]\n* [[Computer vision]], including [[object recognition]]\n* Detecting [[credit card fraud]]\n* [[Strategy game|Game playing]]\n* [[Information retrieval]]\n* [[Internet fraud]] detection\n* [[Marketing]]\n* [[Machine perception]]\n* [[Diagnosis (artificial intelligence)|Medical diagnosis]]\n* [[Economics]]\n* [[Natural language processing]]\n* [[Natural language understanding]]\n* [[Mathematical optimization|Optimization]] and [[metaheuristic]]\n* [[Online advertising]]\n* [[Recommender system]]s\n* [[Robot locomotion]]\n* [[Search engines]]\n* [[Sentiment analysis]] (or opinion mining)\n* [[Sequence mining]]\n* [[Software engineering]]\n* [[Speech recognition|Speech]] and [[handwriting recognition]]\n* [[Stock market]] analysis\n* [[Structural health monitoring]]\n* [[Syntactic pattern recognition]]\n{{div col end}}\n\nIn 2006, the online movie company [[Netflix]] held the first \"[[Netflix Prize]]\" competition to find a program to better predict user preferences and improve the accuracy on its existing Cinematch movie recommendation algorithm by at least 10%.  
A joint team made up of researchers from [[AT&T Labs]]-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an [[Ensemble Averaging|ensemble model]] to win the Grand Prize in 2009 for $1 million.<ref>[http://www2.research.att.com/~volinsky/netflix/ \"BelKor Home Page\"] research.att.com</ref> Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns (\"everything is a recommendation\") and they changed their recommendation engine accordingly.<ref>{{cite web|url=http://techblog.netflix.com/2012/04/netflix-recommendations-beyond-5-stars.html|title=The Netflix Tech Blog: Netflix Recommendations: Beyond the 5 stars (Part 1)|publisher=|accessdate=8 August 2015}}</ref>\n\nIn 2010 The Wall Street Journal wrote about money management firm [[Rebellion Research]]'s use of machine learning to predict economic movements. The article describes Rebellion Research's prediction of the financial crisis and economic recovery.<ref>{{cite web|url=http://online.wsj.com/news/articles/SB10001424052748703834604575365310813948080|title='Artificial Intelligence' Gains Fans Among Investors  - WSJ|author=Scott Patterson|date=13 July 2010|work=WSJ|accessdate=8 August 2015}}</ref>\n\nIn 2012 co-founder of [[Sun Microsystems]] [[Vinod Khosla]] predicted that 80% of medical doctors jobs would be lost in the next two decades to automated machine learning medical diagnostic software.<ref>{{cite web|url=https://techcrunch.com/2012/01/10/doctors-or-algorithms/|author=Vonod Khosla|publisher=Tech Crunch|title=Do We Need Doctors or Agorithms?|date=January 10, 2012}}</ref>\n\nIn 2014 it has been reported that a machine learning algorithm has been applied in Art History to study fine art paintings, and that it may have revealed previously unrecognized influences between artists.<ref>[https://medium.com/the-physics-arxiv-blog/when-a-machine-learning-algorithm-studied-fine-art-paintings-it-saw-things-art-historians-had-never-b8e4e7bf7d3e When A Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed], ''The Physics at [[ArXiv]] blog''</ref>\n\n== Model assessments ==\nClassification machine learning models can be validated by accuracy estimation techniques like the [[Test set|Holdout]] method, which splits the data in a training and test set (conventionally 2/3 training set and 1/3 test set designation) and evaluates the performance of the training model on the test set. In comparison, the N-fold-[[Cross-validation (statistics)|cross-validation]] method randomly splits the data in k subsets where the k-1 instances of the data are used to train the model while the kth instance is used to test the predictive ability of the training model. In addition to the holdout and cross-validation methods, [[bootstrap]], which samples n instances with replacement from the dataset, can be used to assess model accuracy.<ref>{{cite journal|last1=Kohavi|first1=Ron|title=A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection|journal=International Joint Conference on Artificial Intelligence|date=1995|url=http://web.cs.iastate.edu/~jtian/cs573/Papers/Kohavi-IJCAI-95.pdf}}</ref> In addition to accuracy, [[sensitivity and specificity]] (True Positive Rate: TPR and True Negative Rate: TNR, respectively) can provide modes of model assessment. Similarly [[False positive rate|False Positive Rate]] (FPR) as well as the [[False negative rate|False Negative Rate]] (FNR) can be computed. 
[[Receiver operating characteristic]] (ROC) along with the accompanying Area Under the ROC Curve (AUC) offer additional tools for classification model assessment. Higher AUC is associated with a better performing model.<ref>{{cite journal|last1=Catal|first1=Cagatay|title=Performance Evaluation Metrics for Software Fault Prediction Studies|journal=Acta Polytechnica Hungarica|date=2012|volume=9|issue=4|url=http://www.uni-obuda.hu/journal/Catal_36.pdf|accessdate=2 October 2016}}</ref>\n\n== Ethics ==\nMachine Learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use, thus digitizing [[cultural prejudice]]s such as [[institutional racism]] and [[classism]].<ref>{{Cite web|url=http://www.nickbostrom.com/ethics/artificial-intelligence.pdf|title=The Ethics of Artificial Intelligence|last=Bostrom|first=Nick|date=2011|website=|publisher=|access-date=11 April 2016 }}</ref> Responsible [[Data collection|collection of data]] thus is a critical part of machine learning. \n\nBecause language contains biases, machines trained on language corpora will necessarily also learn bias.<ref>[\"https://freedom-to-tinker.com/2016/08/24/language-necessarily-contains-human-biases-and-so-will-machines-trained-on-language-corpora/\"]</ref> \n\nSee [[Machine ethics]] for additional information.\n\n== Software ==\n[[Software suite]]s containing a variety of machine learning algorithms include the following:\n\n=== Free and open-source software ===\n{{Div col||15em}}\n* [[dlib]]\n* [[ELKI]]\n* [[Encog]]\n* [[GNU Octave]] \n* [[H2o (Analytics tool)|H2O]]\n* [[Apache Mahout|Mahout]]\n* [[Mallet (software project)]] \n* [[mlpy]]\n* [[MLPACK (C++ library)|MLPACK]]\n* [[MOA (Massive Online Analysis)]]\n* [[ND4J]] with [[Deeplearning4j]]\n* [[Numenta#The NuPIC Open Source Project|NuPIC]]\n* [[OpenAI]]\n* [[OpenNN]]\n* [[Orange (software)|Orange]]\n* [[R (programming language)|R]]\n* [[scikit-learn]]\n* [[Shogun (toolbox)|Shogun]]\n* [[TensorFlow]]\n* [[Torch (machine learning)]]\n* [[Apache Spark|Spark]]\n* [[Yooreeka]]\n* [[Weka (machine learning)|Weka]]\n{{Div col end}}\n\n=== Proprietary software with free and open-source editions ===\n{{Div col||15em}}\n* [[KNIME]]\n* [[RapidMiner]]\n{{Div col end}}\n\n=== Proprietary software ===\n{{Div col||15em}}\n* [[Amazon Machine Learning]]\n* [[Angoss]] KnowledgeSTUDIO\n* [[Ayasdi]]\n* [[Databricks]]\n* [[Google APIs|Google Prediction API]]\n* [[SPSS Modeler|IBM SPSS Modeler]]\n* [[KXEN Inc.|KXEN Modeler]]\n* [[LIONsolver]]\n* [[Mathematica]]\n* [[MATLAB]]\n* [[Azure machine learning studio|Microsoft Azure Machine Learning]]\n* [[Neural Designer]]\n* [[NeuroSolutions]]\n* [[Oracle Data Mining]]\n* [[RCASE]]\n* [[SAS (software)#Components|SAS Enterprise Miner]]\n* [[Splunk]]\n* [[STATISTICA]] Data Miner\n{{Div col end}}Also see this curated list of packages in many programming languages: [https://github.com/josephmisiti/awesome-machine-learning#awesome-machine-learning- Awesome Machine Learning].\n\n== Journals ==\n* ''[[Journal of Machine Learning Research]]''\n* [[Machine Learning (journal)|''Machine Learning'']]\n* [[Neural Computation (journal)|''Neural Computation'']]\n* ''[[International Journal of Machine Learning and Cybernetics]]''\n\n== Conferences ==\n* [[Conference on Neural Information Processing Systems]]\n* [[International Conference on Machine Learning]]\n\n== See also ==\n{{Portal|Artificial intelligence|Machine learning}}\n{{columns-list|2|\n* [[Adaptive control]]\n* [[Adversarial 
machine learning]]\n* [[Automatic reasoning]]\n* [[Bayesian structural time series]]\n* [[Big data]]\n* [[Cache language model]]\n* [[Cognitive model]]\n* [[Cognitive science]]\n* [[Computational intelligence]]\n* [[Computational neuroscience]]\n* [[Data science]]\n* [[Ethics of artificial intelligence]]\n* [[Existential risk from advanced artificial intelligence]]\n* [[Explanation-based learning]]\n* [[Glossary of artificial intelligence]]\n* [[List of important publications in computer science#Machine learning|Important publications in machine learning]]\n* [[List of machine learning algorithms]]\n* [[List of datasets for machine learning research]]\n* [[Machine Teaching]]\n* [[Similarity learning]]\n* [[Soft computing]]\n* [[Spike-and-slab variable selection]]\n}}\n\n== References ==\n{{Reflist|30em}}\n\n== Further reading ==\n{{Refbegin|2}} \n* [[Trevor Hastie]], [[Robert Tibshirani]] and [[Jerome H. Friedman]] (2001). ''[http://www-stat.stanford.edu/~tibs/ElemStatLearn/ The Elements of Statistical Learning]'', Springer. ISBN 0-387-95284-5.\n* [[Pedro Domingos]] (September 2015), [[The Master Algorithm]], Basic Books, ISBN 978-0-465-06570-7 \n* [[Mehryar Mohri]], Afshin Rostamizadeh, Ameet Talwalkar (2012). ''[http://www.cs.nyu.edu/~mohri/mlbook/ Foundations of Machine Learning]'', The MIT Press. ISBN 978-0-262-01825-8.\n* Ian H. Witten and Eibe Frank (2011). ''Data Mining: Practical machine learning tools and techniques'' Morgan Kaufmann, 664pp., ISBN 978-0-12-374856-0.\n* [[David J. C. MacKay]]. ''[http://www.inference.phy.cam.ac.uk/mackay/itila/book.html Information Theory, Inference, and Learning Algorithms]'' Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1\n* [[Richard O. Duda]], [[Peter E. Hart]], David G. Stork (2001) ''Pattern classification'' (2nd edition), Wiley, New York, ISBN 0-471-05669-3.\n* [[Christopher Bishop]] (1995). ''Neural Networks for Pattern Recognition'', Oxford University Press. ISBN 0-19-853864-2.\n* [[Vladimir Vapnik]] (1998). ''Statistical Learning Theory''. Wiley-Interscience, ISBN 0-471-03003-1.\n* [[Ray Solomonoff]], ''An Inductive Inference Machine'', IRE Convention Record, Section on Information Theory, Part 2, pp., 56-62, 1957.\n* [[Ray Solomonoff]], \"[http://world.std.com/~rjs/indinf56.pdf An Inductive Inference Machine]\" A privately circulated report from the 1956 [[Dartmouth Conferences|Dartmouth Summer Research Conference on AI]].\n{{Refend}}\n\n== External links ==\n* [http://machinelearning.org/ International Machine Learning Society]\n* Popular online course by [[Andrew Ng]], at [https://www.coursera.org/course/ml Coursera]. It uses [[GNU Octave]]. The course is a free version of [[Stanford University]]'s actual course taught by Ng, whose lectures are also [https://see.stanford.edu/Course/CS229 available for free].\n* [https://mloss.org/ mloss] is an academic database of open-source machine learning software.\n\n[[Category:Machine learning| ]]\n[[Category:Learning]]\n[[Category:Cybernetics]]"
          }
        ]
      }
    }
  }
}

This is an example of accessing a Web API that returns JSON-format data. Because the response's Content-Type header is "application/json", the Results storage variable stores the decoded message body (the JSON parsed into a data structure) rather than the raw text.
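
The equivalent decoding step in plain Python (again a sketch with the requests library, not the BLOCK's internals) would be:

import requests

response = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "format": "json",
        "titles": "Machine learning",
        "prop": "revisions",
        "rvprop": "content",
    },
)

# Content-Type is "application/json", so the body is decoded into a
# data structure rather than kept as raw text; this mirrors what the
# Results storage variable receives.
results = response.json()

# The page ID key ("233488" in the example results above) can change
# over time, so iterate over the pages instead of hard-coding it.
for page in results["query"]["pages"].values():
    print(page["title"])                         # "Machine learning"
    print(page["revisions"][0]["contentmodel"])  # "wikitext"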
