research [2014/07/11 09:55]
Christel Dartigues [Random Forests]
research [2015/07/06 17:27] (current)
Frederic Precioso [Boosting]
====== Research Topics ======
  
MinD research group aims at developing algorithms for data mining and machine learning, with a focus on large-scale problems. In particular, MinD has expertise in [[research#Concept Lattices|Concept Lattices]], [[research#Evolutionary Computation|Evolutionary Computation]], [[research#Multi-Agent Systems|Multi-Agent Systems]], [[research#Naïve Bayes|Naïve Bayes]], [[research#Random Forests|Random Forests]], [[research#Support Vector Machines|Support Vector Machines]], [[research#Boosting|Boosting]], [[research#Deep Learning|Deep Learning]], ...
Those methods are used to extract knowledge from [[wp>Big Data]] for:
  * Association rule learning
Given a set of instances (objects) described by a list of properties (variable values), the concept lattice is a hierarchy of concepts in which each concept associates a set of instances (extent) sharing the same values for a certain set of properties (intent).
Concepts are partially ordered in the lattice according to the inclusion relation: each sub-concept in the lattice contains a subset of the instances and a superset of the properties of the related concepts above it.
In data mining, concept lattices serve as a theoretical framework for the efficient extraction of loss-less condensed representations of [[http://en.wikipedia.org/wiki/Association_rule_learning|association rules]], the generation of [[http://en.wikipedia.org/wiki/Classification_rule|classification rules]], and for hierarchical [[http://en.wikipedia.org/wiki/Biclustering|biclustering]].
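As a toy illustration of these definitions (the miniature context below is an assumed example, not MinD data or code), the two derivation operators and a brute-force enumeration of all formal concepts can be sketched in Python:

```python
from itertools import combinations

# Toy formal context (assumed example data): object -> its properties
context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}
all_attrs = set().union(*context.values())

def extent(attrs):
    """Objects possessing every property in `attrs`."""
    return {o for o, props in context.items() if attrs <= props}

def intent(objs):
    """Properties shared by every object in `objs`."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(all_attrs)

# A pair (E, I) is a formal concept when E = extent(I) and I = intent(E).
# Brute force: close every attribute subset (fine for tiny contexts only).
concepts = set()
for r in range(len(all_attrs) + 1):
    for attrs in combinations(sorted(all_attrs), r):
        e = extent(set(attrs))
        concepts.add((frozenset(e), frozenset(intent(e))))

# Order by extent size: moving down the lattice, extents shrink, intents grow.
for e, i in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(e), "<->", sorted(i))
```

Each printed pair moves one level down the hierarchy: its extent is a subset and its intent a superset of the concepts above it, matching the inclusion relation described in the paragraph.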
----
===== Evolutionary Computation =====
----
===== Support Vector Machines =====
In machine learning, [[wp>support vector machines]] (SVMs, also called support vector networks) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier.
In addition to performing linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
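As a rough sketch of the linear case (the toy data and hyper-parameters below are assumptions for illustration, not MinD's implementation), a maximum-margin separator can be approximated by stochastic sub-gradient descent on the regularized hinge loss, in the style of Pegasos:

```python
import random

# Toy 2-D two-class data, linearly separable through the origin (assumed example)
data = [((2.0, 2.0), 1), ((3.0, 3.0), 1), ((-2.0, -1.0), -1), ((-3.0, -2.0), -1)]

def train_linear_svm(samples, lam=0.01, epochs=200):
    """SGD on  lam/2 * ||w||^2 + mean hinge loss  (Pegasos-style)."""
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        random.shuffle(samples)
        for x, y in samples:
            t += 1
            eta = 1.0 / (lam * t)          # decreasing step size
            margin = y * (w[0] * x[0] + w[1] * x[1])
            if margin < 1:                 # inside the margin: hinge term is active
                w = [wi - eta * (lam * wi - y * xi) for wi, xi in zip(w, x)]
            else:                          # outside: only the regularizer shrinks w
                w = [wi - eta * lam * wi for wi in w]
    return w

random.seed(0)
w = train_linear_svm(list(data))
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1
print(predict((2.5, 2.5)), predict((-2.5, -1.5)))
```

The kernel trick enters exactly at the dot products above: keeping dual coefficients per training point and replacing `w·x` with a kernel `k(x_i, x)` yields the non-linear variant without ever computing the high-dimensional feature map.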
----
===== Boosting =====
[[https://en.wikipedia.org/wiki/Boosting_(machine_learning)|Boosting]] is a machine learning ensemble meta-algorithm primarily for reducing bias, and also variance, in supervised learning, and a family of machine learning algorithms which convert weak learners to strong ones. Boosting is based on the question posed by Kearns and Valiant (1988, 1989): can a set of weak learners create a single strong learner? A weak learner is defined to be a classifier which is only slightly correlated with the true classification (it can label examples better than random guessing). In contrast, a strong learner is a classifier that is arbitrarily well-correlated with the true classification.
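The weak-to-strong conversion can be sketched with AdaBoost over decision stumps on a tiny 1-D dataset (data, thresholds, and round count below are assumptions for illustration): every stump alone misclassifies some points, yet the reweighted vote labels all of them correctly.

```python
import math

# Toy 1-D data with alternating labels: no single threshold separates it,
# but a weighted vote of three stumps does (assumed example).
X = [0.5, 1.5, 2.5, 3.5]
y = [-1, 1, -1, 1]

def stump(threshold, polarity):
    return lambda x: polarity * (1 if x > threshold else -1)

candidates = [stump(t, p) for t in (1.0, 2.0, 3.0) for p in (1, -1)]

def adaboost(rounds=30):
    n = len(X)
    w = [1.0 / n] * n                      # sample weights, uniform at the start
    ensemble = []                          # (alpha, weak learner) pairs
    for _ in range(rounds):
        # weak learning step: pick the stump with the lowest weighted error
        best = min(candidates,
                   key=lambda h: sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi))
        err = sum(wi for wi, xi, yi in zip(w, X, y) if best(xi) != yi)
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, best))
        # reweight: misclassified points gain weight, then renormalize
        w = [wi * math.exp(-alpha * yi * best(xi)) for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

clf = adaboost()
print([clf(x) for x in X])
```

Each stump here is a weak learner (weighted error at most 1/3 under any reweighting of this dataset), and after 30 rounds AdaBoost's exponential bound on training error drops below 1/4, so all four points are classified correctly by the combined vote.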
----
===== Deep Learning =====
----