{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "# Apprentissage non-supervisé"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "L'apprentissage non-supervisé se scinde en deux grandes catégories :\n",
    "\n",
    "### La réduction de dimensions (*dimensionality reduction*)\n",
    "\n",
    "- Principal Component Analysis (PCA)\n",
    "- t-SNE, UMAP\n",
    "\n",
    "### Le partitionnement des données (*clustering*)\n",
    "\n",
    "- K-Means\n",
    "- DBSCAN\n",
    "- Gaussian Mixtures"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "plt.style.use('fivethirtyeight')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Préambule\n",
    "\n",
    "- représentation des données\n",
    "- `scikit-learn`\n",
    "- malédiction de la dimension"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Représentation des données"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "#### Représentation matricielle (convention)\n",
    "\n",
    "En machine learning, on représente généralement les données présentées aux algorithmes sous forme matricielle.\n",
    "\n",
    "`X` est une matrice de taille (`n_samples`, `n_features`), représentant des données qui peuvent être associées à une étiquette représentée par un vecteur `y` de taille `n_samples`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"img01/matrix-representation.png\"></center>"
   ]
  },
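  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "A minimal illustration of this convention, using hypothetical random arrays:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical dataset: 150 samples described by 4 features, one label per sample\n",
    "X = np.random.randn(150, 4)\n",
    "y = np.random.randint(0, 3, size=150)\n",
    "\n",
    "print(X.shape)  # (n_samples, n_features)\n",
    "print(y.shape)  # (n_samples,)"
   ]
  },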
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "#### Séparation du jeu de données ab-initio\n",
    "\n",
    "Afin de pouvoir calculer l'efficacité des algorithmes et les comparer entre eux, on sépare **toujours** le jeu de données en deux parties dénotées `train` et `test`.\n",
    "\n",
    "L'entrainement des algorithmes `fit()` se fait sur les données d'entrainement `X_train`. \n",
    "\n",
    "Ces algorithmes sont ensuite appliquées aux données de test `X_test` pour l'évaluation."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"img01/train-test-split.png\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Evaluer sur des données qui n'ont pas servi à l'entrainement permet d'évaluer la capacité de **généralisation** du modèle aux données.\n",
    "\n",
    "Les répartitions les plus communes entre train et test sont de 80%-20% et 70%-30%, en fonction de la taille du jeu de données. \n",
    "\n",
    "**Plus on a de données, plus on peut prendre un jeu de test important.**"
   ]
  },
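  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "A minimal sketch of such a split with scikit-learn, here with an 80%-20% ratio on placeholder arrays `X` and `y`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# Placeholder data: 1000 samples, 5 features\n",
    "X = np.random.randn(1000, 5)\n",
    "y = np.random.randint(0, 2, size=1000)\n",
    "\n",
    "# Hold out 20% of the samples for evaluation\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n",
    "print(X_train.shape, X_test.shape)"
   ]
  },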
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### scikit-learn\n",
    "\n",
    "<center><img src=\"img01/sklearn_logo.png\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Dans ce cours nous allons aborder l'utilisation de la librairie de machine learning principale en Python : `scikit-learn`.\n",
    "\n",
    "```\n",
    "import sklearn\n",
    "\n",
    "# ou de manière générale \n",
    "\n",
    "from sklearn.submodule1 import AlgoA\n",
    "from sklearn.submodule2 import AlgoB\n",
    "```\n",
    "\n",
    "L'avantage de `scikit-learn` est sa simplicité de prise en main. C'est une collection d'algorithmes robustes et éprouvés de machine learning, codés sous forme de classes Python."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "L'utilisation est à peu près toujours la même : \n",
    "\n",
    "1. on instancie un algorithme avec ses paramètres\n",
    "    ```\n",
    "    mon_algo = AlgorithmeA(param1, param2)\n",
    "    ```\n",
    "2. on entraîne l'algorithme avec notre vecteur de données `X_train`\n",
    "    ```\n",
    "    mon_algo.fit(X_train)\n",
    "    ```\n",
    "3. on peut ensuite appliquer notre algorithme entraîné sur un nouveau jeu de données `X_test` et sauver le resultat comme `X_trans`\n",
    "    ```\n",
    "    X_trans = algo.transform(X_test)\n",
    "    ```"
   ]
  },
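  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "A concrete instance of this pattern, using `StandardScaler` as a stand-in for the generic `AlgorithmA` above (the `X_train` / `X_test` arrays are placeholders):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "# Placeholder train/test data\n",
    "X_train = np.random.randn(100, 3)\n",
    "X_test = np.random.randn(20, 3)\n",
    "\n",
    "scaler = StandardScaler()           # 1. instantiate with its parameters (defaults here)\n",
    "scaler.fit(X_train)                 # 2. train on the training data\n",
    "X_trans = scaler.transform(X_test)  # 3. apply to a new dataset\n",
    "print(X_trans.shape)"
   ]
  },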
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"img01/unsupervised-ml-workflow.png\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Malédiction de la dimension"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Chaque dimension ajoutée à l'espace des paramètres en fait grandir le volume de manière exponentielle.\n",
    "\n",
    "100 points équirépartis sur une longueur unitaire $[0, 1]$ sont distants de $\\Delta_{1D}=10^{−2}=0.01$.\n",
    "\n",
    "Pour conserver la même densité de points pour un hypercube **unitaire** de dimension 10, il faut $100^{10} = 10^{20}$ points.\n",
    "\n",
    "A titre de comparaison, on estime a $10^{21}$ le nombre de grains de sables sur les plages de la Terre combinées."
   ]
  },
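  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "A quick numerical check of this exponential growth, using the 0.01 spacing quoted above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Points needed to keep a grid spacing of ~0.01 per axis in a unit hypercube of dimension d\n",
    "for d in (1, 2, 3, 10):\n",
    "    print(f\"d = {d:2d}  ->  {100.0 ** d:.0e} points\")"
   ]
  },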
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"img01/sampling_sphere.png\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## La réduction de dimensions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Pour quoi faire ?\n",
    "\n",
    "- compresser l'information\n",
    "- réduire le bruit dans les données\n",
    "- accélérer la convergence du modèle\n",
    "- éliminer des paramètres inintéressants\n",
    "- visualiser les données (2D / 3D)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Approches principales\n",
    "\n",
    "- projections\n",
    "\n",
    "    https://scikit-learn.org/stable/modules/decomposition.html\n",
    "\n",
    "- apprentissage de *manifold* (*manifold learning*)\n",
    "\n",
    "    https://scikit-learn.org/stable/modules/manifold.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Création du jeu de données"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X = (np.random.rand(2, 2) @ np.random.randn(2, 200)).T\n",
    "\n",
    "plt.scatter(X[:, 0], X[:, 1]);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "## Principal Component Analysis or PCA"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "L'analyse en composantes principales se base sur la recherche des directions dans l'espace à N dimensions dont la variance est maximale. \n",
    "\n",
    "Une fois ces directions trouvées, on peut classer les dimensions suivant l'importance relative de leur variance.\n",
    "\n",
    "Réduire la dimension du problème revient dès lors à supprimer les dimensions de variance minimal pour ne conserver que celles sur lesquelles les données sonts épars."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.decomposition import PCA\n",
    "\n",
    "pca = PCA(n_components=2)\n",
    "\n",
    "pca.fit(X)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "print(pca.components_)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "print(pca.explained_variance_)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Visualisons ces axes qui maximisent la variance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def draw_vector(v0, v1, ax=None):\n",
    "    ax = ax if ax is not None else plt.gca()\n",
    "    arrowprops = dict(arrowstyle='->', color='k', lw=2, shrinkA=0, shrinkB=0)\n",
    "    ax.annotate('', v1, v0, arrowprops=arrowprops)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plt.scatter(X[:, 0], X[:, 1])\n",
    "for length, vector in zip(pca.explained_variance_, pca.components_):\n",
    "    v = vector * 3 * np.sqrt(length)\n",
    "    draw_vector(pca.mean_, pca.mean_ + v)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Transformons les données pour les exprimer suivant les nouveaux axes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "Y = pca.fit_transform(X)\n",
    "plt.scatter(Y[:, 0], Y[:, 1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Si maintenant on cherche à réduire la dimension, on projette suivant l'axe de plus faible variance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# On choisit de ne conserver qu'1 dimension\n",
    "pca = PCA(n_components=1)\n",
    "# On calcule la transformation\n",
    "Y = pca.fit_transform(X)\n",
    "# Puis on reprojette dans l'espace initial pour visualiser\n",
    "X2 = pca.inverse_transform(Y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plt.scatter(X[:, 0], X[:, 1], label='données initiales')\n",
    "plt.scatter(X2[:, 0], X2[:, 1], label='données après PCA')\n",
    "plt.legend(frameon=False, loc=4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Passons à des données plus complexes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_digits\n",
    "digits = load_digits()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "fig, axes = plt.subplots(3, 5, figsize=(10, 6), subplot_kw={'xticks': [], 'yticks': []})\n",
    "for i, ax in enumerate(axes.flat):\n",
    "    ax.imshow(digits.images[i], cmap='binary_r')\n",
    "plt.grid(False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10,6));projected_data = PCA(2).fit_transform(digits.data)\n",
    "plt.scatter(projected_data[:, 0], projected_data[:, 1], c=digits.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('cubehelix', 10))\n",
    "plt.colorbar()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Comment savoir combien de composantes garder ?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "pca = PCA().fit(digits.data)\n",
    "plt.plot(np.cumsum(pca.explained_variance_ratio_))"
   ]
  },
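  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "The same curve can be read programmatically; a minimal sketch, assuming an arbitrary 95% variance threshold:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Smallest number of components whose cumulative explained variance reaches 95%\n",
    "cumulative = np.cumsum(pca.explained_variance_ratio_)\n",
    "n_components_95 = int(np.argmax(cumulative >= 0.95)) + 1\n",
    "print(n_components_95)"
   ]
  },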
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Réduction du bruit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "noisy = np.random.normal(digits.data, 2)\n",
    "\n",
    "fig, axes = plt.subplots(3, 5, figsize=(10, 6), subplot_kw={'xticks': [], 'yticks': []})\n",
    "for i, ax in enumerate(axes.flat):\n",
    "    ax.imshow(noisy[i].reshape(8, 8), cmap='binary_r')\n",
    "plt.grid(False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "pca = PCA(0.65).fit(noisy)\n",
    "components = pca.transform(noisy)\n",
    "components.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "filtered = pca.inverse_transform(components)\n",
    "\n",
    "fig, axes = plt.subplots(3, 5, figsize=(10, 6), subplot_kw={'xticks': [], 'yticks': []})\n",
    "for i, ax in enumerate(axes.flat):\n",
    "    ax.imshow(filtered[i].reshape(8, 8), cmap='binary_r')\n",
    "plt.grid(False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Avantages\n",
    "\n",
    "- très rapide\n",
    "- permet de visualiser les données facilement\n",
    "\n",
    "### Inconvénients\n",
    "\n",
    "- très sensible aux valeurs extrêmes\n",
    "- moins efficace en présence de données sparses (beaucoup de zéros)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Autres algorithmes à voir sur https://scikit-learn.org/stable/modules/decomposition.html\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Manifold Learning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "L'idée derrière l'apprentissage de manifold est de trouver un espace de faible dimension (généralement 2D) dans lequel on peut réexprimer des données qui ne sont pas linéairement séparables dans leur espace a $N$-dim, $N > 2$."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"img01/manifold.png\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Example with t-SNE"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.manifold import TSNE\n",
    "\n",
    "projected_data = TSNE(n_components=2, init='pca').fit_transform(digits.data)\n",
    "plt.scatter(projected_data[:, 0], projected_data[:, 1], c=digits.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('cubehelix', 10))\n",
    "plt.colorbar()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Autres approches\n",
    "\n",
    "- plusieurs algorithmes sont présents dans scikit-learn  \n",
    "    https://scikit-learn.org/stable/modules/manifold.html\n",
    "- un algorithme récent (pas encore dans `sklearn`) fait beaucoup parler de lui: **UMAP**  \n",
    "    https://umap-learn.readthedocs.io/en/latest/index.html\n",
    "    "
   ]
  },
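  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "A minimal UMAP sketch on the digits data, assuming the `umap-learn` package is installed (`pip install umap-learn`); its API follows the scikit-learn `fit_transform` convention:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import umap  # assumes umap-learn is installed\n",
    "\n",
    "embedding = umap.UMAP(n_components=2).fit_transform(digits.data)\n",
    "plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, alpha=0.5, cmap=plt.cm.get_cmap('cubehelix', 10))\n",
    "plt.colorbar()"
   ]
  },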
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Clustering"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "- K-means, DBSCAN\n",
    "\n",
    "    https://scikit-learn.org/stable/modules/clustering.html\n",
    "    \n",
    "- composition de gaussiennes (*Gaussian mixture*)\n",
    "\n",
    "    https://scikit-learn.org/stable/modules/mixture.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Jeu de données"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets.samples_generator import make_blobs\n",
    "\n",
    "X, labels = make_blobs(n_samples=200, centers=3, cluster_std=0.9)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def plot_blobs(X, y=None, n=3):\n",
    "    fig, ax = plt.subplots(figsize=(10, 8))\n",
    "    if y is None:\n",
    "        ax.scatter(X[:, 0], X[:, 1])\n",
    "        ax.set_title(\"Raw data\", fontsize=14)\n",
    "    else:\n",
    "        im = ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.get_cmap('viridis', n))\n",
    "        plt.colorbar(im, ax=ax, values=range(n))\n",
    "        ax.set_title(\"Labeled data\", fontsize=14)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plot_blobs(X)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plot_blobs(X, labels, 3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### K-Means\n",
    "\n",
    "https://www.naftaliharris.com/blog/visualizing-k-means-clustering/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.cluster import KMeans\n",
    "\n",
    "kmeans = KMeans(n_clusters=3).fit(X)\n",
    "y_kmeans = kmeans.predict(X)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plot_blobs(X, y_kmeans, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "kmeans = KMeans(n_clusters=2).fit(X)\n",
    "y_kmeans = kmeans.predict(X)\n",
    "plot_blobs(X, y_kmeans, 2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### DBSCAN\n",
    "\n",
    "https://www.naftaliharris.com/blog/visualizing-dbscan-clustering/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.cluster import DBSCAN\n",
    "\n",
    "dbscan = DBSCAN(eps=0.7, min_samples=6)\n",
    "y_dbscan = dbscan.fit_predict(X)\n",
    "\n",
    "print(f\"n_clusters = {len(set(labels))}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plot_blobs(X, y_dbscan, len(set(labels)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "DBSCAN a l'avantage de ne pas demander d'avance le nombre de clusters, mais nécessite de trouver les valeurs adéquates pour epsilon et n_min."
   ]
  },
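  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "One common heuristic for choosing `eps` is to plot, in sorted order, the distance from each point to its k-th nearest neighbor (with k close to `min_samples`) and look for an elbow; a minimal sketch, where reading off the elbow remains a judgement call:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.neighbors import NearestNeighbors\n",
    "\n",
    "# kneighbors includes each point itself, so ask for 7 neighbors to get the 6th true neighbor\n",
    "distances, _ = NearestNeighbors(n_neighbors=7).fit(X).kneighbors(X)\n",
    "plt.plot(np.sort(distances[:, -1]))\n",
    "plt.xlabel('points sorted by distance')\n",
    "plt.ylabel('distance to 6th nearest neighbor')"
   ]
  },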
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Ajoutons de la corrélation à nos données"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "X2 = X @ np.random.rand(2, 2)\n",
    "\n",
    "plot_blobs(X2, labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "### Comment se comporte un algo comme K-Means sur données très corrélées ?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "y2_kmeans = KMeans(n_clusters=3).fit_predict(X2)\n",
    "plot_blobs(X2, y2_kmeans)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Comme on aurait pu l'imaginer, la distance euclidienne sur des données très corrélées n'est pas forcément un bon estimateur de groupe. Voyons si d'autres algorithmes ne seraient pas plus appropriés."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "## Gaussian mixture models (GMM)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Un _Gaussian mixture model_ tente de représenter la distribution des données par un ensemble de gaussiennes à N dimensions. Il devrait naturellement être plus flexible pour représenter des données très corrélées."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "source": [
    "<center><img src=\"img01/gmm.png\"></center>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.mixture import GaussianMixture\n",
    "\n",
    "gmm = GaussianMixture(n_components=3).fit(X2)\n",
    "y2_gmm = gmm.predict(X2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plot_blobs(X2, y2_gmm)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "On retrouve le même résultat qu'avec le K-Means sur nos données très corrélées. Notre cas est assez particulier car la correlation a été ajoutée artificiellement avec une matrice de covariance et appliquée à l'ensemble des groupes. Si on le spécifie à l'algorithme, on lui permet de s'adapter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "gmm = GaussianMixture(n_components=3, covariance_type='tied').fit(X2)\n",
    "y2_gmm = gmm.predict(X2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "gmm.predict_proba(X2)[:5].round(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "plot_blobs(X2, y2_gmm)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Conclusion"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Les algorithmes d'apprentissage non-supervisé sont très faciles d'utilisation et permettent d'explorer la distribution de nos données ainsi que les groupes et corrélations sous-jacentes. Comme nous le verrons par la suite, ils sont très utilisés pour préparer les données aux tâches d'apprentissage supervisé."
   ]
  }
 ],
 "metadata": {
  "celltoolbar": "Diaporama",
  "kernelspec": {
   "display_name": "Python 3.7.4 64-bit ('3.7.4': pyenv)",
   "language": "python",
   "name": "python37464bit374pyenv5d13ff55ab7742379a14cd8b63849253"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}