Commit d77eb321 authored by bderembl

modifs

parent 36318158
Pipeline #148136 passed with stages
in 4 minutes and 32 seconds
......@@ -703,7 +703,7 @@
"f(x)=\\max(0,x)\n",
"\\end{equation}\n",
"\n",
"This function will \"activate\" a neuron only if the input value is positive. Note that with this activation function, the activation level is not restricted to be between 0 an 1. Advantages of ReLU is that \n",
"This function will \"activate\" a neuron only if the input value is positive. Note that with this activation function, the activation level is not restricted to be between 0 an 1. Advantages of ReLU are that \n",
"- they are cheap to compute (later on, we are going to use millions of these units so we need to take that into account)\n",
"- although its derivative is not continuous, is has nice properties for optimization purposes (the gradient does not vanish for large values of x, more on that later)"
]
......@@ -792,15 +792,13 @@
"\\begin{equation}\n",
"C = \\frac{1}{M}\\sum_{m} \\| \\mathbf y_m - \\mathbf {\\hat y}_m \\|^2\\, ,\n",
"\\end{equation}\n",
"where $\\mathbf y_m$ is the true output of the $m^{th}$ sample and $\\mathbf {\\hat y}_m$ is our estimated value of the output for that sample. The sum spans the entire training set of size $N$. Our task here is to find the best value of the parameters that minimize that cost function.\n",
"where $\\mathbf y_m$ is the true output of the $m^{th}$ sample and $\\mathbf {\\hat y}_m$ is our estimated value of the output for that sample. The sum spans the entire training set of size $M$. Our task here is to find the best value of the parameters that minimize that cost function.\n",
"\n",
"For an activation function $\\sigma$, the cost function for an individual input writes\n",
"\n",
"\\begin{equation}\n",
"C_m = \\| \\mathbf y_m - \\sigma (\\mathbf W \\mathbf x_m) \\|^2\\, .\n",
"\\end{equation}\n",
"\n",
"omit index?\n"
"\\end{equation}\n"
]
},
{
......@@ -899,8 +897,9 @@
"source": [
"The parameter $\\lambda$ is called the **learning rate**.\n",
"\n",
"So in the limit where linearity holds... gradient descent\n",
"\n"
"So in the limit where linearity holds we can compute the little increments in the weights and biases that ensure that the cost function will decrease. This method is called the **Gradient descent**.\n",
"\n",
"1d"
]
},
{
......@@ -932,9 +931,9 @@
"> ***Question***\n",
">\n",
"> - What is the problem if $\\lambda$ is too small? too big?\n",
"> - What happens if the cost function is a complicated function of $\\mathbf w$ with local minima?\n",
"> - What happens if the cost function is a complicated function of $\\mathbf w$ with many local minima?\n",
"\n",
"In practice, the gradient descent method works well but is very slow to converge. There are other methods that have better convergence properties for this iterative process: [Newton-Raphson](https://en.wikipedia.org/wiki/Newton%27s_method), [Conjugate gradient](https://en.wikipedia.org/wiki/Conjugate_gradient_method), etc."
"In practice, the gradient descent method works well but is very slow to converge. There are other methods that have better convergence properties for this iterative process: [Newton-Raphson](https://en.wikipedia.org/wiki/Newton%27s_method), [Conjugate gradient](https://en.wikipedia.org/wiki/Conjugate_gradient_method), etc (see turorial)"
]
},
{
......@@ -949,6 +948,18 @@
"### Hidden layers"
]
},
{
"cell_type": "markdown",
"id": "6c06983e",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"In the perceptron model, there is only a limited amount of complexity that you can model between the input and the output. This complexity is limited by the fact that two variables interact via the weighted sum and then via the sigmoid function. One way to overcome this limitation is to add one or more **hidden layers** of neurons between the input and output layers.\n"
]
},
{
"cell_type": "markdown",
"id": "f3be17ee",
......@@ -958,7 +969,7 @@
}
},
"source": [
"In the perceptron model, there is only a limited amount of complexity that you can model between the input and the output. This complexity is limited by the fact that two variables interact via the weighted sum and then via the sigmoid function. One way to overcome this limitation is to add one or more **hidden layers** of neurons between the input and output layers.\n",
"The reason to add these layers is break down the problem into multiple small task: for the digit recognition that could be \"pick and edge\", \"find a strait line\".\n",
"\n",
"<img alt=\"weather\" src=\"images/hidden_layer.png\" width=400 style=\"float:center\">"
]
......@@ -1124,6 +1135,24 @@
"which is a row vector according to our numerator layout convention"
]
},
{
"cell_type": "markdown",
"id": "bbcebe09",
"metadata": {},
"source": [
"Let's first derive $\\mathbf \\delta^L$ in the last layer. For one individual sample\n",
"\n",
"\\begin{equation}\n",
"C = ( \\mathbf y - \\sigma(\\mathbf z^L))^\\top ( \\mathbf y - \\sigma (\\mathbf z^L))\\, ,\n",
"\\end{equation}\n",
"\n",
"so \n",
"\n",
"\\begin{equation}\n",
"\\mathbf \\delta^L = \\frac{\\partial C}{\\partial \\mathbf z^l} = -2( \\mathbf y - \\sigma(\\mathbf z^L))^\\top \\Sigma' (\\mathbf z^L))\\, ,\n",
"\\end{equation}\n"
]
},
{
"cell_type": "markdown",
"id": "4a1c41dc",
......@@ -1152,7 +1181,9 @@
"metadata": {},
"source": [
"> ***Question***\n",
"> - verify that $\\frac{\\partial C}{\\partial \\mathbf W^l}$ is the same dimension as ${\\mathbf W^l}^\\top$"
">\n",
"> - What is the physical interpretation of this derivative?\n",
"> - *(Optional)* Verify that $\\frac{\\partial C}{\\partial \\mathbf W^l}$ is the same dimension as ${\\mathbf W^l}^\\top$"
]
},
{
......@@ -1223,24 +1254,6 @@
"> - Since $\\Sigma'$ is a diagonal matrix, how are you going to compute the product of these 3 elements in a computer program?"
]
},
{
"cell_type": "markdown",
"id": "bbcebe09",
"metadata": {},
"source": [
"The last step is to derive $\\mathbf \\delta^L$ in the last layer. For one individual sample\n",
"\n",
"\\begin{equation}\n",
"C = ( \\mathbf y - \\sigma(\\mathbf z^L))^\\top ( \\mathbf y - \\sigma (\\mathbf z^L))\\, ,\n",
"\\end{equation}\n",
"\n",
"so \n",
"\n",
"\\begin{equation}\n",
"\\mathbf \\delta^L = \\frac{\\partial C}{\\partial \\mathbf z^l} = -2( \\mathbf y - \\sigma(\\mathbf z^L))^\\top \\Sigma' (\\mathbf z^L))\\, ,\n",
"\\end{equation}\n"
]
},
{
"cell_type": "markdown",
"id": "d243336d",
......