Machine Learning: Lesson 1

Supervised Learning -> A set of “training” data (data that is given to be “correct”) is provided, and a program makes a prediction (extrapolation/interpolation). The prediction is a regression.

Unsupervised Learning -> A given data set consists of unlabeled information. The goal is to create a program that will find structure within the given data set. Structure can take the form of a cluster. Clusters can be studied, and programs can gather information about them that allows data scientists to explore further.

Hypothesis Function -> $h_{\theta}(x) = \theta_0 + \theta_1 \cdot x$, where $h$ is the hypothesis function, $x$ is the input value from a given data set, and $\theta_0$ and $\theta_1$ are the parameters. The hypothesis function describes a set of data using a regression so that supervised learning can be applied to that set. Depending on the values substituted for the parameters, the cost function changes; minimizing it finds the regression that best fits the data set.

Cost Function -> A univariate (or multivariate) function that takes a hypothesis function and a data set and returns a value describing how well that hypothesis function fits the given data set. Minimizing the cost function is a common task for data scientists so that supervised learning can be applied to a given data set.

Cost Function Formula -> $J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$, where $m$ is the number of “training” examples in the given data set and $i$ indexes the $i$'th datum in the set of “training” data.

Gradient Descent Algorithm -> An algorithm that runs over a contour plot of a cost function with two or more parameters ($\left\{\theta_0,\theta_1,\theta_2,\dotsc,\theta_n\right\}$) and traverses the plot to find a local minimum. Depending on where the algorithm sets its starting point, a different minimum might be found.

Gradient Descent Algorithm Implementation ->

    /** Convergence is reached when the cost function stops decreasing,
        i.e. the current point is the lowest nearby point on the contour plot;
        at that point, end the loop and return the point of convergence. **/
    while (!converged) {
        for (int j = 0; j < n; j++) {
            // Update every parameter simultaneously, using the old values:
            // $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0,\theta_1)$
            // where $\alpha$ is the learning rate and the partial derivative is the
            // slope of the cost function along the $\theta_j$ axis of the contour plot.
        }
    }
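To make the formulas above concrete, here is a minimal, self-contained Python sketch (not part of the original lesson) of the hypothesis $h_{\theta}(x)=\theta_0+\theta_1 x$, the cost $J(\theta_0,\theta_1)$, and batch gradient descent for univariate linear regression. The data set, the learning rate alpha, and the iteration count are illustrative assumptions, not values from the lesson.

    # Minimal sketch: univariate linear regression trained with batch gradient descent.
    # The data set, learning rate, and iteration count below are illustrative choices.

    def hypothesis(theta0, theta1, x):
        # h_theta(x) = theta0 + theta1 * x
        return theta0 + theta1 * x

    def cost(theta0, theta1, xs, ys):
        # J(theta0, theta1) = (1 / 2m) * sum_i (h_theta(x_i) - y_i)^2
        m = len(xs)
        return sum((hypothesis(theta0, theta1, x) - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

    def gradient_descent(xs, ys, alpha=0.01, iterations=2000):
        # Batch gradient descent: update theta0 and theta1 simultaneously
        # using the partial derivatives of J with respect to each parameter.
        m = len(xs)
        theta0, theta1 = 0.0, 0.0
        for _ in range(iterations):
            errors = [hypothesis(theta0, theta1, x) - y for x, y in zip(xs, ys)]
            grad0 = sum(errors) / m                             # dJ/dtheta0
            grad1 = sum(e * x for e, x in zip(errors, xs)) / m  # dJ/dtheta1
            theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
        return theta0, theta1

    # Made-up "training" data roughly following y = 1 + 2x, for illustration only.
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.1, 2.9, 5.2, 7.1, 8.8]

    theta0, theta1 = gradient_descent(xs, ys)
    print("theta0 =", round(theta0, 3), "theta1 =", round(theta1, 3))
    print("cost =", round(cost(theta0, theta1, xs, ys), 5))

Running this prints parameter estimates close to the intercept and slope of the made-up data. Lowering alpha slows convergence, while raising it too far can make the updates overshoot and diverge instead of settling into the minimum.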