Traditional Programming vs Machine Learning w/ example
In traditional programming, we specify the rules for an input x in a function f(x). Because the function is pre-defined, we know exactly what result y to expect.
Say x = 1 (input) and the function is f(x) = x * 7; we know the result y will be 7.
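The traditional case can be sketched in a few lines (a minimal illustration, with the rule hard-coded just as described above):

```python
# Traditional programming: the rule is fixed in code, so the
# output is fully determined by the input.
def f(x):
    return x * 7  # the rule we wrote by hand

y = f(1)
print(y)  # 7
```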
In machine learning we instead define a model that takes the input x and parameters (theta, Θ). Together they produce a result y, which we evaluate against the desired value. If it doesn't match, we keep adjusting the parameters.
So how does this work in practice?
Say x = 1 (input), y = 7 (result), and the function is 1 * Θ = 7. We need to find Θ, so the iterations may go like this:
1 * 1 = 1 (result difference is 6 — usually we'd use MSE to calculate the difference, but I'm skipping that here to simplify the point)
1 * 2 = 2 (result difference is 5)
1 * 3 = 3 (result difference is 4)
1 * 4 = 4 (result difference is 3)
1 * 5 = 5 (result difference is 2)
1 * 6 = 6 (result difference is 1)
1 * 7 = 7 (result difference is 0)
So Θ is eventually set to 7 in the program.
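The search above can be sketched as a simple loop (a minimal sketch assuming integer guesses for Θ and plain absolute difference as the error, matching the walkthrough rather than a real training loop):

```python
# Brute-force parameter search: try values of theta until the
# model's output matches the target. Absolute difference stands
# in for a proper loss like MSE.
x = 1          # input
y_target = 7   # desired result

theta = 0
for guess in range(1, 11):
    y_pred = x * guess
    diff = abs(y_target - y_pred)
    print(f"{x} * {guess} = {y_pred} (difference {diff})")
    if diff == 0:
        theta = guess  # found the parameter that fits
        break

print("theta =", theta)  # theta = 7
```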
This is overkill for a single parameter, but say you wanted to predict a house price and there are 100+ parameters. We could use the same idea — iterate through different parameter settings until the difference between the result and the target is as low as possible.
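With many parameters, trying every combination is infeasible, so in practice the adjustment is done by gradient descent on a loss like MSE. A hedged sketch with two parameters — the data, learning rate, and iteration count are all made up for illustration, not from the example above:

```python
# Gradient descent on MSE for a tiny linear model with two
# parameters. The data is fabricated so the true parameters
# are (1.0, 2.0); the loop should recover values close to them.
xs = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0)]   # two features per example
ys = [5.0, 4.0, 9.0]                         # targets = 1*x1 + 2*x2

theta = [0.0, 0.0]
lr = 0.05
for _ in range(2000):
    # gradient of MSE with respect to each parameter
    grad = [0.0, 0.0]
    for (x1, x2), y in zip(xs, ys):
        pred = theta[0] * x1 + theta[1] * x2
        err = pred - y
        grad[0] += 2 * err * x1 / len(xs)
        grad[1] += 2 * err * x2 / len(xs)
    # step each parameter against its gradient
    theta[0] -= lr * grad[0]
    theta[1] -= lr * grad[1]

print(theta)  # close to [1.0, 2.0]
```

Instead of stepping Θ by +1 each time, the gradient tells every parameter which direction to move and by how much, which is what makes 100+ parameters tractable.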