📝

# Cost function

AI Subset
🧠 Machine Learning
🔵
Definition:

A cost function in machine learning is a function that measures how well a model fits the data as it trains, by quantifying the difference between the predicted output and the desired output. It tells you how far off the model's predictions are, and monitoring it (typically on held-out data) helps determine when to stop training so that the model does not overfit.

🟡
Simplified Example:

A simplified example would be linear regression, where the goal is to minimize the sum of squared errors between each data point and the line's prediction for it. The cost function for this example would be:

``Cost = 1/(2M) * Σ(Predicted - Actual)^2``

where M is the number of training examples, Predicted is the model's output, and Actual is the desired output.
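The formula above can be sketched directly in code. This is a minimal illustration in plain Python; the function name and sample values are made up for the example.

```python
def cost(predicted, actual):
    """Squared-error cost: 1/(2M) * sum((Predicted - Actual)^2)."""
    m = len(predicted)  # M, the number of training examples
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / (2 * m)

# A perfect fit costs zero; any error raises the cost.
print(cost([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(cost([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (3-5)^2 / (2*3) = 0.666...
```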

A cost function is like a math equation that compares how close the answer you got is to the correct answer.

When you use machine learning, the algorithm tries to find answers to questions by looking at data.

The cost function measures how close it gets to finding the right answer. So if your machine learning algorithm gets the right answer, then the cost function will give you a score of zero - which means it did great!

But if it's not so close, then the cost function will give you a higher score - telling you that something needs to be changed in order for your machine learning algorithm to work better.
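That feedback loop — a higher cost score telling the algorithm to change — is exactly how training works. Here is a hedged sketch using gradient descent on a one-parameter linear model; the data and learning rate are made up for illustration.

```python
def cost(w, xs, ys):
    """Squared-error cost for the model y = w * x."""
    m = len(xs)
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

def gradient(w, xs, ys):
    """Derivative of the cost with respect to w."""
    m = len(xs)
    return sum((w * x - y) * x for x, y in zip(xs, ys)) / m

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # hidden rule: y = 2x
w = 0.0  # start with a bad guess; cost will be high
for _ in range(100):
    w -= 0.1 * gradient(w, xs, ys)  # step in the direction that lowers cost

print(round(w, 3))            # converges toward 2.0
print(cost(w, xs, ys) < 1e-6) # cost driven close to zero
```

Each step nudges `w` in whichever direction lowers the cost, so the score steadily shrinks toward zero as the model homes in on the right answer.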

🟢
Simplified Summary:

A cost function is an equation used in machine learning to measure the performance of an algorithm. It compares the predicted value with the actual value to produce an error score, which can be used to identify where the algorithm's performance needs to improve.

## FAQ

### Why is the cost function squared?

Errors in a cost function are typically squared for a few reasons. Squaring makes every error positive, so positive and negative errors cannot cancel each other out in the sum. It also produces a smooth, differentiable curve, which makes the cost easier to minimize with gradient-based methods. Finally, squaring penalizes large errors much more heavily than small ones — an error of 4 costs 16, while four errors of 1 cost only 4 — which pushes the model to avoid big misses.
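A tiny comparison (values made up for illustration) shows the effect: absolute error treats one large miss and several small misses the same, while squared error flags the large miss.

```python
errors_small = [1.0, 1.0, 1.0, 1.0]  # four small misses
errors_large = [4.0]                 # one big miss

abs_small = sum(abs(e) for e in errors_small)  # 4.0
abs_large = sum(abs(e) for e in errors_large)  # 4.0
sq_small = sum(e ** 2 for e in errors_small)   # 4.0
sq_large = sum(e ** 2 for e in errors_large)   # 16.0

print(abs_small == abs_large)  # True: absolute error can't tell them apart
print(sq_large > sq_small)     # True: squared error penalizes the big miss
```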