{"id":3436,"date":"2024-09-01T14:04:16","date_gmt":"2024-09-01T14:04:16","guid":{"rendered":"https:\/\/workhouse.sweetdishy.com\/?p=3436"},"modified":"2024-09-01T14:04:16","modified_gmt":"2024-09-01T14:04:16","slug":"key-takeaways-3","status":"publish","type":"post","link":"https:\/\/workhouse.sweetdishy.com\/index.php\/2024\/09\/01\/key-takeaways-3\/","title":{"rendered":"Key Takeaways"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li>Machine learning, whose roots go back to the 1950s, is where a computer can learn without being explicitly programmed. Rather, it will ingest and process data by using sophisticated statistical techniques.<\/li>\n\n\n\n<li>An outlier is data that is far outside the rest of the numbers in the dataset.<\/li>\n\n\n\n<li>The&nbsp;standard deviation&nbsp;measures the average distance from the mean.<\/li>\n\n\n\n<li>The normal distribution\u2014which has a shape like a bell\u2014represents the sum of probabilities for a variable.<\/li>\n\n\n\n<li>The Bayes\u2019&nbsp;theorem&nbsp;is a sophisticated statistical technique that provides a deeper look at probabilities.<\/li>\n\n\n\n<li>A true positive is when a model makes a correct prediction. A false positive, on the other hand, is when a model prediction shows that the result is true even though it is not.<\/li>\n\n\n\n<li>The&nbsp;Pearson correlation&nbsp;shows the strength of the relationship between two variables that range from 1 to -1.<\/li>\n\n\n\n<li>Feature extraction or feature engineering describes the process of selecting variables for a model. This is critical since even one wrong variable can have a major impact on the results.<\/li>\n\n\n\n<li>Training data&nbsp;is what is used to create the relationships in an algorithm. The test data, on the other hand, is used to evaluate the model.<\/li>\n\n\n\n<li>Supervised learning&nbsp;uses labeled data to create a model, whereas&nbsp;unsupervised learning&nbsp;does not. 
There is also semi-supervised learning, which uses a mix of both approaches.<\/li>\n\n\n\n<li>Reinforcement learning&nbsp;is a way to train a model by rewarding accurate predictions and punishing those that are not.<\/li>\n\n\n\n<li>The k-Nearest Neighbor (k-NN&nbsp;) is an algorithm based on the notion that values that are close together are good predictors for a model.<\/li>\n\n\n\n<li>Linear regression&nbsp;estimates the relationship between certain variables. The&nbsp;R-squared&nbsp;will indicate the strength of the relationship.<\/li>\n\n\n\n<li>A&nbsp;decision tree&nbsp;is a model that is based on a workflow of yes\/no decisions.<\/li>\n\n\n\n<li>An ensemble model uses more than one model for the predictions.<\/li>\n\n\n\n<li>The k-Means clustering algorithm puts similar unlabeled data into different groups.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":3326,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[441],"tags":[],"class_list":["post-3436","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-3-machine-learning"],"jetpack_featured_media_url":"https:\/\/workhouse.sweetdishy.com\/wp-content\/uploads\/2024\/08\/images-41-1.jpeg","_links":{"self":[{"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/posts\/3436","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/comments?post=3436"}],"version-history":[{"count":1,"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-js
on\/wp\/v2\/posts\/3436\/revisions"}],"predecessor-version":[{"id":3437,"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/posts\/3436\/revisions\/3437"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/media\/3326"}],"wp:attachment":[{"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/media?parent=3436"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/categories?post=3436"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/workhouse.sweetdishy.com\/index.php\/wp-json\/wp\/v2\/tags?post=3436"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
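The standard-deviation and outlier takeaways can be sketched in a few lines of Python using the standard library; the dataset below is made up for illustration, and the two-standard-deviation cutoff is one common rule of thumb, not the only one:

```python
import statistics

data = [10, 12, 11, 13, 12, 11, 95]  # 95 sits far from the rest

mean = statistics.mean(data)      # arithmetic average
stdev = statistics.pstdev(data)   # population standard deviation

# Flag values more than two standard deviations from the mean
outliers = [x for x in data if abs(x - mean) > 2 * stdev]
print(outliers)  # [95]
```

Note that a single extreme value inflates both the mean and the standard deviation, which is one reason outliers matter so much in practice.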
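Bayes' theorem in action: a hypothetical medical test shows how a prior probability is updated by new evidence. The rates below are invented for illustration:

```python
# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease = 0.01                # prior: 1% of the population has the disease
p_pos_given_disease = 0.99      # sensitivity of the test (assumed)
p_pos_given_healthy = 0.05      # false-positive rate (assumed)

# Total probability of testing positive
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

p_disease_given_pos = p_pos_given_disease * p_disease / p_positive
print(round(p_disease_given_pos, 3))  # 0.167
```

Even with a highly accurate test, the posterior probability is only about 17%, because the disease is rare; this is the "deeper look" that Bayes' theorem provides.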
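True and false positives can be counted directly by comparing a model's predictions against the actual labels; the label vectors below are made up:

```python
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

# True positive: actually 1, predicted 1
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
# False positive: actually 0, but predicted 1
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)

print(tp, fp)  # 3 1
```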
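The Pearson correlation can be computed from scratch as the covariance divided by the product of the standard deviations; the sample data here is illustrative:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient, in the range [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # approximately 1.0 (perfect positive)
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))   # approximately -1.0 (perfect negative)
```

A value near 0 would mean the two variables have little linear relationship.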
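The split between training and test data is often done by shuffling the dataset and holding out a fraction for evaluation; the 80/20 ratio below is a common convention, not a rule:

```python
import random

data = list(range(100))   # stand-in for a real dataset
random.seed(42)           # fixed seed so the split is reproducible
random.shuffle(data)

split = int(0.8 * len(data))          # 80% train, 20% test
train, test = data[:split], data[split:]
print(len(train), len(test))  # 80 20
```

The key point is that the test set is never used to fit the model, only to evaluate it.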
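A minimal k-NN classifier makes the "close values are good predictors" idea concrete: it finds the k training points nearest to a query and takes a majority vote. The two-cluster toy data is invented:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# (point, label) pairs: one cluster near (1, 1), another near (5, 5)
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b"), ((4.8, 5.1), "b")]

print(knn_predict(train, (5.0, 4.8)))  # b
```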
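Simple linear regression and R-squared can both be computed with ordinary least squares; the (x, y) values below are fabricated to lie close to a line:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    """Fraction of the variance in y explained by the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]
slope, intercept = fit_line(xs, ys)
print(round(r_squared(xs, ys, slope, intercept), 3))  # roughly 0.998
```

An R-squared near 1 means the line explains almost all of the variation; near 0, almost none.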
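Finally, a bare-bones k-Means (Lloyd's algorithm) on one-dimensional points shows how unlabeled data gets grouped; the naive "first k points" initialisation and the toy data are both simplifications for illustration:

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm on 1-D points; returns the final centroids, sorted."""
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

points = [1.0, 1.2, 0.8, 9.0, 9.4, 8.6]  # two obvious clusters
print(kmeans(points, 2))  # [1.0, 9.0]
```

Production implementations add smarter initialisation (e.g. k-means++) and convergence checks, but the assign-then-update loop is the core of the algorithm.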