# Machine Learning vs Traditional Programming


###### by Nedko Nedev

February 2, 2021

During the last two decades, we have witnessed rapid development in the field of computer science. As a result, phrases like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) have become of great interest to many developers, engineers, and students.

It is then understandable why so many questions regarding these topics continue to arise. In this post, I will try to shed some light on the difference between machine learning and traditional programming.

So, to understand how traditional (rule-based) programming differs from the ML concept, let’s first clarify what the former term means.

Traditional programming has come a long way from the first calculating machines in the ’40s to the modern high-level languages and frameworks we use today. Nevertheless, the core principles behind it remain the same.

Suppose we are given a problem to solve. What we usually do as programmers is to first analyze it and understand the requirements. Then, we come up with some test examples with input data and the expected output to examine our solution. After that (or during the previous step), we try to think of patterns and logic, which, translated into a programming language, produce the desired output.

Briefly said, we use some input data and combine it with logical rules to accomplish a certain output. The key point here is: The rules are by no means automatically generated. Programmers themselves are the authors of these logical rules.

To make things clearer, let’s take a look at an example. The task is to write a program that estimates people’s weight based on their height. We are also presented with some test data:

| Height (cm) | Weight (kg) |
| ----------- | ----------- |
| 160         | 55          |
| 170         | 65          |
| 180         | 75          |
| 190         | 85          |

After analyzing the dataset, we come up with a solution in the form of a function f that takes height (h) as its input parameter: f(h) = h − 105. We can now write a program in any language we want that takes height as an input and returns weight by applying the function we just figured out. This simple program is the answer to the task.
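As a sketch, the rule-based solution fits in a few lines of Python (the function name is an illustrative choice, not from the original task):

```python
# Rule-based approach: the programmer writes the rule f(h) = h - 105
# by hand after inspecting the data; nothing here is learned.
def estimate_weight(height_cm: float) -> float:
    """Estimate weight (kg) from height (cm) using the hand-written rule."""
    return height_cm - 105

# Verify against the test data from the table above.
for height, expected in [(160, 55), (170, 65), (180, 75), (190, 85)]:
    assert estimate_weight(height) == expected
```

The rule itself came from a human looking at the data, which is the defining trait of this approach.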

In reality, we have to deal with much more complex problems, but the approach we follow is more or less the same.

What happens, though, when we take into account factors other than height, such as physical activity, body composition, or medical conditions? All of a sudden, it becomes extremely difficult to find a single formula that solves the problem. Of course, we can divide the task into smaller problems with fewer arguments, construct a partial solution, and then consider the other parameters, splitting the logic with “if statements”. This conventional approach might eventually work. We just witnessed, though, how scaling the problem up suddenly becomes a challenge.
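To see how the rule base grows, here is a hypothetical sketch; the extra factors and the adjustment values are invented purely for illustration:

```python
# Hypothetical extension of the rule-based solution: every new factor
# forces the programmer to hand-write more branches and magic numbers.
def estimate_weight(height_cm: float, activity: str, has_condition: bool) -> float:
    weight = height_cm - 105      # the original rule
    if activity == "high":        # invented adjustment
        weight -= 3
    elif activity == "low":       # invented adjustment
        weight += 4
    if has_condition:             # invented adjustment
        weight += 2
    return weight
```

Each additional parameter multiplies the number of cases a programmer has to reason about by hand, which is exactly the scaling problem described above.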

The real-life equivalent of such a task would require a lot of CPU power to cope with the huge number of calculations needed, not to mention the memory that would be consumed. So, we begin to see where the bottleneck of traditional programming lies.

There are certain fields of work that have proved problematic when approached in a rule-based manner. Here are a few examples:

• Image Recognition
• Virtual Personal Assistants
• Video Surveillance
• Online Customer Support
• Self-Driving Vehicles
• Search Engines
• Social Media Services

So the question is, how did we manage to resolve such problems and make so many technological advancements given the limitations of traditional programming?

## Machine Learning

The concept of machine learning has been with us for over 60 years. It was not until the beginning of this century, though, that businesses realized how dramatically it could expand what computers can accomplish. As a result, they started investing heavily in the field to gain an advantage over their competition. And so, two decades later, ML is used widely and with great success.

But how exactly was the idea behind machine learning born, and what is this concept all about?

To understand where the notion of machine learning came from, we have to shift our focus in a different direction, namely, how the human brain works. For instance, let’s consider the ability to multiply two numbers, say a and b. If we think of it as a machine process, we know that, at a low level, computers add the number a to itself b times, saving temporary results in memory.

Now, consider how you learned to multiply numbers. Did you just start adding numbers again and again, or were you taught the multiplication table? We learned to multiply by remembering predefined results for pairs of numbers, which has nothing to do with how we programmed computers to do the calculation.

In the middle of the last century, people began to understand that simulating the way the brain works, and applying that idea to computer science, could achieve significant results. This explains how the idea behind machine learning came about. But still, what does it really mean?

Here is how the author of the term ‘machine learning’, Arthur Samuel, described it:

> Machine Learning is a field of study that gives computers the ability to learn without being explicitly programmed.

This definition strikes an immediate contrast between ML and what we described earlier as traditional programming. When we write programs in a rule-based manner, we explicitly instruct the machine what logic (algorithms) it should use and when (if statements) to apply each function.

By contrast, following a machine learning approach means supplying the computer with enough historical data to learn from and letting it create the rules on its own. Sounds a little crazy at first, right?

Let’s go back to the example with the task about heights and weights and see how we can apply this approach.

First, we feed the table of heights and their respective weights into the machine’s memory. Suppose we have already figured out that the required function f is linear, meaning f(h) = a·h + b, where a and b are unknown parameters. Next, we assign initial values to these parameters and test our model (using the input data) against the given output, which we know in advance. At that point, unless we are extremely lucky, there will be some deviation from the correct answers. So, we continue to adjust the parameters – the process of learning from trial and error – until we finally conclude that a equals 1 and b equals −105.
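This trial-and-error adjustment can be sketched as a simple gradient-descent loop; the learning rate, iteration count, and centering step below are illustrative choices, not part of the original example:

```python
# A minimal sketch of "learning" a and b in f(h) = a*h + b from the data,
# by repeatedly nudging the parameters to reduce the squared error.
heights = [160.0, 170.0, 180.0, 190.0]
weights = [55.0, 65.0, 75.0, 85.0]

# Center the heights so this simple update rule converges quickly.
mean_h = sum(heights) / len(heights)          # 175.0
xs = [h - mean_h for h in heights]

a, c = 0.0, 0.0                               # start from a wrong guess
lr = 0.001
for _ in range(20000):
    # Gradients of the mean squared error with respect to a and c.
    grad_a = sum(-2 * x * (y - (a * x + c)) for x, y in zip(xs, weights)) / len(xs)
    grad_c = sum(-2 * (y - (a * x + c)) for x, y in zip(xs, weights)) / len(xs)
    a -= lr * grad_a
    c -= lr * grad_c

b = c - a * mean_h                            # undo the centering
print(round(a, 3), round(b, 3))               # → 1.0 -105.0
```

The loop never sees the rule h − 105; it recovers a = 1 and b = −105 purely by measuring its own errors against the data, which is the essence of the learning process described above.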

In practice, the machine learning models that are being used are far more complex than our example function and the test datasets are significantly larger. Nevertheless, the basic idea is the same – we test our model against the dataset and try to maximize its accuracy by adjusting certain parameters.

## Summary

To summarise, both traditional programming and machine learning have their place under the sun and are by no means interchangeable. The rule-based approach is preferred when the problem is algorithmic in nature and there are not too many parameters to consider when writing the logical rules. On the other hand, for projects that involve predicting outputs or recognizing objects in images, machine learning has proven to be much more effective.

Having said all that, it is up to you to decide which one to put your efforts into – traditional programming or machine learning. Why not both?
