
Cristian Dordea

Jan 19, 2024

How Machine Learning systems stand out from traditional software systems

To better understand why a different management approach is needed, we first need to see how traditional software development differs from data science and machine learning.

  • This edition reviews the main differences between machine learning and traditional software development from the development process, testing, deployment, and production perspectives.

  • As always, see the AI for Business News You May Have Missed section at the end of our newsletter.


Most project management and Agile delivery frameworks applied in software development are not fully compatible with data science and machine learning work. In this newsletter, we will focus on the core differences between the two.

In future editions, we will discuss why managing machine learning and data science projects needs to be done differently than software development projects.

ML systems stand out from traditional software in several ways. First, let's look at the fundamental differences. Then, we will look at it from the development process, testing, deployment, and production perspective.

The big fundamental differences

Deterministic Logic vs. Probabilistic Nature

Traditional software operates on deterministic logic, which means the output is predictable and directly tied to the input and the specific instructions given.

For example, in a traditional e-commerce website, when a user searches for a specific product name, the software deterministically fetches and displays the exact product. The results are consistent and directly tied to the input query. ML systems, on the other hand, are probabilistic. ML projects are exploratory, dealing with uncertainties and probabilities. They make predictions or decisions based on learned patterns from data, which inherently involves a degree of uncertainty and approximation.
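This contrast can be sketched in a few lines of code. The catalog, the toy scoring rule, and all values below are made up for illustration; real recommenders learn their scores from data rather than using a hand-written overlap formula.

```python
# Deterministic (traditional software): an exact product lookup.
# The same query always returns the same product.
CATALOG = {"red sneakers": 19.99, "blue jacket": 49.99}

def search(query: str):
    # Either the product exists or it doesn't; no uncertainty involved.
    return CATALOG.get(query)

# Probabilistic (ML flavor): a toy recommender that scores how likely a
# user is to enjoy a show from the overlap between its tags and their
# watch history. The score expresses a likelihood, not a certainty.
def recommend_score(watched: set, candidate_tags: set) -> float:
    if not candidate_tags:
        return 0.0
    return len(watched & candidate_tags) / len(candidate_tags)

print(search("red sneakers"))  # always 19.99, every time
print(recommend_score({"drama", "crime"}, {"crime", "thriller"}))  # 0.5, a likelihood
```

The deterministic function either finds the product or it doesn't; the probabilistic one only expresses how likely a match is, which is exactly the uncertainty the text describes.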

Another example is Netflix’s recommendation system. When you finish watching a series, Netflix suggests other shows you might like. These suggestions are based on complex machine learning algorithms analyzing your viewing history, preferences of similar users, and other factors. The recommendations are probabilistic, offering suggestions likely to match your taste, but there's no absolute certainty that you'll like every recommended show.

Data-Driven Development and Continuous Learning

In traditional software, developers write explicit rules and logic to perform specific tasks. Imagine a programmer creating a travel booking website. They write code specifying that if a user selects a round-trip option, the site should prompt for both departure and return dates. The logic is explicit and unchanging unless manually updated by the developer.

ML systems are built differently - they learn to perform tasks by being trained on large datasets. The quality of the data and the training process directly impact the system’s performance, making data science skills crucial. Consider a music recommendation service like Spotify. Rather than manually coding preferences, the system is trained on vast amounts of user listening data. It learns and predicts user preferences based on patterns found in this data. Also, think of a news recommendation engine. It adapts as it encounters new user data and preferences, evolving its recommendations based on current user interactions and not just on its initial training data.
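The booking-site rule versus the learned preference can be contrasted in a short sketch. Both functions here are toy stand-ins: the first encodes an explicit rule the developer wrote, the second derives its answer from whatever "training data" it is given.

```python
from collections import Counter

# Hand-written rule (traditional software): the booking logic is explicit
# and only changes when a developer edits it.
def needs_return_date(trip_type: str) -> bool:
    return trip_type == "round-trip"

# Learned behavior (ML flavor): a toy model that derives a user's favorite
# genre from listening history instead of being told a rule. Its output
# changes as the data changes.
def learn_preference(history: list) -> str:
    return Counter(history).most_common(1)[0][0]

print(needs_return_date("round-trip"))             # True, fixed by the code
print(learn_preference(["jazz", "rock", "jazz"]))  # "jazz", derived from data
```

Feed `learn_preference` a different history and the answer changes with no code edit, which is the data-driven, continuously learning behavior described above.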

Development Process Differences

After scoping the project or defining the use case, a simple machine learning project life cycle could look something like this: collect and prepare data, train and evaluate a model, deploy it, and then monitor and retrain it in production.

The development process in ML is all about experimenting. The approach to ML is a lot of mixing and matching different features, algorithms, and so on to see what works. Keeping track of what works and what doesn’t while also trying to reuse code as much as possible can be quite a challenge. ML development typically involves a cycle of hypothesizing, testing, and refining. Developers experiment with various combinations of features (the inputs used to train the model) and algorithms (the mathematical models that learn from these features). This process is essential because it's often not clear at the outset which combinations will yield the most accurate predictions or insights.

A significant part of this experimentation involves feature engineering – the process of selecting, modifying, or creating features to improve model performance. This process is more art than science, requiring intuition, domain knowledge, and iterative testing to identify the most effective features. 

Choosing the right algorithm is another critical aspect. Each algorithm has its strengths and flaws depending on the nature of the data and the problem being solved. Moreover, tuning hyperparameters (settings that govern the algorithm's learning process) is a delicate task that can significantly impact model performance.
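The mix-and-match experiment loop described above can be sketched in a few lines. The dataset, the feature sets, and the "model" (a trivial threshold rule standing in for a real algorithm, with the threshold playing the role of a hyperparameter) are all invented for illustration; a real project would plug in actual estimators and scoring here.

```python
import itertools

def accuracy(threshold, features, rows):
    # Toy "model": predict 1 when the sum of the selected features exceeds
    # the threshold (our stand-in hyperparameter).
    correct = 0
    for row, label in rows:
        pred = 1 if sum(row[f] for f in features) > threshold else 0
        correct += (pred == label)
    return correct / len(rows)

# Tiny made-up dataset: feature dicts with ground-truth labels.
data = [({"a": 2, "b": 0}, 1), ({"a": 0, "b": 1}, 0), ({"a": 3, "b": 1}, 1)]
feature_sets = [("a",), ("b",), ("a", "b")]
thresholds = [0.5, 1.5, 2.5]

# The experiment loop: score every combination, keep the best.
best = max(
    (accuracy(t, fs, data), fs, t)
    for fs, t in itertools.product(feature_sets, thresholds)
)
print(best)  # best (score, feature set, threshold) found by the search
```

Tracking every `(score, feature set, hyperparameter)` triple like this, across hundreds of runs, is exactly the bookkeeping challenge the text mentions, and why experiment-tracking tools exist for ML.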

Testing in ML

Testing in ML goes beyond the usual software system testing. Along with regular tests, you need to validate your data, check how good your trained model is, and confirm that it does what it was designed to do.

It is critical to test whether the model generalizes well to new, unseen data rather than simply memorizing the training data (overfitting). This is typically done using a separate validation dataset that was not used during training.

ML requires the implementation of cross-validation techniques where the training data is divided into subsets to test the model’s performance across different segments of data, ensuring reliability and robustness.
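The cross-validation idea can be shown with a minimal k-fold loop. The "model" here is a deliberately trivial majority-label rule and the parity-style dataset is invented, so the sketch stays dependency-free; any trainable model fits the same hold-one-fold-out structure.

```python
def k_fold_scores(data, k=3):
    # Split the data into k folds with a simple round-robin.
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        val = folds[i]  # this fold is held out for validation
        train = [row for j, f in enumerate(folds) if j != i for row in f]
        # "Train": memorize the majority label seen in the training folds.
        labels = [label for _, label in train]
        majority = max(set(labels), key=labels.count)
        # "Validate": accuracy of that rule on the held-out fold.
        scores.append(sum(label == majority for _, label in val) / len(val))
    return scores

# Toy dataset: the first 8 points are labeled 1, the rest 0.
data = [(x, int(x < 8)) for x in range(12)]
print(k_fold_scores(data))
```

Each fold takes one turn as the unseen validation set, so the spread of the returned scores indicates how stable the model is across different segments of the data.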

Another big difference compared to software development is post-deployment monitoring. It's important to continuously monitor the model's performance in a production environment to quickly identify and rectify any degradation in performance or unexpected behavior. Equally important is implementing mechanisms to detect changes in input data (data drift) or changes in the relationship between input and output data (concept drift), which can affect model performance over time.
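A minimal data-drift check might compare a live window of a feature against the statistics recorded at training time and raise an alert when the live mean moves several standard errors away. The numbers and the 3-sigma threshold below are illustrative; production systems typically use richer tests over full distributions.

```python
from math import sqrt
from statistics import mean, stdev

def drift_alert(baseline, live, threshold=3.0):
    # z-score of the live window's mean under the training-time baseline.
    standard_error = stdev(baseline) / sqrt(len(live))
    z = abs(mean(live) - mean(baseline)) / standard_error
    return z > threshold

# Feature values captured when the model was trained:
baseline = [10.0, 10.5, 9.5, 10.2, 9.8, 10.1, 9.9, 10.4]

print(drift_alert(baseline, [10.1, 9.9, 10.0, 10.2]))   # stable input: False
print(drift_alert(baseline, [14.8, 15.2, 15.0, 14.9]))  # shifted input: True
```

Wiring a check like this into the serving path is one way to catch data drift before it silently degrades the model's predictions.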

Another area of difference is Ethical and Compliance Testing.

It is necessary to assess the model for fairness and ethical implications, ensuring that predictions do not discriminate against certain groups and comply with ethical standards.

Ensuring that the model and its applications adhere to relevant legal and regulatory requirements, including data privacy laws and industry-specific regulations, is also important.
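One simple fairness probe, among many, is the demographic parity difference: the gap in positive-prediction rates between two groups. The model outputs below are hypothetical, and a large gap is a signal to investigate rather than proof of discrimination on its own.

```python
def positive_rate(preds):
    # Share of predictions that are positive (e.g., "approved").
    return sum(preds) / len(preds)

def parity_gap(preds_group_a, preds_group_b):
    # Demographic parity difference between the two groups.
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical model outputs (1 = approved) for two demographic groups:
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approval rate
group_b = [1, 0, 0, 1, 0, 0, 0, 0]  # 25% approval rate

print(parity_gap(group_a, group_b))  # 0.5 -> a gap worth investigating
```

Metrics like this can run alongside accuracy checks in the test suite, so fairness regressions surface the same way functional regressions do.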

Deploying ML systems

Deploying ML systems is also more complex. It's not just about setting up a trained model to make predictions. You often have to put together an entire pipeline that trains and deploys models automatically. This means automating many steps that data scientists usually do by hand.

In traditional software deployment, the focus is primarily on deploying static code that performs consistent functions. However, for ML systems, you need to automate the entire process flow - from data ingestion, preprocessing, model training, validation, and finally, deployment. This automation ensures that the models are deployed and continuously updated with new data.

Unlike traditional software, ML models may require periodic retraining to maintain accuracy. The pipeline must, therefore, be capable of automatically retraining the model with new data, validating its performance, and deploying it without manual intervention.
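The automated flow described above can be sketched as a chain of stages, where a newly trained model is only promoted if it validates above a quality bar. Every stage body below is a placeholder standing in for real ingestion, preprocessing, and training code, and the error budget is an invented threshold.

```python
def ingest():
    # Placeholder for a real data pull; returns toy (x, y) pairs with y = 2x.
    return [(x, 2 * x) for x in range(10)]

def preprocess(rows):
    # Placeholder cleaning step: drop rows with missing targets.
    return [(x, y) for x, y in rows if y is not None]

def train(rows):
    # Toy "model": the average ratio y/x (skipping x == 0).
    ratios = [y / x for x, y in rows if x]
    return sum(ratios) / len(ratios)

def validate(model, rows):
    # Mean absolute error of the fitted ratio on the data.
    return sum(abs(y - model * x) for x, y in rows) / len(rows)

def run_pipeline(error_budget=0.1):
    rows = preprocess(ingest())
    model = train(rows)
    error = validate(model, rows)
    deployed = error <= error_budget  # promote only if it passes the bar
    return model, error, deployed

print(run_pipeline())
```

Because every step is a callable stage, the same pipeline can be re-run on a schedule or triggered by new data, which is what replaces the manual steps data scientists would otherwise repeat by hand.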

ML models in production

When it comes to production, ML models can start underperforming not just because of coding issues, but also because the data they learn from continuously changes. So, you’ve got to keep an eye on your data’s key stats and how your model’s doing in the real world, ready to send alerts or make changes if things start to go off track.

Unlike traditional software, where updates are more predictable and controlled, ML models may require frequent retraining and updating to maintain accuracy. This process involves tweaking code, reprocessing data, retraining models, and validating their performance.

If the underlying data sources change significantly, the model might need adjustments or retraining to adapt to these new data sources or formats.
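One way to act on this in practice is a retraining trigger: track live accuracy over a rolling window and flag a retrain once it drops a set margin below the accuracy recorded at deployment time. The window size, margin, and outcome sequence below are illustrative values, not recommendations.

```python
from collections import deque

class RetrainMonitor:
    def __init__(self, baseline_accuracy, window=100, margin=0.05):
        self.baseline = baseline_accuracy
        self.margin = margin
        self.results = deque(maxlen=window)  # rolling window; 1 = correct

    def record(self, correct: bool) -> bool:
        """Record one live outcome; return True when a retrain should fire."""
        self.results.append(int(correct))
        live_accuracy = sum(self.results) / len(self.results)
        return live_accuracy < self.baseline - self.margin

monitor = RetrainMonitor(baseline_accuracy=0.90, window=10)
outcomes = [True] * 9 + [False] * 5  # accuracy decays as the data shifts
alerts = [monitor.record(o) for o in outcomes]
print(alerts[-1])  # True once the rolling window slips below 0.85
```

The alert could feed the automated pipeline from the previous section, closing the loop from monitoring back to retraining without manual intervention.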

Differences in Team Skills

The skills that teams need are different between the two. In ML, teams need data science and statistical analysis expertise to handle its probabilistic nature. This includes skills in selecting, preprocessing, and analyzing data and understanding and applying various ML algorithms. You usually have data scientists or ML researchers in an ML project who are great at playing with data and building models, but might not be highly skilled software engineers.

Generative AI for Business News You May Have Missed
  • As generative AI takes over the cloud, 2024 will be a pivotal year

    Cybersecurity firm Wiz Inc.’s latest research highlights the explosive adoption rate of managed artificial intelligence services and self-hosted AI tools in the cloud. (read more)

  • JFrog-Amazon SageMaker integration aims to streamline machine learning workflows

    Software supply chain company JFrog Ltd. announced today a new integration with Amazon SageMaker to enable developers and data scientists to collaborate efficiently on building, training, and deploying machine learning models. (read more)

  • Pinecone’s vector database goes serverless

    Well-funded vector database startup Pinecone Systems Inc. today announced a serverless version of its product aimed at artificial intelligence applications. (read more)

  • IMF: AI will affect 40% of jobs and could worsen inequality

    Artificial intelligence is set to disrupt 40% of all jobs, according to a new analysis by the International Monetary Fund. (read more)

  • PwC survey finds CEOs are enthusiastic about generative AI’s potential – but worried about the risks

    Some 61% of those in the U.S. say the technology will improve the quality of their products and services and 68% believe it will change the way their company creates, delivers, and captures value over the next three years. (read more)

  • Google Cloud expands its generative AI for retail offerings

    Google Cloud today announced the availability of a number of new artificial intelligence tools, including generative AI systems, designed to help retailers modernize their business operations, better personalize online shopping experiences, and transform in-store technology rollouts. (read more)

  • OpenAI launches GPT Store for custom AI assistants

    Since the announcement of custom ‘GPTs’ two months ago, OpenAI says users have already created over three million custom assistants. Builders can now share their creations in the dedicated store. (read more)

AI Training & Certifications
  • Andrew Ng, founder of DeepLearning.AI, launches AI for Everyone course

    “AI for Everyone”, a non-technical course, will help you understand AI technologies and spot opportunities to apply AI to problems in your own organization.


  • Generative AI for Executives by AWS

    This class shows you how to mitigate hallucinations, data leakage, and jailbreaks. Incorporating these ideas into your development process will make your apps safer and higher quality.

  • Introduction to Artificial Intelligence (AI) by IBM on Coursera

    In this course you will learn what Artificial Intelligence (AI) is, explore use cases and applications of AI, understand AI concepts and terms like machine learning, deep learning and neural networks.

  • Microsoft Certified: Azure AI Engineer Associate - Certifications

    Target audience: Professionals tasked with building, managing, and deploying AI solutions using Azure AI, covering all phases of AI solution development.

  • Jetson AI Courses and Certifications | NVIDIA Developer

    Target audience: Suitable for anyone interested in AI and Edge AI, from beginners to advanced learners.
