
Cristian Dordea

Mar 10, 2024

A new Agile Framework for Data Science and AI/ML projects

A review of a new Agile Approach for Data Science and AI/ML projects and how it compares to Scrum

  • This edition reviews a new agile framework designed specifically for managing Data Science and Machine Learning teams and projects. I recently got certified in this framework, and today I will share what I learned and how it is different from Scrum. 

  • As always, see AI for Business News You May Have Missed at the end of our newsletter


In previous posts, we discussed How Machine Learning systems stand out from traditional software systems as well as Why Scrum Doesn't Work For Data Science/ML Teams. As mentioned in our last post, there is a new agile framework called Data Driven Scrum that is designed to be compatible with Data Science and Machine Learning (ML) projects. 

Recently, I got certified in this new agile framework, and today, I will share with you what I learned about Data Driven Scrum and how it is different from Scrum. 

What is Data Driven Scrum? 

Data Driven Scrum is an agile framework that was designed specifically for data science and machine learning. The Data Driven Scrum approach was brought to market by the Data Science Process Alliance in 2019.

Why do we need a new agile framework in the first place?

Data science and ML projects bring some new challenges that are not as common in software development. If you tried to apply Scrum or Kanban to ML projects, you would find very quickly that:

  • Estimating work items is unreliable and difficult due to data and model uncertainty 

  • This uncertainty makes it challenging to figure out what can be done in a Scrum Sprint 

  • The experimental nature of the work makes the output unpredictable, which results in higher chances of failure 

Data Driven Scrum was created to mitigate these challenges.

How is Data Driven Scrum different from Scrum?

Data Driven Scrum (DDS) is similar to Scrum, but it introduces a couple of new concepts that make it stand out. 

The main new concept is dropping the time-based iterations Scrum uses. In Scrum, the team picks an iteration length between 1 and 4 weeks, then commits to the work that fits within that length, which the team fixes ahead of time. 

Data Driven Scrum uses capability-based iterations instead of time-based iterations. This means the team commits to a specific capability and the team works on it until the capability is complete, without being limited by a fixed length of time. 

The iteration is open-ended from the perspective of time. This means a capability can take 3 days, 1 week, or sometimes even 2 or 3 weeks. The team still strives to break down the capability into the smallest work possible that delivers value. However, the team is not able to predict how long the work will take because of the experimental nature of the work and large unknowns that are typical in a Data Science or ML project. 

If you are more familiar with software development, you can think of a capability as a feature. A capability is broken down into multiple items in a backlog. The items could be user stories or hypotheses. In data science, to finish a capability, you most likely have to do an experiment, which most of the time results in the creation of some kind of model.

A typical data science process starts with the question you are trying to answer; you then do your best to understand the available data, prepare the data, and create a model to answer that question. Once the model is created, you evaluate the findings against the main objective of the capability. If the outcome is satisfactory, you can deploy the model to production, depending on your use case. 
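The steps above can be sketched as a tiny pipeline. Everything here is an illustrative toy (the function names and the trivial "predict the mean" model are my own placeholders, not part of Data Driven Scrum):

```python
def prepare_data(rows):
    """Drop rows with missing values (a stand-in for real data preparation)."""
    return [r for r in rows if None not in r]

def build_model(rows):
    """A trivial 'model': always predict the mean of the target (last) column."""
    mean = sum(r[-1] for r in rows) / len(rows)
    return lambda _features: mean

def evaluate(model, rows):
    """Mean absolute error of the model on the given rows."""
    return sum(abs(model(r[:-1]) - r[-1]) for r in rows) / len(rows)

def run_capability(rows, error_threshold):
    """Walk one capability through prepare -> model -> evaluate -> deploy?"""
    prepared = prepare_data(rows)
    model = build_model(prepared)
    error = evaluate(model, prepared)
    # Deploy only if the outcome meets the capability's objective.
    return {"error": error, "deploy": error <= error_threshold}
```

The point of the sketch is the shape of the loop: only the last step tells you whether the capability is actually done, which is why its duration can't be fixed up front.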

Another big difference in Data Driven Scrum (DDS) is that each item or user story is broken down into create, observe, and analyze steps. This concept is a more detailed form of acceptance criteria, similar to the sub-task concept. The purpose of “create, observe, analyze” is to make sure proper validation is done for each item completed by the DDS team. The “create” step should state clearly what is to be created as part of the item. Under the “observe” step, list the set of observables, and under the “analyze” step, measure those observables and create a plan for the next iteration. Only after all three steps are done can the team consider the work complete and move the item to the “done” column on the task board. 
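As a rough sketch, a backlog item with its create/observe/analyze breakdown might be modeled like this (a hypothetical illustration of the idea, not an official DDS schema):

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    """One DDS backlog item with its create/observe/analyze breakdown."""
    title: str
    create: str = ""                # what will be created for this item
    observables: list = field(default_factory=list)  # what to observe about it
    analysis: str = ""              # measured observables + plan for next iteration

    def is_done(self) -> bool:
        # Only when all three steps are filled in can the item
        # move to the "done" column on the task board.
        return bool(self.create and self.observables and self.analysis)
```

The useful property is that `is_done()` forces the validation steps: an item with a deliverable but no observations or analysis is, by construction, not done.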

Source: Data Science Process Alliance

Data Driven Scrum Events

The main events suggested by Data Driven Scrum are Backlog Item Selection, Daily Meetings, Iteration Review, and Retrospective. These are pretty similar to Scrum's. The interesting difference is that meetings are decoupled from the iteration.

As mentioned earlier, the iteration is capability-based, and each iteration will have a different length. Some iterations could be as short as 2 or 3 days if the team only needs to do an exploratory analysis. In that case, it doesn’t make sense for the retrospective to happen at the end of the iteration. Data Driven Scrum recommends holding the meetings on a recurring, calendar-based schedule. For example, the Iteration Review and Retrospective could be held weekly. If the team has not completed its capability by that time, the meeting can be canceled and the review and retro done the following week.
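Because reviews recur on the calendar rather than at iteration boundaries, scheduling them is a simple recurrence. A minimal sketch (the weekly-Friday default is my own assumption for illustration):

```python
from datetime import date, timedelta

def next_review(today, review_weekday=4):
    """Date of the next recurring review (weekly; Monday=0 ... Friday=4).

    The review lands on the same weekday every week, regardless of
    where any capability iteration starts or ends. If today IS the
    review day, roll forward to next week's slot.
    """
    days_ahead = (review_weekday - today.weekday()) % 7 or 7
    return today + timedelta(days=days_ahead)
```

If the capability isn't finished when the date arrives, the team simply skips that occurrence; the next one is already on the calendar.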

Data Driven Scrum Roles 

The roles suggested by Data Driven Scrum are pretty similar to Scrum's: Product Owner, Process Expert, and Development Team. The Process Expert focuses on the DDS process and the team, a role similar to the Scrum Master in Scrum.

How to manage delivery expectations in Data Driven Scrum?

The moment I understood the capability-based iteration concept, my first question was: how do you manage delivery expectations with stakeholders without knowing when the iteration will be done?

Most teams in an enterprise have to manage project delivery expectations or milestones.

As I noticed when I worked with Data Science and ML teams, making commitments to stakeholders at the iteration level is pretty difficult compared to typical software development work, mainly due to data and model uncertainty and the experimental nature of data science work.

Data Driven Scrum manages delivery expectations at the Product Increment level.

A Product Increment is a concept primarily used in scaled agile: the sum of multiple product backlog items completed during a time-box that spans multiple iterations. The time-box is typically anywhere from 1 to 3 months, picked by the team. 

The Product Owner defines the Product Increment goal and, in collaboration with the team, picks the increment timebox. Other times, the timebox is set by the organization or stakeholders, and the team has to adjust the goal to fit within that specific timebox, say 1 month. 

The team and the product owner get together during product refinement to identify the capabilities they think they need in order to achieve the goal. It is very common for multiple teams to work together towards a common goal under one Product Increment, or to coordinate dependencies across teams. During refinement, the work is broken down into high-level items, which are then estimated in T-shirt sizes and prioritized. Next, the team picks the items they think they will need to achieve the first capability. This becomes their first capability iteration.

In DDS, teams will have as many capability iterations as necessary to complete the PI goal they committed to. All capability iterations within a PI will most likely differ in length. However, the Product Increment (PI) timeline is fixed and picked before the work starts. 

For example, the team can pick the PI to be one month in length and, within that timeline, have a handful of capability iterations. As the team starts working on the capabilities identified, they might change which capabilities are needed to meet the PI goal. The team has freedom and control over how many capabilities they use to achieve the PI goal. 
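The fixed-timebox, variable-iterations arrangement can be sketched like this (class and method names are hypothetical helpers, not part of the framework):

```python
from datetime import date, timedelta

class ProductIncrement:
    """A fixed PI timebox containing a variable number of capability iterations."""

    def __init__(self, goal, start, length_days):
        self.goal = goal
        self.start = start
        # The PI end date is fixed before any work starts.
        self.end = start + timedelta(days=length_days)
        self.completed = []  # (capability_name, days_taken) pairs

    def finish_capability(self, name, days_taken):
        """Record a completed capability iteration; lengths can vary freely."""
        self.completed.append((name, days_taken))

    def days_remaining(self, today):
        """How much of the fixed timebox is left for further iterations."""
        return max((self.end - today).days, 0)
```

Note the asymmetry: capability iterations carry no planned duration at all, while the PI boundary never moves; only the goal and the set of capabilities get renegotiated inside it.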

Within that fixed PI timeline, after each capability iteration, the team will learn new lessons and receive additional feedback from stakeholders. Based on these learnings, sometimes the capabilities and even the Product Increment Goal might need adjustment. The team and stakeholders don’t have to wait until the end of PI to adjust their goal as long as they all agree on the adjustment. 

This could result in the Product backlog being refined and reprioritized. If needed, the team with the stakeholders will make any necessary changes towards the next capability iteration and continue to make progress towards their PI goal.

As a delivery lead, if you have worked with Scrum, Kanban, and SAFe, the DDS process is pretty straightforward. In my opinion, however, the challenge will be training the rest of the team members, and especially the stakeholders. Data Driven Scrum awareness and training will be needed for stakeholders and for adjacent teams where dependencies exist, to set the right expectations and set everyone up for success. 

In my opinion, awareness of how Data Science and ML work differs from typical software development is as important for stakeholders and management as learning this new agile framework. Nick and Jeff from the Data Science Process Alliance have lots of articles on Data Driven Scrum and great training programs listed on their website. Go check it out.

Keep in mind that as enterprise interest in AI projects increases, more data teams will be looking for a better way to handle them. Data Driven Scrum is definitely a good tool to have in your toolbox as a delivery lead.

Until next time.

Generative AI for Business News You May Have Missed
  • Anthropic’s latest AI model beats rivals and achieves industry first

    Anthropic’s latest cutting-edge language model, Claude 3, has surged ahead of competitors like ChatGPT and Google's Gemini to set new industry standards in performance and capability. (read more)

  • The OpenAI-Elon Musk battle intensifies and AI trust sinks, but investors don’t seem to care (read more)

  • Google engineer stole AI tech for Chinese firms

    A former Google engineer has been charged with stealing trade secrets related to the company's AI technology and secretly working with two Chinese firms. (read more)

  • Wipro and IBM collaborate to propel enterprise AI

    In a bid to accelerate the adoption of AI in the enterprise sector, Wipro has unveiled its latest offering that leverages the capabilities of IBM’s watsonx AI and data platform. (read more)

  • Reddit is reportedly selling data for AI training

    Reddit has negotiated a content licensing deal to allow its data to be used for training AI models, according to a Bloomberg report. (read more)

  • SAP beefs up its Datasphere platform with yet more generative AI

    Enterprise software giant SAP SE is adding to its generative artificial intelligence capabilities with a raft of new features that will soon become available in the SAP Datasphere platform. (read more)

  • Stack Overflow inks AI partnership with Google Cloud

    Stack Overflow and Google LLC today announced a partnership that will make coding knowledge from the former company available through a chatbot interface. (read more)

AI Training & Certifications
  • Data Driven Scrum

    Agile Framework Certifications for Data Science & Machine Learning Teams

  • Andrew Ng, founder of DeepLearning.AI, launches AI for Everyone course:

    “AI for Everyone”, a non-technical course, will help you understand AI technologies and spot opportunities to apply AI to problems in your own organization.


  • Generative AI for Executives by AWS

    This class shows you how to mitigate hallucinations, data leakage, and jailbreaks. Incorporating these ideas into your development process will make your apps safer and higher quality.

  • Introduction to Artificial Intelligence (AI) by IBM on Coursera

    In this course you will learn what Artificial Intelligence (AI) is, explore use cases and applications of AI, understand AI concepts and terms like machine learning, deep learning and neural networks.

  • Microsoft Certified: Azure AI Engineer Associate - Certifications

    Target audience: Professionals tasked with building, managing, and deploying AI solutions using Azure AI, covering all phases of AI solution development.
