This study evaluates the role of Digital Twins in IT project management lifecycles, aiming to enhance predictive analytics, decision-making, and real-time workflow optimisation. Drawing upon complexity theory, decision support systems, and Agile methodologies, the study proposes a modular Digital Twin Technology (DTT) framework capable of simulating project conditions, forecasting risks, and adapting to dynamic changes. A hybrid qualitative methodology comprising a structured literature review, case study analysis, conceptual framework design, and simulation evaluation was adopted. Workflow modelling tools such as Draw.io and Figma were used to visualise IT processes and lifecycle design and to build an interactive dashboard prototype. Simulated Agile scenarios, including project delays, resource bottlenecks, and testing constraints, were used to validate the conceptual framework. The simulation demonstrated over 90% risk detection accuracy, AI-driven recommendations with response times under three seconds, and a potential reduction of up to five working days in project delivery. In practical terms, the model offers IT teams faster decision-making, improved visibility into risks, and enhanced delivery timelines, making it highly applicable to fast-paced IT environments, with possible integration into tools like Jira to enable real-time updates during project reviews. This study provides a flexible, data-based DTT model that helps teams work continuously, see project issues more clearly, and make decisions early. It shows how IT projects can shift from reacting to problems to predicting and preventing them.
As IT projects become more complex, there is a growing need for advanced tools like simulations and predictive analytics to fix problems like wasted resources and poor risk management (Kabanda, 2020). Studies show that over 70% of IT projects miss their deadlines, budgets, or goals because they lack real-time tracking and rely too much on reacting after problems happen (Baghizadeh et al., 2020; Windapo et al., 2023). Even though Agile and DevOps are popular for managing projects step by step, they still mostly look backward to analyze issues instead of predicting them ahead of time (Perera and Eadie, 2023). Traditional tools help with organizing tasks and teamwork, but they do not offer real-time simulations, AI predictions, or automatic risk checks (Attah et al., 2024; Jeong et al., 2022; Temitope, 2020). For example, widely used platforms like Jira do not provide features like predicting resource needs or simulating project delays, which many experts see as a major weakness (Attah et al., 2024).
Digital Twin Technology (DTT) offers a promising means of resolving the issues that invariably arise in IT project management, yet its presence in the field remains scarcely noticeable. Other industries, including manufacturing and aerospace, increasingly use DTT to improve performance in real time and predict future conditions (Tao et al., 2022). By providing scaled virtual proxy models of real systems and processes, DTT enables IT teams to mitigate risks, reallocate resources, and plan alternative scenarios without shutting down running systems (Madni et al., 2019). The technology therefore shifts project management from a reactive to a proactive stance.
Despite the opportunities presented by DTT, a number of barriers still exist. A lack of standardised procedures for connecting with existing enterprise systems, the high computational cost of simulation, and challenges of scalability have been cited as the most problematic limitations (Bravo & Vieira, 2023; Perno et al., 2022). In addition, many IT organisations lack the required skillsets, such as artificial intelligence, advanced computation, and simulation methods (Schranz et al., 2020; Semeraro et al., 2021). Nevertheless, driven by growing demand across industries for AI-driven simulations, the global Digital Twins market is expected to grow at an annual rate of 58% to reach $48.2 billion by 2026 (Botin-Sanabria et al., 2022; Kherbache et al., 2022).
Major technology providers, including Siemens, Microsoft Azure, IBM, and Cisco, have underscored the importance of DTT in reinforcing IT infrastructure, Agile development practices, and cloud-native architecture, as well as in strengthening security and boosting predictive analytics (Mohamed & Al-Jaroodi, 2024; VanDerHorn & Mahadevan, 2021). However, challenges remain, such as making DTT work smoothly with Agile methods, the high cost of setup, and the shortage of skilled workers (Dorrer, 2020; Rayhana et al., 2024). This study explores how DTT can be used to model and manage IT project workflows, helping teams work more efficiently, make better decisions, and achieve more successful project outcomes. It aims to move project management from reacting to problems to predicting and preventing them. The study brings together theories, real-world examples, and practical tools to tackle key issues like standardization, skills shortages, and testing by using simulations, AI models, and strategic recommendations.
To help solve current problems with standardization, integration, and real-world testing, this paper introduces a Digital Twin Technology (DTT) framework designed specifically for managing IT projects. Rather than building a fully working digital twin, the study focuses on developing the idea using research, case studies, and the latest tech trends. The framework is meant to guide how DTT can be used in IT project management in the future. It also gives useful advice for project managers, professionals, and researchers by combining academic knowledge, real-world examples, and new technologies. Based on past studies (e.g., Schuh et al., 2017), DTT can help spot risks, plan resource needs, and simulate project steps; this framework aims to include these features conceptually.
This study adds value in four main ways: (i) new ideas and model design by creating a clear and flexible plan to spot problems early and make better decisions using predictions; (ii) testing in real life by looking at real examples to show how the model works, what issues might come up, and what can be learned; (iii) technology upgrades by using smart tools like AI, simulations, and process improvements to help IT projects run faster and smoother; and (iv) helpful tips for practice by giving easy-to-follow advice and a step-by-step guide for project managers to manage projects better and make smart choices from start to finish.
Digital Twin Technology (DTT) is a powerful tool used in many industries to simulate real situations and predict what might happen next. But even though it is becoming more popular, it has not been widely used or studied in IT project management, especially in Agile and DevOps environments. DTT refers to a digital, real-time model of a system or workflow created to support development and achieve better results by monitoring progress, simplifying processes, and predicting obstacles before they appear (Botin-Sanabria et al., 2022; Schranz et al., 2020). Reiche and Timinger (2021) extend the discussion, defining DTT as a digital reflection of project elements that allows teams to track the performance of the development process continuously and make information-based decisions in the early phases. The concept originated at NASA in the 1970s, where it was used to track spacecraft and predict possible failures (Madni et al., 2019). The term Digital Twin was officially coined by Michael Grieves in 2002 in the field of product lifecycle management (Barricelli et al., 2019). Since then, DTT has been implemented in industries such as aerospace, manufacturing, and healthcare. However, there is still no single, common definition of DTT, particularly in IT disciplines (Fuller et al., 2020; Jeong et al., 2022).
DTT has developed gradually. Before 2015, most digital surrogates were primitive, non-adaptive, and only loosely interconnected. Between 2016 and 2021, artificial intelligence (AI) and machine learning (ML) extended DTT capabilities to include predictive maintenance, workflow automation, and anomaly detection; however, issues with integration remained (Semeraro et al., 2021). From 2022 onward, digital twins improved considerably in responsiveness and resilience thanks to the introduction of enabling technologies, including the IoT, big data analytics, and cybersecurity (Fuller et al., 2020; Sharma et al., 2022). Figure 1 illustrates key milestones in DTT’s evolution, emphasising its transition from static models to AI-driven systems.
Figure 1: Digital Twin Concept (Adapted from Qi et al., 2021)
Digital Twins (DTs) consist of three connected parts: the physical reality, the digital representation, and the data-exchange layer. The physical reality is the concrete system or infrastructure (e.g., the network architecture within an IT project). The digital model is a virtual replica of the physical system that captures its dynamic state in full. The data-exchange layer keeps the physical and virtual domains synchronised, enabling real-time feedback and adaptation. Together, these elements allow DTs to replicate, mimic, and forecast the behaviour of intricate systems in a controlled digital space.
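The three components can be sketched in a few lines of Python. This is a minimal illustrative model, not any specific DTT product: the class names, the metrics, and the simple risk rule are all assumptions made for demonstration.

```python
# Minimal sketch of a Digital Twin's three parts: a physical system, its
# digital model, and a sync (data-exchange) layer. All names and the risk
# rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class PhysicalSystem:
    """The real IT workflow being mirrored (e.g. a sprint in progress)."""
    metrics: dict = field(default_factory=lambda: {"tasks_done": 0, "open_bugs": 0})

    def report(self) -> dict:
        return dict(self.metrics)


@dataclass
class DigitalModel:
    """Virtual replica that holds the latest known state and derives insight."""
    state: dict = field(default_factory=dict)

    def update(self, snapshot: dict) -> None:
        self.state = snapshot

    def forecast_risk(self) -> str:
        # Toy rule: more open bugs than completed tasks signals risk.
        if self.state.get("open_bugs", 0) > self.state.get("tasks_done", 0):
            return "at-risk"
        return "on-track"


def sync(physical: PhysicalSystem, twin: DigitalModel) -> str:
    """Data-exchange layer: push the physical state into the twin, return forecast."""
    twin.update(physical.report())
    return twin.forecast_risk()


real = PhysicalSystem(metrics={"tasks_done": 12, "open_bugs": 3})
twin = DigitalModel()
print(sync(real, twin))  # on-track
```

Each call to `sync` mirrors one round of the real-time feedback loop described above: the physical side reports, the virtual side updates and forecasts.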
Emerging studies explore self-optimising AI-driven DTs, though their efficacy in evolving project environments, particularly in dynamic workflows, requires deeper investigation (Wong et al., 2024). Figure 2 highlights enabling technologies driving DTT adoption, including AI, IoT, cloud computing, and big data analytics.
Figure 2: Key Enabling Technologies of Digital Twins
IT Project Management Lifecycle
The IT project management lifecycle typically follows adaptive phases such as initiation, planning, and execution, which align with evolving project requirements, especially in Agile environments where iterative processes aim to optimise resources and reduce risk (Perera and Eadie, 2023). In contrast to traditional project management models that presuppose a sequential, step-by-step course of action, the Agile approach favours initiating and completing work in iterative cycles driven by continuous feedback and delivery (Barros et al., 2024).
Predictive Analytics and Scenario-Based Testing
Scenario-based testing is a methodological approach that allows software development teams to assess alternative project configurations in iterative steps, supporting the choice of the most effective and efficient one. It is a main constituent of predictive analytics, a field that applies past or current information to identify future risks early and supports quick, deliberate decisions, particularly in the Agile development framework (Pantovic et al., 2024). In turn, predictive models enable project teams to discover and eliminate mistakes in their initial forms, reducing the risk of wider repercussions. The use of digital twin technology in IT is thus justified by the ability of simulated project environments to improve task management and performance monitoring.
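A compact way to picture scenario-based testing is a Monte Carlo comparison of two project configurations. The sketch below uses only the Python standard library; the phase durations, their variability, and the ten-day deadline are assumed values chosen purely for illustration.

```python
# Illustrative Monte Carlo sketch of scenario-based testing: simulate many
# runs of a three-phase sprint under two configurations and compare the
# probability of missing a deadline. All durations and the deadline are
# assumptions for demonstration only.
import random

random.seed(42)


def simulate(phase_means, deadline=10.0, runs=10_000):
    """Return the fraction of simulated runs that exceed the deadline."""
    late = 0
    for _ in range(runs):
        # Each phase duration varies around its mean (toy noise model).
        total = sum(random.gauss(mu, 0.5 * mu) for mu in phase_means)
        if total > deadline:
            late += 1
    return late / runs


baseline = simulate([3.0, 4.0, 3.0])    # configuration A: evenly loaded phases
mitigated = simulate([3.0, 3.0, 2.5])   # configuration B: assumed extra QA capacity
print(f"late-risk baseline={baseline:.2f}, mitigated={mitigated:.2f}")
```

Comparing the two late-risk fractions is the essence of choosing "the most effective and efficient" configuration before committing real resources.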
Digital twins in information technology (IT) project lifecycles are discussed through three key theoretical frameworks: complexity theory, decision support systems (DSS), and Agile project management practices. An overview of the existing literature indicates that each of these views sheds light on the interdependent mechanisms of requirements identification, evolution, and validation. Complexity theory makes clear that technological systems exhibit emergent, non-linear behaviour; DSS theory defines processes for enabling rational decision-making under uncertainty; and Agile project management practices define iterative, incremental workflows aimed at timely delivery of business value.
According to Schutzko and Timinger (2023), complexity theory explores the interdependency of the parts of an IT project and how they can trigger unanticipated problems. They cite the joint use of GitHub and Azure DevOps as an example of how such collaborative tooling can create challenges (Mohamed & Al-Jaroodi, 2024). Digital Twin Technology (DTT) mitigates these issues by creating virtualised depictions of project workflows, thereby making planning more deliberate and reducing schedule slips (Guinea-Cabrera and Holgado-Terriza, 2024; Windapo et al., 2023). Olsson and Axelsson (2023) also suggest that digital twins can reduce cognitive overload by explaining how Jira, Git, and DevOps pipelines work together. DTT allows teams to find and fix the problems that usually arise in multidimensional, multi-tool project contexts, such as version-control conflicts and automated-testing delays, which often sink complex projects (Dalibor et al., 2022; Perera and Eadie, 2023).
Traditional Decision Support Systems (DSS) rely on historical data, which makes them inefficient at responding swiftly to fast-paced initiatives under regular reassessment (Ali et al., 2024; Oettl et al., 2023). Digital Twin Systems (DTS) overcome this shortcoming by providing real-time updates, allowing scenario testing, and predicting risks, thereby avoiding delays in project execution and excessive resource waste (Attaran and Celik, 2023; Tao et al., 2019). Furthermore, Van der Valk et al. (2020) note that DTS incorporate feedback systems that provide flexibility across the project life cycle. Rasheed et al. (2020) argue that when DTS are implemented in combination with simulation models or artificial intelligence, they increase the flexibility of DSS and make them particularly well suited to fast, well-informed decisions in information technology (IT) environments. According to Mohamed and Al-Jaroodi (2024), DTS also increase the accuracy of predicting upgrades in cloud systems.
Agile approaches focus on iterative development, collaborative work, and flexibility, mechanisms that drive continuous improvement through iterative feedback (Biesialska et al., 2021). Bravo and Vieira (2023) claim that Digital Twin Technology (DTT) incorporated into Agile workflows gives teams the ability to react swiftly, minimise risks, and optimise sprint planning. DTT also enhances Agile processes by embedding predictive simulations in every planning cycle. In combination, these views provide a framework for using DTT in the IT project management process, allowing teams to make more informed choices, boost operational efficiency, and receive early warnings of emerging issues.
The existing research on Digital Twins in IT project management covers applications of Agile frameworks, project phasing, system design principles, and predictive analytics. However, several limitations exist: frameworks remain rather abstract, their adherence to Agile principles has not been genuinely established, and empirical applications using real data to demonstrate successful use of the frameworks are scarce.
Two perspectives have become predominant in the scholarly discourse on Digital Twin Technology (DTT). On one hand, authors such as Iliuţă et al. (2024) and Madni et al. (2019) consider DTT a paradigmatic innovation that uses AI and real-time information to create stable digital spaces. On the other hand, Rasheed et al. (2020) present DTT as a small incremental change, arguing that its field of application is limited to individual uses, i.e., simulations and feedback mechanisms. This contradiction is especially evident in information technology, where system flexibility and constant updating are key factors. Although other disciplines contribute to DTT research, much of this work does not incorporate rigorous empirical testing in IT conditions. Qi et al. (2021), for instance, consider DTT growth via cloud and AI infrastructure but fail to analyse the scalability constraints of IT environments. Mohamed and Al-Jaroodi (2024) consider DTT in the context of cloud migration but focus on centralised architectures unable to reflect the decentralised, team-based approach promoted by Agile practice. Kober et al. (2024) agree that DTT is an under-researched methodology for IT projects and call for interdisciplinary, cross-sectoral studies to clarify the broader possibilities of the technology.
In the IT field, developing powerful digital twin (DT) systems involves striking a reasonable balance between refined technologies and the flexibility that Agile approaches inherently offer. Jeong et al. (2022) introduce a layered architecture that uses edge computing and shared data repositories to connect physical and logical objects. However, the model is based on centralised governance, which runs counter to Agile's more decentralised team values. Security considerations also have to be taken into account. Khan et al. (2022) conclude that cloud-based DT infrastructures are vulnerable to cyber-attacks and promote blockchain as a protective layer, and Kherbache et al. (2022) warn that industrial data are at risk from hackers. In addition, both contributions fail to address the findings of Singh and Tripathi (2024), according to which a substantial share of stakeholders oppose the adoption of DTs due to a perceived loss of control. Scalability also makes implementation difficult. According to Schranz et al. (2020), SMEs face challenges using DTs due to heavy computation, whereas Guerra-Zubiaga et al. (2021) argue that the fast pace of Agile puts an excessive burden on the centralised infrastructures that coordinate DT-based activities. To solve this, Olsson and Axelsson (2023) suggest modular and decentralised designs that fit with DevOps. Singh, Weeber, and Birke (2021) add a practical “toolbox” of modular techniques to make DTs easier to apply and scale in Agile projects.
Standardised frameworks for IT-specific DTs remain nascent compared to manufacturing sectors. Aheleroff et al. (2021) advance the Digital Twin-as-a-Service model for cloud-based interoperability. Yet their manufacturing focus neglects IT needs for artefact-centric simulations (e.g., codebase dependencies, project bottlenecks). Wong et al. (2024) address this with a “project twin” prototype that mirrors software artefacts through Agile workflow mapping. While promising, Windapo et al. (2023) critique its scalability in enterprise environments, citing fragmented data governance in hybrid clouds, a limitation corroborated by Wang et al. (2022).
Lifecycle integration studies reveal fragmented progress. During initiation, Reiche and Timinger (2021) apply predictive scenario testing to simulate dependencies but rely on static models incompatible with Agile’s iterative planning. During execution, Mohamed and Al-Jaroodi (2024) use automation to spot risks in cloud migration and reduce delays, but their methods work mainly in controlled environments, not in real-world settings. For project completion, Tao et al. (2022) suggest using feedback loops like in DevOps, but there is little real evidence of this working in IT. Overall, these studies focus on separate project stages instead of offering a full, connected solution, leading to a lack of integration across the entire lifecycle (Jonkers et al., 2021).
Using Digital Twin Technology (DTT) in IT project management has big benefits, but it also creates challenges, especially with Agile methods that value teamwork and flexibility. Guinea-Cabrera and Holgado-Terriza (2024) introduced “sprint twins” to predict sprint problems, but there is not much real-world proof yet. Singh and Tripathi (2024) found that some Agile teams resist DTT because they fear losing control and flexibility. To solve this, Van der Valk et al. (2020) suggest blending DTT with Agile’s feedback approach. Jeong et al. (2022) also say DTT should assist, not replace, human decisions. Tripathi et al. (2024) add that poor communication, unclear roles, and different stakeholder goals often block DTT success in Agile settings. Perera and Eadie (2023) stress the need for proper training and change management. Bravo and Vieira (2023) support aligning DTT with Agile, but long-term evidence is still lacking. Overall, these studies agree that DTT can improve flexibility, accuracy, and results in IT projects, if the challenges are carefully managed.
This study uses a mix of methods, including a structured review of past research, real-world case study analysis, and the development and testing of a new framework through simulation. It relies on existing research, especially by Dalibor et al. (2022) and Guinea-Cabrera and Holgado-Terriza (2024), to show how combining case studies and literature reviews helps create useful models for new digital technologies. The review was done in a careful and organized way using sources like IEEE Xplore, ScienceDirect, Scopus and Google Scholar. These databases were selected based on their disciplinary relevance and coverage of high-impact research in project management, systems engineering, and digital innovation. The review targeted both conceptual and empirical contributions, specifically focusing on DTT implementation, Agile integration, workflow simulation, and predictive analytics in IT settings.
Ten thematic keyword clusters were developed and combined into a total of 25 Boolean search strings spanning DTT implementation, Agile integration, workflow simulation, and predictive analytics.
From an initial yield of 133 sources, 103 peer-reviewed scholarly works comprising journal articles, conference papers, and academic books published between 2015 and 2025 were retained after applying rigorous inclusion and exclusion criteria (see Table 1).
Table 1: Inclusion and Exclusion Criteria

| Criteria | Inclusion | Exclusion |
| --- | --- | --- |
| Content Focus | Studies addressing DTT in IT project contexts, Agile workflows, predictive analytics, and simulation with transferable insights. | Studies without relevance to IT contexts or with a sector-specific focus (e.g., healthcare-only applications). |
| Article Type/Year | Peer-reviewed conference papers, journal articles, and academic books published between 2015 and 2025. | Non-peer-reviewed content, technical reports, or publications predating 2015. |
| Application to Project Lifecycle | Research linked to framework design, decision support, or lifecycle optimisation. | Studies with generalised findings or weak alignment to project environments. |
| Methodology Relevance | Empirical, conceptual, or mixed-method studies with detailed implementation discussion. | Studies lacking methodological clarity or depth. |
The literature review provided the theoretical foundation for the framework and revealed implementation patterns, case benchmarks, and research gaps across sectors.
The methodology adopted a sequential and interconnected structure, as illustrated in Figure 3 below. It begins with a systematic literature review to establish theoretical foundations and identify key research gaps. This is followed by a case study analysis, which validates the theoretical findings by examining practical challenges and opportunities observed in real-world industry settings. A new framework is then synthesised from the case-study evidence and the available literature. The framework comprises Agile feedback mechanisms, real-time performance monitoring, and statistical prediction techniques, and it is verified through simulation to measure its performance in an Agile-based project setting.
Figure 3: A visual representation of the research design
After the detailed description of the research method, the discussion can now proceed to a practical example of the Digital Twin model. The case study demonstrates the framework’s facilitating, predictive, and decision-making role and its capacity to improve processes in real time via simulation.
This section discusses the methodological approach to visualising the Digital Twin framework in Draw.io and Figma. Draw.io serves as a web-based diagramming tool that helps explain complex IT processes and workflows. Its customisable interface helps coordinate group work and simplify project communication (Ozkaya, 2019). Likewise, Figma is an online design tool for building interactive user-interface prototypes (Kimseng et al., 2023; Rana, 2024). Real-time collaboration, an instant feedback loop, and constant updates make Figma a potent tool for speeding up the design process and facilitating collaboration (Stige et al., 2024; John et al., 2025). This research uses Figma as an AI-enabled dashboard simulation tool to manage tasks, mitigate risks, and track progress in real time. As Iumanova et al. (2024) suggest, the literature indicates that tools with flexible functionality, cloud-based architecture, and a focus on user experience are the ones most likely to support collaborative teamwork, attributes that apply to the platforms selected in the present case study.
This study also lists the technologies and tools that would allow the Digital Twin Technology (DTT) model to be implemented in real-world IT project management, complementing the framework. The platforms (Table 2) were chosen for their compatibility with Agile practices, real-time data handling, scalability, and alignment with industry best practices in workflow automation and forecasting.
Table 2: Suggested Technologies and Tools for Real-Life Implementation

| Framework Module | Technology/Tool | Purpose | Citation | Licence |
| --- | --- | --- | --- | --- |
| Data Acquisition & Integration | Python (Flask, FastAPI), Node.js | Build RESTful APIs for data ingestion and processing | Nilsson and Demir (2023); Iumanova et al. (2024) | Open Source |
| | Apache Kafka, RabbitMQ | Handle low-latency real-time data streaming | Dobbelaere and Esmaili (2017); Fuller et al. (2020) | Open Source |
| | PostgreSQL, MongoDB | Store structured and unstructured project monitoring data | Makris et al. (2021) | Open Source |
| | Apache Airflow | Orchestrate ETL pipelines for data flow management | Bussa and Hegde (2024) | Open Source |
| Digital Twin Core | Python (Django, Flask), Node.js (Express) | Implement back-end logic for virtual DT model operations | Iumanova et al. (2024); Sharma et al. (2024) | Open Source |
| | Redis, Firebase | Enable real-time synchronisation of project data | Jeong et al. (2022); Moroney (2017); Norem (2024) | Open Source / Proprietary |
| | GraphQL, REST APIs | Provide communication interfaces between modules | Gori Parthi (2024); Nilsson and Demir (2023) | Open Source |
| Simulation & Predictive Analytics | Python (Scikit-learn, TensorFlow, PyTorch) | Develop, train, and deploy predictive models and machine learning (ML) training pipelines | Venkatapathy (2023); Guillaume-Joseph and Wasek (2015) | Open Source |
| | SimPy, AnyLogic | Run scenario-based experimentation, including Monte Carlo simulations | Peyman et al. (2021); Singh et al. (2021); Walter and Barkema (2015) | Open Source / Proprietary |
| AI-Driven Decision Support | OpenAI API, LangChain, GenKit | Enable AI-driven recommendations and natural language processing (NLP) interfaces | Almalki (2025); Auger and Saroyan (2024) | Open Source / Paid API |
| Visualisation & UI | Figma | Prototype and wireframe UI components | Kimseng et al. (2023); Rana (2024); Stige et al. (2024) | Proprietary (free tier) |
| | D3.js, Chart.js, Plotly, React.js, Vue.js | Visualise metrics, trends, and predictive insights; build dynamic, responsive dashboards | Iumanova et al. (2024); Singh et al. (2021); Yang (2023) | Open Source |
| Feedback & Continuous Improvement | GitHub Actions, Jenkins, GitLab CI/CD, Prometheus, Grafana | Automate model updates, testing, and deployment, with monitoring and alerting through Continuous Integration and Deployment (CI/CD) pipelines | Attaran and Celik (2023); Botin-Sanabria et al. (2022); Iumanova et al. (2024); Yasin et al. (2021) | Open Source |

Source: Authors
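To make the Data Acquisition & Integration pattern from the table concrete, the following dependency-free Python sketch mimics the publish/subscribe flow that Apache Kafka or RabbitMQ would provide in the proposed stack. The topic name and event schema are hypothetical; in practice events would come from tools such as an issue tracker.

```python
# Toy in-memory stand-in for a message broker (Kafka/RabbitMQ in the real
# stack): topics map to subscriber callbacks, and the twin ingests each
# published project event to keep its mirrored state current.
from collections import defaultdict


class InMemoryBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        for callback in self.subscribers[topic]:
            callback(event)


twin_state = {"open_tasks": 0}


def on_task_event(event):
    # The twin updates its mirrored state from each incoming event.
    if event["type"] == "task_opened":
        twin_state["open_tasks"] += 1
    elif event["type"] == "task_closed":
        twin_state["open_tasks"] -= 1


broker = InMemoryBroker()
broker.subscribe("project-events", on_task_event)
broker.publish("project-events", {"type": "task_opened"})
broker.publish("project-events", {"type": "task_opened"})
broker.publish("project-events", {"type": "task_closed"})
print(twin_state["open_tasks"])  # 1
```

The same callback structure would let additional modules (analytics, dashboards) subscribe to the stream without changing the producers, which is the decoupling argument for a broker in the first place.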
To uphold the defined KPI thresholds (discussed in Section 4.4), a comprehensive Benchmarks and Testing Strategy ensures ongoing validation and system reliability. Each KPI is assigned a quantifiable goal (e.g., risk detection ≥ 90%) and measured regularly, such as after sprints, major project phases, or monthly reviews, to catch performance issues early. This strategy is closely aligned with Agile practices, embedding validation activities into iterative cycles to support continuous improvement and responsiveness to change.
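The benchmark-and-test idea can be expressed as a small validation routine run after each sprint. The KPI names echo the targets discussed in the text; the threshold encoding and the measured values below are illustrative assumptions.

```python
# Sketch of per-sprint KPI validation: each KPI carries a quantified
# threshold, and any breach is flagged for the review. Thresholds follow
# the targets in the text; measured values are example data.
KPI_THRESHOLDS = {
    "risk_detection_accuracy": ("min", 0.90),  # detect >= 90% of seeded risks
    "ai_response_seconds":     ("max", 3.0),   # recommendations within 3 s
    "dashboard_lag_seconds":   ("max", 2.0),   # near-real-time updates
}


def validate_kpis(measured: dict) -> list:
    """Return the KPIs that breach their threshold in this review cycle."""
    breaches = []
    for name, (kind, limit) in KPI_THRESHOLDS.items():
        value = measured[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            breaches.append(name)
    return breaches


sprint_review = {
    "risk_detection_accuracy": 0.92,
    "ai_response_seconds": 2.8,
    "dashboard_lag_seconds": 2.4,
}
print(validate_kpis(sprint_review))  # ['dashboard_lag_seconds']
```

Embedding such a check in a CI/CD pipeline (e.g., as a post-sprint job) is one way the strategy's "validation activities in iterative cycles" could be automated.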
To assess its operational performance, the Digital Twin Technology (DTT) framework was evaluated through an Agile simulation scenario spanning a typical IT project lifecycle: sprint planning, development, testing, and deployment. During execution, the AI engine detected a testing bottleneck based on sprint velocity and task history, triggering an alert to redistribute tasks and adjust the sprint timeline. If applied live, this could have saved up to five working days, showcasing the framework’s potential for proactive governance.
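One simple way such a bottleneck alert could work is to compare each phase's current sprint velocity against its historical average. The 0.6 ratio and all task counts below are assumed for demonstration; they are not the framework's actual detection logic, which would presumably be learned from project data.

```python
# Illustrative bottleneck heuristic: flag any phase whose current-sprint
# throughput falls well below its historical mean. The 0.6 ratio and the
# task counts are assumptions for demonstration only.
def detect_bottlenecks(history: dict, current: dict, ratio: float = 0.6) -> list:
    """Flag phases whose current velocity drops below ratio x historical mean."""
    flagged = []
    for phase, past_velocities in history.items():
        baseline = sum(past_velocities) / len(past_velocities)
        if current.get(phase, 0) < ratio * baseline:
            flagged.append(phase)
    return flagged


history = {  # tasks completed per phase in previous sprints
    "development": [10, 12, 11],
    "testing": [8, 9, 10],
    "deployment": [4, 5, 4],
}
current = {"development": 11, "testing": 4, "deployment": 4}
print(detect_bottlenecks(history, current))  # ['testing']
```

A flagged phase would then trigger the kind of alert described above, prompting task redistribution before the slip propagates to the sprint timeline.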
The simulation was visualised through a non-functional prototype dashboard, illustrating metrics like sprint progress, projected timelines, resource heatmaps, and task views. The interface included a feedback panel and was designed for accessibility, featuring role-based views for both technical and non-technical users.
Baseline – Project on Track
In the initial simulation state, the dashboard showed the project as "On Track." All progress indicators were green, with the project forecast to finish five days ahead of schedule. Sprint progress stood at 65%, and all metrics indicated minimal risk. AI-suggested mitigations, such as adding a QA resource and initiating parallel deployment preparation, had already been implemented. Schedule, Resource, and Scope Risks were all at 0%, illustrating the digital twin operating in a steady state, pre-emptively managing risks.
Figure 4: Baseline steady state
Risk Escalation Due to Inaction
As simulated project stressors were introduced and no mitigations applied, the scenario flagged early warnings. The project status shifted to "Possible Delays" with a projected five-day slip. Resource Risk rose to 90%, and the testing and deployment phases began to diverge from the baseline. This scenario demonstrates the twin’s ability to forecast escalating issues based on resource allocation patterns.
Figure 5: Risk escalates with delayed action
Partial Mitigation – Moderate Recovery
When only one AI-recommended mitigation (e.g., adding a QA resource) was activated, Resource Risk dropped to 0%. The project timeline improved but remained four days behind schedule, and Schedule Risk stayed moderate. This shows that single-point interventions can address specific bottlenecks but do not ensure full recovery.
Combined Mitigation – Strong Recovery
Activating multiple mitigations, such as removing Feature X and enabling parallel deployment, resulted in a strong recovery. Schedule Risk dropped to 30%, and the forecast improved to just three days behind schedule. The scenario highlights the comprehensive benefit of timely, multi-pronged interventions.
Comparative Analysis: Recovery vs. Worst-Case
The simulations confirmed that mitigation timing is critical. In the worst-case scenario, where no actions were taken, the project delay reached 12 days, and all risks spiked to 100%. This underlines the digital twin's ability to forecast divergent paths and the consequences of inaction.
Figure 6: Worst-Case Scenario with no intervention
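The four scenarios above can be sketched as a minimal delay model. This is an illustrative reconstruction, not the study's actual simulation engine: the per-mitigation recovery of one day is an assumption inferred from the reported figures (five-day slip at baseline, four days after one mitigation, three after two, twelve with no action).

```python
# Illustrative sketch of the scenario comparison (assumed values, not the
# study's simulation engine).

BASELINE_DELAY = 5      # days of slip once stressors are introduced
WORST_CASE_DELAY = 12   # delay when no action is taken at all

# Assumed effect of each AI-recommended mitigation (days recovered).
MITIGATIONS = {
    "add_qa_resource": 1,
    "remove_feature_x": 1,
    "parallel_deployment": 1,
}

def projected_delay(applied, acted_in_time=True):
    """Return the projected slip in days for a set of applied mitigations."""
    if not acted_in_time:
        return WORST_CASE_DELAY
    recovered = sum(MITIGATIONS[m] for m in applied)
    return max(BASELINE_DELAY - recovered, 0)

# Partial mitigation: one action leaves the project 4 days behind.
print(projected_delay({"add_qa_resource"}))                          # 4
# Combined mitigation: two actions recover to 3 days behind.
print(projected_delay({"remove_feature_x", "parallel_deployment"}))  # 3
# Inaction: the worst-case path.
print(projected_delay(set(), acted_in_time=False))                   # 12
```

The point the model makes explicit is that recovery is roughly additive in the number of timely interventions, while inaction produces a qualitatively worse outcome than the baseline slip.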
The simulations confirmed the DTT framework's ability to predict risks, support informed decisions, and enhance IT project governance. Early detection (A2), timely interventions (A4–A5), and the poor outcomes that followed delays (A7–A8) demonstrated its strategic value. Unlike static dashboards, the DTT updates in real time from live data and fits well with Agile methods, helping managers see risks, test options, and plan effectively. The Pilot–Train–Scale method allows step-by-step adoption, and tools such as Jira, MS Project, and Azure DevOps can connect with its modular components (simulation, analytics, AI, and visualisation). Overall, the DTT improves project clarity, flexibility, and delivery, making it a useful tool in fast-paced IT environments.
To test the DTT framework's effectiveness, the simulation was evaluated against key performance indicators (KPIs): risk detection accuracy, AI response time, simulation reliability, resource efficiency, and user satisfaction. The results were strong: roughly 92% of risks were detected, the AI returned helpful suggestions in under three seconds, and users rated their satisfaction 8.5 out of 10. Resource utilisation improved by 15%, and dashboard updates lagged by under two seconds. Overall, the results show that the framework can improve collaboration, accuracy, and decision speed in IT projects.
Table 3: KPI metrics and respective outcomes
| KPI Metric | Target Value | Simulation Result | Impact |
|---|---|---|---|
| Risk Detection Accuracy | ≥ 90% | ~92% | Enabled proactive mitigation of sprint risks |
| Simulation Accuracy | ≥ 85% | 85–95% | Reliable forecasting of project timelines and outcomes |
| AI Response Time | ≤ 3 seconds | < 3 seconds | Supported real-time Agile adjustments and planning |
| Data Ingestion Latency | < 2 seconds | < 2 seconds | Maintained up-to-date dashboards using synced Jira data |
| Resource Efficiency Gain | +15% | ~15% | Enhanced productivity and reduced idle time |
| Stakeholder Satisfaction | NPS ≥ 50 or Score ≥ 8/10 | ~8.5/10 | High levels of user approval and interface usability |
| User Adoption Rate | ≥ 80% within 6–12 months | Conceptually predicted high | Scalability and relevance across project domains |
| System Availability | ≥ 99.5% | Conceptually achieved | Demonstrated reliability during iterative Agile cycles |
Source: Authors
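The pass/fail logic behind Table 3 can be expressed as a simple threshold check. The sketch below is illustrative only: the targets and comparison directions come from the table, but the concrete result values (e.g. 2.9 s response time for "< 3 seconds") are stand-ins, and the helper functions are not part of the framework.

```python
# Hedged sketch: checking simulated results against the KPI targets in
# Table 3. Numeric stand-ins are assumed for results reported as ranges.

KPIS = {
    # name: (target, result, direction) -- "min": result must be >= target,
    #                                      "max": result must be <= target
    "risk_detection_accuracy": (0.90, 0.92, "min"),
    "simulation_accuracy":     (0.85, 0.85, "min"),
    "ai_response_seconds":     (3.0,  2.9,  "max"),
    "ingestion_latency_s":     (2.0,  1.9,  "max"),
    "resource_gain":           (0.15, 0.15, "min"),
    "satisfaction_score":      (8.0,  8.5,  "min"),
}

def kpi_met(target, result, direction):
    """True when the result satisfies the target in the given direction."""
    return result >= target if direction == "min" else result <= target

def all_kpis_met(kpis):
    return all(kpi_met(*spec) for spec in kpis.values())

print(all_kpis_met(KPIS))  # True: every simulated KPI meets its target
```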
These results show that the framework aligns well with Agile principles such as continuous delivery, transparency, and flexibility, setting the foundation for section 5.2, where alignment with the research objectives and questions is discussed explicitly.
The framework met all of its objectives: simulations accurately predicted project bottlenecks from real-time data patterns, the interactive dashboard enhanced decision-making under constrained conditions, and AI-recommended mitigation strategies significantly reduced risk and improved delivery.
DTT simulates IT project workflows to improve outcomes
The DTT framework was designed as a modular system integrating data ingestion, simulation engines, AI decision support, and real-time dashboards. Simulation results demonstrated its ability to identify testing bottlenecks, forecast delivery deviations, and suggest task reallocations, yielding a potential five-day reduction in project duration. The model's feedback loops outperformed traditional tools by enabling rapid updates and improving sprint performance. These findings support the existing body of evidence that digital twins enable better prediction, planning, and continuous adjustment of information technology (IT) projects (Guillaume Joseph and Wasek, 2015; Arin et al., 2023; Wong et al., 2024).
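One cycle of the modular loop described above (ingestion, simulation, AI recommendation, dashboard refresh) can be sketched as follows. All function names, capacity figures, and the two recommended actions are hypothetical stand-ins, not the study's implementation; the point is only the shape of the feedback loop.

```python
# Minimal sketch of one DTT feedback cycle (hypothetical module names).

def ingest(raw_events):
    """Normalise tracker events (e.g. synced Jira issues) into task records."""
    return [{"task": e["task"], "remaining_h": e["remaining_h"]} for e in raw_events]

def simulate(tasks, capacity_h_per_day):
    """Forecast days to completion from remaining work and daily capacity."""
    remaining = sum(t["remaining_h"] for t in tasks)
    return remaining / capacity_h_per_day

def recommend(forecast_days, sprint_days):
    """AI decision-support stand-in: suggest mitigations on forecast overrun."""
    if forecast_days > sprint_days:
        return ["add QA resource", "enable parallel deployment"]
    return []

def cycle(raw_events, capacity_h_per_day=16, sprint_days=10):
    """Run one ingestion -> simulation -> recommendation pass for the dashboard."""
    tasks = ingest(raw_events)
    forecast = simulate(tasks, capacity_h_per_day)
    return {"forecast_days": forecast, "actions": recommend(forecast, sprint_days)}

state = cycle([{"task": "testing", "remaining_h": 120},
               {"task": "deploy",  "remaining_h": 80}])
print(state)  # 12.5-day forecast exceeds the 10-day sprint, so mitigations appear
```

Re-running `cycle` as new tracker events arrive is what distinguishes this loop from a static dashboard: the forecast and the recommendations are recomputed on every sync rather than at review time.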
Challenges and benefits from adopting DTT in IT project management
The experimental and simulation studies highlighted both the potential benefits and the obstacles of implementing Digital Twins (DTs). The principal advantages are enhanced forecasting, real-time monitoring, and more informed decision-making. Adoption, however, brings several hurdles, including data security concerns, configuration complexity, and stakeholder resistance. Illustrative case studies from Cisco, Siemens, IBM, and Azure DevOps show that, in some cases, the framework succeeds in practice. In particular, the introduction of artificial intelligence (AI) allowed teams to receive sprint suggestions within three seconds, while feedback-based features improved team trust and supported ongoing learning. These results align with Temitope (2020), who emphasised the impact of online tools on group cooperation and project performance. Easy-to-use visualisation tools further raised model fidelity, supported interdisciplinary collaboration, and reinforced the Agile values of transparency and shared responsibility.
Strategies that facilitate the effective integration of DTT into IT project management practices
Effective integration strategies identified in this study include modular deployment, role-based training, adaptive simulation engines, and integrated UI feedback mechanisms. The Pilot–Train–Scale model was introduced as a realistic way to monitor performance while introducing digital twin technology step by step. This evidence concurs with the adaptable, tool-based framework of Singh, Weeber, and Birke (2021) for implementing digital twins in varied project settings. Perera and Eadie (2023) likewise noted that, in Agile environments, change management and training should be rigorously evaluated and tailored to functional team roles when new technology is introduced. According to Tripathi et al. (2024), strong governance and team alignment are essential in digital twin initiatives. Rather than a fixed framework, the DTT model proved to be a flexible set of tools that promote predictive project management in Agile environments by building cooperation and iterative feedback into its very structure.
This research explored how AI, predictive tools, and simulation can improve IT project management through Digital Twin Technology (DTT). The results show that a properly configured DTT, fed with real-time data and intelligent predictions, can make a decisive difference in how IT projects are planned and delivered.
Recommendations
To ensure that predictions and system integration succeed in real IT projects, pilot testing should be conducted in genuine operational settings (Semeraro et al., 2021). Further, Van der Valk et al. (2020) suggest that such pilots should include structured sessions in which users engage with AI-generated recommendations in a way that mirrors their daily work. User feedback should also be collected regularly to guide improvements, and future versions should clearly distinguish between testing the concept and testing how it performs in practice.
When building a working prototype, tools like Jira, AWS CloudWatch, and GitLab CI/CD should be connected in real time. To ensure fast and scalable data updates, technologies such as Redis caching, OpenAPI, and Kafka streaming should be used (Nilsson and Demir, 2023). In future versions, interface instructions should be added, and the design should move from tools like Figma to low-code or simulation platforms to address gaps in usability and function. This would allow full workflow testing, better data handling, and smooth system operation.
Widely accepted techniques such as Monte Carlo simulation, regression forecasting, or supervised learning models should be used explicitly to build and test the predictive models (Ragazzini et al., 2024). Cross-functional teams should oversee system setup, rule compliance, and staff training. Important technical issues, such as keeping data up to date, managing software licences, and handling API limits, must be addressed early (Iumanova et al., 2024). Future research should also explore digital twin systems that operate across multiple teams and locations to support decentralised data analysis (Olsson and Axelsson, 2023).
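As a concrete instance of the Monte Carlo approach recommended above, the sketch below samples task durations from triangular distributions and estimates the probability of missing a sprint deadline. The task list and its three-point estimates are hypothetical; a real model would fit these from historical tracker data.

```python
# Monte Carlo sketch of schedule risk: hypothetical three-point estimates.
import random

TASKS = [  # (name, optimistic, most likely, pessimistic) in days
    ("design",  2, 3, 5),
    ("build",   4, 6, 10),
    ("testing", 2, 4, 8),
]

def simulate_once():
    """Sample one possible total duration across all tasks."""
    return sum(random.triangular(lo, hi, mode) for _, lo, mode, hi in TASKS)

def p_miss(deadline_days, runs=10_000, seed=42):
    """Estimate the probability the project overruns the deadline."""
    random.seed(seed)  # fixed seed so repeated runs give the same estimate
    misses = sum(simulate_once() > deadline_days for _ in range(runs))
    return misses / runs

print(f"P(miss 14-day deadline) ~ {p_miss(14):.2f}")
```

Dashboards of the kind described in section 4 could surface `p_miss` directly as a Schedule Risk percentage, recomputed whenever task estimates change.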
Limitations
The framework has so far been validated only through simulation, not live project data. Future efforts should therefore focus on empirical validation through pilot projects and controlled testing in real IT environments. Addressing barriers such as tool interoperability, data latency, and user training will be essential for real-world scalability.