Background: Digital Twins and Synthetic Personas
Digital Twin Concept:
A digital twin is a virtual replica of a physical entity, continuously synchronized with its real-world counterpart. The idea was first outlined in engineering contexts in the early 2000s as part of product lifecycle management, comprising a physical system, its virtual model, and data links between them (Tudor et al., 2025). NASA later coined the term “digital twin” around 2010 in the aerospace domain to describe this paradigm of continuous system monitoring (Tudor et al., 2025). Fundamentally, a true digital twin is personalized, dynamically updated, and predictive (Tudor et al., 2025) – continuously ingesting data from its source and simulating future states or outcomes. This goes beyond static simulation: a twin maintains a two-way information exchange, mirroring the current state and forecasting behavior under various conditions.
Early successes in manufacturing attest to the power of this approach: virtual models of factory equipment can simulate decisions and optimizations from design through operations, enabling proactive adjustments and performance tuning (Tao & Qi, 2019). Such twins have become an integral industrial tool for anticipating issues and optimizing processes in real time (Tao & Qi, 2019).
Originally restricted to physical engineering systems, digital twins are now rapidly extending into human and social domains. As Batty (2024) notes, “although digital twins first originated as models of physical systems, they are rapidly being applied to social systems, such as cities.” Indeed, live urban digital twins integrate diverse data (traffic, energy, demographics) to support city planning and policy decisions (Batty, 2024). In medicine, the notion of a human digital twin has emerged: patient-specific virtual profiles updated with health data to predict disease progression or treatment outcomes (Laubenbacher et al., 2024).
These examples reflect a paradigm shift – digital twins are no longer confined to machines, but now model complex human-centric systems. However, expanding into such domains raises new challenges, from modeling behavioral complexity to ensuring data privacy and model validity (Batty, 2024; Tudor et al., 2025). Only a fraction of current “digital twin” implementations meet rigorous criteria for bi-directional, predictive fidelity, with many functioning as simpler static models or one-way “digital shadows” (Tudor et al., 2025). It is within this frontier – applying digital twin technology to model human learning and skill development – that our work is situated.
AI Synthetic Personas:
In parallel, advances in AI have enabled the creation of synthetic personas – realistic virtual agents or profiles that emulate human-like characteristics, decision patterns, or feedback. Traditionally, personas in design and training contexts are fictional archetypes of users meant to guide development. Now, synthetic personas are being generated through data-driven and AI techniques, often using large language models (LLMs) to imbue them with lifelike attributes and behavior. Recent studies show that modern LLMs can impersonate the writing style and perspectives of specific demographics or user groups (Kaur et al., 2025). By compiling attributes (e.g., age, profession, personality traits) into a prompt, one can instruct an AI to “become” a persona – for instance, an AI acting as a novice customer or an expert mentor – and generate responses accordingly. Kaur et al. (2025) demonstrated this by creating 765 personas with varying personal attributes and testing whether their responses to a financial well-being questionnaire matched those of real survey participants. Their findings revealed that LLM-generated personas do capture some realistic tendencies but also exhibit systematic biases (Kaur et al., 2025). Notably, as more fine-grained details were specified in a persona’s profile, the AI’s answers diverged from real responses in systematic ways (e.g., synthetic older personas showed consistently lower self-reported financial well-being than the real cohort, indicating a bias in the model’s knowledge or training data). These results underscore both the potential and the pitfalls of synthetic personas: they can provide rapid user-like feedback at scale, but must be carefully validated for fidelity and fairness (Kaur et al., 2025).
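As a concrete illustration, attribute-to-prompt persona construction can be sketched as follows. This is a hypothetical minimal example in the spirit of the studies cited above; the attribute names, wording, and the `build_persona_prompt` helper are our illustrative assumptions, not the prompts actually used by Kaur et al. (2025).

```python
# Hypothetical sketch: compile a flat attribute dictionary into a system
# prompt instructing an LLM to answer as the described persona.
# All attribute names and wording are illustrative assumptions.

def build_persona_prompt(attributes: dict) -> str:
    """Render persona attributes as a profile block inside a role-play prompt."""
    lines = [f"- {key.replace('_', ' ')}: {value}" for key, value in attributes.items()]
    return (
        "You are role-playing a survey respondent with the following profile:\n"
        + "\n".join(lines)
        + "\nAnswer every question in the first person, staying in character."
    )

persona = {"age": 67, "profession": "retired teacher", "openness": "low"}
prompt = build_persona_prompt(persona)
```

In practice, the returned string would be supplied as the system prompt of an LLM chat request, with the questionnaire items sent as user messages.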
Another compelling use of AI personas is to simulate individual human respondents – essentially creating a digital stand-in for each real person. Kaiser et al. (2025) present an approach called ASPIRE (Automated Synthetic Persona Interview and Response Engine) that pairs each human subject with a personalized LLM-based agent. Each agent is essentially a “digital twin” of the person built from that person’s demographic and psychographic profile, used to generate survey responses in parallel to the actual person. By comparing the real and AI-generated responses, they assessed how well the digital twin persona could mimic the individual (Kaiser et al., 2025). The study found the synthetic responses were often able to approximate aggregate trends and even matched individual answers above chance level, but with some notable differences: the AI twins tended to be overly optimistic (overestimating positive ratings) and showed much lower variance than real humans (Kaiser et al., 2025). These shortcomings highlight that while current AI personas can capture general patterns, they may smooth out the nuanced variability of human behavior. Nonetheless, this line of research illustrates a transformative idea – AI can create a virtual persona for every user, unlocking opportunities to test interventions or gather feedback without burdening the actual person. In a training context, such synthetic personas might serve as always-available virtual trainees or trainers, stress-testing educational strategies or providing personalized practice for learners.
The SupraWorx Approach: Integrating Digital Twins and Personas for Upskilling
Platform Overview:
SupraWorx is a smart learning and human capital development platform that integrates the above concepts into a unified system for workforce upskilling. At its core is the Upskill Manager, an AI-driven engine that maintains a digital twin for each learner (employee or student) and orchestrates personalized training interventions. The digital twin of a learner in SupraWorx is a composite AI model reflecting that individual’s current competencies, knowledge state, and learning history. This model is continually updated with performance data – such as assessment results, on-the-job metrics, and even behavioral cues – making it a living profile of the person’s skills and gaps. In effect, the platform constructs a virtual alter ego for each user: an AI mirror that knows their strengths, weaknesses, and progress in real time. Such an approach follows the NASEM criteria for human digital twins (personalized and dynamically updated), with the explicit aim of being predictive – forecasting how a learner might respond to certain training or where their skill trajectory is heading (Tudor et al., 2025). By simulating learning processes within these digital twins, the Upskill Manager can anticipate which competencies the user is likely to master with minimal support versus which areas may require intensive intervention, thereby informing proactive upskilling strategies.
SupraGraph Competency Modeling:
To structure each learner’s twin, SupraWorx employs the SupraGraph model – an integrated competency framework that covers both domain-specific skills and human–AI collaboration abilities. The SupraGraph models define a taxonomy of competencies relevant to the modern workplace, ranging from technical proficiencies to cognitive and social skills needed to work effectively alongside AI systems. Each competency is represented within the digital twin as a parameter or set of indicators.
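To make the idea tangible, a learner twin that stores SupraGraph-style competencies as parameters might be sketched as below. The class name, field names, and the simple moving-average update rule are illustrative assumptions, not the platform’s actual schema or learning model.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a learner's digital twin: competencies held as
# parameters, updated from a stream of performance observations.
# Names and the update rule are assumptions, not SupraWorx's real schema.

@dataclass
class LearnerTwin:
    learner_id: str
    # competency name -> estimated proficiency in [0, 1]
    competencies: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def record(self, competency: str, observed: float, weight: float = 0.3) -> None:
        """Fold a new observation into the current estimate using a simple
        exponential moving average (a stand-in for the real learner model)."""
        prior = self.competencies.get(competency, 0.0)
        self.competencies[competency] = (1 - weight) * prior + weight * observed
        self.history.append((competency, observed))

twin = LearnerTwin("emp-001")
twin.record("data_analysis", 0.8)  # first observation pulls the estimate toward 0.8
```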
The Upskill Manager continuously performs a “soll-ist” (target vs actual) skill gap analysis for every user: it compares the target proficiency levels required for the user’s current or aspired role (soll) against the user’s current proficiency as evidenced by the twin’s data (ist). This approach aligns with competency-based education principles and is reminiscent of methods in educational research where AI is used to map out individual learning gaps for personalized curriculum design. By automating this process, our platform mirrors what an expert mentor or HR coach would do – but at scale and in real time.
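A minimal sketch of the soll-ist comparison, assuming proficiency levels normalized to [0, 1]; the role profile and numbers are illustrative, not taken from the platform:

```python
# Toy soll-ist (target vs. actual) gap analysis: report every competency
# where the role's target level exceeds the learner's current level.

def skill_gaps(target: dict, actual: dict, threshold: float = 0.0) -> dict:
    """Return competencies where soll exceeds ist by more than `threshold`,
    mapped to the size of the gap."""
    return {
        skill: round(level - actual.get(skill, 0.0), 2)
        for skill, level in target.items()
        if level - actual.get(skill, 0.0) > threshold
    }

soll = {"data_analysis": 0.8, "python": 0.7, "communication": 0.6}  # role profile
ist = {"data_analysis": 0.5, "python": 0.7, "communication": 0.4}   # twin's estimates
gaps = skill_gaps(soll, ist)  # python meets target, so only two gaps remain
```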
When a gap is identified (say, the digital twin shows the user’s current skill in data analysis is below the target for a data scientist role), the Upskill Manager’s AI springs into action. It consults a knowledge base of training resources and opportunities (courses, projects, mentoring sessions) and recommends a personalized learning pathway to close the gap. Crucially, the recommendation algorithm is informed by the digital twin’s predictions: for example, if the twin’s simulation suggests the user learns better via hands-on projects than via online courses, the pathway might prioritize project-based experience. The twin can simulate different upskilling scenarios (“what if the user takes an advanced Python course vs. what if they shadow a senior analyst?”) and predict the outcomes on the user’s competence development. This predictive simulation draws upon historical data of similar learners as well as AI models of pedagogy – essentially performing A/B testing in silico to find which intervention would most efficiently raise the target skill. Such capability illustrates the power of marrying digital twins with AI in education: the system is not blindly assigning training, but rather model-testing it first on the virtual avatar, much like virtual drug trials on digital patient twins (Tudor et al., 2025). By the time an intervention is recommended to the real user, the platform has some level of confidence (via the twin’s forecast) that it will be effective.
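The in-silico comparison of candidate interventions can be sketched as follows. The outcome predictor here is a toy lookup table standing in for the learned pedagogy and cohort models described above; all names and numbers are illustrative assumptions.

```python
# Hedged sketch of "A/B testing in silico": score each candidate
# intervention against the twin's profile and pick the best predicted gain.
# The predictor is a toy model; a real system would use learned models.

def predict_gain(twin_profile: dict, intervention: str) -> float:
    """Toy outcome model: hands-on options help learners whose profile
    marks them as project-oriented. Numbers are purely illustrative."""
    base = {
        "online_course": 0.10,
        "advanced_python_course": 0.15,
        "shadow_senior_analyst": 0.12,
        "hands_on_project": 0.12,
    }[intervention]
    if twin_profile.get("prefers_hands_on") and intervention in (
            "shadow_senior_analyst", "hands_on_project"):
        base += 0.10
    return base

def best_intervention(twin_profile: dict, candidates: list) -> str:
    """Simulate every candidate on the twin and return the top scorer."""
    return max(candidates, key=lambda c: predict_gain(twin_profile, c))

choice = best_intervention(
    {"prefers_hands_on": True},
    ["advanced_python_course", "shadow_senior_analyst"],
)
```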
AI Synthetic Personas in Training:
While each learner has their own digital twin model internally, SupraWorx also leverages AI synthetic personas externally as part of the learning experience. These personas serve multiple roles in the ecosystem:
Virtual Mentors: The system can instantiate an AI persona of an expert in a given field to coach the learner. For example, a synthetic persona may be designed to emulate a seasoned project manager providing feedback on a user’s project plan. Powered by LLMs with domain knowledge, these mentor personas engage in dialogue, answer questions, and give advice tailored to the learner’s profile. Because they can be duplicated and customized, every learner could have a personal “mentor bot” available 24/7, with a teaching style adapted to that learner’s preferences (e.g., more Socratic questioning for one user, more step-by-step guidance for another). This concept builds on research in intelligent tutoring systems and conversational agents, enhanced by the rich context from the learner’s twin. The persona “knows” the learner’s background and can reference past progress or mistakes, making interactions highly contextual. Such AI mentors exemplify how synthetic personas can democratize access to expert guidance, though they must be monitored to ensure the advice remains accurate and aligned with pedagogical goals.
Role-Play and Simulation: Another use of synthetic personas in SupraWorx is to create realistic role-play scenarios for soft-skills training. For instance, a learner might practice a sales pitch with a synthetic customer persona. The AI persona, drawing on a large dataset of customer interactions, will simulate a lifelike conversation – asking questions, raising objections, or even displaying emotions (frustration, enthusiasm, hesitation) to mimic a real client. Such training was traditionally done with human role-players or not at all; with AI personas, it becomes scalable and customizable. A shy employee can practice public speaking with an AI “audience” persona that gives non-judgmental feedback. A manager could rehearse a difficult performance review meeting with a synthetic employee persona to improve their communication approach. Early evidence of the feasibility of this approach comes from studies where LLM-generated personas were used to simulate user responses in surveys and showed plausible, though imperfect, realism (Kaur et al., 2025; Kaiser et al., 2025). In our platform, these personas are deployed in real time to interact with learners, allowing learning-by-doing in a safe simulated environment.
Crowdsourcing Feedback: Synthetic personas can also represent aggregated viewpoints of user segments (e.g., an amalgamated persona of a “typical customer demographic”). In SupraWorx, before a learner implements an idea or completes a project, they could pitch it to a panel of AI personas representing different stakeholders. For example, an employee designing a new product could receive feedback from a synthetic “finance officer” persona concerned with cost, a “customer” persona focused on usability, and a “regulator” persona emphasizing compliance. This is inspired by the notion of a “synthetic customer base” from Kaur et al. (2025), where businesses obtain early-stage feedback on ideas by querying AI personas as stand-ins for real customers.
The advantage is speed and breadth: the learner gets multi-faceted critique instantly, which can highlight issues they might not have considered. Of course, we caution that this feedback is only as good as the data underlying the personas – biases in the training data could lead the AI panel to systematically favor or disfavor certain kinds of proposals (Kaur et al., 2025). We mitigate this by updating persona models with real user data over time and by keeping a human in the loop to review critical decisions.
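A stakeholder panel of this kind might be wired up as in the sketch below; `ask_llm` is a placeholder for a real chat-completion call, and the persona prompts are illustrative assumptions rather than the platform’s actual configuration.

```python
# Illustrative sketch of a stakeholder feedback panel: one persona prompt
# per stakeholder role, each queried independently with the same pitch.

PANEL = {
    "finance_officer": "You are a finance officer; critique cost and ROI.",
    "customer": "You are a typical customer; critique usability and value.",
    "regulator": "You are a regulator; critique compliance and risk.",
}

def ask_llm(system_prompt: str, pitch: str) -> str:
    """Placeholder for an LLM chat call (assumption, not a real API)."""
    return f"[{system_prompt.split(';')[0]}] feedback on: {pitch}"

def panel_feedback(pitch: str) -> dict:
    """Collect one critique per stakeholder persona."""
    return {role: ask_llm(prompt, pitch) for role, prompt in PANEL.items()}

feedback = panel_feedback("A subscription tier for our analytics product")
```

Querying the personas independently keeps each critique focused on its stakeholder’s concerns; a human reviewer then weighs the collected feedback, consistent with the human-in-the-loop safeguard described above.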
Results and Discussion
The integration of digital twin simulations with AI personas in SupraWorx yields a deeply personalized upskilling experience. Qualitatively, users of the platform benefit from a continuous feedback loop: the Upskill Manager not only delivers training content but also learns from each learner. Every interaction – whether a quiz result, a conversation with a mentor bot, or a project outcome – feeds back into the learner’s twin model. Over time, the twin becomes an increasingly accurate predictor of the learner’s performance under various conditions. This allows the system to fine-tune recommendations (e.g., adjusting the difficulty of suggested tasks or switching the style of mentorship persona) to maximize engagement and growth. Such adaptivity is a key promise of cognitive digital twin approaches in education, where the twin acts as an intermediary of knowledge flow between the learner and the organization’s training resources (cf. concepts in IEEE Future Directions on cognitive digital twins, 2022). In preliminary deployments within our organization, we observed anecdotal improvements in training efficiency – employees reached target competency levels faster when guided by the AI-driven pathways compared to a control group with a one-size-fits-all training program. While these results are early and require rigorous validation, they suggest that a twin-plus-persona approach can make training both faster and more aligned to individual needs.
From a research perspective, one significant finding is the feasibility of maintaining high-fidelity digital learner profiles at scale. Traditional adaptive learning systems struggled to keep learner models updated and accurate, especially when skills are complex and multifaceted.
Our use of modern AI (including foundation models for language and analytics) enables richer modeling: the system can process unstructured data like project reports or discussion transcripts to update the twin’s view of a learner’s soft skills or domain understanding. This aligns with the observation that the convergence of IoT-like data streams and generative AI makes real-time, nuanced digital twins more attainable than before (Tudor et al., 2025). The twin does not rely solely on test scores; it ingests a mosaic of performance indicators to present a holistic picture. This is important because human skills (especially soft skills) are context-dependent and not easily measured by tests alone. By modeling context (e.g., how a learner behaves under pressure, gleaned from a role-play simulation with an AI persona), the twin achieves a more comprehensive representation of competency.
However, our approach also surfaces several challenges and considerations. First, the issue of validation: how do we ensure the digital twin’s predictions or the personas’ behaviors are accurate reflections of reality? In engineering, digital twins are validated against sensor data and physical laws, but for human-centric twins, validation is more nuanced. Tudor et al. (2025) point out that very few studies on human digital twins incorporate rigorous verification, validation, and uncertainty quantification (VVUQ) – only 2 out of 149 reviewed studies did so comprehensively. We tackle this by periodically evaluating the twin’s predictions against real outcomes. For example, if the twin predicts a user will score 90% after a course but the user scores only 70%, that discrepancy triggers an update to the twin’s learning algorithms and flags that aspect for review. We also involve human mentors to audit AI recommendations, especially early in deployment, effectively creating a human-AI loop to calibrate the system.
Over time, as the twin becomes more accurate, the reliance on human oversight can be reduced, but never eliminated for critical decisions. We echo the recommendation that human digital twins should incorporate uncertainty measures – our system attaches confidence intervals to its skill estimates and only automates decisions when confidence is high.
Second, the synthetic personas must be handled carefully to avoid propagating AI biases or inappropriate behaviors. The work of Kaur et al. (2025) demonstrated how biases can creep in based on age or other attributes in persona responses. In a training context, bias might manifest as an AI mentor consistently giving different quality feedback to users of different backgrounds, if the underlying model has learned biased associations. To mitigate this, we have diversified the training data for our personas and implemented filters to detect potentially biased or stereotypical responses. We also transparently communicate to users that they are interacting with AI and encourage feedback: if a user feels an AI persona’s advice was off-base or biased, the system logs that and retrains on such incidents. Ethically, the deployment of AI personas raises questions of trust and psychological impact – we take care that users do not become over-reliant on AI advice or feel deceived. The personas are positioned as assistive tools complementing human mentorship, not replacing human judgment. This stance aligns with a broader perspective in the AI ethics community that emphasizes augmentation over replacement (Agrawal et al., 2023; Agrawal et al., 2019).
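The two guardrails described above (flagging the twin when a prediction misses the real outcome, and automating only when the confidence interval is tight) can be sketched as follows; the thresholds and interval handling are illustrative assumptions, not the platform’s calibrated values.

```python
# Sketch of two validation guardrails for a human digital twin.
# Tolerance and interval-width thresholds are illustrative assumptions.

def needs_review(predicted: float, actual: float, tolerance: float = 0.10) -> bool:
    """Flag the twin's model for review when its prediction misses the
    observed outcome by more than the tolerance (scores in [0, 1])."""
    return abs(predicted - actual) > tolerance

def can_automate(estimate: float, ci_low: float, ci_high: float,
                 max_width: float = 0.10) -> bool:
    """Automate a decision only when the skill estimate's confidence
    interval is narrow; otherwise defer to a human mentor."""
    return (ci_high - ci_low) <= max_width

flag = needs_review(predicted=0.90, actual=0.70)  # a 20-point miss triggers review
```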
Finally, we discuss the broader implications for workforce development. By creating a digital twin for each worker, organizations can potentially forecast skill supply and demand internally. For instance, aggregating the twins of all employees might reveal emergent skill gaps company-wide (e.g., an upcoming need for AI literacy) and inform strategic training investments. It also enables a form of “what-if” analysis at the organizational level: What if we retrain 20% of our staff in cybersecurity – how would that improve our resilience? Such questions could be explored by simulating training interventions on the digital workforce twins. This approach could vastly improve how companies navigate the fast-paced skill requirements of the modern economy. The Science perspective by Agrawal et al. (2023) argues that AI has the potential to reduce inequality by empowering lower-skilled workers to perform at higher levels through assistive technology and training. Our work operationalizes this idea: an AI-augmented upskilling system can take an entry-level employee and chart a tailored path for them to acquire high-value skills, effectively accelerating their career progression. If successful at scale, this could help organizations fill talent shortages internally while offering employees continuous growth – a win-win that addresses some challenges of the AI-driven economy (Agrawal et al., 2023). Of course, there is the flip side that AI may also displace certain tasks; however, by focusing on upskilling, we align with the vision that the future workforce will evolve alongside AI, not be made redundant by it.
Conclusion
We have presented a cohesive narrative of how AI synthetic personas and digital twins are integrated in the SupraWorx platform to drive personalized upskilling, grounded in contemporary research. By referencing peer-reviewed literature, we positioned our approach within cutting-edge developments in AI and education.
Our Upskill Manager, powered by the SupraGraph competency model, exemplifies a new class of cognitive learning systems that maintain a living digital replica of each learner to inform tailored training strategies. Meanwhile, the use of AI-generated personas for mentoring, simulation, and feedback showcases the practical utility of synthetic agents in enhancing human learning experiences. Together, these innovations transform the training process from a static curriculum into a responsive, data-driven dialogue between the learner’s real and virtual selves. This work contributes to the broader discourse on AI in workforce development by demonstrating a concrete implementation that aligns with theoretical promises noted in recent scholarship – from the dynamic modeling of human skills (Tudor et al., 2025) to the augmentation of workers through AI (Agrawal et al., 2023). It also surfaces key challenges of validation and ethics that must be addressed as such systems become more widespread.
Going forward, we envision further research to rigorously evaluate learning outcomes with SupraWorx, comparing it against traditional training in controlled studies. We also plan to incorporate more sophisticated human-AI interaction techniques, ensuring that the recommendations and feedback provided by the system are not only algorithmically sound but also pedagogically effective and psychologically supportive. The ultimate goal is to refine this approach such that AI-driven upskilling platforms can be safely and effectively deployed in real organizations and educational institutions. By empowering individuals with personalized AI guidance and simulating their futures before they live them, we edge closer to a paradigm of precision education, analogous to precision medicine’s tailoring of treatment to patients. In the era of rapid technological change, such AI-boosted learning ecosystems may prove critical in keeping the workforce adaptable, knowledgeable, and resilient.
References
Agrawal, A., Gans, J. S., & Goldfarb, A. (2023). Do we want less automation? Science, 381(6654), 155–158. DOI: 10.1126/science.adh9429
Batty, M. (2024). Digital twins in city planning. Nature Computational Science, 4(3), 192–199. DOI: 10.1038/s43588-024-00606-7
Kaiser, C., Kaiser, J., Manewitsch, V., Rau, L., & Schallner, R. (2025). Simulating human opinions with large language models: Opportunities and challenges for personalized survey data modeling. Proceedings of ACM UMAP 2025. (Extended abstract).
Kaur, A., Aird, A., Borman, H., Nicastro, A., Leontjeva, A., Pizzato, L., & Jermyn, D. (2025). Synthetic voices: Evaluating the fidelity of LLM-generated personas in representing people’s financial wellbeing. Proceedings of ACM UMAP 2025. (Short paper).
Tao, F., & Qi, Q. (2019). Make more digital twins. Nature, 573(7775), 490–491. DOI: 10.1038/d41586-019-02849-1
Tudor, B. H., Shargo, R., Gray, G. M., Fierstein, J. L., Kuo, F. H., Johnson, J. T., … & Ahumada, L. M. (2025). A scoping review of human digital twins in healthcare applications and usage patterns. npj Digital Medicine, 8(1), Article 587. DOI: 10.1038/s41746-025-01910-w



