NEW YORK — Meta Platforms Inc. is rolling out new tracking software on the computers of its U.S.-based employees to capture mouse movements, clicks and keystrokes, using the data to train artificial intelligence models aimed at building autonomous AI agents capable of performing everyday work tasks, according to internal memos obtained by Reuters.

The tool, known as the Model Capability Initiative, or MCI, will operate on a curated list of work-related applications and websites. It will also take occasional snapshots of screen content to provide context for the interactions, a staff AI research scientist posted Tuesday in an internal channel for the company’s Meta Superintelligence Labs team.
Meta’s push reflects the intensifying race among tech giants to develop more capable AI agents that can navigate computer interfaces like humans — selecting dropdown menus, using keyboard shortcuts and handling multi-step digital workflows. Current models often struggle with these practical interactions despite advances in language understanding.
“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus,” Meta spokesperson Andy Stone said in a statement. “To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models.”
The initiative forms part of a broader effort rebranded as the Agent Transformation Accelerator, according to a separate memo from Meta CTO Andrew Bosworth. Bosworth told staff the company aims for a future where AI agents primarily handle routine work while humans direct, review and refine their performance. The data collected will help agents learn to identify when human intervention occurs and improve autonomously in subsequent attempts.
Meta emphasized that the tracking data will not be used for employee performance evaluations or any purpose beyond AI model training. The company said safeguards are in place to protect sensitive content, though specifics were not detailed in the memos.
The announcement quickly sparked internal debate and external backlash. Employees expressed concerns about privacy, surveillance and the long-term implications for job security in discussions on internal forums. Some viewed the program as turning workers into unwitting trainers for systems that could eventually automate their roles. Online reactions ranged from accusations of dystopian workplace monitoring to pragmatic acceptance that high-quality interaction data remains scarce for training reliable agents.
Privacy advocates and labor groups raised questions about consent, data minimization and potential misuse. While Meta limits the tool to U.S.-based full-time employees and contingent workers on work devices and approved applications, critics worry about the precedent for broader workplace surveillance in the AI era. Similar tracking tools have drawn scrutiny at other companies, though Meta’s explicit link to training replacement-level agents has amplified the reaction.
The move comes as Meta ramps up its massive AI investments. The company plans to spend roughly $140 billion on AI infrastructure and related efforts in 2026, nearly double the previous year’s outlay. CEO Mark Zuckerberg has repeatedly positioned AI as central to the company’s future, from improving content recommendations on Facebook and Instagram to developing advanced agents that could transform productivity tools.
Building effective computer-using agents requires vast amounts of real-world demonstration data showing not just what actions to take but the precise sequences of mouse clicks, keystrokes and navigation decisions humans make. Public web data or synthetic examples often fall short in replicating the nuances of enterprise software, internal tools and dynamic interfaces. By harvesting anonymized interaction data from its own workforce, Meta aims to close that gap without relying solely on expensive human annotation or simulated environments.
Industry experts note that Meta is not alone in pursuing this approach. Several tech firms and AI startups are exploring ways to capture human-computer interaction data, whether through voluntary contributions, synthetic generation or controlled monitoring. However, Meta’s scale — with tens of thousands of U.S. employees using diverse internal systems — offers a rich, varied dataset that could accelerate progress.
The timing coincides with Meta’s aggressive hiring in AI research and with efficiency initiatives across other parts of the business. Reports have circulated about potential layoffs in non-AI divisions, adding to employee anxiety that the tracking program could contribute to workforce reductions as agents mature.
Meta has a history of heavy internal data collection for product improvement, from user behavior on its social platforms to developer interactions with its tools. The company maintains strict policies on data handling and has faced past regulatory scrutiny over privacy practices, leading to billions in fines and settlements. Officials insist the new tool includes protections against capturing or retaining personal or highly sensitive information.
Still, the rollout highlights tensions in the AI development race. On one side is the need for high-fidelity training data to create genuinely useful agents; on the other, growing societal and employee discomfort with pervasive monitoring. European privacy regulations such as GDPR impose stricter limits on workplace surveillance, potentially complicating similar initiatives for Meta’s international staff.
As AI agents evolve, their ability to autonomously handle tasks like scheduling, data entry, report generation or customer support workflows could reshape white-collar work. Meta’s internal memos frame the effort positively as empowering employees to focus on higher-value work by offloading routine activities. Critics counter that it risks accelerating job displacement without adequate transition support.
The program’s effectiveness will depend on the quality and diversity of the captured data. Mouse trajectories, click patterns and keystroke dynamics provide rich signals about intent, hesitation and workflow efficiency that text-based logs alone cannot convey. Occasional screen snapshots add crucial context, such as the layout of specific applications or the content being manipulated.
Meta has not disclosed technical details about data storage, anonymization techniques or deletion policies. Employees were informed of the rollout, but it remains unclear whether participation is mandatory or whether opt-out options exist for certain roles.
The development underscores how Big Tech companies are increasingly turning inward for AI training resources as external data sources face legal challenges, quality issues or saturation. Similar efforts have included using customer service transcripts, code repositories and internal documents, but granular interaction data represents a newer frontier.
For now, the Model Capability Initiative is limited to U.S. employees and specific applications. Its success could influence whether Meta expands the approach or inspires competitors to follow suit. As the technology industry grapples with the dual challenges of advancing AI capabilities and addressing ethical concerns around labor and privacy, Meta’s experiment will be closely watched.
Company leaders have signaled confidence that transparent communication and strict boundaries will alleviate concerns. Whether the initiative ultimately boosts AI performance enough to justify the surveillance tradeoff remains an open question that will likely be tested in the coming months as agents trained on the new data enter internal testing.
In the broader context of 2026’s AI boom, Meta’s decision reflects a pragmatic — if controversial — step toward solving one of the field’s persistent bottlenecks: teaching machines not just what to do, but exactly how humans do it in the messy reality of daily digital work.

