Lambert here: A snapshot in time rather than a worked theory of the case. The deck reads: “Musk’s loyalists at DOGE have infiltrated dozens of federal agencies, pushed out tens of thousands of workers, and siphoned millions of people’s most sensitive data. The next step: Unleash the AI.” But the article doesn’t really deliver on “Unleash the AI”, unless a chatbot be AI. So, interesting but diffuse. There are so many names and firms, I’m going to index them all, though they may not appear in this excerpt. If you have arrived here because a name is in the index, but the name is not in the excerpt, go to the Wired URL.
Along the way, DOGE also gained access to untold terabytes of data. Trump had given Musk and his operatives carte blanche to tap any unclassified system they pleased. One of their first stops: a database previously breached more than a decade ago by alleged Chinese cyberspies that contained investigative files on tens of millions of US government employees. Other storehouses thrown open to DOGE may have included federal workers’ tax records, biometric data, and private medical histories, such as treatment for drug and alcohol abuse; the cryptographic keys for restricted areas at federal facilities across the country; the personal testimonies of low-income-housing recipients; and granular detail on the locations of particularly vulnerable children.
What did DOGE want with this kind of information? None of it seemed relevant to Musk’s stated aim of identifying waste and fraud, multiple government finance, IT, and security specialists told WIRED. But in treating the US government itself as a giant dataset, the experts said, DOGE could help the Trump administration accomplish another goal: to gather much of what the government knows about a given individual, whether a civil servant or an undocumented immigrant, in one easily searchable place.
WIRED spoke with more than 150 current and former federal employees, experts, and Musk supporters across more than 20 agencies to expose the inner workings of DOGE. Many of these sources requested anonymity to speak candidly about what DOGE has done—and what it might do next.
Among [Steve] Davis’ early recruits was Zsombor (Anthony) Jancso, a San Francisco–based engineer and former Palantir employee in his mid-twenties. After Palantir, Jancso had worked on a project called Accelerate X, which purported to offer “a modern OS for government” with solutions “delivered in days.” His cofounder, an MIT-educated engineer named Jordan Wick, joined DOGE too.
A few weeks after the 2024 election, an online handle associated with Jancso reached out to a group of people who had participated in an AI challenge put on by the US Space Force. The person said they were looking for “hardcore engineers” and instructed applicants to send their GitHub or LinkedIn to @DOGE on X and reply privately with their X handle. (To do all this, they’d need to pay for X premium.) Not long after that, the same handle posted in a group for Palantir alums: “This is a historic opportunity to build an efficient government, and to cut the federal budget by 1/3.”
Luke Farritor, a 23-year-old engineer, quickly joined in the DOGE recruitment effort.
Musk, meanwhile, was spending time at Mar-a-Lago and getting a crash course on American civics, as taught by an array of Washington bureaucrats, venture capitalists, and right-wing shitposters on X. One of Musk’s advisers was Antonio Gracias, a private equity investor and early Tesla backer, who later summarized what they’d learned on a podcast: “A department just basically asks for money from Treasury and they send it out.”
Soon it became clear that DOGE wanted GSA to adopt one product in particular: an AI chatbot that could plug into the agency’s main portal, the Enterprise Data Solution. Such a tool would allow a handful of DOGE technicians to ask questions in plain language and get answers from vast stores of government data. (How this would accord with the GSA’s Internal Data Sharing Policy, which mandates that requests for certain kinds of controlled unclassified information must be approved by supervisors, was unclear.) To DOGE operatives unfamiliar with GSA’s systems, this might have seemed like a quick build—particularly if the team used an off-the-shelf large language model, like Claude or Gemini or Llama, as a starting point.
But the engineers at GSA knew the project DOGE had in mind was far more complex than it seemed. The Enterprise Data Solution is a maze of disparate databases, analytical tools, and machine-learning systems, all with tightly controlled permissions. Creating even a quick chatbot that could tap into these datasets and produce useful answers was anything but trivial. During the Biden administration, employees at TTS had started exploring the possibility of building a simpler chatbot called GSAi, which they hoped would increase productivity by helping people write emails and eventually process contract and procurement data. By the end of Biden’s term, though, there was no GSA chatbot on the horizon.
“Anyone can build a chatbot today; it’s really not that interesting,” a data scientist remarked during a February meeting about GSAi. A version of it—one that wasn’t directly connected to EDS—was set to go live soon. “The interesting part is in the quality. Can we build a high-quality chatbot, one where our domain expertise is being applied?”
To bridge the gap, GSA engineers proposed building what they called a discovery layer, an intermediary designed to decode user queries, identify relevant data sources, and generate precise searches that returned data the AI could interpret. The proposal, pitched to the A-suite—the select group that sources say includes DOGE members like Davis and Hollander—would also give GSA the ability to audit queries and check the quality of the responses. But for that to work, every database would need to be mapped, its columns and metadata described and categorized, ensuring the system understood what data lived where. None of this would happen automatically. It would be a manual, painstaking process.
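The discovery layer the engineers describe is, in essence, a query router sitting between the chatbot and the agency's databases: a hand-built catalog of what data lives where, a matching step that turns a plain-language question into candidate data sources, and an audit trail of every query. A minimal sketch of that pattern, with all table names, schemas, and scoring logic invented for illustration (nothing here reflects GSA's actual systems):

```python
from dataclasses import dataclass

@dataclass
class TableMeta:
    # One hand-written catalog entry; per the article, describing every
    # database's columns and metadata like this would be manual work.
    name: str
    description: str
    columns: dict  # column name -> human-readable meaning

# Hypothetical catalog standing in for the mapped EDS databases.
CATALOG = [
    TableMeta("contracts", "federal contract awards",
              {"vendor": "awardee name", "amount": "award value in USD"}),
    TableMeta("leases", "GSA building leases",
              {"address": "property address", "rent": "annual rent in USD"}),
]

def route(query: str) -> list:
    """Discovery step: match a plain-language query against the catalog
    by simple keyword overlap, returning candidate tables (best first)
    that a downstream model could then search."""
    words = set(query.lower().split())
    hits = []
    for t in CATALOG:
        text = f"{t.name} {t.description} " + " ".join(
            f"{col} {desc}" for col, desc in t.columns.items())
        score = len(words & set(text.lower().split()))
        if score:
            hits.append((score, t.name))
    return [name for _, name in sorted(hits, reverse=True)]

AUDIT_LOG = []

def answer(query: str) -> list:
    # Every query is recorded, giving the agency the ability to audit
    # what was asked -- one capability the GSA proposal promised.
    tables = route(query)
    AUDIT_LOG.append({"query": query, "tables": tables})
    return tables

print(answer("total contract amount by vendor"))  # → ['contracts']
```

The point of the sketch is why this is a "multiyear play": the keyword matcher is trivial, but the catalog it depends on is not — every real database's columns and metadata must be described by hand before the routing step can return anything trustworthy.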
As the GSA engineers discussed the scope of what needed to happen, according to sources familiar with the events, they seemed deflated. DOGE’s timeline was unrealistic. “This is a multiyear play,” one employee said bluntly in a meeting about the project, “and they think in terms of days and weeks.”
On March 7, DOGE got one of the things it seemed to want most from GSA: a chatbot that could automate work previously done by federal employees. The tool rolled out to some 1,500 employees at GSA, with an agencywide launch planned a week later. An internal memo about the tool touted the “endless” tasks it could help with: “draft emails, create talking points, summarize text, write code.” The memo hinted at the dangers of deploying chatbots at the federal level, warning workers not to “type or paste” internal or personally identifiable information as inputs.
People who used it weren’t impressed. “It’s about as good as an intern,” one GSA employee told WIRED. “Generic and guessable answers.” This version of GSAi almost certainly couldn’t interact with the EDS discovery layer first proposed by engineers. More likely it was just the first step in an iterative approach. As one official said in the February meeting about the project, the first goal could be to “deliver this sort of janky, doesn’t-work-all-the-time chatbot” to pave the way for a “turbo-charged” version down the line.
