As a part of TechCrunch’s ongoing Women in AI series, which seeks to give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch interviewed Lakshmi Raman, the director of AI at the CIA. We talked about her path to director as well as the CIA’s use of AI, and the balance that needs to be struck between embracing new tech while deploying it responsibly.
Raman has been in intelligence for a long time. She joined the CIA in 2002 as a software developer after earning her bachelor’s degree from the University of Illinois Urbana-Champaign and her master’s degree in computer science from the University of Chicago. Several years later, she moved into management at the agency, eventually going on to lead the CIA’s overall enterprise data science efforts.
Raman says that she was fortunate to have women role models and predecessors as a resource at the CIA, given the intelligence field’s historically male-dominated ranks.
“I still have people who I can look to, who I can ask advice from, and I can approach about what the next level of leadership looks like,” she said. “I think that there are things that every woman has to navigate as they’re navigating their career.”
In her role as director, Raman orchestrates, integrates and drives AI activities across the CIA. “We think that AI is here to support our mission,” she said. “It’s humans and machines together that are at the forefront of our use of AI.”
AI isn’t new to the CIA. The agency has been exploring applications of data science and AI since around 2000, Raman says, particularly in the areas of natural language processing (i.e., analyzing text), computer vision (analyzing images) and video analytics. The CIA tries to stay on top of newer trends, such as generative AI, she added, with a roadmap that’s informed both by industry and academia.
“When we think about the huge amounts of data that we have to consume within the agency, content triage is an area where generative AI can make a difference,” Raman said. “We’re looking at things like search and discovery aid, ideation aid, and helping us to generate counterarguments to help counter analytic bias we might have.”
There’s a sense of urgency within the U.S. intelligence community to deploy any tools that might help the CIA combat growing geopolitical tensions around the world, from threats of terror motivated by the war in Gaza to disinformation campaigns mounted by foreign actors (e.g., China, Russia). Last year, the Special Competitive Studies Project, a high-powered advisory group focused on AI in national security, set a two-year timeline for domestic intelligence services to get beyond experimentation and limited pilot projects to adopt generative AI at scale.
One generative AI-powered tool that the CIA developed, Osiris, is a bit like OpenAI’s ChatGPT, but customized for intelligence use cases. It summarizes data — for now, only unclassified and publicly or commercially available data — and lets analysts dig deeper by asking follow-up questions in plain English.
Osiris is now being used by thousands of analysts not just within the CIA’s walls, but also throughout the 18 U.S. intelligence agencies. Raman wouldn’t reveal whether it was developed in-house or using tech from third-party companies but did say that the CIA has partnerships in place with name-brand vendors.
“We do leverage commercial services,” Raman said, adding that the CIA is also employing AI tools for tasks like translation and alerting analysts during off hours to potentially important developments. “We need to be able to work closely with private industry to be able to help us not only provide the larger services and solutions that you’ve heard of, but even more niche services from non-traditional vendors that you might not already think of.”
A fraught technology
There’s plenty of reason to be skeptical of, and concerned about, the CIA’s use of AI.
In February 2022, Senators Ron Wyden (D-OR) and Martin Heinrich (D-NM) revealed in a public letter that the CIA, despite being generally barred from investigating Americans and American businesses, has a secret, undisclosed data repository that includes information collected about U.S. citizens. And last year, an Office of the Director of National Intelligence report showed that U.S. intelligence agencies, including the CIA, buy data on Americans from data brokers like LexisNexis and Sayari Analytics with little oversight.
Were the CIA to ever use AI to pore over this data, many Americans would most certainly object. It’d be a clear violation of civil liberties and, owing to AI’s limitations, could result in seriously unjust outcomes.
Several studies have shown that predictive crime algorithms from firms like Geolitica are easily skewed by arrest rates and tend to disproportionately flag Black communities. Other studies suggest facial recognition results in a higher rate of misidentification of people of color than of white people.
Besides bias, even the best AI today hallucinates, or invents facts and figures in response to queries. Take Microsoft’s meeting summarization software, for example, which occasionally attributes quotes to nonexistent people. One can imagine how this might become a problem in intelligence work, where accuracy and verifiability are paramount.
Raman was adamant that the CIA not only complies with all U.S. law but also “follows all ethical guidelines” and uses AI “in a way that mitigates bias.”
“I would call it a thoughtful approach [to AI],” she said. “I would say that the approach we’re taking is one where we want our users to understand as much as they can about the AI system that they’re using. Building AI that’s responsible means we need all of the stakeholders to be involved; that means AI developers, that means our privacy and civil liberties office [and so on].”
To Raman’s point, regardless of what an AI system is designed to do, it’s important that the designers of the system make clear the areas where it could fall short. In a recent study, North Carolina State University researchers found that AI tools, including facial recognition and gunshot detection algorithms, were being used by police who weren’t familiar with the technologies or their shortcomings.
In a particularly egregious example of law enforcement AI abuse, perhaps born of ignorance, the NYPD reportedly once used photos of celebrities, distorted images and sketches to generate facial recognition matches on suspects in cases where surveillance stills yielded no results.
“Any output that’s AI generated should be clearly understood by the users, and that means, obviously, labeling AI-generated content and providing clear explanations of how AI systems work,” Raman said. “Everything we do in the agency, we are adhering to our legal requirements, and we are ensuring that our users and our partners and our stakeholders are aware of all of the relevant laws, regulations and guidelines governing the use of our AI systems, and we are complying with all of these rules.”
This reporter certainly hopes that’s true.
Source: TechCrunch