Competitive Advantage: Everyone Wants to DARPA
— Pranay Kotasthane
Japan. The UK. Germany. Even the US. These countries are now attempting their own versions of the original Defense Advanced Research Projects Agency (DARPA). The success of DARPA's 2013 grant to Moderna for using mRNA to develop vaccines seems to have further fuelled the FOMO.
Will they succeed? Answering that question requires reimagining what 'success' implies in this context. Most of DARPA's initiatives fail, by design. Had most of them been marketable, it would've meant only one thing: the agency wasn't betting on the groundbreaking ones. Secondly, even the successful ones, such as the ARPANET, require long gestation periods. In essence, DARPA replicas need to be set up with the a priori acknowledgement — and requirement — that they will fail most of the time, and must be prepared for long periods with zero successes.
That is a difficult act to accomplish. The Economist (June 5, 2021) describes a few principles that made DARPA tick:
- An anti-bureaucracy setup. From The Economist:
Whereas most (R&D agencies) focus on basic research, DARPA builds things. Whereas most use peer review and carefully selected measurements of progress, DARPA strips bureaucracy to the bones (the conversation in 1965 which led the agency to give out $1m for the first cross-country computer network, a forerunner to the internet, took just 15 minutes). All work is contracted out. DARPA has a boss, a small number of office directors and fewer than 100 programme managers, hired on fixed short-term contracts, who act in a manner akin to venture capitalists, albeit with the aim of generating specific outcomes rather than private returns.
- Freedom to try and fail. This often means no ministerial oversight and, more crucially, a consensus among political actors that such agencies will be given a long rope.
- An assured customer from within the government. Some of the US's own DARPA copies haven't met with similar success partly because they lack an assured customer like the US Department of Defense, ready to deploy the products of grantees.
Apart from these three elements, there's another underappreciated factor in my view: a powerful national adversary. DARPA was given the freedoms it enjoyed because of the threat the USSR posed. The narrative aspect (democracy vs communism) was no less important in getting scientists on board with dual-use inventions.
What about India's chances at replicating DARPA? I would wager that factors #2 and #3 are not difficult for India to manage. There is precedent for India's national security agencies being left out of parliamentary oversight and financial audits. What's more difficult is #1. For a government to create an anti-bureaucratic setup that pursues excellence requires immense state capacity of the kind that Indian governments lack.
Of course, Indian governments will have much less money to spare than their western counterparts. But less money shouldn't translate directly into taking fewer risks. It only means that the areas India chooses to focus on should differ from the ones the US does. As economists would say, focus on the comparative advantages.
More importantly, India's revealed preferences show that in its collective imagination, Pakistan was, until now, the most significant adversary. Managing such an adversary didn't require cutting-edge technology; it only required being marginally better than Pakistan. But a much stronger and more advanced PRC poses a challenge that requires India to come out strong on all fronts, including technology. Herein lies the impetus for India to be audacious.
Game of Drones #1: Autonomous Weapons Need Babysitting
— Aditya Ramanathan
In July 2020, Elon Musk, the CEO of Tesla, claimed the company would make its first fully autonomous car in a few months. Notwithstanding Musk’s ambitions, driverless vehicles still have key challenges to overcome. These include developing sensors that can operate in difficult environments and artificial intelligence that isn’t easily confused.
Autonomous weapon systems share these challenges with self-driving cars. A new study by Arthur Holland Michel, a researcher at the United Nations Institute for Disarmament Research (UNIDIR), argues that since the conditions of combat are “harsh, dynamic and adversarial,” autonomous systems will be prone to failures of various kinds.
Michel identifies four underlying data issues that increase the risk of failure. One is incomplete data, which can cause an autonomous system to “misclassify objects and activities or fail to recognize its progress towards a given goal.”
The second is low-quality data – essentially raw information beset by a low signal-to-noise ratio.
The third is incorrect or false data, which can arise out of either faulty sensors or acts of deception.
The fourth is discrepant data. These could include “edge cases” or “corner cases” as well as other anomalies that “do not fit neatly within the structured categories that human designers code AI systems to recognize or respond to.”
Much like self-driving cars that struggle to navigate poor weather or fatally strike pedestrians, autonomous weapons are bound to make mistakes. Michel argues these failures are made more likely by harsh and variable wartime conditions and the actions of adversaries. The persistent fog of war, and well-known historical instances of military deception and concealment, suggest autonomous weapons will have to undergo much further development, and be tested in the rigours of combat, before commanders have the confidence to delegate greater responsibilities to them.