Tuesday, April 16, 2024

No assurance that improved security will result from AI-driven intelligence


<p>It is widely anticipated that artificial intelligence (AI) will enhance the whole range of information gathering and analysis, resulting in timely and accurate decision-making. Although the current generation may not remember it, there was a similar buzz in 1998–1999 when then-CIA Director George Tenet established “In-Q-Tel,” a venture capital firm that combined the best elements of the private and public sectors to streamline government technology procurement. It was originally to be called “Peleus,” but “In-Q-Tel” was chosen instead as a nod to “Q” of the James Bond films.</p>
<p><img decoding="async" class="alignnone wp-image-450838" src="https://www.theindiaprint.com/wp-content/uploads/2024/02/theindiaprint.com-no-assurance-that-improved-security-will-result-from-ai-driven-intelligence-2024-2-750x500.jpg" alt="theindiaprint.com no assurance that improved security will result from ai driven intelligence 2024 2" width="1079" height="719" title="No assurance that improved security will result from AI-driven intelligence 3" srcset="https://www.theindiaprint.com/wp-content/uploads/2024/02/theindiaprint.com-no-assurance-that-improved-security-will-result-from-ai-driven-intelligence-2024-2-750x500.jpg 750w, https://www.theindiaprint.com/wp-content/uploads/2024/02/theindiaprint.com-no-assurance-that-improved-security-will-result-from-ai-driven-intelligence-2024-2-768x512.jpg 768w, https://www.theindiaprint.com/wp-content/uploads/2024/02/theindiaprint.com-no-assurance-that-improved-security-will-result-from-ai-driven-intelligence-2024-2-150x100.jpg 150w, https://www.theindiaprint.com/wp-content/uploads/2024/02/theindiaprint.com-no-assurance-that-improved-security-will-result-from-ai-driven-intelligence-2024-2.jpg 800w" sizes="(max-width: 1079px) 100vw, 1079px" /></p>
<p>This was done in response to the huge challenges American intelligence services were facing in organizing and classifying “unstructured data.” Investigative journalist Seymour Hersh exposed this problem at the National Security Agency (NSA) in his November 28, 1999, piece “The Intelligence Gap” in The New Yorker. Another goal of In-Q-Tel was to support the dedicated section the CIA had established to gather and combine intelligence information in its search for Osama bin Laden.</p>
<p>But before 9/11, there were no public reports of how In-Q-Tel had assisted in this search. In the six months after the September 11 terrorist attacks, however, “applications for In-Q-Tel funding have skyrocketed from about 700 during the operation’s first two-and-a-half years of existence to more than 1,000,” according to a statement released by Computer Network on April 20, 2002.</p>
<p>Given this context, it is important to examine a November 2023 Stanford University paper that quotes former US National Security Council intelligence expert Amy Zegart, who states that AI has the potential to be “incredibly useful for augmenting the abilities of humans… from large amounts of data that humans can’t connect as readily.” For instance, an AI system might reduce the man-hours needed to track Chinese surface-to-air missiles by sifting through hundreds of satellite photos, freeing analysts to reflect more deeply on China’s objectives.</p>
<p>This matters because of the “five mores” (challenges) that intelligence agencies now face, according to her: “more threats” from actors operating regardless of location; “more data” that is “drowning” analysts; “more speed”; “more decision-makers”; and “more competition” that leaves the US at risk on the world stage.</p>
<p>Since some of us are not used to the American method of forming opinions in order to make decisions, the third, fourth, and fifth items need more explanation. According to Zegart, once Soviet missiles were discovered in Cuba during the 1962 Cuban missile crisis, US President John F. Kennedy had 13 days to weigh his options for a response. In 2001, President George W. Bush had only 13 hours after the 9/11 attacks to decide on action. Today, the decision window could be as short as 13 minutes.</p>
<p>The fourth “more” is that the White House is no longer the only place where decisions are made. Congress shapes policy, and unlike in other nations, opinion is swayed by the media and 302 million social media users. The fifth is “more competition,” as anybody with a mobile phone can now gather information. According to a report by France 24 last year, Mnemonic, a Berlin-based NGO documenting human rights violations in Ukraine, has gathered three million digital records since the Russian invasion.</p>
<p>Zegart further emphasizes the challenges AI faces in being adopted for strategic judgments and decision-making. First, only a select few major private companies have the capacity to create “frontier models”; who would be in charge of securing these models once they are transferred to government use is a concern. Second, who will mitigate the dangers involved? Third is ethical oversight. She wants American academics and others to pose “tough questions” about human-centered AI in national security. Could we do the same in India? The fourth danger, which might affect AI’s ultimate analytical capacity, she frames this way: “If you contemplate nuclear or financial disaster, how can we reduce such risks? AI excels at adhering to regulations. People are quite adept at breaking the law.”</p>
<p>To this, I would add a further perspective on the use of AI in national security decision-making. Having examined several instances of alleged “intelligence failure,” I have found that human error in drawing the right conclusions has hindered ultimate decision-making more than any deficiency of information. How can AI fix that?</p>
<p>According to a 1974 study by the Strategic Studies Institute of the Army War College, Pennsylvania, decision-makers before the 1941 Pearl Harbor attack, which claimed the lives of about 2,400 soldiers and destroyed eight battleships, three cruisers, and 188 aircraft, had nine prior indicators that, if taken seriously, could have prompted preventive measures. The Agranat Commission found that Israeli Prime Minister Golda Meir likewise disregarded a number of advance indications during the initial phase of the Yom Kippur War in 1973. The New York Times made the same observation about the October 7 Hamas attack on December 1, 2023.</p>
<p>In Beirut, Lebanon, on October 23, 1983, car bombs claimed the lives of 58 French troops and 241 American marines. It was dismissed as an intelligence failure until 2001, when the NSA’s 1983 notice surfaced in a civil damages lawsuit in the US District Court for the District of Columbia, connecting Iran to the explosion and naming Ali Akbar Mohtashamipour, then Iran’s ambassador to Syria.</p>
<p>Our Army think tank, the Centre for Land Warfare Studies, found that the Research and Analysis Wing, the Intelligence Bureau, and the Army had issued forty-three warnings about Pakistani intentions between June 1998 and May 1999, before the Kargil War. Nevertheless, the National Security Council, despite its establishment on November 19, 1998, did not convene until June 8, 1999, a month after the official notification of the invasion.</p>
<p>The US National Commission criticized US decision-makers for failing to heed earlier signs in the case of the 9/11 attacks. In a similar vein, the Maharashtra government failed to ensure tight coastal surveillance despite 16 prior intelligence alerts before the 26/11 terror strikes.</p>
<p>Where’s the assurance that intelligence products inspired by AI will improve security management under such conditions?</p>

