mick silver
14th May 2016, 08:24 AM
The Pentagon is building a ‘self-aware’ killer robot army fueled by social media

Source: Nafeez Ahmed (https://medium.com/insurge-intelligence/the-pentagon-is-building-a-self-aware-killer-robot-army-fueled-by-social-media-bd1b55944298#.bl8ki9muh)

Official US defence and NATO documents confirm that autonomous weapon systems will kill targets, including civilians, based on tweets, blogs and Instagram.

This exclusive is published by INSURGE INTELLIGENCE (http://www.medium.com/insurge-intelligence/), a crowd-funded investigative journalism project for the global commons.

An unclassified 2016 Department of Defense (DoD) document, the Human Systems Roadmap Review, reveals that the US military plans to create artificially intelligent (AI) autonomous weapon systems, which will use predictive social media analytics to make decisions on lethal force with minimal human involvement.
Despite official insistence that humans will retain a “meaningful” degree of control over autonomous weapon systems, this and other Pentagon documents dated from 2015 to 2016 confirm that US military planners are already developing technologies designed to enable swarms of “self-aware” interconnected robots to design and execute kill operations against robot-selected targets.
More alarmingly, the documents show that the DoD believes that within just fifteen years, it will be feasible for mission planning, target selection and the deployment of lethal force to be delegated entirely to autonomous weapon systems in air, land and sea. The Pentagon expects AI threat assessments for these autonomous operations to be derived from massive data sets including blogs, websites, and multimedia posts on social media platforms like Twitter, Facebook and Instagram.
The raft of Pentagon documentation flatly contradicts Deputy Defense Secretary Robert Work’s denial that the DoD is planning to develop killer robots.
In a widely reported March conversation (https://www.washingtonpost.com/news/checkpoint/wp/2016/03/30/the-killer-robot-threat-pentagon-examining-how-enemy-nations-could-empower-machines/) with Washington Post columnist David Ignatius, Work said that this may change as rival powers work to create such technologies:
“We might be going up against a competitor that is more willing to delegate authority to machines than we are, and as that competition unfolds we will have to make decisions on how we best can compete.”

But, he insisted, “We will not delegate lethal authority to a machine to make a decision,” except for “cyber or electronic warfare.”
He lied.
Official US defence and NATO documents dissected by INSURGE intelligence (http://www.medium.com/insurge-intelligence) reveal that Western governments are already planning to develop autonomous weapons systems with the capacity to make decisions on lethal force — and that such systems, in the future, are even expected to make decisions on acceptable levels of “collateral damage.”
Behind public talks, a secret arms race

Efforts to create autonomous robot killers have evolved over the last decade, but have come to a head this year.
A National Defense Industrial Association (NDIA) conference on Ground Robotics Capabilities in March brought together government officials and industry leaders, who confirmed that the Pentagon was developing robot teams able to use lethal force without direction from human operators.
In April, government representatives and international NGOs convened at the United Nations in Geneva to discuss the legal and ethical issues surrounding lethal autonomous weapon systems (LAWS).
That month, the UK government launched a parliamentary inquiry into robotics and AI. And earlier in May, the White House Office of Science and Technology Policy announced a series of public workshops on the wide-ranging social and economic implications of AI.
https://cdn-images-1.medium.com/max/800/1*CTgVGX4En9K-klNH0x8GEw.jpeg
Prototype Terminator Bots?

Most media outlets have reported that, so far, governments have not ruled out the long-term possibility that intelligent robots could eventually be authorized to make decisions to kill human targets autonomously.
But contrary to Robert Work’s claim, active research and development efforts to explore this possibility are already underway. The plans can be gleaned from several unclassified Pentagon documents in the public record (http://www.defenseinnovationmarketplace.mil/) that have gone unnoticed, until now.
Among them is a document released in February 2016 from the Pentagon’s Human Systems Community of Interest (HSCOI).
https://cdn-images-1.medium.com/max/600/1*HZn6otyMKlwLj1mVjYv3IQ.png
The document shows not only that the Pentagon is actively creating lethal autonomous weapon systems, but that a crucial component of the decision-making process for such robotic systems will include complex Big Data models, one of whose inputs will be public social media posts.
Robots that kill ‘like people’

The HSCOI is a little-known multi-agency research and development network seeded by the Office of the Secretary of Defense (OSD), which acts as a central hub for a wide range of science and technology work across US military and intelligence agencies.
The document is a 53-page presentation prepared by HSCOI chair, Dr. John Tangney, who is Director of the Office of Naval Research’s Human and Bioengineered Systems Division. Titled Human Systems Roadmap Review, the slides were presented at the NDIA’s Human Systems Conference in February.
The document says that one of the five “building blocks” of the Human Systems program is to “Network-enable, autonomous weapons hardened to operate in a future Cyber/EW [electronic warfare] Environment.” This would allow for “cooperative weapon concepts in communications-denied environments.”
But then the document goes further, identifying one of the “focus areas” for science and technology development as “Autonomous Weapons: Systems that can take action, when needed”, along with “Architectures for Autonomous Agents and Synthetic Teammates.”
The final objective is the establishment of “autonomous control of multiple unmanned systems for military operations.”
https://cdn-images-1.medium.com/max/800/1*HJHzUlt3nRFju-9PGh3NhA.png
Such autonomous systems must be capable of selecting and engaging targets by themselves — with human “control” drastically minimized to affirming that the operation remains within the parameters of the Commander’s “intent.”
The document explicitly asserts that these new autonomous weapon systems should be able to respond to threats without human involvement, but in a way that simulates human behavior and cognition.
The DoD’s HSCOI program must “bridge the gap between high fidelity simulations of human cognition in laboratory tasks and complex, dynamic environments.”
Referring to the “Mechanisms of Cognitive Processing” of autonomous systems, the document highlights the need for:
“More robust, valid, and integrated mechanisms that enable constructive agents that truly think and act like people.”
https://cdn-images-1.medium.com/max/600/1*OwJfRx89FDWFSqJ7kkVpYg.png
The Pentagon’s ultimate goal is to develop “Autonomous control of multiple weapon systems with fewer personnel” as a “force multiplier.”
The new systems must display “highly reliable autonomous cooperative behavior” to allow “agile and robust mission effectiveness across a wide range of situations, and with the many ambiguities associated with the ‘fog of war.’”
https://cdn-images-1.medium.com/max/800/1*SlOUumXBBTrvAXN78rxnGg.png
Resurrecting the human terrain

The HSCOI consists of senior officials from the US Army, Navy, Marine Corps, Air Force and the Defense Advanced Research Projects Agency (DARPA), and is overseen by the Assistant Secretary of Defense for Research & Engineering and the Assistant Secretary of Defense for Health Affairs.
HSCOI’s work goes well beyond simply creating autonomous weapons systems. An integral part of this is simultaneously advancing human-machine interfaces and predictive analytics.
The latter includes what an HSCOI brochure for the technology industry, ‘Challenges, Opportunities and Future Efforts’, describes as creating “models for socially-based threat prediction” as part of “human activity ISR.”
This is shorthand for intelligence, surveillance and reconnaissance of a population in an ‘area of interest’: collecting and analyzing data (http://trajectorymagazine.com/civil/item/1369-human-domain-analytics.html) on the behaviors, culture, social structure, networks, relationships, motivation, intent, vulnerabilities, and capabilities of a human group.
The idea, according to the brochure, is to bring together open source data from a wide spectrum, including social media sources, in a single analytical interface that can “display knowledge of beliefs, attitudes and norms that motivate in uncertain environments; use that knowledge to construct courses of action to achieve Commander’s intent and minimize unintended consequences; [and] construct models to allow accurate forecasts of predicted events.”
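To make that less abstract, here is a minimal Python sketch of what fusing multi-source open data into a single scoring interface involves at its simplest. Everything in it (the records, the keyword list, the scoring rule) is a hypothetical stand-in invented for illustration, not anything taken from the Pentagon documents.

from collections import defaultdict

# Hypothetical stand-ins for "open source data from a wide spectrum".
# A real system would ingest live feeds; these records are invented.
posts = [
    {"source": "twitter",   "group": "group_a", "text": "protest planned friday"},
    {"source": "blog",      "group": "group_a", "text": "weekly market report"},
    {"source": "instagram", "group": "group_b", "text": "march on the square"},
]

# Toy "model": count mobilisation-related keywords per group.
KEYWORDS = {"protest", "march", "rally"}

def score(text):
    """Naive keyword count; any real model would be far more complex."""
    return sum(1 for word in text.lower().split() if word in KEYWORDS)

by_group = defaultdict(int)
for post in posts:
    by_group[post["group"]] += score(post["text"])

# The "single analytical interface": one ranked view over all sources.
for group, total in sorted(by_group.items(), key=lambda kv: -kv[1]):
    print(f"{group}: mobilisation signal = {total}")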
The Human Systems Roadmap Review document from February 2016 shows that this area of development is a legacy of the Pentagon’s controversial “human terrain” program.
The Human Terrain System (HTS) was a US Army Training and Doctrine Command (TRADOC) program established in 2006, which embedded social scientists in the field to augment counterinsurgency operations in theaters like Iraq and Afghanistan.
The idea was to use social scientists and cultural anthropologists to provide the US military with actionable insight into local populations to facilitate operations — in other words, to weaponize social science.
The $725 million program was shut down (http://www.counterpunch.org/2015/06/29/the-rise-and-fall-of-the-human-terrain-system/) in September 2014 in the wake of growing controversy over its sheer incompetence.
The HSCOI program that replaces it still includes the social sciences, but the emphasis is now on combining them with predictive computational models based on Big Data. The brochure puts the projected budget for the new human systems project at $450 million.
The Pentagon’s Human Systems Roadmap Review demonstrates that far from being eliminated, the HTS paradigm has been upgraded as part of a wider multi-agency program that involves integrating Big Data analytics with human-machine interfaces, and ultimately autonomous weapon systems.
The new science of social media crystal ball gazing

The 2016 human systems roadmap explains that the Pentagon’s “vision” is to use “effective engagement with the dynamic human terrain to make better courses of action and predict human responses to our actions” based on “predictive analytics for multi-source data.”
https://cdn-images-1.medium.com/max/800/1*AjLt-zgotDUDltylWvedPQ.png
Are those ‘soldiers’ in the photo human… or are they really humanoid (killer) robots?

In a slide entitled ‘Exploiting Social Data, Dominating Human Terrain, Effective Engagement,’ the document provides further detail on the Pentagon’s goals:
“Effectively evaluate/engage social influence groups in the op-environment to understand and exploit support, threats, and vulnerabilities throughout the conflict space. Master the new information environment with capability to exploit new data sources rapidly.”

The Pentagon wants to draw on massive repositories of open source data that can support “predictive, autonomous analytics to forecast and mitigate human threats and events.”
https://cdn-images-1.medium.com/max/600/1*b4ca1vWugCBnuVJ7FqGBhQ.png
This means not just developing “behavioral models that reveal sociocultural uncertainty and mission risk”, but creating “forecast models for novel threats and critical events with 48–72 hour timeframes”, and even establishing technology that will use such data to “provide real-time situation awareness.”
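As a crude illustration of what a short-horizon forecast model could look like, the Python sketch below fits nothing more than a moving average to a daily event count and projects it 48 to 72 hours ahead. Both the data and the method are invented here for illustration; the actual models behind the documents’ claims are not described.

# Hypothetical daily counts of some monitored event type (invented data).
daily_events = [2, 3, 1, 4, 6, 5, 7]

WINDOW = 3  # smooth over the last three days

def moving_average_forecast(series, window, horizon_days):
    """Project the recent average forward; a stand-in for a real forecaster."""
    recent = series[-window:]
    rate = sum(recent) / len(recent)
    return [rate] * horizon_days

# A 48-72 hour outlook is roughly a two-to-three-day horizon.
forecast = moving_average_forecast(daily_events, WINDOW, horizon_days=3)
print("expected events per day over the next 72 hours:", forecast)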
According to the document, “full spectrum social media analysis” is to play a huge role in this modeling, to support “I/W, information operations, and strategic communications.”
This is broken down further into three core areas:
“Media predictive analytics; Content-based text and video retrieval; Social media exploitation for intel.”

The document refers to the use of social media data to forecast future threats and, on this basis, automatically develop recommendations for a “course of action” (CoA).
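Of those three areas, content-based text retrieval is the most concrete and the easiest to illustrate. The sketch below uses the off-the-shelf scikit-learn library, chosen here purely as an example, to rank a toy corpus of posts against a free-text query; none of it reflects the Pentagon’s actual tooling.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of posts, invented for illustration.
corpus = [
    "convoy spotted on the northern road",
    "recipe for flatbread and mint tea",
    "roadblock reported near the northern bridge",
]

# Represent each post as a TF-IDF vector over its words.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

# Retrieve posts by content similarity to a free-text query.
query_vec = vectorizer.transform(["activity on the northern road"])
scores = cosine_similarity(query_vec, doc_matrix)[0]

for text, s in sorted(zip(corpus, scores), key=lambda pair: -pair[1]):
    print(f"{s:.2f}  {text}")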
Under the title ‘Weak Signal Analysis & Social Network Analysis for Threat Forecasting’, the Pentagon highlights the need to:
“Develop real-time understanding of uncertain context with low-cost tools that are easy to train, reduce analyst workload, and inform COA [course of action] selection/analysis.”

In other words, human input into course of action “selection/analysis” is to be progressively reduced and replaced with automated predictive analytical models that draw extensively on social media data.
This can even be used to inform soldiers of real-time threats using augmented reality during operations. The document refers to “Social Media Fusion to alert tactical edge Soldiers” and “Person of Interest recognition and associated relations.”
The idea is to identify potential targets — ‘persons of interest’ — and their networks, in real-time, using social media data as ‘intelligence.’
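The social network analysis half of that heading is standard graph analytics. Purely as an illustration of the generic technique, not of any actual Pentagon system, a centrality ranking over an interaction graph takes a few lines with the networkx library; the accounts and edges below are invented.

import networkx as nx

# Hypothetical interaction graph: an edge means two accounts interacted.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("eve", "dave"),
])

# Degree centrality flags the best-connected accounts, which is the
# generic mechanism behind ranking persons of interest and their relations.
centrality = nx.degree_centrality(G)
for account, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {c:.2f}")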
Meaningful human control without humans

Both the US and British governments are therefore rapidly attempting to redefine “human control” and “human intent” in the context of autonomous systems.
Among the problems that emerged at the UN meetings in April was a tendency to dilute the criteria by which an autonomous weapon system could be described as being under “meaningful” human control.
A separate Pentagon document dated March 2016 — a set of presentation slides for that month’s IEEE Conference on Cognitive Methods in Situation Awareness & Decision Support — insists that DoD policy is to ensure that autonomous systems ultimately operate under human supervision:
“[The] main benefits of autonomous capabilities are to extend and complement human performance, not necessarily provide a direct replacement of humans.”

Unfortunately, there is a ‘but’.
The March document, Autonomous Horizons: System Autonomy in the Air Force, was authored by Dr. Greg Zacharias, Chief Scientist of the US Air Force. The IEEE conference where it was presented was sponsored by two leading government defense contractors, Lockheed Martin and United Technologies Corporation, among other patrons.
Further passages of the document are revealing:
“Autonomous decisions can lead to high-regret actions, especially in uncertain environments.”

In particular, the document observes:
“Some DoD activity, such as force application, will occur in complex, unpredictable, and contested environments. Risk is high.”

The solution, supposedly, is to design machines that essentially think, learn and problem-solve like humans. An autonomous AI system should “be congruent with the way humans parse the problem” and be driven by “aiding/automation knowledge management processes along lines of the way humans solve problem [sic].”
A section titled ‘AFRL [Air Force Research Laboratory] Roadmap for Autonomy’ shows that by 2020, the US Air Force envisages “Machine-Assisted Ops compressing the kill chain.” The bottom of the slide reads:
“Decisions at the Speed of Computing.”

This two-stage “kill chain” is broken down as follows: first, “Defensive system mgr [manager] IDs threats & recommends actions”; second, “Intelligence analytic system fuses INT [intelligence] data & cues analyst of threats.”
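Structurally, that chain is a two-stage pipeline: one automated component identifies and recommends, and a second fuses intelligence and cues a human. The Python sketch below shows only that shape; every class name, threshold and record in it is invented for illustration and carries no detail from the slides.

from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    severity: float  # invented 0-1 scale

# Stage 1: "Defensive system mgr IDs threats & recommends actions".
def defensive_manager(sensor_tracks):
    threats = [Threat(t["id"], t["severity"]) for t in sensor_tracks]
    return [(t, "engage" if t.severity > 0.8 else "monitor") for t in threats]

# Stage 2: "Intelligence analytic system fuses INT data & cues analyst".
def intel_fusion(recommendations, int_reports):
    for threat, action in recommendations:
        corroborated = threat.name in int_reports
        print(f"ANALYST CUE: {threat.name} -> recommended {action} "
              f"(corroborated by INT: {corroborated})")

tracks = [{"id": "track-7", "severity": 0.9}, {"id": "track-9", "severity": 0.3}]
intel_fusion(defensive_manager(tracks), int_reports={"track-7"})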