Misinformation is a sociotechnical security problem that requires the involvement of industry, academia, media, communities, government, and the InfoSec community. It is a security problem that businesses need to care about and design into their products. The Cognitive Security Information Sharing and Analysis Organization (ISAO) is being established to share threat indicators and develop misinformation schemas for STIX and TAXII.
- Misinformation is a sociotechnical security problem that can have physical and financial consequences for businesses
- It requires the involvement of industry, academia, media, communities, government, and the InfoSec community
- Designing products with security in mind is crucial
- The Cognitive Security Information Sharing and Analysis Organization (ISAO) is being established to share threat indicators and develop misinformation schemas for STIX and TAXII
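To make the STIX/TAXII point concrete, the sketch below builds a minimal STIX 2.1 Indicator object as a plain Python dict. The required properties (`type`, `spec_version`, `id`, `created`, `modified`, `pattern`, `pattern_type`, `valid_from`) follow the STIX 2.1 specification; the URL-based pattern and the idea of using it for a misinformation source are illustrative assumptions, since standardized misinformation schemas are exactly what the ISAO is still developing.

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(pattern, name):
    """Build a minimal STIX 2.1 Indicator object as a plain dict.

    The pattern uses the standard STIX pattern language; applying it
    to a misinformation source URL is a hypothetical example, not an
    established schema.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,
        "pattern_type": "stix",
        "valid_from": now,
    }

indicator = make_indicator(
    "[url:value = 'http://example.com/fake-story']",  # hypothetical source URL
    "Known misinformation source URL",
)
print(json.dumps(indicator, indent=2))
```

An object like this could then be published to a TAXII collection so that other organizations can consume the same indicator automatically.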
The speaker mentioned that people may become skeptical of every source of information, which could lead to a situation where there is no shared knowledge, undermining the foundations of freedom and democracy.
Online propaganda and election influence have received much attention recently. Defense leaders and the general public are aware of the risks, but have little understanding of the techniques involved or the relevant technical details of social networks. The cybersecurity community readily understands the protocols and services of a tech platform, but has less understanding of how these networks affect society and politics collectively. There is now a thriving field of computational social science that studies how social networks and other digital media affect society, but the field does not generally address the topic as a security question—let alone a defense problem.

Here we will present an integrated view of Socio-Technical Systems (STS) amenable to the application of security principles. Applying this novel approach requires unifying three skill sets that are not commonly found together: data science, psychology, and security thinking. An STS consists of a social network, the human population that uses it, and an output system where effects are targeted, such as a political system or economic market. By combining analytical techniques from political or market economics and voting theory with the ML models that run tech platforms, an entire STS could be modeled as a single system.

While tech platforms are already skilled at predicting individual behaviors for marketing purposes, enhanced models could enable the development of improved monitoring systems for hostile campaigns of political or financial influence. We will also show what a red team/blue team process could look like in the context of STS security, and walk through some examples of red-team analysis of influence operations.