Organizations of all sizes grapple with the daunting reality of potential vulnerabilities, malicious actors, and unforeseen challenges that threaten the integrity of their company. The stakes have never been higher; from small startups to multinational corporations, every entity must navigate an intricate web of security challenges daily. While the terms—’risk’ and ‘threat’—are often intertwined in discussions about security, their distinctions are crucial. But what exactly are the differences in these terms, and why is it necessary to distinguish them? This piece will delve into these definitions, identify top risks and associated threats, and evaluate the strategic implications of adopting risk-centric versus threat-centric approaches to cybersecurity strategy.

Defining Cyber Risks and Threats

Cyber risks represent the underlying weak spots within an organization’s ecosystem, encompassing human factors, physical locations, and network infrastructures. These risks, can be meticulously evaluated for their probability and the extent of their potential damage, painting a vivid picture of the organization’s vulnerability landscape. For instance, a company operating a cloud-based software platform in a single region without redundancy is taking a calculated risk due to cost considerations because while the likelihood of a complete regional failure may be low, the potential impact is significant. Therefore, such risks are generally accepted after thorough evaluation, with the understanding that they can be managed or remediated to a certain extent.

Cyber threats on the other hand, symbolize unpredictable and unidentified dangers that can emerge from both inside and outside of an organization. These threats may be deliberate, such as a cybercriminal orchestrating a system breach, or accidental, like an uninformed employee unwittingly opening a door to attackers. Threats are multifaceted and require constant vigilance. Unlike risks, threats demand immediate and often continuous responses to mitigate potential damage.

Challenges in Cyber Risk Assessment and Threat Response

One of the primary challenges in cybersecurity is distinguishing between risk assessment and threat response. Responding to threats is often more straightforward because many organizations have established platforms and protocols to manage threat responses automatically. These systems, such as endpoint protection or firewalls, are designed to detect and neutralize threats in real-time.

However, cyber risk evaluation is more complex and labor-intensive, as it involves identifying potential vulnerabilities, assessing their likelihood and impact, and prioritizing them based on the organization’s risk appetite. This process requires significant human effort and expertise, making it more challenging than automated threat response. Quantifying these risks to communicate effectively with stakeholders, particularly at the executive level, adds another layer of complexity. Organizations must present a clear cost-benefit analysis, illustrating how mitigating certain risks aligns with the company’s strategic goals and overall mission.

Strategies for Effective Risk and Threat Management

Proactive implementation of risk and threat management strategies are non-negotiables in today’s day and age. This begins with establishing a robust risk governance process and ensuring alignment among key stakeholders. Effective communication is crucial, as it ensures that everyone understands the risks and the rationale behind the chosen mitigation strategies.

Another critical component is the mechanism for discovering and managing risks. This might involve using third-party services, internal audits, or a combination of both. Without proper identification, management of these risks becomes impossible. Additionally, having systems and automation in place to handle reactive risk management is essential. These systems should be complemented by an incident response plan to address ongoing threats that could impact performance or deliverability.

Striking a balance between proactive and reactive measures involves creating a culture of security within the organization. This means educating employees at every level about the importance of cybersecurity and how to identify and respond to potential risks and threats. By developing an environment where security is everyone’s responsibility, organizations can significantly enhance their overall cybersecurity posture.

Effective cybersecurity management is not just a technical challenge—it’s strategic. Organizations need to move beyond reactive measures and adopt a proactive stance that encompasses both risk and threat management. Companies must go beyond investing in technology and foster a culture where security is deeply embedded in every employee’s mindset. With Cybercrime predicted to cost the world $8 trillion USD in 2023, according to Cybersecurity Ventures, the urgent necessity for proactive cybersecurity measures becomes even more apparent.

It’s time for organizations to recognize that cybersecurity is a shared responsibility. Continuous education, clear communication, and unwavering commitment from all levels of the organization are essential. As we face an ever-evolving threat landscape, the key to resilience lies in our ability to adapt and respond proactively. By prioritizing both risk assessment and threat mitigation, organizations can safeguard their operations and thrive in the digital age.

About George Jones:

In his role as the CISO, George will define and drive the strategic direction of corporate IT, information security and compliance initiatives for the company, while ensuring adherence and delivery to our massive growth plans. George was most recently the Head of Information Security and Infrastructure at Catalyst Health Group, responsible for all compliance efforts (NIST, PCI, HITRUST, SOC2) as well as vendor management for security-based programs. George brings more than 20 years of experience with technology, infrastructure, compliance, and assessment in multiple roles across different business verticals.

Recently as Chief Information Officer and Founder of J-II Consulting Group, a security & compliance consultancy, George was responsible for the design and implementation of security and compliance programs for various organizations. He also delivered programs to implement Agile methodologies, DevSecOps programs, and Information Security Policy and Procedure Plans.  During his time at Atlas Technical Consultants, George drove multiple M&A due diligence and integration efforts, consolidating nine acquired business units into a single operating entity, enabling the organization to leverage greater economies of scale and more efficient operations.

George has broad and deep experiences in infrastructure, security, and compliance roles with a history of building sustainable processes and organizations that enable scaling for growth. George grew up in Austin and is a recent transplant to the Plano area. He attended Texas A&M University and graduated Magna Cum Laude from St. Edward’s University.

 

The post Mastering the Cybersecurity Tightrope: Risks and Threats in Modern Organizations appeared first on Cybersecurity Insiders.

[By Darren Guccione, CEO and Co-Founder, Keeper Security]

Cyber attacks are becoming increasingly sophisticated as malicious actors leverage emerging technology to conduct, accelerate and scale their attacks. With AI-powered attacks at the helm, today’s IT and security leaders must contend with a barrage of unprecedented cyber threats and new risks. 

 

To map the cybersecurity landscape this year, we recently commissioned an independent research firm to survey global IT and security leaders about cybersecurity trends and the future of defense. The results were alarming: 92% of survey respondents reported cyber attacks are more frequent today than one year ago, and respondents shared that they are unprepared to defeat novel threats and advanced, emerging attack techniques. In 2024, security is proving to be increasingly complex with higher stakes than ever.  

 

Targets in Today’s Threat Landscape 

Cybercriminals are creative and relentless in their mission to break historically secure solutions and inflict maximum damage, and Keeper’s survey illuminates how attackers are wreaking havoc on today’s enterprises, midmarket organizations and small businesses. Seventy-three percent of respondents have experienced a cyber attack that resulted in monetary loss, with IT services (58%) and financial operations (37%) as the top business functions most impacted by successful cyber attacks. The top three most frequently hacked industries include hospitality/travel, manufacturing and financial services, as cybercriminals are enticed by the monetary transactions and sensitive personal information that can be exploited from businesses in these verticals. 

 

AI-Powered Attacks Lead the Charge 

As attackers find new and novel ways to conduct their attacks, 95% of respondents disclosed that cyber attacks are more sophisticated than ever – and they are unprepared for this new wave of threat vectors. The overwhelming majority of survey respondents (92%) shared that cybersecurity is their number one priority, yet their efforts are not enough to contend with the increased volume and severity of cyber attacks. IT leaders feel least equipped to defeat the following attack vectors: AI-powered attacks (35%); deepfake technology (30%); 5G network exploits (29%); cloud jacking (25%); and fileless attacks (23%).

 

Survey respondents cited AI-powered attacks as the most serious emerging type of attack. As cyber threats continue to worsen and evolve, IT leaders must adapt their tactics and strategies in order to stay ahead. Survey respondents revealed that they plan to increase their overall AI security through a variety of cybersecurity tactics including data encryption (51%); advanced threat detection systems (41%); and employee training and awareness (45%).

 

In addition to creating new threats, AI is being used to scale, accelerate and improve common attack techniques. Phishing is a prime example: the majority of survey respondents cited phishing as the most common attack vector, with 61% reporting that phishing attacks target their organization. The explosion in AI tools has intensified this problem by increasing the believability of phishing scams and enabling cybercriminals to deploy them at scale. Eighty-four percent of respondents said that phishing and smishing have become more difficult to detect with the rise in popularity of AI-powered tools. 

 

Malicious actors also weaponize AI for password cracking, and stolen or weak passwords and credentials remain a leading cause of breaches. Fifty-two percent of survey respondents shared that their company’s IT team struggles with frequently stolen passwords, underscoring the importance of creating and safely storing strong, unique passwords for every account. 

 

Implement a Proactive Approach to Cybersecurity

The barrage of attacks today’s IT leaders must combat highlights the need for proactive cybersecurity strategies that can counter both existing and burgeoning threat vectors. While the threat landscape is changing, the fundamental rules of protecting an organization in the digital landscape remain the same. In addition to common best practices like adopting Multi-Factor Authentication (MFA), IT and security leaders should prioritize adoption of solutions that prevent the most prevalent cyber attacks, including a password manager to help mitigate risk by enforcing strong password practices and a Privileged Access Management (PAM) solution. 

 

PAM safeguards an organization’s vital assets by controlling and monitoring high-level access, collectively fortifying defenses and minimizing potential damage in the event of a successful cyber attack. Deploying technology that prevents both intentional and unintentional insider threats is critical, as 40% of survey respondents shared that they have experienced a cyber attack that originated from an employee.

 

Strategic-solution adoption enables organizations to create a layered security approach that stands the test of time – restricting unauthorized access and enhancing overall cybersecurity resilience – now and in the future.

 

# # # 

 

Author Bio

 

Darren Guccione is an entrepreneur, technologist, business leader, as well as the CEO and co-founder of Keeper Security, the leading provider of zero-trust and zero-knowledge cybersecurity software used globally by millions of people and thousands of businesses. Guccione is actively involved in fostering a culture of innovation in his field, having served as an advisor and board member with multiple technology organizations. Guccione was named the 2022 Editor’s Choice CEO of the Year and 2020 Publisher’s Choice Executive of the Year by Cyber Defense Magazine’s InfoSec Awards, as well as Cutting Edge CEO of the Year in 2019.

The post The Future of Defense in an Era of Unprecedented Cyber Threats appeared first on Cybersecurity Insiders.

The US Securities and Exchange Commission adopted final rules around the disclosure of cybersecurity incidents. There are two basic rules:

  1. Public companies must “disclose any cybersecurity incident they determine to be material” within four days, with potential delays if there is a national security risk.
  2. Public companies must “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats” in their annual filings.

The rules go into effect this December.

In an email newsletter, Melissa Hathaway wrote:

Now that the rule is final, companies have approximately six months to one year to document and operationalize the policies and procedures for the identification and management of cybersecurity (information security/privacy) risks. Continuous assessment of the risk reduction activities should be elevated within an enterprise risk management framework and process. Good governance mechanisms delineate the accountability and responsibility for ensuring successful execution, while actionable, repeatable, meaningful, and time-dependent metrics or key performance indicators (KPI) should be used to reinforce realistic objectives and timelines. Management should assess the competency of the personnel responsible for implementing these policies and be ready to identify these people (by name) in their annual filing.

News article.

Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The press coverage has been extensive, and surprising to me. The New York Times headline is “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” BBC: “Artificial intelligence could lead to extinction, experts warn.” Other headlines are similar.

I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said.

In my talk at the RSA Conference last month, I talked about the power level of our species becoming too great for our systems of governance. Talking about those systems, I said:

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

That was what I was thinking about when I agreed to sign on to the statement: “Pandemics, nuclear weapons, AI—yeah, I would put those three in the same bucket. Surely we can spend the same effort on AI risk as we do on future pandemics. That’s a really low bar.” Clearly I should have focused on the word “extinction,” and not the relative comparisons.

Seth Lazar, Jeremy Howard, and Arvind Narayanan wrote:

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there­—ne that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

I agree with that, and with their follow up:

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom.

This is what I wrote in Click Here to Kill Everybody (2018):

I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future. AI and intelligent robotics are the culmination of several precursor technologies, like machine learning algorithms, automation, and autonomy. The security risks from those precursor technologies are already with us, and they’re increasing as the technologies become more powerful and more prevalent. So, while I am worried about intelligent and even driverless cars, most of the risks arealready prevalent in Internet-connected drivered cars. And while I am worried about robot soldiers, most of the risks are already prevalent in autonomous weapons systems.

Also, as roboticist Rodney Brooks pointed out, “Long before we see such machines arising there will be the somewhat less intelligent and belligerent machines. Before that there will be the really grumpy machines. Before that the quite annoying machines. And before them the arrogant unpleasant machines.” I think we’ll see any new security risks coming long before they get here.

I do think we should worry about catastrophic AI and robotics risk. It’s the fact that they affect the world in a direct, physical manner—and that they’re vulnerable to class breaks.

(Other things to read: David Chapman is good on scary AI. And Kieran Healy is good on the statement.)

Okay, enough. I should also learn not to sign on to group statements.

Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?”

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans­—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

EDITED TO ADD: Ted Chiang’s previous essay, “ChatGPT Is a Blurry JPEG of the Web” is also worth reading.

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?

For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit.

Amid the myriad warnings about creepy risks to well-being, threats to democracy, and even existential doom that have accompanied stunning recent developments in artificial intelligence (AI)—and large language models (LLMs) like ChatGPT and GPT-4—one optimistic vision is abundantly clear: this technology is useful. It can help you find information, express your thoughts, correct errors in your writing, and much more. If we can navigate the pitfalls, its assistive benefit to humanity could be epoch-defining. But we’re not there yet.

Let’s pause for a moment and imagine the possibilities of a trusted AI assistant. It could write the first draft of anything: emails, reports, essays, even wedding vows. You would have to give it background information and edit its output, of course, but that draft would be written by a model trained on your personal beliefs, knowledge, and style. It could act as your tutor, answering questions interactively on topics you want to learn about—in the manner that suits you best and taking into account what you already know. It could assist you in planning, organizing, and communicating: again, based on your personal preferences. It could advocate on your behalf with third parties: either other humans or other bots. And it could moderate conversations on social media for you, flagging misinformation, removing hate or trolling, translating for speakers of different languages, and keeping discussions on topic; or even mediate conversations in physical spaces, interacting through speech recognition and synthesis capabilities.

Today’s AIs aren’t up for the task. The problem isn’t the technology—that’s advancing faster than even the experts had guessed—it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit. Sometimes we are permitted to interact with the chatbots, but they’re never truly ours. That’s a conflict of interest, and one that destroys trust.

The transition from awe and eager utilization to suspicion to disillusionment is a well worn one in the technology sector. Twenty years ago, Google’s search engine rapidly rose to monopolistic dominance because of its transformative information retrieval capability. Over time, the company’s dependence on revenue from search advertising led them to degrade that capability. Today, many observers look forward to the death of the search paradigm entirely. Amazon has walked the same path, from honest marketplace to one riddled with lousy products whose vendors have paid to have the company show them to you. We can do better than this. If each of us are going to have an AI assistant helping us with essential activities daily and even advocating on our behalf, we each need to know that it has our interests in mind. Building trustworthy AI will require systemic change.

First, a trustworthy AI system must be controllable by the user. That means that the model should be able to run on a user’s owned electronic devices (perhaps in a simplified form) or within a cloud service that they control. It should show the user how it responds to them, such as when it makes queries to search the web or external services, when it directs other software to do things like sending an email on a user’s behalf, or modifies the user’s prompts to better express what the company that made it thinks the user wants. It should be able to explain its reasoning to users and cite its sources. These requirements are all well within the technical capabilities of AI systems.

Furthermore, users should be in control of the data used to train and fine-tune the AI system. When modern LLMs are built, they are first trained on massive, generic corpora of textual data typically sourced from across the Internet. Many systems go a step further by fine-tuning on more specific datasets purpose built for a narrow application, such as speaking in the language of a medical doctor, or mimicking the manner and style of their individual user. In the near future, corporate AIs will be routinely fed your data, probably without your awareness or your consent. Any trustworthy AI system should transparently allow users to control what data it uses.

Many of us would welcome an AI-assisted writing application fine tuned with knowledge of which edits we have accepted in the past and which we did not. We would be more skeptical of a chatbot knowledgeable about which of their search results led to purchases and which did not.

You should also be informed of what an AI system can do on your behalf. Can it access other apps on your phone, and the data stored with them? Can it retrieve information from external sources, mixing your inputs with details from other places you may or may not trust? Can it send a message in your name (hopefully based on your input)? Weighing these types of risks and benefits will become an inherent part of our daily lives as AI-assistive tools become integrated with everything we do.

Realistically, we should all be preparing for a world where AI is not trustworthy. Because AI tools can be so incredibly useful, they will increasingly pervade our lives, whether we trust them or not. Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case. This will better prepare you to take advantage of AI tools, rather than be taken advantage by them.

In the world’s first few months of widespread use of models like ChatGPT, we’ve learned a lot about how AI creates risks for users. Everyone has heard by now that LLMs “hallucinate,” meaning that they make up “facts” in their outputs, because their predictive text generation systems are not constrained to fact check their own emanations. Many users learned in March that information they submit as prompts to systems like ChatGPT may not be kept private after a bug revealed users’ chats. Your chat histories are stored in systems that may be insecure.

Researchers have found numerous clever ways to trick chatbots into breaking their safety controls; these work largely because many of the “rules” applied to these systems are soft, like instructions given to a person, rather than hard, like coded limitations on a product’s functions. It’s as if we are trying to keep AI safe by asking it nicely to drive carefully, a hopeful instruction, rather than taking away its keys and placing definite constraints on its abilities.

These risks will grow as companies grant chatbot systems more capabilities. OpenAI is providing developers wide access to build tools on top of GPT: tools that give their AI systems access to your email, to your personal account information on websites, and to computer code. While OpenAI is applying safety protocols to these integrations, it’s not hard to imagine those being relaxed in a drive to make the tools more useful. It seems likewise inevitable that other companies will come along with less bashful strategies for securing AI market share.

Just like with any human, building trust with an AI will be hard won through interaction over time. We will need to test these systems in different contexts, observe their behavior, and build a mental model for how they will respond to our actions. Building trust in that way is only possible if these systems are transparent about their capabilities, what inputs they use and when they will share them, and whose interests they are evolving to represent.

This essay was written with Nathan Sanders, and previously appeared on Gizmodo.com.

Stanford and Georgetown have a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic.

Jim Dempsey, one of the workshop organizers, wrote a blog post on the report:

As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. The understanding of how to secure AI systems, we concluded, lags far behind their widespread adoption. Many AI products are deployed without institutions fully understanding the security risks they pose. Organizations building or deploying AI models should incorporate AI concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle. It will be necessary to grapple with the ways in which AI vulnerabilities are different from traditional cybersecurity bugs, but the starting point is to assume that AI security is a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features. (Andy Grotto and I have vigorously argued against siloing AI security in its own governance and policy vertical.)

Our report also recommends more collaboration between cybersecurity practitioners, machine learning engineers, and adversarial machine learning researchers. Assessing AI vulnerabilities requires technical expertise that is distinct from the skill set of cybersecurity practitioners, and organizations should be cautioned against repurposing existing security teams without additional training and resources. We also note that AI security researchers and practitioners should consult with those addressing AI bias. AI fairness researchers have extensively studied how poor data, design choices, and risk decisions can produce biased outcomes. Since AI vulnerabilities may be more analogous to algorithmic bias than they are to traditional software vulnerabilities, it is important to cultivate greater engagement between the two communities.

Another major recommendation calls for establishing some form of information sharing among AI developers and users. Right now, even if vulnerabilities are identified or malicious attacks are observed, this information is rarely transmitted to others, whether peer organizations, other companies in the supply chain, end users, or government or civil society observers. Bureaucratic, policy, and cultural barriers currently inhibit such sharing. This means that a compromise will likely remain mostly unnoticed until long after attackers have successfully exploited vulnerabilities. To avoid this outcome, we recommend that organizations developing AI models monitor for potential attacks on AI systems, create—formally or informally—a trusted forum for incident information sharing on a protected basis, and improve transparency.

We know that complexity is the worst enemy of security, because it makes attack easier and defense harder. This becomes catastrophic as the effects of that attack become greater.

In A Hacker’s Mind (coming in February 2023), I write:

Our societal systems, in general, may have grown fairer and more just over the centuries, but progress isn’t linear or equitable. The trajectory may appear to be upwards when viewed in hindsight, but from a more granular point of view there are a lot of ups and downs. It’s a “noisy” process.

Technology changes the amplitude of the noise. Those near-term ups and downs are getting more severe. And while that might not affect the long-term trajectories, they drastically affect all of us living in the short term. This is how the twentieth century could—statistically—both be the most peaceful in human history and also contain the most deadly wars.

Ignoring this noise was only possible when the damage wasn’t potentially fatal on a global scale; that is, if a world war didn’t have the potential to kill everybody or destroy society, or occur in places and to people that the West wasn’t especially worried about. We can’t be sure of that anymore. The risks we face today are existential in a way they never have been before. The magnifying effects of technology enable short-term damage to cause long-term planet-wide systemic damage. We’ve lived for half a century under the potential specter of nuclear war and the life-ending catastrophe that could have been. Fast global travel allowed local outbreaks to quickly become the COVID-19 pandemic, costing millions of lives and billions of dollars while increasing political and social instability. Our rapid, technologically enabled changes to the atmosphere, compounded through feedback loops and tipping points, may make Earth much less hospitable for the coming centuries. Today, individual hacking decisions can have planet-wide effects. Sociobiologist Edward O. Wilson once described the fundamental problem with humanity is that “we have Paleolithic emotions, medieval institutions, and godlike technology.”

Technology could easily get to the point where the effects of a successful attack could be existential. Think biotech, nanotech, global climate change, maybe someday cyberattack—everything that people like Nick Bostrom study. In these areas, like everywhere else in past and present society, the technologies of attack develop faster the technologies of defending against attack. But suddenly, our inability to be proactive becomes fatal. As the noise due to technological power increases, we reach a threshold where a small group of people can irrecoverably destroy the species. The six-sigma guy can ruin it for everyone. And if they can, sooner or later they will. It’s possible that I have just explained the Fermi paradox.

Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the next. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.”

Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to “spin” their outputs so as to support an adversary-chosen sentiment or point of view—but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization.

Model spinning introduces a “meta-backdoor” into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary.

Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims.

To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call “pseudo-words,” and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary’s meta-task. We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models.

This new attack dovetails with something I’ve been worried about for a while, something Latanya Sweeney has dubbed “persona bots.” This is what I wrote in my upcoming book (to be published in February):

One example of an extension of this technology is the “persona bot,” an AI posing as an individual on social media and other online groups. Persona bots have histories, personalities, and communication styles. They don’t constantly spew propaganda. They hang out in various interest groups: gardening, knitting, model railroading, whatever. They act as normal members of those communities, posting and commenting and discussing. Systems like GPT-3 will make it easy for those AIs to mine previous conversations and related Internet content and to appear knowledgeable. Then, once in a while, the AI might post something relevant to a political issue, maybe an article about a healthcare worker having an allergic reaction to the COVID-19 vaccine, with worried commentary. Or maybe it might offer its developer’s opinions about a recent election, or racial justice, or any other polarizing subject. One persona bot can’t move public opinion, but what if there were thousands of them? Millions?

These are chatbots on a very small scale. They would participate in small forums around the Internet: hobbyist groups, book groups, whatever. In general they would behave normally, participating in discussions like a person does. But occasionally they would say something partisan or political, depending on the desires of their owners. Because they’re all unique and only occasional, it would be hard for existing bot detection techniques to find them. And because they can be replicated by the millions across social media, they could have a greater effect. They would affect what we think, and—just as importantly—what we think others think. What we will see as robust political discussions would be persona bots arguing with other persona bots.

Attacks like these add another wrinkle to that sort of scenario.

CI/CD is a recommended technique for DevOps teams and a best practice in agile methodology. CI/CD is a method for consistently delivering apps to clients by automating the app development phases. Continuous integration, continuous delivery, and continuous deployment are the key concepts. CI/CD adds continuous automation and monitoring throughout the whole application lifetime, from the […]… Read More

The post Everything You Need to Know About CI/CD and Security appeared first on The State of Security.