ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.

But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades.

Threats of this sort seem urgent and disturbing because they’re salient. We know what to look for, and we can easily imagine their effects.

The truth is, the future will be much more interesting. And even some of the most stupendous potential impacts of AI on politics won’t be all bad. We can draw some fairly straight lines between the current capabilities of AI tools and real-world outcomes that, by the standards of current public understanding, seem truly startling.

With this in mind, we propose six milestones that will herald a new era of democratic politics driven by AI. All feel achievable—perhaps not with today’s technology and levels of AI adoption, but very possibly in the near future.

Good benchmarks should be meaningful, representing significant outcomes that come with real-world consequences. They should be plausible; they must be realistically achievable in the foreseeable future. And they should be observable—we should be able to recognize when they’ve been achieved.

Worries about AI swaying an election will very likely fail the observability test. While the risk of election manipulation through the robotic promotion of a candidate’s or party’s interests is legitimate, elections are massively complex. Just as debate continues to rage over why and how Donald Trump won the presidency in 2016, we’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.

Thinking further into the future: Could an AI candidate ever be elected to office? In the world of speculative fiction, from The Twilight Zone to Black Mirror, there is growing interest in the possibility of an AI or technologically assisted, otherwise-not-traditionally-eligible candidate winning an election. In an era where deepfaked videos can misrepresent the views and actions of human candidates and human politicians can choose to be represented by AI avatars or even robots, it is certainly possible for an AI candidate to mimic the media presence of a politician. Virtual politicians have received votes in national elections, for example in Russia in 2017. But this doesn’t pass the plausibility test. The voting public and legal establishment are likely to accept more and more automation and assistance supported by AI, but the age of non-human elected officials is far off.

Let’s start with some milestones that are already on the cusp of reality. These are achievements that seem well within the technical scope of existing AI technologies and for which the groundwork has already been laid.

Milestone #1: The acceptance by a legislature or agency of testimony or a comment generated by, and submitted under the name of, an AI.

Arguably, we’ve already seen legislation drafted by AI, albeit under the direction of human users and introduced by human legislators. After some early examples of bills written by AIs were introduced in Massachusetts and the US House of Representatives, many major legislative bodies have had their “first bill written by AI,” “used ChatGPT to generate committee remarks,” or “first floor speech written by AI” events.

Many of these bills and speeches are more stunt than serious, and they have received more criticism than consideration. They are short, have trivial levels of policy substance, or were heavily edited or guided by human legislators (through highly specific prompts to large language model-based AI tools like ChatGPT).

The interesting milestone along these lines will be the acceptance of testimony on legislation, or a comment submitted to an agency, drafted entirely by AI. To be sure, a large fraction of all writing going forward will be assisted by—and will truly benefit from—AI assistive technologies. So to avoid making this milestone trivial, we have to add the second clause: “submitted under the name of the AI.”

What would make this benchmark significant is the submission under the AI’s own name; that is, the acceptance by a governing body of the AI as proffering a legitimate perspective in public debate. Regardless of the public fervor over AI, this one won’t take long. The New York Times has published a letter under the name of ChatGPT (responding to an opinion piece we wrote), and legislators are already turning to AI to write high-profile opening remarks at committee hearings.

Milestone #2: The adoption of the first novel legislative amendment to a bill written by AI.

Moving beyond testimony, there is an immediate pathway for AI-generated policies to become law: microlegislation. This involves making tweaks to existing laws or bills that are tuned to serve some particular interest. It is a natural starting point for AI because it’s tightly scoped, involving small changes guided by a clear directive associated with a well-defined purpose.

By design, microlegislation is often implemented surreptitiously. It may even be filed anonymously within a deluge of other amendments to obscure its intended beneficiary. For that reason, microlegislation is often bad for society, and its stealth makes it ripe for exploitation by generative AI, which would otherwise face heavy scrutiny from a polity on guard against the risks posed by AI.

Milestone #3: AI-generated political messaging outscores campaign consultant recommendations in poll testing.

Some of the most important near-term implications of AI for politics will happen largely behind closed doors. Like everyone else, political campaigners and pollsters will turn to AI to help with their jobs. We’re already seeing campaigners turn to AI-generated images to manufacture social content and pollsters simulate results using AI-generated respondents.

The next step in this evolution is political messaging developed by AI. A mainstay of the campaigner’s toolbox today is the message testing survey, where a few alternate formulations of a position are written down and tested with audiences to see which will generate more attention and a more positive response. Just as an experienced political pollster can anticipate effective messaging strategies pretty well based on observations from past campaigns and their impression of the state of the public debate, so can an AI trained on reams of public discourse, campaign rhetoric, and political reporting.
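As a purely hypothetical illustration of what reaching Milestone #3 would look like in practice, here is a minimal Python sketch of a message testing comparison; the messages, respondent ratings, and scoring scheme are all invented:

```python
# Minimal sketch of a message testing survey, as in Milestone #3:
# compare audience ratings of consultant-written vs. AI-generated
# message variants. All messages and ratings here are invented.
from statistics import mean

poll_results = {
    "consultant": {
        "Protect Main Street from unfair taxes": [6, 7, 5, 6, 7],
        "Lower taxes, stronger families": [7, 6, 6, 7, 6],
    },
    "ai": {
        "Keep more of every paycheck you earn": [8, 7, 8, 6, 8],
    },
}

def best(variants):
    """Return the variant with the highest mean respondent rating."""
    top = max(variants, key=lambda m: mean(variants[m]))
    return top, mean(variants[top])

consultant_msg, consultant_score = best(poll_results["consultant"])
ai_msg, ai_score = best(poll_results["ai"])

print(f"Best consultant message: {consultant_msg!r} ({consultant_score:.1f})")
print(f"Best AI message:         {ai_msg!r} ({ai_score:.1f})")
print("Milestone #3 reached" if ai_score > consultant_score else "Not yet")
```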

With these near-term milestones firmly in sight, let’s look further to some truly revolutionary possibilities. While these concepts may have seemed absurd just a year ago, they are increasingly conceivable with either current or near-future technologies.

Milestone #4: AI creates a political party with its own platform, attracting human candidates who win elections.

While an AI is unlikely to be allowed to run for and hold office, it is plausible that one may be able to found a political party. An AI could generate a political platform calculated to attract the interest of some cross-section of the public and, acting independently or through a human intermediary (hired help, like a political consultant or legal firm), could register formally as a political party. It could collect signatures to win a place on ballots and attract human candidates to run for office under its banner.

A big step in this direction has already been taken, via the campaign of the Danish Synthetic Party in 2022. An artist collective in Denmark created an AI chatbot to interact with human members of its community on Discord, exploring political ideology in conversation with them and on the basis of an analysis of historical party platforms in the country. All this happened with earlier generations of general-purpose AI, not current systems like ChatGPT. However, the party failed to receive enough signatures to earn a spot on the ballot and therefore did not win parliamentary representation.

Future AI-led efforts may succeed. One could imagine that a generative AI with skills at or beyond the level of today’s leading technologies could formulate a set of policy positions targeted to build support among people of a specific demographic, or even an effective consensus platform capable of attracting broad-based support. Particularly in a European-style multiparty system, we can imagine a new party with a strong news hook—an AI at its core—winning attention and votes.

Milestone #5: AI autonomously generates profit and makes political campaign contributions.

Let’s turn next to the essential capability of modern politics: fundraising. “An entity capable of directing contributions to a campaign fund” might be a realpolitik definition of a political actor, and AI is potentially capable of this.

Like a human, an AI could conceivably generate contributions to a political campaign in a variety of ways. It could take a seed investment from a human controlling the AI and invest it to yield a return. It could start a business that generates revenue. There is growing interest and experimentation in auto-hustling: AI agents that set about autonomously growing businesses or otherwise generating profit. While ChatGPT-generated businesses may not yet have taken the world by storm, this possibility is in the same spirit as the algorithmic agents powering modern high-speed trading and so-called autonomous finance capabilities that are already helping to automate business and financial decisions.

Or, like most political entrepreneurs, AI could generate political messaging to convince humans to spend their own money on a defined campaign or cause. The AI would likely need to have some humans in the loop and register its activities with the government (in the US context, as officers of a 501(c)(4) or political action committee).

Milestone #6: AI achieves a coordinated policy outcome across multiple jurisdictions.

Lastly, we come to the most meaningful of impacts: achieving outcomes in public policy. Even if AI cannot—now or in the future—be said to have its own desires or preferences, it could be programmed by humans to pursue a goal, such as lowering taxes or rolling back a market regulation.

An AI has many of the same tools humans use to achieve these ends. It may advocate, formulating messaging and promoting ideas through digital channels like social media posts and videos. It may lobby, directing ideas and influence to key policymakers, even writing legislation. It may spend; see milestone #5.

The “multiple jurisdictions” piece is key to this milestone. The passage of a single law may reasonably be attributed to myriad factors: a charismatic champion, a political movement, a change in circumstances. The influence of any one actor, such as an AI, will be more demonstrable if it succeeds simultaneously in many different places. And the digital scalability of AI gives it a special advantage in achieving these kinds of coordinated outcomes.

The greatest challenge to most of these milestones is their observability: will we know it when we see it? The first campaign consultant whose ideas lose out to an AI may not be eager to report that fact. Neither will the campaign. Regarding fundraising, it’s hard enough for us to track down the human actors who are responsible for the “dark money” contributions controlling much of modern political finance; will we know if a future dominant force in fundraising for political action committees is an AI?

We’re likely to observe some of these milestones indirectly. At some point, perhaps politicians’ dollars will start migrating en masse to AI-based campaign consultancies and, eventually, we may realize that political movements sweeping across states or countries have been AI-assisted.

While the progression of technology is often unsettling, we need not fear these milestones. A new political platform that wins public support is itself a neutral proposition; it may lead to good or bad policy outcomes. Likewise, a successful policy program may or may not be beneficial to one group of constituents or another.

We think the six milestones outlined here are among the most viable and meaningful upcoming interactions between AI and democracy, but they are hardly the only scenarios to consider. The point is that our AI-driven political future will involve far more than deepfaked campaign ads and manufactured letter-writing campaigns. We should all be thinking more creatively about what comes next and be vigilant in steering our politics toward the best possible ends, no matter their means.

This essay was written with Nathan Sanders, and previously appeared in MIT Technology Review.

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people without our knowledge and consent, without licensing it, and pocketing the proceeds.

You are owed profits for your data that powers today’s AI, and we have a way to make that happen. We call it the AI Dividend.

Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech companies produce output from generative AI that was trained on public data, they would pay a tiny licensing fee, by the word or pixel or relevant unit of data. Those fees would go into the AI Dividend fund. Every few months, the Commerce Department would send out the entirety of the fund, split equally, to every resident nationwide. That’s it.

There’s no reason to complicate it further. Generative AI needs a wide variety of data, which means all of us are valuable—not just those of us who write professionally, or prolifically, or well. Figuring out who contributed to which words the AIs output would be both challenging and invasive, given that even the companies themselves don’t quite know how their models work. Paying the dividend to people in proportion to the words or images they create would just incentivize them to create endless drivel, or worse, use AI to create that drivel. The bottom line for Big Tech is that if their AI model was created using public data, they have to pay into the fund. If you’re an American, you get paid from the fund.

Under this plan, hobbyists and American small businesses would be exempt from fees. Only Big Tech companies—those with substantial revenue—would be required to pay into the fund. And they would pay at the point of generative AI output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party services via Application Programming Interfaces.

Our proposal also includes a compulsory licensing plan. By agreeing to pay into this fund, AI companies will receive a license that allows them to use public data when training their AI. This won’t supersede normal copyright law, of course. If a model starts producing copyrighted material beyond fair use, that’s a separate issue.

Using today’s numbers, here’s what it would look like. The licensing fee could be small, starting at $0.001 per word generated by AI. A similar type of fee would be applied to other categories of generative AI outputs, such as images. That’s not a lot, but it adds up. Since most of Big Tech has started integrating generative AI into products, these fees would mean an annual dividend payment of a couple hundred dollars per person.
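For concreteness, here is a back-of-the-envelope version of that arithmetic in Python. The annual word volume below is our own assumption, chosen only to show how a per-word fee scales into a per-person dividend of roughly that size:

```python
# Back-of-the-envelope AI Dividend arithmetic. Every input below is an
# illustrative assumption, not a figure from the proposal itself.
FEE_PER_WORD_USD = 0.001   # proposed starting licensing fee per generated word
WORDS_PER_YEAR = 66e12     # assumed annual AI-generated words across Big Tech
US_POPULATION = 332e6      # approximate number of US residents
PAYOUTS_PER_YEAR = 4       # "every few months" read here as quarterly

annual_fund = FEE_PER_WORD_USD * WORDS_PER_YEAR
per_person_annual = annual_fund / US_POPULATION

print(f"Annual fund size:           ${annual_fund / 1e9:.0f} billion")
print(f"Annual dividend per person: ${per_person_annual:.2f}")
print(f"Per-payout check:           ${per_person_annual / PAYOUTS_PER_YEAR:.2f}")
```

Doubling or halving the assumed word volume scales the dividend linearly, which is why the fee rate, not the distribution mechanism, is the real policy lever.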

The idea of paying you for your data isn’t new, and some companies have tried to do it themselves for users who opted in. And the idea of the public being repaid for use of their resources goes back to well before Alaska’s oil fund. But generative AI is different: It uses data from all of us whether we like it or not, it’s ubiquitous, and it’s potentially immensely valuable. It would cost Big Tech companies a fortune to create a synthetic equivalent to our data from scratch, and synthetic data would almost certainly result in worse output. They can’t create good AI without us.

Our plan would apply to generative AI used in the US, and it would issue a dividend only to Americans. Other countries can create their own versions, applying a similar fee to AI used within their borders. Just as an American company collects VAT for services sold in Europe but not at home, each country can manage its AI policy independently.

Don’t get us wrong; this isn’t an attempt to strangle this nascent technology. Generative AI has interesting, valuable, and possibly transformative uses, and this policy is aligned with that future. Even with the fees of the AI Dividend, generative AI will be cheap and will only get cheaper as technology improves. There are also risks—both everyday and esoteric—posed by AI, and the government may need to develop policies to remedy any harms that arise.

Our plan can’t guarantee there are no downsides to the development of AI, but it would ensure that all Americans share in the upsides—particularly since this new technology isn’t possible without our contribution.

This essay was written with Barath Raghavan, and previously appeared on Politico.com.

The number of cybersecurity incidents has risen sharply over the past two years: The rushed digitization projects of the pandemic years left many organizations’ perimeters in shambles. Now, Russia’s war of aggression – which might go down in history as the first truly hybrid war, fought fiercely on both traditional and cyber battlefields – is threatening these vulnerable infrastructures. This has not gone unnoticed by the political players and federal agencies of the transatlantic alliance: On both sides of the Atlantic, administrations are vehemently advocating holistic security approaches, be it in White House Executive Orders, the compendia of the German BSI (Federal Office for Information Security), or the British National Cyber Strategy.

A common component of these ambitious government frameworks is their holistic approach to cyber defense, leveraging principles such as Zero Trust, Least Privilege, and Security-by-Design to help companies build stronger and more resilient environments and applications – and thus protect their assets and strengthen the overall economy. However, most IT teams in the private sector are understaffed, and many lack the cybersecurity expertise required to make these kinds of fundamental changes to their IT and security stacks. This could prove fatal, especially for companies in five industries that represent especially attractive targets for attackers. Let’s take a look at these sectors and discuss how players in each can ensure a high degree of protection by following the Center for Internet Security’s (CIS) Critical Security Controls and Privileged Access Management (PAM) recommendations.

Financial Services Industry

The financial services industry has always been a prime target of cybercriminal activity. The attacks are usually financially motivated. In the worst case, a successful breach might even grant the attackers direct access to the deposits of bank customers and investors. In addition, most financial institutions also manage vast amounts of sensitive, highly valuable data for their customers: from personal financial data and business-critical information to insider information or data from data-driven businesses.

To exacerbate matters, the financial sector is currently undergoing a dynamic, if not disruptive, digital transformation: An agile swarm of aggressive young challenger banks is setting itself apart from the traditional market with innovative digital service offerings, forcing established institutions to digitize at full speed as well. All of this is rapidly increasing the dependency on technology and data across the industry, and the growing attack surfaces offer hackers countless new attack vectors.

Healthcare

The healthcare industry has also been one of the top targets for cybercriminals for many years. After all, healthcare providers’ servers arguably hold the most sensitive and tightly regulated data in the world – and this data is of enormous value.

According to recent studies, healthcare saw a 200% year-over-year increase in cyberattacks in the first pandemic year alone. At a staggering 97%, web application and application-specific attacks accounted for the lion’s share of malicious activity. This can be attributed to the newly opened network infrastructures: During the pandemic, both medical staff and patients have increasingly started to access central resources as part of telemedicine concepts, and while this often improves patient care, it also creates additional points of attack.

In addition to the ubiquitous identity theft and ransomware attacks, cyber reconnaissance is playing an increasingly important role in healthcare institutions and healthcare research. A prime example is the recent attack on the European Medicines Agency (EMA), where attackers illegally accessed confidential vaccine documents.

Construction Industry

Let us have a look at the most unexpected entry on the list: According to several recent studies (e.g., the “Hiscox Cyber Readiness Report 2021” by specialist insurer Hiscox and Forrester Consulting), almost half (46%) of construction companies have been the victims of a cyberattack.

Even though many experts believe that the construction industry has been very reluctant to digitize, there is no doubt that more and more business processes are being shifted to the IT world. And as is always the case when digitizing, caution is advised: Anyone who is working with construction plans, project evaluations, and other confidential information needs to apply due diligence to avoid damage and financial losses.

The example of French construction company Ingérop illustrates how big the damage potential in the construction industry really is: In 2018, around 65 gigabytes of data were stolen from Ingérop via a German server – including a large number of documents from critical infrastructure facilities such as nuclear power plants and nuclear waste repositories, high-security prisons, and public transport networks, not to mention personal data from over 1,200 employees.

IT and Telecommunications Industry

The recent cloud and digitization boom has permanently changed the ICT industry and made it much more relevant, but also more complex. Multiple surveys document that a vast majority of IT executives worldwide consider the sprawling complexity of the tech stack a major problem in their organization. They also expect cybercrime to increase in 2022: With the rapid rise of mobile endpoints, smart IoT devices, and open APIs, the volume and value of data processed worldwide will increase significantly, and companies’ attack surfaces will continue to grow. ICT companies must therefore take care not only to advance their products and infrastructures but also to continuously optimize their security stacks.

Small & Medium Businesses

Last year’s digitization boom has fundamentally changed small and medium-sized companies: To maintain business continuity during the pandemic, extensive investments in new digital equipment were required – think hybrid workplaces – investments that could not be postponed and were often supported by government digitization initiatives. However, these digitization projects were rarely accompanied by similarly ambitious security investments, so there is a lot of catching up to do in terms of cybersecurity.

While most large companies employ dedicated staff or entire departments for cybersecurity, SMEs are often inadequately protected due to a lack of resources: Only about half of them have access to well-rounded in-house security experts. For attackers, this naturally represents an attractive target, the proverbial “path of least resistance”.

So, SMEs have their work cut out for them: Despite their limited budgets, they need to mitigate potential attack vectors as comprehensively as possible. This also means they must prepare for the worst-case scenario – a successful breach – by preventing lateral movement through their network.

Privileged Access Management for Secure Access
As different as the five industries may be, the majority of cyberattacks follow the same pattern: First, the attackers gain access to the network, often by stealing or phishing credentials. Then, they move laterally from system to system, escalating their access rights until they find the company’s crown jewels. These are then stolen, encrypted, or destroyed – depending on what promises the highest profit.

The only real protection against these kinds of attacks is stringent Privileged Access Management (PAM), specifically for privileged accounts with far-reaching rights. The foundation of this strategy is the so-called least privilege principle, which is also an important component of guidance from Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), as well as from the German BSI and the British National Cyber Strategy: Authenticated users are granted only the minimum privileges needed to fulfill their current task, and only for a limited period. A robust PAM solution should also support strong multi-factor authentication (MFA) and a seamless password management strategy, e.g., with automated password updates for network accounts and the storage of critical credentials in secure vaults. This allows IT teams to successfully restrict access to critical data such as infrastructure accounts, DevOps access, or SSH key pairs. For optimal protection against social engineering, Red Team exercises, advanced audits, and dedicated employee training have also proven effective.
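To make the least privilege principle concrete, here is a minimal, vendor-neutral Python sketch of just-in-time privilege granting. All class and method names are hypothetical; a production PAM solution would layer MFA, approval workflows, credential vaulting, and audit logging on top:

```python
# Illustrative sketch of just-in-time, least-privilege access grants.
# Names are hypothetical; this is not any real PAM product's API.
import time

class PrivilegeGrant:
    """A time-boxed grant covering only the rights needed for one task."""
    def __init__(self, user, rights, ttl_seconds):
        self.user = user
        self.rights = frozenset(rights)
        self.expires_at = time.time() + ttl_seconds

    def allows(self, right):
        return right in self.rights and time.time() < self.expires_at

class AccessBroker:
    """Issues short-lived grants instead of standing admin privileges."""
    def __init__(self):
        self._grants = {}

    def request(self, user, task_rights, ttl_seconds=900):
        # A real deployment would require MFA and approval at this step.
        grant = PrivilegeGrant(user, task_rights, ttl_seconds)
        self._grants[user] = grant
        return grant

    def check(self, user, right):
        grant = self._grants.get(user)
        return bool(grant and grant.allows(right))

broker = AccessBroker()
broker.request("alice", {"db:read"}, ttl_seconds=600)
assert broker.check("alice", "db:read")       # within scope and time window
assert not broker.check("alice", "db:admin")  # never requested, so denied
```

Because every grant expires automatically, a stolen credential yields only a narrow, short-lived foothold rather than standing administrative access.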

CIS Critical Security Controls
While most organizations have some PAM components in place, many lack a comprehensive strategy that addresses the issue holistically and offers full protection. This is why the non-profit Center for Internet Security (CIS) provides a set of holistic best practices through its regularly updated Critical Security Controls (CSC) framework. The framework, comprising 18 controls in its current eighth edition, helps companies put every aspect of their cybersecurity to the test. Particularly relevant for KRITIS-regulated (critical infrastructure) companies: The current edition puts a strong focus on the topics of “Access Control Management” and “Privileged Access Management” and includes multiple actionable recommendations for security practitioners to protect their privileged accounts and implement a consistent cybersecurity strategy.

Conclusion
As recently as March 21, 2022, Joe Biden explicitly warned about Russian cyberattacks and called on companies to “harden your cyber defenses immediately”. The powerful choice of words underscores the high level of risk that political decision-makers currently perceive. Cyberattacks have been on the rise for many years, but both the pandemic and the war could drastically accelerate the threat level. Organizations looking to ensure safe and resilient operations need to rethink their cybersecurity approach and position themselves more securely in cyberspace. This is especially true for enterprises from the financial, healthcare, construction, and ICT sectors, as well as SMEs. These five are among the prime targets and need to be aware of the value of their assets and data. Implementing a holistic PAM strategy is a very effective and quick measure to improve the security posture. In the long term, however, companies need to revise their entire security stack along current best practices – and thus set the course for fail-safe and resilient business operations with low operational risk.

The post Policy Recommendations for a Holistic Cybersecurity: Five Industries Under Attack, and What They Should Do? appeared first on Cyber Insights.
