Twitter is having intermittent problems with its two-factor authentication system:

Not all users are having problems receiving SMS authentication codes, and those who rely on an authenticator app or physical authentication token to secure their Twitter account may not have reason to test the mechanism. But users have been self-reporting issues on Twitter since the weekend, and WIRED confirmed that on at least some accounts, authentication texts are hours delayed or not coming at all. The meltdown comes less than two weeks after Twitter laid off about half of its workers, roughly 3,700 people. Since then, engineers, operations specialists, IT staff, and security teams have been stretched thin attempting to adapt Twitter’s offerings and build new features per new owner Elon Musk’s agenda.

On top of that, it seems that the system has a new vulnerability:

A researcher contacted Information Security Media Group on condition of anonymity to reveal that texting “STOP” to the Twitter verification service results in the service turning off SMS two-factor authentication.

“Your phone has been removed and SMS 2FA has been disabled from all accounts,” is the automated response.

The vulnerability, which ISMG verified, allows a hacker to spoof the registered phone number to disable two-factor authentication. That potentially exposes accounts to a password-reset attack or account takeover through credential stuffing.

This is not a good sign.
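The failure mode is easy to sketch. Below is a minimal, hypothetical illustration (not Twitter's actual code) of the difference between honoring a carrier "STOP" keyword by disabling 2FA outright and the safer pattern of pausing message delivery while leaving security settings untouched:

```python
# Hypothetical sketch: why letting an unauthenticated inbound SMS change a
# security setting is dangerous, and a safer alternative.

accounts = {
    "+15551234567": {"sms_2fa_enabled": True, "sms_delivery_paused": False},
}

def handle_stop_unsafe(phone):
    # Vulnerable pattern: anyone who can spoof the sender number
    # can disable 2FA with a single text.
    accounts[phone]["sms_2fa_enabled"] = False

def handle_stop_safe(phone):
    # Safer pattern: respect the carrier opt-out by pausing delivery,
    # but leave the 2FA requirement itself untouched.  The user must
    # log in (and re-authenticate) to change 2FA settings.
    accounts[phone]["sms_delivery_paused"] = True

handle_stop_safe("+15551234567")
print(accounts["+15551234567"]["sms_2fa_enabled"])      # True: 2FA intact
print(accounts["+15551234567"]["sms_delivery_paused"])  # True: texts paused
```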

Cybersecurity teams continue to face challenges in safeguarding their networks. With susceptibility to cyberattacks increasing, organizations, including the U.S. Department of Defense, are taking a more proactive approach to realizing “zero trust.”

The Pentagon recently announced that a new zero-trust strategy will be revealed in the coming days. The strategy will expand the Pentagon’s approach to zero trust, incorporating more than a hundred activities organized into “pillars” that include applications, automation, and analytics. It aims to keep critical data secure within high-risk environments.

Officials have set a five-year deadline to implement effective zero-trust solutions. With the cyber capabilities of other nation-states continuously improving and evolving, the U.S. is increasingly susceptible to digital aggression. The United States aims to meet the cybersecurity challenge head-on by updating its “never trust, always verify” zero-trust approach.

So, how can these strategies be implemented across the private and public sectors?

To realize zero trust’s full potential, the federal government must bring the full scope of its authority and resources to bear on protecting and securing our national and economic assets. The policy of the U.S. administration sets the precedent for how organizations should work to prevent, detect, assess, and remediate cyber incidents.

Organizations can respond by aligning their current infrastructures with national cybersecurity initiatives by integrating the following tips:

Use Tools Designed to Achieve Visibility Across On-Premises and Attack Surfaces

“Last year, the White House’s Executive Order 14028, ‘Improving the Nation’s Cybersecurity,’ recognized the need to adopt zero-trust models across federal agencies. I am excited to see our national administration continue to acknowledge the sophistication of the threat landscape and implement this new zero-trust strategy that bears the full scope of the Department of Defense’s (DoD) authority and resources in protecting and securing our data environment,” shares Jeannie Warner, director of product marketing at Exabeam.

“A compromise could come at any point within the ecosystem, and more often than not it will come from an adversary using valid credentials. It’s clear that ‘watching the watchers,’ in security terms, is important. This is where Threat Detection, Investigation, and Response (TDIR) capabilities should be focused, and why any security operations team needs visibility into its identity management, security log management, and other threat detection tools across its on-premises and cloud attack surfaces.”
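As a rough, illustrative sketch of the kind of signal such visibility enables (not Exabeam's product, and all names and data here are hypothetical): flag logins that used valid credentials but arrived from a context the user has never been seen in before.

```python
# Illustrative sketch: detecting adversaries who hold valid credentials by
# comparing each login against a per-user baseline of known contexts.

from collections import defaultdict

# Baseline of (source_ip, device) pairs previously seen per user;
# in practice this would be built from historical log data.
baseline = defaultdict(set)
baseline["alice"] = {("10.0.0.5", "laptop-123")}

def score_login(user, source_ip, device):
    """Return an alert if a successful login comes from an unseen
    IP/device combination, even though the credentials were valid."""
    if (source_ip, device) in baseline[user]:
        return None
    return f"ALERT: {user} logged in from new context {source_ip}/{device}"

print(score_login("alice", "10.0.0.5", "laptop-123"))  # None: known context
print(score_login("alice", "203.0.113.9", "unknown"))  # alert fires
```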

Application and API Security

“The shortcoming in the current government strategies and directives related to Zero Trust is a complete absence of consideration for the applications that ride on the cloud and data center infrastructure that gets the majority of the ZT attention. In order to achieve Zero Trust, application security and API security can’t be left out of the equation,” shares Richard Bird, CSO of Traceable AI.

“Zero Trust without API security is simply not Zero Trust. If the energy, dollars, and effort to apply Zero Trust are entirely focused on the infrastructure and OS components of cloud, data center, or hybrid deployment patterns, the bad actors will simply move their efforts to the attack surface that isn’t conditioned to Zero Trust. In every organization and agency on the planet, that attack surface is APIs and the applications they interact with.”

“The last several months of exploits and breaches around the world clearly show that the US government, while on the right track in driving organizations and agencies to move to the Zero Trust framework, is missing substantial direction to those same organizations as it relates to applications and APIs. The framework today overly relies on notions such as privileged access management to achieve some semblance of Zero Trust type control for applications, but this approach has proven to be woefully inadequate for user populations outside of the technology workers who access those applications.”

The Proper Authentication of Digital Assets

The key to defending an organization is not placing inherent trust in perimeter-based security systems. That’s why authentication and authorization are critical aspects of zero-trust architecture. Integrating them within critical infrastructures verifies that the user accessing a system is who they claim to be (authentication) and determines what that user is permitted to do (authorization). This provides an extra layer of security in protecting critical assets.
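A minimal sketch of what per-request authorization looks like in practice, assuming a simple role-based policy (the resources and roles below are hypothetical):

```python
# Minimal sketch of per-request authorization in a zero-trust style:
# every access is evaluated against policy, and nothing is trusted
# merely for being "inside" the network perimeter.

POLICY = {
    # resource -> roles allowed to access it
    "billing-records": {"finance", "auditor"},
    "deploy-pipeline": {"sre"},
}

def authorize(user_roles, resource):
    """Grant access only if the user's verified roles intersect the
    policy for the resource; default-deny for unknown resources."""
    allowed = POLICY.get(resource, set())
    return bool(set(user_roles) & allowed)

print(authorize({"finance"}, "billing-records"))  # True
print(authorize({"finance"}, "deploy-pipeline"))  # False: least privilege
print(authorize({"sre"}, "unknown-resource"))     # False: default deny
```

The default-deny branch is the important design choice: a resource absent from the policy is unreachable, rather than implicitly open.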

“In today’s world, you cannot put your trust in any static, perimeter-based security system,” shares Gal Helemsi, CTO and CPO of PlainID. “The key to defending an organization from future cyberattacks is protecting the data and the applications by ensuring that even if a bad actor (which can sometimes be a federal employee) has gained access credentials, they don’t have automatic access to any or all data.

Let’s face it, zero-trust is the only way to secure a modern, decentralized enterprise, in which data and applications are accessed from anywhere by employees, customers, and partners.”

Implement the ‘Right’ Tools for Your Environment

Zero trust denotes cybersecurity paradigms that shift defenses from static, perimeter-based networks to focus on users, assets, and resources. Zero trust helps reduce security breaches by ensuring every access request is validated before a user is trusted with access to a given network. As a result, organizations rely on zero-trust architectures to govern how users and entities connect to organizational and agency resources. In building out robust architectures, organizations can operate under the principle of least privilege, granting each role and function only the access it requires.

With the rise of remote and hybrid working environments, it’s essential that organizations build Zero Trust strategies and tools that acutely align with their company’s infrastructure.

Justin McCarthy, co-founder and CTO of StrongDM, agrees that developing zero-trust strategies is an essential step toward mitigating cyber risk. He shares: “Zero Trust security assumes that a breach will inevitably occur, in addition to acknowledging that threats exist both inside and outside of the network. Because of this, it continuously scans for malicious behavior and restricts user access to what is necessary to complete the task. In addition, users (including potential bad actors) are prevented from navigating the network laterally and accessing any unrestricted data.

“Some may say that Zero Trust will hinder productivity, which could be the case if backend management processes and governance operations are granted manually. But it’s the opposite if you have the right tools to make it easy to grant access and audit access control. The result of Zero Trust architecture, especially when it comes to improving the nation’s cybersecurity is higher overall levels of security, easy accessibility, and reduced operational overhead.”

In addition, with companies moving toward data-centric processes, the volume of personally identifiable data is growing exponentially. This massive amount of data is directly linked to everyday people, who often use cloud-based systems to store critical information. This poses additional security risks.

While cybersecurity is a complex issue, a direct route to solving malicious attacks is to create strong guardrails around our sensitive data.

“Sensitive data compromise comes from cybercriminals using privileged credentials to access data repositories,” says Arti Raman, founder and CEO of Titaniam. “Traditional methods of data security, such as encryption-at-rest, fail to prevent data compromise because these controls cannot distinguish legitimate users from attackers with stolen credentials. One of the most effective solutions to eliminate data compromise and implement true zero trust for data is encryption-in-use, or data-in-use encryption.

“Using data-in-use encryption ensures data and IP are encrypted and protected even while they are being actively utilized, neutralizing all possible data-related leverage that attackers could gain and limiting the blast radius of cyberattacks. Encryption-in-use is one of the strongest and most effective guardrails that can be implemented toward zero-trust data security.”
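As a toy illustration of one building block often used alongside encryption-in-use (a hypothetical sketch, not Titaniam's implementation): records are stored encrypted, and equality lookups run against keyed blind indexes rather than plaintext, so an attacker with stolen database credentials sees only ciphertext and opaque digests. The `encrypt()` stub below stands in for a vetted cipher such as AES-GCM; do not use this sketch as-is.

```python
# Toy sketch of the blind-index pattern: queries run against keyed HMAC
# digests, so the data store never holds or receives plaintext values.

import hashlib
import hmac

INDEX_KEY = b"index-key-held-in-a-key-vault"  # hypothetical key location

def blind_index(value: str) -> str:
    # Deterministic keyed digest: supports equality lookups without
    # revealing the plaintext to whoever holds the database.
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

def encrypt(value: str) -> str:
    # Placeholder stub for a real cipher (e.g. AES-GCM with a vaulted key).
    return f"<ciphertext of {len(value)} bytes>"

store = {}

def insert(ssn: str, record: str):
    store[blind_index(ssn)] = encrypt(record)

def lookup(ssn: str):
    # The query itself never touches plaintext SSNs in the store.
    return store.get(blind_index(ssn))

insert("123-45-6789", "Jane Doe, account 42")
print(lookup("123-45-6789"))   # finds the (still encrypted) record
print(lookup("999-99-9999"))   # None: no match
```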

The post The Nature of Cybersecurity Defense: Pentagon To Reveal Updated Zero-Trust Cybersecurity Strategy & Guidelines appeared first on Cybersecurity Insiders.

By Jacob Ideskog, CTO at Curity

The adoption of Open Banking has increased rapidly over recent years and has had a revolutionary impact on financial institutions and on the experience consumers have when interacting with finance products. According to the OBIE, five million people are now using Open Banking in the UK, as the benefits of the new products and services begin to be recognized by consumers and businesses alike.

However, the rapid rise of Open Finance has also coincided with concerns about the compliance and security risks it poses. Curity’s latest report, ‘Facilitating the Future of Open Finance’, revealed that over 70% of organizations globally are concerned about security-related issues associated with Open Banking. It’s clear that this is a significant hurdle that still needs to be overcome if the adoption of Open Banking is to continue its rise.

The cybersecurity sector has the opportunity and means to alleviate fears and be at the forefront of the adoption of this revolutionary technology.

Addressing and Alleviating Security Concerns

A key concern among businesses is the extensive involvement of third-party providers that Open Finance requires and the heightened security risks associated with it; over 65% of organizations view this as a top security concern. Additionally, 62% of organizations have concerns about outdated security systems that don’t support securely sharing data.

However, such concerns, whilst understandable, don’t recognize the current capabilities of available security solutions, such as multi-factor authentication, or the implementation of government regulations such as PSD2 in the EU. Crucial elements of the Open Banking experience are Application Programming Interfaces (APIs). APIs enable the efficient exchange of data between applications, services, and customers, and can be used safely as long as access to them is properly secured. Acting as the backbone of Open Banking, applications built on correctly secured APIs allow backend communication between banks and financial institutions without the need to re-enter or re-share login details every time.
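To make "correctly secured access" concrete, here is a hedged sketch of the sort of token check that sits in front of an API: the client presents a signed access token, and the API verifies the signature and expiry before serving data. It uses a shared HMAC secret (HS256) for brevity; real Open Banking deployments typically use asymmetric keys issued by an authorization server, and all names and secrets below are illustrative.

```python
# Illustrative sketch: minting and verifying an HS256 JWT-style access
# token with only the standard library.

import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # hypothetical; never hard-code in practice

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def mint_token(claims: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(SECRET, header + b"." + payload, hashlib.sha256).digest())
    return (header + b"." + payload + b"." + sig).decode()

def verify_token(token: str):
    header, payload, sig = token.split(".")
    signing_input = (header + "." + payload).encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered, or signed with the wrong key
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():
        return None  # expired
    return claims

token = mint_token({"sub": "account-api-client", "exp": time.time() + 300})
print(verify_token(token)["sub"])  # the verified client identity

# Any modification to the token breaks the signature check:
tampered = token[:-1] + ("A" if token[-1] != "A" else "B")
print(verify_token(tampered))      # None
```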

With regard to outdated security systems, investment will be crucial in addressing this issue. Reassuringly, 83% of all organizations surveyed do plan to invest more into Open Banking this year than the previous 12 months. This will not only allow them to update their security systems to meet the standards that Open Banking requires, but will also improve the customer experience and reassure potential users.

The foundations of Open Banking are rooted in giving consumers choice over financial products and over how they control their finances. Providing a service that is interoperable between brokers, banks, and third-party financial institutions, so that all parties are equipped with the information they need, is therefore vital to bettering the customer experience. Furthermore, investment in the deployment of modern authentication methods will be a key aspect of addressing consumer hesitancy over security concerns and ensuring consumer buy-in.

Communication will also play a crucial role, both internally and externally. As mentioned previously, many of the concerns of both financial institutions and consumers are either already accounted for by existing security systems or have solutions that can be implemented immediately. It’s vital to improve education around Open Banking to alleviate fears that, in some cases, are unfounded among businesses and consumers alike.

The role of the cybersecurity industry

Whilst there are clear concerns and issues amongst organizations across the globe, there is undeniably significant momentum behind the adoption of Open Banking. With almost three quarters of organizations surveyed planning to introduce Open Banking in the next 18 months, cybersecurity professionals’ focus should be on ensuring this transition is as smooth as possible.

This momentum and clear intention from businesses to adopt and invest in Open Banking provides the cyber security sector with a significant opportunity to be at the forefront of this banking revolution. It will be vital for the industry to work closely alongside financial institutions to support this change and mitigate risk at every turn.

We can expect the adoption of Open Banking to continue in the short term, but its long term health and adoption is absolutely dependent on the ability of the industry to address the security concerns and hesitancy that exist.

There’s potential for Open Banking to have a revolutionary impact on the way businesses and consumers approach their finances, and more and more institutions are set to incorporate it into their business. However, despite the clear benefits associated with Open Finance, this cannot come at the expense of individuals’ security or the protection of their personal and private data. This is why the cybersecurity sector plays such an important role: if the industry doesn’t effectively mitigate risk and alleviate fears, then no matter how much enthusiasm and momentum there is behind Open Banking, it will not realize its full potential.

The post Security and the Future of Open Finance: How to Improve Adoption Globally appeared first on Cybersecurity Insiders.

ACS Technologies (ACST), a leading provider of church management software and services in the United States, has announced its integration of the Curity Identity Server across its client-facing products.

The integration of the Curity Identity Server into ACST’s products is driven by a desire to provide high-level security to end-users, with Curity enabling seamless identity and access management (IAM) and login, and providing a number of different multi-factor authentication (MFA) flows to fit business needs. Previously, ACST relied on a home-grown solution, which is currently being phased out and replaced by a cloud-native deployment of the Curity Identity Server in AWS.

By utilising the Curity Identity Server, ACST will be able to concentrate on its product development instead of spending time and resources building IAM and MFA infrastructure in-house. The integration of and investment in Curity’s easy-to-use, low-cost product demonstrates ACST’s commitment to end-user security and its dedication to continually improving its product for end-users.

On choosing Curity, Robert Gettys, Chief Product and Technology Officer at ACS Technologies, says, “We wanted to invest in the right security to help us allocate time to meeting the unique needs of churches across the country. Thanks to the excellent capabilities of the Curity Identity Server, we’ll be able to concentrate on developing our core products to serve our ministry partners rather than attempting to build IAM and MFA ourselves. With Curity’s support, we’ll enhance our customer offering and be better positioned than ever to build the Kingdom.”

Curity’s CEO, Travis Spencer, comments, “We’re really excited to be working with ACS Technologies. I’m confident that our product’s extensive features and standards-based approach will enable ACST to achieve their goal of stepping up security for end-users while maintaining ease of use.”

The partnership, launched earlier this year, will be rolled out across ACST’s products and services.

About Curity

Curity is a leading supplier of API-driven identity management, providing unified security for digital services. Curity Identity Server is used for logging in and securing millions of users’ access to web and mobile applications as well as APIs and microservices. Curity Identity Server is built upon open standards and designed for development and operations. We enjoy the trust of large organizations in financial services, telecom, retail, energy, and government services who have chosen Curity for their enterprise-grade API security needs. Visit https://curity.io/.

The post ACS Technologies selects Curity to provide seamless authentication across its end-user products appeared first on Cybersecurity Insiders.

Authentication as a baseline security control is essential for organizations to know who and what is accessing corporate resources and assets.  The Cybersecurity and Infrastructure Security Agency (CISA) states that authentication is the process of verifying that a user’s identity is genuine. In this climate of advanced cyber threats and motivated cyber criminals, organizations need […]… Read More

The post Strong Authentication Considerations for Digital, Cloud-First Businesses appeared first on The State of Security.

Most of us already know the basic principle of authentication, which, in its simplest form, helps us to identify and verify a user, process, or account. In an Active Directory environment, this is commonly done through the use of an NTLM hash. When a user wants to access a network resource, such as a file […]… Read More

The post How to Prevent High Risk Authentication Coercion Vulnerabilities appeared first on The State of Security.

Here’s a phishing campaign that uses a man-in-the-middle attack to defeat multi-factor authentication:

Microsoft observed a campaign that inserted an attacker-controlled proxy site between the account users and the work server they attempted to log into. When the user entered a password into the proxy site, the proxy site sent it to the real server and then relayed the real server’s response back to the user. Once the authentication was completed, the threat actor stole the session cookie the legitimate site sent, so the user doesn’t need to be reauthenticated at every new page visited. The campaign began with a phishing email with an HTML attachment leading to the proxy server.
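The reason stealing the session cookie works is structural: MFA is checked once at login, and every subsequent request is authenticated by the cookie alone. A minimal, hypothetical sketch of that server-side logic:

```python
# Hypothetical sketch of why a stolen session cookie defeats MFA: after
# login, possession of the cookie alone is treated as proof of identity.

import secrets

sessions = {}  # cookie value -> authenticated user

def login(user, password_ok, mfa_ok):
    # MFA is checked exactly once, at login time.
    if password_ok and mfa_ok:
        cookie = secrets.token_hex(16)
        sessions[cookie] = user
        return cookie
    return None

def handle_request(cookie):
    # Every later request is authenticated by the cookie alone --
    # no password, no MFA.  A stolen cookie is a stolen session.
    return sessions.get(cookie)

victim_cookie = login("victim@example.com", True, True)
# An attacker replaying the captured cookie is indistinguishable
# from the victim here:
print(handle_request(victim_cookie))
```

Mitigations include short session lifetimes, binding sessions to client properties, and phishing-resistant factors such as FIDO2/WebAuthn, which tie the credential to the legitimate site's origin so a proxy never obtains a replayable secret.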

Apple has previewed a new feature which aims to harden high-risk users from the serious threat of being spied upon by enemy states and intelligence agencies. “Lockdown Mode” is scheduled to arrive later this year with the release of Apple iOS 16 and macOS Ventura. It’s an optional feature for users who believe their computers […]… Read More

The post Lockdown Mode: Apple to protect users from targeted spyware attacks appeared first on The State of Security.

Thought experiment story of someone who lost everything in a house fire, and now can’t log into anything:

But to get into my cloud, I need my password and 2FA. And even if I could convince the cloud provider to bypass that and let me in, the backup is secured with a password which is stored in—you guessed it—my Password Manager.

I am in cyclic dependency hell. To get my passwords, I need my 2FA. To get my 2FA, I need my passwords.

It’s a one-in-a-million story, and one that’s hard to take into account in system design.

This is where we reach the limits of the “Code Is Law” movement.

In the boring analogue world—I am pretty sure that I’d be able to convince a human that I am who I say I am. And, thus, get access to my accounts. I may have to go to court to force a company to give me access back, but it is possible.

But when things are secured by an unassailable algorithm—I am out of luck. No amount of pleading will let me in without the correct credentials. The company which provides my password manager simply doesn’t have access to my passwords. There is no-one to convince. Code is law.

Of course, if I can wangle my way past security, an evil-doer could also do so.

So which is the bigger risk?

  • An impersonator who convinces a service provider that they are me?
  • A malicious insider who works for a service provider?
  • Me permanently losing access to all of my identifiers?

I don’t know the answer to that.

Those risks are in the order of most common to least common, but that doesn’t necessarily mean that they are in risk order. They probably are, but then we’re left with no good way to handle someone who has lost all their digital credentials—computer, phone, backup, hardware token, wallet with ID cards—in a catastrophic house fire.

I want to remind readers that this isn’t a true story. It didn’t actually happen. It’s a thought experiment.