A group of Swiss researchers have published an impressive security analysis of Threema.

We provide an extensive cryptographic analysis of Threema, a Swiss-based encrypted messaging application with more than 10 million users and 7000 corporate customers. We present seven different attacks against the protocol in three different threat models. As one example, we present a cross-protocol attack which breaks authentication in Threema and which exploits the lack of proper key separation between different sub-protocols. As another, we demonstrate a compression-based side-channel attack that recovers users’ long-term private keys through observation of the size of Threema encrypted back-ups. We discuss remediations for our attacks and draw three wider lessons for developers of secure protocols.
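The compression side-channel the authors describe follows the same pattern as CRIME-style attacks: when attacker-influenced data is compressed together with a secret before encryption, the ciphertext length reveals whether a guess matches the secret. Below is a minimal sketch of the underlying principle only; the field names and key format are invented for illustration and are not Threema's actual backup format:

```python
import zlib

# Hypothetical stand-in for a long-term private key stored in a backup.
SECRET = "private_key=9f8a7b6c5d4e3f2a1b0c9d8e7f6a5b4c"

def observed_backup_size(attacker_controlled: str) -> int:
    """Length of the compressed backup body. With a length-preserving
    cipher, this size is visible to anyone who can observe the
    encrypted backup, even though the contents are not."""
    plaintext = attacker_controlled + "\n" + SECRET
    return len(zlib.compress(plaintext.encode()))

# A field that repeats the secret compresses better (the repeated bytes
# become a single back-reference), so the backup is smaller. Size alone
# distinguishes a correct guess from a wrong one.
correct = observed_backup_size("nickname=private_key=9f8a7b6c5d4e3f2a1b0c9d8e7f6a5b4c")
wrong = observed_backup_size("nickname=private_key=0f1e2d3c4b5a69788796a5b4c3d2e1f0")
assert correct < wrong
```

By refining such guesses incrementally, an observer who sees only backup sizes can recover a secret piece by piece, which is why secrets should never be compressed alongside attacker-controlled data.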

From a news article:

Threema has more than 10 million users, including the Swiss government, the Swiss army, German Chancellor Olaf Scholz, and other politicians in that country. Threema’s developers advertise it as a more secure alternative to Meta’s WhatsApp messenger. It’s among the top paid Android apps in Switzerland, Germany, Austria, Canada, and Australia. The app uses a custom-designed encryption protocol in contravention of established cryptographic norms.

The company is performing the usual denials and deflections:

In a web post, Threema officials said the vulnerabilities applied to an old protocol that’s no longer in use. It also said the researchers were overselling their findings.

“While some of the findings presented in the paper may be interesting from a theoretical standpoint, none of them ever had any considerable real-world impact,” the post stated. “Most assume extensive and unrealistic prerequisites that would have far greater consequences than the respective finding itself.”

Left out of the statement is that the protocol the researchers analyzed is old because they disclosed the vulnerabilities to Threema, and Threema updated it.

This is a really interesting paper that discusses what the authors call the Decoupling Principle:

The idea is simple, yet previously not clearly articulated: to ensure privacy, information should be divided architecturally and institutionally such that each entity has only the information they need to perform their relevant function. Architectural decoupling entails splitting functionality for different fundamental actions in a system, such as decoupling authentication (proving who is allowed to use the network) from connectivity (establishing session state for communicating). Institutional decoupling entails splitting what information remains between non-colluding entities, such as distinct companies or network operators, or between a user and network peers. This decoupling makes service providers individually breach-proof, as they each have little or no sensitive data that can be lost to hackers. Put simply, the Decoupling Principle suggests always separating who you are from what you do.

Lots of interesting details in the paper.

This is an actual CAPTCHA I was shown when trying to log into PayPal.

As an actual human and not a bot, I had no idea how to answer. Is this a joke? (Seems not.) Is it a Magritte-like existential question? (It’s not a bicycle. It’s a drawing of a bicycle. Actually, it’s a photograph of a drawing of a bicycle. No, it’s really a computer image of a photograph of a drawing of a bicycle.) Am I overthinking this? (Definitely.) I stared at the screen, paralyzed, for way too long.

It’s probably the best CAPTCHA I have ever encountered; a computer would have just answered.

(In the end, I treated the drawing as a real bicycle and selected the appropriate squares…and it seemed to like that.)

Twitter is having intermittent problems with its two-factor authentication system:

Not all users are having problems receiving SMS authentication codes, and those who rely on an authenticator app or physical authentication token to secure their Twitter account may not have reason to test the mechanism. But users have been self-reporting issues on Twitter since the weekend, and WIRED confirmed that on at least some accounts, authentication texts are hours delayed or not coming at all. The meltdown comes less than two weeks after Twitter laid off about half of its workers, roughly 3,700 people. Since then, engineers, operations specialists, IT staff, and security teams have been stretched thin attempting to adapt Twitter’s offerings and build new features per new owner Elon Musk’s agenda.

On top of that, it seems that the system has a new vulnerability:

A researcher contacted Information Security Media Group on condition of anonymity to reveal that texting “STOP” to the Twitter verification service results in the service turning off SMS two-factor authentication.

“Your phone has been removed and SMS 2FA has been disabled from all accounts,” is the automated response.

The vulnerability, which ISMG verified, allows a hacker to spoof the registered phone number to disable two-factor authentication. That potentially exposes accounts to a password-reset attack or account takeover through credential stuffing.

This is not a good sign.

Cybersecurity teams continue to face ongoing challenges in safeguarding their networks. With increased susceptibility to cyberattacks, organizations, including the U.S. cyber defense establishment, are taking a more proactive approach to realizing “zero trust.”

The Pentagon recently announced that a new zero-trust strategy will be revealed in the coming days. Specifically, the strategy will expand the Pentagon’s approach to realizing zero trust, incorporating over a hundred activities and “pillars” that include applications, automation, and analytics. The strategy aims to keep critical data secure within high-risk environments.

Officials have set a five-year deadline to implement effective zero-trust solutions. With the cyber capabilities of other nation-states continuously improving and evolving, the U.S. is increasingly susceptible to digital aggression. The United States is aiming to meet the cybersecurity challenge head-on by updating its zero-trust, “never trust, always verify” approach.

So, how can these strategies be implemented across the private and public sectors?

To realize zero trust’s full potential, the Federal Government must bring the full scope of its authority and resources to bear to ensure the protection and security of our national and economic assets. The policy of the U.S. administration sets the precedent for how organizations should work to prevent, detect, assess, and remediate cyber incidents.

Organizations can respond by aligning their current infrastructures with national cybersecurity initiatives by integrating the following tips:

Use Tools Designed to Achieve Visibility Across On-Premises and Attack Surfaces

“Last year, the White House’s Executive Order 14028, ‘Improving the Nation’s Cybersecurity,’ recognized the need to adopt zero-trust models across federal agencies. I am excited to see our national administration continue to acknowledge the sophistication of the threat landscape and implement this new zero-trust strategy that bears the full scope of the Department of Defense (DoD)’s authority and resources in protecting and securing our data environment,” shares Jeannie Warner, director of product marketing at Exabeam.

“A compromise could come at any point within the ecosystem, and more often than not it will come from an adversary using valid credentials. It’s clear that ‘watching the watchers’ in security terms is important. This is where Threat Detection, Investigation, and Response (TDIR) capabilities should be focused, and why any security operations team needs to consider having visibility of their identity management, security log management, and other threat detection tools across their on-premises and cloud attack surfaces.”

Application and API Security

“The shortcoming in the current government strategies and directives related to Zero Trust is a complete absence of consideration for the applications that ride on the cloud and data center infrastructure that gets the majority of the ZT attention. In order to achieve Zero Trust, application security and API security can’t be left out of the equation,” shares Richard Bird, CSO of Traceable AI.

“Zero Trust without API security is simply not Zero Trust. If the energy, dollars, and effort to apply Zero Trust are entirely focused on the infrastructure and OS components of cloud, data center, or hybrid deployment patterns, the bad actors will simply move their efforts to the attack surface that isn’t conditioned to Zero Trust. In every organization and agency on the planet, that attack surface is APIs and the applications they interact with.”

“The last several months of exploits and breaches around the world clearly show that the US government, while on the right track in driving organizations and agencies to move to the Zero Trust framework, is missing substantial direction to those same organizations as it relates to applications and APIs. The framework today overly relies on notions such as privileged access management to achieve some semblance of Zero Trust type control for applications, but this approach has proven to be woefully inadequate for user populations outside of the technology workers who access those applications.”

The Proper Authentication of Digital Assets

The key to defending an organization is not placing inherent trust in perimeter-based security systems. That’s why authorization is a critical aspect of zero-trust architecture. Integrating authorization within critical infrastructures ensures that the user accessing a system is who they claim to be and determines which individuals are granted access. This provides an extra level of security in protecting critical assets.

“In today’s world, you cannot put your trust in any static, perimeter-based security system,” shares Gal Helemski, CTO and CPO of PlainID. “The key to defending an organization from future cyberattacks is protecting the data and the applications by ensuring that even if a bad actor (which can sometimes be a federal employee) has gained access credentials, they don’t have automatic access to any or all data.

“Let’s face it: zero trust is the only way to secure a modern, decentralized enterprise, in which data and applications are accessed from anywhere by employees, customers, and partners.”

Implement the ‘Right’ Tools for Your Environment

Zero trust denotes cybersecurity paradigms that shift defenses from static, perimeter-based networks to a focus on users, assets, and resources. Zero trust helps reduce security breaches by ensuring all access points are validated before a user is trusted with access to a given network. As a result, organizations rely on Zero Trust architectures to govern how users and entities connect to organizational and agency resources. In building out robust architectures, organizations can operate under the principle of least privilege, granting each role and function only the access it needs.
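The deny-by-default, least-privilege access decision at the heart of such architectures can be sketched in a few lines. This is an illustrative toy only; the roles, resources, and policy entries are invented, and a real deployment would use a dedicated policy engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    role: str
    resource: str
    action: str

# Explicit allow-list of (role, resource, action) grants.
# Anything not listed is denied: least privilege by construction.
POLICY = {
    ("analyst", "billing-db", "read"),
    ("admin", "billing-db", "read"),
    ("admin", "billing-db", "write"),
}

def is_allowed(req: AccessRequest) -> bool:
    """Deny by default: access requires an explicit grant for this
    exact combination of role, resource, and action."""
    return (req.role, req.resource, req.action) in POLICY

# A role gets exactly what the policy grants, nothing more.
assert is_allowed(AccessRequest("alice", "analyst", "billing-db", "read"))
assert not is_allowed(AccessRequest("alice", "analyst", "billing-db", "write"))
```

Because every request is evaluated independently, a stolen credential confers only the narrow grants attached to its role, which is what limits lateral movement.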

With the rise of remote and hybrid working environments, it’s essential that organizations build Zero Trust strategies and tools that acutely align with their company’s infrastructure.

Justin McCarthy, co-founder and CTO of StrongDM, agrees that developing Zero Trust strategies is an essential step toward mitigating cyber risk. He shares: “Zero Trust security assumes that a breach will inevitably occur and acknowledges that threats exist both inside and outside of the network. Because of this, it continuously scans for malicious behavior and restricts user access to what is necessary to complete the task. In addition, users (including potential bad actors) are prevented from navigating the network laterally and accessing any unrestricted data.

“Some may say that Zero Trust will hinder productivity, which could be the case if backend management processes and governance operations are handled manually. But it’s the opposite if you have the right tools to make it easy to grant access and audit access control. The result of Zero Trust architecture, especially when it comes to improving the nation’s cybersecurity, is higher overall levels of security, easy accessibility, and reduced operational overhead.”

In addition, with companies moving toward data-centric processes, the volume of personally identifiable data is growing exponentially. This massive amount of data is directly linked to everyday people, who often store critical information in cloud-based systems. This poses additional security risks.

While cybersecurity is a complex issue, a direct route to solving malicious attacks is to create strong guardrails around our sensitive data.

“Sensitive data compromise comes from cybercriminals using privileged credentials to access data repositories,” says Arti Raman, founder and CEO of Titaniam. “Traditional methods of data security such as encryption-at-rest fail to prevent data compromise because these controls cannot distinguish legitimate users from attackers with stolen credentials. One of the most effective solutions to eliminate data compromise and implement true zero trust for data is encryption-in-use or data-in-use encryption.

“Using data-in-use encryption ensures data and IP are encrypted and protected even when it is being actively utilized, neutralizing all possible data-related leverage that attackers could gain, and limiting the blast radius of cyberattacks. Encryption-in-use is one of the strongest and most effective guardrails that can be implemented toward zero-trust data security.”

The post The Nature of Cybersecurity Defense: Pentagon To Reveal Updated Zero-Trust Cybersecurity Strategy & Guidelines appeared first on Cybersecurity Insiders.

By Jacob Ideskog, CTO at Curity

The adoption of Open Banking has increased rapidly over recent years and has had a revolutionary impact on financial institutions and on the experience consumers have when interacting with finance products. According to the OBIE, 5 million people are now using Open Banking in the UK, as the benefits of the new products and services begin to be recognized by consumers and businesses alike.

However, the rapid rise of Open Finance has also coincided with concerns about the compliance and security risks it poses. Curity’s latest report, ‘Facilitating the Future of Open Finance’, revealed that over 70% of organizations globally are concerned with security-related issues associated with Open Banking. It’s clear that this is a significant hurdle that still needs to be overcome if the adoption of Open Banking is to continue its rise.

The cybersecurity sector has the opportunity and means to alleviate fears and be at the forefront of the adoption of this revolutionary technology.

Addressing and Alleviating Security Concerns

A key concern amongst businesses is the extensive involvement of third-party providers that Open Finance requires: over 65% of organizations view the heightened security risk this brings as a top concern. Additionally, 62% of organizations are concerned that outdated security systems don’t support securely sharing data.

However, such concerns, whilst understandable, don’t recognize the current capabilities of available security solutions, such as multi-factor authentication, or the implementation of government regulations such as PSD2 in the EU. Crucial elements of the Open Banking experience are Application Programming Interfaces (APIs). APIs enable the efficient exchange of data between applications, services, and customers, and can be used safely as long as access to them is properly secured. Acting as the backbone of Open Banking, applications built on properly secured APIs allow backend communication between banks and financial institutions without the need to re-enter or re-share login details every time.
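The pattern described above, authorizing once and then presenting a scoped, verifiable token on every API call instead of re-sharing banking credentials, is the core idea behind the OAuth 2.0 flows that PSD2 mandates. A toy sketch under simplified assumptions follows; the token format, key handling, and all names are invented for illustration, and real deployments use standard OAuth 2.0/OpenID Connect with FAPI security profiles:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical bank-side signing key; a real bank would use an HSM
# and publish verification keys, not share a symmetric secret.
BANK_SIGNING_KEY = b"demo-signing-key"

def issue_token(client_id: str, scope: str, ttl: int = 3600) -> str:
    """Issued once, after the customer authorizes the third party."""
    claims = json.dumps({"client": client_id, "scope": scope,
                         "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(BANK_SIGNING_KEY, claims, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(claims).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str, required_scope: str) -> bool:
    """Checked on every API call; no customer credentials involved."""
    claims_b64, sig_b64 = token.split(".")
    claims = base64.urlsafe_b64decode(claims_b64)
    expected = hmac.new(BANK_SIGNING_KEY, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return False  # tampered or forged token
    payload = json.loads(claims)
    # Token must be unexpired and carry the scope this endpoint requires.
    return payload["exp"] > time.time() and required_scope in payload["scope"]

token = issue_token("budgeting-app", "accounts:read")
assert verify_token(token, "accounts:read")       # granted scope works
assert not verify_token(token, "payments:write")  # ungranted scope refused
```

The design point is that the token carries its own authorization: it is scoped, expiring, and verifiable, so the customer’s login details never transit the third party at all.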

With regard to outdated security systems, investment will be crucial in addressing this issue. Reassuringly, 83% of all organizations surveyed do plan to invest more into Open Banking this year than the previous 12 months. This will not only allow them to update their security systems to meet the standards that Open Banking requires, but will also improve the customer experience and reassure potential users.

The foundations of Open Banking are rooted in giving consumers a choice of financial products and control over their finances. It is therefore vital to provide a service that is interoperable between brokers, banks, and third-party financial institutions, so that the customer experience improves and all parties are equipped with the information they need. Furthermore, investment in the deployment of modern authentication methods will be a key aspect of addressing consumer hesitancy over security and ensuring consumer buy-in.

Communication will also play a crucial role, both internally and externally. As mentioned previously, many concerns of both financial institutions and consumers are either already accounted for by security systems or have solutions that can be implemented immediately. It’s vital to improve education around Open Banking to alleviate fears, some of which are unfounded, amongst businesses and consumers alike.

The role of the cybersecurity industry

Whilst there are clear concerns and issues amongst organizations across the globe, there is undeniably significant momentum behind the adoption of Open Banking. With almost three quarters of organizations surveyed planning to introduce Open Banking in the next 18 months, cybersecurity professionals’ focus should be on ensuring this transition is as smooth as possible.

This momentum and clear intention from businesses to adopt and invest in Open Banking provides the cyber security sector with a significant opportunity to be at the forefront of this banking revolution. It will be vital for the industry to work closely alongside financial institutions to support this change and mitigate risk at every turn.

We can expect the adoption of Open Banking to continue in the short term, but its long term health and adoption is absolutely dependent on the ability of the industry to address the security concerns and hesitancy that exist.

There’s potential for Open Banking to have a revolutionary impact on the way businesses and consumers approach their finances, and more and more institutions are set to incorporate it into their business. However, despite the clear benefits associated with Open Finance, this cannot come at the expense of individuals’ security or the protection of their personal and private data. This is why the cybersecurity sector plays such an important role. If the industry doesn’t effectively mitigate risk and alleviate fears, then no matter how much enthusiasm and momentum there is behind Open Banking, it will not realize its full potential.

The post Security and the Future of Open Finance: How to Improve Adoption Globally appeared first on Cybersecurity Insiders.

ACS Technologies (ACST), a leading provider of church management software and services in the United States, has announced its integration of the Curity Identity Server across its client-facing products.

The integration of the Curity Identity Server into ACST’s products is driven by a desire to provide high-level security to end-users, with Curity enabling seamless identity and access management (IAM) and log-in, and providing a number of different multi-factor authentication (MFA) flows to fit business needs. Previously, ACST relied on a home-grown solution, which is currently being phased out and replaced by a cloud-native deployment of the Curity Identity Server in AWS.

By utilising the Curity Identity Server, ACST will be able to concentrate on its product development instead of spending time and resources building IAM and MFA infrastructure in-house. The integration of and investment in Curity’s easy-to-use, low-cost product demonstrates ACST’s commitment to end-user security and its dedication to continually improving its product for end-users.

On choosing Curity, Robert Gettys, Chief Product and Technology Officer at ACS Technologies, says, “We wanted to invest in the right security to help us allocate time to meeting the unique needs of churches across the country. Thanks to the excellent capabilities of the Curity Identity Server, we’ll be able to concentrate on developing our core products to serve our ministry partners rather than attempting to build IAM and MFA ourselves. With Curity’s support, we’ll enhance our customer offering and be better positioned than ever to build the Kingdom.”

Curity’s CEO, Travis Spencer, comments, “We’re really excited to be working with ACS Technologies. I’m confident that our product’s extensive features and standards-based approach will enable ACST to achieve their goal of stepping up security for end-users while maintaining ease of use.”

The partnership, launched earlier this year, will be rolled out across ACST’s products and services.

About Curity

Curity is a leading supplier of API-driven identity management, providing unified security for digital services. Curity Identity Server is used for logging in and securing millions of users’ access to web and mobile applications as well as APIs and microservices. Curity Identity Server is built upon open standards and designed for development and operations. We enjoy the trust of large organizations in financial services, telecom, retail, energy, and government services who have chosen Curity for their enterprise-grade API security needs. Visit https://curity.io/.

The post ACS Technologies selects Curity to provide seamless authentication across its end-user products appeared first on Cybersecurity Insiders.

Authentication as a baseline security control is essential for organizations to know who and what is accessing corporate resources and assets.  The Cybersecurity and Infrastructure Security Agency (CISA) states that authentication is the process of verifying that a user’s identity is genuine. In this climate of advanced cyber threats and motivated cyber criminals, organizations need […]… Read More

The post Strong Authentication Considerations for Digital, Cloud-First Businesses appeared first on The State of Security.

Most of us already know the basic principle of authentication, which, in its simplest form, helps us to identify and verify a user, process, or account. In an Active Directory environment, this is commonly done through the use of an NTLM hash. When a user wants to access a network resource, such as a file […]… Read More

The post How to Prevent High Risk Authentication Coercion Vulnerabilities appeared first on The State of Security.