One common misconception is that scammers usually possess a strong command of computer science and IT knowledge.

Related: How Google, Facebook enable snooping

In fact, the majority of scams are carried out through social engineering. The rise of social media has added to the many user-friendly digital tools that scammers, sextortionists, and hackers can leverage to manipulate their victims.

Cybersecurity specialists here at Digital Forensics have built up a store of knowledge tracking criminal patterns while deploying countermeasures on behalf of our clients.

One trend we’ve seen in recent years is a massive surge in sextortion cases. This online epidemic involves a perpetrator blackmailing a victim with compromising material obtained from them, typically nude photos and videos.

These sextortionists are among the lowest forms of criminals, working tirelessly to exploit moments of weakness in their victims, induced by loneliness and our most basic human impulses.

Fraud has existed since the dawn of civilization and economics. Scholars have determined that the precursors of money, in combination with language, are what enabled humans to solve cooperation problems that other animals could not. Fraud has advanced in parallel with currency.

Exploitation drivers

From the case of Hegestratos committing insurance fraud by sinking a ship in 300 B.C., to the Praetorian Guard selling the rights to the Roman throne in 193 AD, to the transgressions of Madoff and Charles Ponzi, fraud has always been embedded in society as a consequence of economics.

As technology has rapidly exceeded all historical imaginings, opportunities for fraudsters to exploit their victims abound. Digital exploitation refers to the abuse and manipulation of technology and the internet for illegal and unethical purposes, including identity theft, sextortion, cyberbullying, online scams, and data breaches.

The rise of digital exploitation has been a direct result of technological advancement and the widespread use of the internet in our daily lives.

Cybersecurity has similarly developed as a necessary countermeasure, to prevent scammers from trampling on the privacy of citizens. Since fraudsters constantly seek new methods of exploitation, cybersecurity specialists must be equally innovative, anticipating future exploitation techniques before they emerge.

Modern cybersecurity and digital forensics measures must not merely react to cases of fraud; they must also proactively probe current systems in order to stay vigilant against fraudsters.

The success of digital exploitation can be attributed to several factors, including difficulty in keeping up with the latest security measures, increased reliance on technology and the internet, and a general lack of awareness and education about the dangers of the internet.

Countermeasures

To address the issue of digital exploitation, it is essential to raise awareness and educate people about the dangers of the internet, and to continue to develop and implement strong security measures to protect personal information and sensitive data.

It may someday fall to the Federal government to deploy cybersecurity as a service such as community hubs or public utilities, but for the foreseeable future it falls upon private enterprises to assist clients suffering from a digital exploit in reclaiming their lives.

Digital Forensics experts are trained to follow digital footprints and track down IP addresses, cell phone numbers, email addresses, social media accounts and even specific devices used in these crimes. We can identify online harassers or extortionists with a high degree of success, arming clients with the evidence they need to confront a harasser, seek a restraining order or even press charges.

About the essayist: Collin McNulty is a content creator and digital marketer at Digital Forensics, a consultancy that works with law firms, governments, corporations, and private investigators.

This year has kicked off with a string of high-profile layoffs — particularly in high tech — prompting organizations across all sectors to both consider costs and plan for yet another uncertain 12 or more months.

Related: Attack surface management takes center stage.

So how will this affect chief information security officers (CISOs) and security programs? Given the perennial skills and staffing shortage in security, it’s unlikely that CISOs will be asked to make deep budget or staffing cuts, yet they may not come out of this period unscathed.

Whether the long anticipated economic downturn of 2023 is a temporary dip lasting a couple quarters or a prolonged period of austerity, CISOs need to demonstrate that they’re operating as cautious financial stewards of capital, a role they use to inform their choices regardless of the reality — or theater — of a recession.

This is also a time for CISOs to strengthen influence, generate goodwill, and dispel the perception of security as cost center by relieving downturn-induced burdens placed on customers, partners, peers, and affected teams.

For CISOs to achieve these goals, here are five recommended actions:

•Tie security to the cost of doing business. CISOs should not allow their board or executive team to continue believing that cybersecurity exists solely as a cost center. In other words, they should detail how cybersecurity spending drives revenue and how cuts to the security program directly affect relationships and requirements with three key constituencies: customers, insurers, and regulators.

They should defend their security budget by quantifying investments in required security controls and the revenue generated by the systems those controls protect. Ultimately, cybersecurity can become a profit center when customers, insurers, and regulators require it.

•Demonstrate secure practices to customers. Your customers’ security teams are navigating the same downturn pressures. They still need to collect audit and security information from vendors, and they may have fewer employees to complete the work. CISOs should prioritize security initiatives that drive the top line and increase customer stickiness, such as bot management solutions that improve customer experience. They should then inform customers of the steps taken to thwart costly application attacks.

These include such initiatives as monitoring for denial of wallet attacks in serverless functions, minimizing bot fraud, and keeping an eye on bug bounty program costs. Lastly, CISOs should automate processes such as security questionnaire responses and software bill of materials generation to give customers what they need before they ask for it.
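The SBOM half of that automation can start very small. The sketch below enumerates the packages installed in the current Python environment and emits a minimal bill of materials; the CycloneDX-style field names are illustrative assumptions, not a complete implementation of any SBOM specification.

```python
import json
from importlib import metadata

def generate_sbom():
    """Build a minimal, CycloneDX-style software bill of materials
    from the packages installed in the current environment."""
    components = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name")
        if not name:
            continue
        components.append({
            "type": "library",
            "name": name,
            "version": dist.version,
        })
    # Sort for stable, diff-friendly output customers can archive.
    components.sort(key=lambda c: c["name"].lower())
    return {"bomFormat": "CycloneDX", "specVersion": "1.4",
            "components": components}

if __name__ == "__main__":
    print(json.dumps(generate_sbom(), indent=2))
```

Wired into a build pipeline, even a sketch like this lets a vendor hand customers an up-to-date component inventory before they ask for it.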

•Support (as you influence) peers in other functions. Now is the time for CISOs to focus on key corporate objectives and ensure that their security initiatives demonstrate traceable alignment. If you didn’t start this practice in your early days as a security leader, take the time now to schedule regular meetings with peers across functions to stay current on their challenges, security needs, and points of friction.

From there, develop joint initiatives that further corporate objectives and provide services, resources, or assistance in the form of partial funding or staffing and friction-remediation efforts. This ethical politicking will make funding or resource allocation discussions more amicable. It will also extend goodwill toward the security organization in the future, when CISOs may need allies and evangelists to push through policy or process changes.

•Stop backfilling open positions (for now). No security leader wants to ask an already overwhelmed team to do more with less. Not backfilling certain roles, however, reduces costs voluntarily and minimizes the need for future involuntary cuts. For CISOs, this requires excellent communication and management skills when explaining to their teams why these roles will stay vacant.

This should include succession planning, associated upskilling, and job shadowing efforts for those who stick around. Provide an expected duration for the hiring freeze and work with regional nonprofits to bring on cost-effective cybersecurity apprentices — relieving some of the pressure while creating a pipeline of experienced talent at the ready when the freeze lifts.

•Resist the temptation to consolidate your partner ecosystem. Although cutbacks in this area may appear to be a practical cost-saving strategy, overcorrection in key areas such as cybersecurity, risk, and compliance could increase concentration risk, expose firms to disruption, and severely affect your operations. Given economists’ estimates that modern recessions last 10 months, CISOs should consider in their decision-making the time it takes to fully onboard a strategic supplier — typically six months or more — so they can ensure that they don’t miss out on opportunities when the economic pendulum swings in the opposite direction.

The outlined actions must be executed deftly at a time when instilling and maintaining trust with customers, employees, and partners is a business imperative. They also become crucial when factoring in how current geopolitical events and technology innovations continue to fuel a highly sophisticated and evolving threat landscape.

About the essayist: Jess Burn is a Forrester senior analyst who covers CISO leadership & security staffing/talent management, IR & crisis management, and email security.

APIs (Application Programming Interfaces) play a critical role in digital transformation by enabling communication and data exchange between different systems and applications.

Related: It’s all about attack surface management

APIs help digital transformation by enabling faster and more efficient business processes, improving customer experience, and providing new ways to interact with your business.

Whether an API is exposed for customers, partners, or internal use, it is responsible for transferring data that often holds personally identifiable information (PII) or reveals application logic and valuable company data.

Therefore, the security of APIs is crucial to ensure the confidentiality, integrity, and availability of sensitive information and to protect against potential threats such as data breaches, unauthorized access, and malicious attacks.

API security is essential for maintaining the trust of customers, partners, and stakeholders and ensuring the smooth functioning of digital systems. If API security is not properly implemented, it can result in significant financial losses, reputational damage, and legal consequences.

So, how can you ensure your API security is effective and enable your digital transformation?

Attack vector awareness

Hackers probe for API vulnerabilities they can exploit to gain access to endpoints and data. Over the last few years, we have observed that APIs have become hackers’ favorite attack vector.

According to an article in DarkReading, losses to US companies from API data breaches were estimated at $12 billion to $23 billion in 2022 alone. A study by the Marsh McLennan Cyber Risk Analytics Center and Imperva analyzed 117,000 unique cybersecurity incidents and estimated that API security issues cause $41 billion to $75 billion in losses annually.

Why traditional approaches to securing APIs are not sufficient

As the adoption of APIs grows, the demand for security solutions increases. But we have seen that traditional approaches to securing APIs, such as basic authentication and IP whitelisting, are no longer sufficient in today’s rapidly evolving digital landscape.

Organizations must adopt a modern, comprehensive approach to API security that includes a combination of technical controls, policies, and processes to secure APIs effectively in today’s dynamic digital landscape.

To address this demand, a number of vendors have entered the market with solutions to help businesses secure their APIs. However, many of these vendors provide solutions for managing APIs, not for securing them. Their monitoring tools, for example, cannot track API usage and activity, so they cannot provide actionable insights from the data they collect.

API observability

Businesses can secure APIs with the “Shift-left and Automate-Right” approach across the entire API lifecycle.

Securing APIs across their entire lifecycle involves multiple stages, including design, development, testing, deployment, configuration, and maintenance. Each stage requires different security measures to ensure the confidentiality, integrity, and availability of sensitive information.

There are several models organizations can adopt to secure their APIs, such as the five-stage approach described below:

•Discover: Make sure you have a complete view at all times. Manual tracking is hard, so automate API asset tracking to gain total visibility, and proactively track and flag any changes to APIs so there is no guesswork. Also unveil the hidden topology behind API and application traffic through traffic reconstruction.

•Observe: Start analyzing and controlling what should really exist. Detect and alert on zombie and shadow APIs within the ecosystem. Things can break at any time, so 360-degree API observability for SREs is important. Start with the basics: secure against the OWASP Top 10 and keep those protections updated.

•Model: Define organization-wide best practices with the flexibility for domain teams to extend them. Define data constraints to protect against attacks. Detect and prevent suspicious activity before it causes damage. Measure and protect your APIs with rate limiting, authorization and authentication (authZ/authN), data validation, versioning, and error handling.

•Act: Enforce best practices seamlessly, without becoming a bottleneck, to increase API accuracy and resilience. Set up audits to track API calls, responses, errors, and data accuracy. Monitor APIs for detailed reports on health, usage, and performance. Automate API testing to track behavior.

•Insights: Derive insights holistically, not just at the level of each API. Maintain high standards with automated maturity scorecards. Make service ownership a reality: set standards, give guidance, and measure adoption. Build a culture of continuous improvement.
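To make the “Model” stage concrete, here is a minimal sketch of one common rate-limiting technique, a token bucket, that an API gateway might apply per client. It is illustrative only; production gateways use purpose-built, distributed implementations.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: each client gets `capacity` requests,
    refilled at `rate` tokens per second."""
    def __init__(self, capacity=10, rate=1.0):
        self.capacity = capacity
        self.rate = rate
        self.buckets = {}  # client_id -> (tokens, last_seen_time)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(client_id, (self.capacity, now))
        # Refill proportionally to the time elapsed since the last call.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[client_id] = (tokens - 1, now)
            return True   # request admitted
        self.buckets[client_id] = (tokens, now)
        return False      # request rejected: client must back off

limiter = TokenBucket(capacity=3, rate=0.5)
results = [limiter.allow("client-a", now=100.0) for _ in range(5)]
# results == [True, True, True, False, False]: burst of 3 allowed,
# then the client is throttled until tokens refill.
```

The same admission check can enforce per-key, per-tenant, or per-endpoint budgets, which is where the “Model” stage’s data constraints come in.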

By implementing security measures at each stage of the API lifecycle, organizations can ensure that their APIs are secure, and that sensitive information is protected against potential threats.

Together, these elements form the foundation for a practical approach to securing APIs. Continually reviewing your API security is a best practice for good governance.

About the essayist:  Rakshith Rao is the co-founder and CEO of API lifecycle management tool APIwiz. Rak brings 17 years of experience in enterprise technical sales leadership, including at Apigee and Google, DataStax, and HP.

The IT world relies on digital authentication credentials, such as API keys, certificates, and tokens, to securely connect applications, services, and infrastructures.

Related: The coming of agile cryptography

These secrets work similarly to passwords, allowing systems to interact with one another. However, unlike passwords intended for a single user, secrets must be distributed.

For most security leaders today, this is a real challenge. While there are secret management and distribution solutions for the development cycle, these are no silver bullets.

Managing this sensitive information while avoiding pitfalls has become extremely difficult due to the growing number of services in recent years. According to BetterCloud, the average number of software as a service (SaaS) applications used by organizations worldwide has increased 14x between 2015 and 2021. The way applications are built also evolved considerably and makes much more use of external functional blocks, for which secrets are the glue.

Poor practices

In the field, people often copy and paste secrets into configuration files, scripts, source code, or private messages without considering the consequences. Source code repositories are cloned and take with them hard-coded credentials, resulting in an explosion of “secrets sprawl.”

To understand the magnitude of the problem, each year GitGuardian publishes the number of secrets that have been mistakenly published on GitHub, the world’s leading code-sharing platform. In 2021, more than 6 million secrets leaked between the lines of developers’ code, more than 16,000 per day on average!
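A toy version of such a scanner shows why leaks are easy to catch in principle yet so voluminous in practice. The two regex patterns below are illustrative assumptions; real detectors combine hundreds of patterns with entropy analysis and live validity checks.

```python
import re

# Illustrative patterns only; production scanners use many more
# detectors plus entropy checks and credential validity probes.
SECRET_PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic-api-key": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*['\"]([^'\"]{16,})['\"]"),
}

def scan(text):
    """Return (rule_name, line_number) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((rule, lineno))
    return findings

sample = '''
db_host = "localhost"
aws_key = "AKIAIOSFODNN7EXAMPLE"
API_KEY = "sk_live_abcdefghijklmnop1234"
'''
print(scan(sample))
```

Running a check like this as a pre-commit hook stops many leaks before they ever reach a remote repository.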

The projects hosted by the platform are mostly personal projects or open-source repos. Still, it is important to understand that these errors slip in easily and are difficult to identify and resolve. Even the most experienced developers can inadvertently publish this extremely sensitive information, giving access to the resources of the companies they work for.

Security specialists try to warn against the problem. Still, today the priority of boards of directors is to deliver value to customers faster than the competition, which means accelerating the development process. Balancing flexibility and security inevitably involves compromises, including when it comes to managing secrets.

It can be difficult to know where to start. That’s why we created a framework to help security managers evaluate their current posture and take steps to strengthen their enterprise secrets management practices.

Mitigating errors

You can start right away here with a straightforward (and confidential) questionnaire. The linked white paper explains the three stages of this process:

•Assessing secrets leakage risks

•Establishing modern secrets management workflows

•Creating a roadmap to improvement in fragile areas

This model emphasizes that secrets management is more than just how an organization stores and shares secrets. It is a program that must coordinate people, tools, and processes, and also account for human error. Errors cannot be prevented, but their effects can be. That is why detection, remediation tools and policies, and secrets storage and distribution are the foundations of our maturity model.

If you are wondering why secrets in code should be a priority among so many other vulnerabilities, just look at the recent security incidents of 2022: several major companies experienced the fragility of secrets management.

In September, an intruder accessed Uber’s internal network and found hardcoded admin credentials on a network drive. These secrets enabled the attacker to log in to Uber’s privileged access management platform, where many more plaintext credentials were stored. This gave the attacker access to Uber’s admin accounts in AWS, GCP, Google Drive, Slack, SentinelOne, HackerOne, and more.

In August, LastPass suffered a similar attack. Someone stole its source code, which exposed development credentials and keys. Later, in December, LastPass revealed that an attacker had used the stolen source code to access and decrypt customer data.

In fact, source code leaks caused major issues for many organizations in 2022. NVIDIA, Samsung, Microsoft, Dropbox, Okta, and Slack were among those affected. In May, we warned about the large number of credentials that could be harvested from these codebases: with these credentials, attackers can gain leverage and move into dependent systems in what is known as supply chain attacks.

In January 2023, CircleCI was breached. Hundreds of the continuous integration provider’s customers’ variables, tokens, and keys were compromised. CircleCI urged its customers to change their passwords, SSH keys, and any other secrets stored on or managed by the platform. Victims had to find out where these secrets were and how they were being used in order to take emergency action. This highlighted the need for an emergency plan.

Taking secrets seriously

Attacks have become more sophisticated, with attackers recognizing that compromising machine or human identities yields a higher return on investment. This is a warning sign of the need to address hardcoded credentials and secrets management.

Cybersecurity teams are taking hard-coded secrets in source code seriously. Companies understand that source code is now one of their most valuable assets and must be protected. A breach could result in business continuity issues, reputation damage, and legal proceedings.

The increasing prevalence of code and services means that software- and code-related risks will not dissipate any time soon. Hackers now target software practitioners’ credentials to gain access to IT infrastructure.

To combat these challenges, organizations must have visibility into vulnerabilities at all levels. This requires going beyond traditional practices and involving developers, security engineers, and operations in detection, remediation, and prevention.

Organizations must be prepared for secrets sprawl and have the right tools and resources in place to detect and remediate any issues in a timely manner. It’s time to take action!

About the essayist: Thomas Segura’s passion for tech and open source led him to join GitGuardian as technical content writer. Having worked both as an analyst and as a software engineer consultant for major French companies, he now focuses on clarifying the transformative changes that cybersecurity and software are going through.

A new generation of security frameworks is gaining traction that is much better aligned to today’s cloud-centric, work-from-anywhere world.

Related: The importance of ‘attack surface management’

I’m referring specifically to Secure Access Service Edge (SASE) and Zero Trust (ZT).

SASE replaces perimeter-based defenses with more flexible, cloud-hosted security that can extend multiple layers of protection anywhere. ZT shifts networks to a “never-trust, always-verify” posture, locking down resources by default and requiring granular context to grant access.

With most business applications and data moving to cloud and users connecting from practically anywhere, SASE and Zero Trust offer more versatile and effective security. Assuming, of course, that they work the way they’re supposed to.

Effective testing

Modern SASE/ZT solutions can offer powerful protection for today’s distributed, cloud-centric business networks, but they also introduce new uncertainties for IT. Assuring performance, interoperability, resilience, and efficacy of a SASE implementation can be tricky.

What’s more, striking the right balance between protecting against advanced threats and ensuring high Quality of Experience (QoE) is not easy when new DevOps/SecOps tools are pushing out a 10X increase in software releases.

Effective testing becomes critical. Today’s highly distributed, intensely dynamic environment results in potentially thousands of hybrid cloud test cases that need to be continually verified. IT and security teams must address:

SASE assurance: Most Managed Security Service Providers (MSSPs) are bound by service-level agreements (SLAs) for the services they deliver, including SASE. Since there are no standard SASE key performance indicators (KPIs), just determining how to validate SASE behavior can be problematic.

ZT behavior: ZT frameworks grant access based on identity, policy, and context. Each of these elements must be validated across multiple security controls, like next-generation firewall (NGFW) and data loss protection (DLP) tools. Once again, there is no standard set of ZT test cases to guide this validation.

SASE applications: Applying strong security without impeding performance requires an understanding of the footprint, scalability, and robustness of different SASE application services in different cloud environments; these include NGFWs, application firewalls, secure web gateways, and more.

Edge NFs: Even when offered as a single “solution,” SASE edge clouds can include multiple proprietary network functions (NFs), such as SD-WAN, NGFW, and ZT, each with its own API and management tool. These all need to be validated.

Security policy: Successfully enforcing policy in a SASE environment starts with validating security rule sets. With evolving threats and ongoing network changes, that can’t be a one-time job. Next-gen automated test tools can be leveraged to continually re-validate policies.
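As an illustration of what automated rule-set validation can catch, the sketch below checks an ordered firewall rule list for two classic defects: overly permissive allows and shadowed rules. The rule format is hypothetical; real tools work against vendor-specific configurations.

```python
def audit_rules(rules):
    """Flag overly permissive and shadowed entries in an ordered
    firewall rule set. Each rule is a dict: src, dst, port, action."""
    findings = []
    for i, rule in enumerate(rules):
        if (rule["src"] == "any" and rule["dst"] == "any"
                and rule["action"] == "allow"):
            findings.append((i, "overly-permissive allow"))
        # A later rule is shadowed if an earlier rule already matches
        # the same traffic, so the later rule can never fire.
        for j, earlier in enumerate(rules[:i]):
            if (earlier["src"] in ("any", rule["src"])
                    and earlier["dst"] in ("any", rule["dst"])
                    and earlier["port"] in ("any", rule["port"])):
                findings.append((i, f"shadowed by rule {j}"))
                break
    return findings

rules = [
    {"src": "any", "dst": "any", "port": "any", "action": "allow"},
    {"src": "10.0.0.0/8", "dst": "db", "port": "5432", "action": "deny"},
]
print(audit_rules(rules))
# The broad allow is flagged, and it shadows the intended deny.
```

Run on every rule change in CI, checks like these turn policy validation from a one-time job into a continuous one.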

Testing principles

Clearly, SASE/ZT testing merits serious consideration, and the right test cases for one organization won’t necessarily map to another. Here are four pillars of effective SASE testing:

Test across all deployment environments. SASE architectures must be validated end to end—from users and branches, through SASE points of presence, to cloud application servers. Additionally, performance needs to be profiled across all networks, and SASE behavior measured across all architectures—virtualized, containerized, and bare metal.

Test for the real world. Specific SASE KPIs unique to a company’s operating environment need to be identified. Simulating generic traffic patterns can be misleading. Care must be taken to ensure testing reflects real-world network and application traffic profiles.

Accurately simulate vulnerabilities. Realistic threat models likewise should be used to validate SASE security efficacy—including simulating the evasion and obfuscation techniques that real hackers use. And since malware and vulnerabilities constantly change, threat models must continually evolve too.

Prioritize QoE. The best all-around metric for SASE/ZT testing is QoE, as it reflects multiple underlying factors, including performance, error detection, encryption variability, overall transaction latency, and (for ZT) concurrent authentication rate. Security controls that impede important business activities will motivate users to try to bypass them.
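One way to turn those underlying factors into a single trackable number is a weighted composite. The weights and normalization thresholds below are purely illustrative assumptions, not an industry standard.

```python
def qoe_score(latency_ms, error_rate, auth_rate_achieved, auth_rate_target):
    """Composite QoE score in [0, 1]: penalize transaction latency,
    errors, and shortfalls in concurrent authentication throughput.
    Weights and thresholds are illustrative only."""
    latency = max(0.0, 1.0 - latency_ms / 1000.0)    # 0 ms -> 1.0, 1 s -> 0.0
    reliability = max(0.0, 1.0 - 10.0 * error_rate)  # 10% errors -> 0.0
    auth = min(1.0, auth_rate_achieved / auth_rate_target)
    return round(0.4 * latency + 0.4 * reliability + 0.2 * auth, 3)

# e.g. 200 ms latency, 1% errors, auth throughput at 90% of target
score = qoe_score(200, 0.01, 900, 1000)
# score == 0.86
```

Tracking a score like this over successive releases makes QoE regressions visible long before users start working around the controls.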

Despite the complexity of SASE/ZT validation, it’s easy to understand what effective testing looks like. The right tools in place can continually test a full range of use cases across all environments.

Organizations can draw on a new generation of automated, always-on SASE/ZT testing tools. These systems integrate automated, continuous security and QoE testing, providing the dynamic protection companies expect and need.

About the essayist: Sashi Jeyaretnam is Senior Director of Product Management for Security Solutions at Spirent, a British multinational telecommunications testing company headquartered in Crawley, West Sussex, in the United Kingdom.

Of the numerous security frameworks available to help companies protect against cyber-threats, many consider ISO 27001 to be the gold standard.

Related: The demand for ‘digital trust’

Organizations rely on ISO 27001 to guide risk management and customer data protection efforts against growing cyber threats that are inflicting record damage, with the average cyber incident now costing $266,000 and as much as $52 million for the top 5% of incidents.

Maintained by the International Organization for Standardization (ISO), a global non-governmental group devoted to developing common technical standards, ISO 27001 is periodically updated to meet the latest critical threats. The most recent updates came in October 2022, when ISO 27001 was amended with enhanced focus on the software development lifecycle (SDLC).

These updates address the growing risk to application security (AppSec), and so they’re critically important for organizations to understand and implement in their IT systems ASAP.

Updated guidance 

Let’s examine how to put the latest ISO guidance into practice for better AppSec protection in enterprise systems. Doing so requires organizations to digest what the ISO 27001 revisions mean for their specific IT operations, and then figure out how best to implement the enhanced SDLC security protocols.

The new guidance is actually spelled out in both ISO 27001 and ISO 27002 – companion documents that together provide the security framework to protect all elements of the IT operation. The focus on securing the SDLC is driven by the rise in exploits that target security gaps in websites, online portals, APIs, and other parts of the app ecosystem to exfiltrate data, install ransomware, inflict reputational damage, or otherwise degrade enterprise security and the bottom line.

The revised ISO standard now stipulates more-robust cybersecurity management systems that reach all the way back into the SDLC to ensure that applications are inherently more secure as developers build them. In fact, for the first time, security testing within the SDLC is specifically required. And ISO 27001 specifies this testing should go beyond traditional vulnerability scanning toward a more multi-level and multi-methodology approach.

Achieving compliance

In seeking to secure the SDLC for ISO compliance, organizations will likely need to rely on a spectrum of testing tools working together to identify and prioritize the most critical threats. Here are 3 strategic priorities to help guide these efforts:

•Take a comprehensive, multi-level and multi-methodology approach – This includes employing multiple types of security testing in a single scan; setting up secure version control with formal rules for managing changes to existing systems; and applying security requirements to any outsourced development.

•Promote secure and agile coding practices – This includes subjecting deployed code to regression testing, code scanning, penetration, and other system testing; defining secure coding guidelines for each programming language; and creating secure repositories with restricted access to source code.

•Infuse security into application specifications and development workflow – This includes defining security requirements in the specification and design phase; scanning for vulnerable open-source software components; and employing tools that detect vulnerabilities in code that is deployed but not activated.
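The open-source scanning priority can be illustrated with a few lines that match pinned dependencies against an advisory list. The advisory data here is made up for the example; a real SCA tool queries a live feed such as OSV or the GitHub Advisory Database.

```python
# Hypothetical advisory data for illustration; a real SCA tool would
# query a vulnerability feed rather than a hard-coded dict.
ADVISORIES = {
    "examplelib": ["1.0.0", "1.0.1"],   # versions with known CVEs
}

def parse_requirements(text):
    """Parse simple 'name==version' lines from a requirements file."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        deps[name.strip().lower()] = version.strip()
    return deps

def vulnerable(deps, advisories=ADVISORIES):
    """Return the (name, version) pairs that appear in the advisories."""
    return [(n, v) for n, v in deps.items() if v in advisories.get(n, [])]

reqs = """
# pinned application dependencies
examplelib==1.0.1
otherlib==2.3.0
"""
print(vulnerable(parse_requirements(reqs)))
```

Gating builds on an empty result from a check like this is one concrete way to satisfy the standard's component-scanning requirement.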

Comprehensive scanning

At the CTO and CIO level, these principles help guide the enterprise-wide strategy for ISO compliance. At the developer level, they will fundamentally reshape how programmers do their work day in and day out – including employing more project management tools and secure system architecture frameworks to track and mitigate risks at any stage in the SDLC.

The key throughout is to adopt a more holistic and comprehensive testing approach that aligns with the ISO 27001 requirements, since traditional vulnerability scanning is not powerful or proactive enough to secure the SDLC. The easiest way for organizations to mature their capabilities along these lines is to integrate a range of advanced AppSec testing protocols.

For example, the right AppSec partner can empower security teams with a blend of dynamic application security testing (DAST), interactive application security testing (IAST), and software composition analysis (SCA) together in a single scan. These combined testing approaches help secure all stages of development, as well as production environments, without negatively impacting delivery times.

Recent updates to the ISO 27001 standard bring a much-needed focus to securing the entire SDLC. In working to comply with the revised standard, security and development teams are realizing that a blend of multiple, complementary testing protocols is needed to catch and even prevent issues far earlier in the development process.

These efforts will help elevate security right alongside achieving the designed functionality as the ultimate goals in every DevOps project.

About the essayist: Matthew Sciberras is CISO and VP of Information Security at Invicti Security, which supplies advanced DAST+IAST (dynamic + interactive application security testing) solutions that help organizations of all sizes meet ISO 27001 compliance.

Well-placed malware can cause crippling losses – especially for small and mid-sized businesses.

Related: Threat detection for SMBs improves

Not only do cyberattacks cost SMBs money, but the damage to a brand’s reputation can also hurt growth and trigger the loss of current customers.

One report showed ransomware attacks increased by 80 percent in 2022, with manufacturing being one of the most targeted industries. Attacks that drew public scrutiny included:

•Ultimate Kronos Group was sued after a ransomware attack disrupted its Kronos Private Cloud payment systems, relied upon by huge corporations such as Tesla, MGM Resorts and hospitals. That ransomware attack shut down payroll and human resources systems.

•The Ward Hadaway law firm lost sensitive client data to ransomware purveyors who demanded $6 million, or else they’d publish the data from the firm’s high-profile clients online.

•The Costa Rican government declared a national emergency after attackers crippled government systems and demanded $20 million to restore them to normal.

•The Glenn County Office of Education in California suffered an attack limiting access to its own network. They paid $400,000 to regain access to accounts and protect prior and current students and teachers, whose Social Security numbers were in the data.

Amos

These are just a handful of examples of ransomware attacks in the last year. Some victims paid the ransom while others restored their systems without payment. Those that paid concluded that restoring revenue-generating operations by rewarding criminals was their best option.

Why not to pay

However, the U.S. Department of the Treasury warns against paying ransoms, citing the 37% annual increase in reported cases and 147% increase in costs. Paying doesn’t guarantee your business won’t be hacked again. It also spurs on the cybercriminals, showing them such attacks are profitable.

The U.S. Treasury says paying ransomware ransoms just encourages hackers to come up with bigger and bolder demands over time.

Why would a business pay out money instead of cleaning up the mess and securing its systems? Some reasons include:

•Lack of resources to clean up the hacked files.

•Loss of money from downtime exceeds the ransom.

•To prevent damaging information from becoming public.

Many business owners are also embarrassed they allowed criminals into their systems. They worry it makes them look careless and they want to cover the situation up by whatever means necessary.

Disincentivizing payment

What are some key ways of discouraging businesses from paying ransoms? Teach them to keep a full backup of all data. It’s much easier to restore lost information if the brand has a copy of it.
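A basic full-backup routine can be sketched in a few lines. The paths below are placeholders, and a real deployment would add scheduling, encryption, and off-site copies; the point is that a timestamped copy plus a hash manifest makes restoration after a ransomware event verifiable.

```python
# Minimal sketch of a timestamped full backup with an integrity manifest.
# Paths are placeholders; real deployments add scheduling, encryption,
# and off-site replication.
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

def backup(source_dir: str, backup_root: str) -> Path:
    """Copy source_dir into a timestamped folder and record file hashes."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source_dir, dest)
    # Record SHA-256 hashes so a later restore can be verified.
    manifest = dest / "MANIFEST.sha256"
    lines = []
    for f in sorted(dest.rglob("*")):
        if f.is_file() and f != manifest:
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            lines.append(f"{digest}  {f.relative_to(dest)}")
    manifest.write_text("\n".join(lines))
    return dest
```

With an intact backup like this, restoring operations without paying becomes a realistic option rather than a gamble.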

A plan of action is vital in the case of any hack. Taking steps to lock down information fast minimizes damage. Send immediate notices to customers asking them to reset their passwords, and inform them their data may be exposed on the dark web.

Report any hacking attempts or ransomware demands to the FBI or the authority in the business’s location.

Paying ransom to hackers only encourages them to attack other business owners, governments, and educational institutions. It’s best to stay away from paying out any funds in cryptocurrency or otherwise. Lean toward spending money on cleanup and restoration rather than a payoff.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

The United States will soon get some long-awaited cybersecurity updates.

Related: Spies use Tik Tok, balloons

That’s because the Biden administration will issue the National Cyber Strategy within days. Although no official document has been published yet, some industry professionals have already seen a draft copy of the strategic plan and weighed in with their thoughts. Here’s a look at some broad themes to expect and how they will impact businesses:

•New vendor responsibilities.  Increased federal regulation puts more responsibility on hardware and software vendors compared to the customers who ultimately use their products.

Until now, people have primarily relied on market forces rather than regulatory authority. However, that approach often leads to bug-filled software because makers prioritize new product releases over ensuring they’re sufficiently secure.

These changes mean business representatives may see more marketing materials angled toward what hardware and software producers do to align with the new regulations. Product labeling may also become easier to understand, acting somewhat like food nutrition labels, except centered on security principles.

Coverage of the strategic security program from people with firsthand knowledge of the draft document suggests congressional action or executive authority will regulate how all critical sectors handle cybersecurity. It’s still unclear what that looks like in practice, but it certainly signifies a major change.

•Expanded cybersecurity budgets. Statistics suggest almost 50 percent of employees have never received cybersecurity training. It’s also easy to find research elsewhere highlighting how workers frequently make errors that might seem meaningless but ultimately expose files or corporate networks to cyberattacks and other risks.

Heightened awareness of the Biden administration’s plan helped spur elevated stock market activity for several cybersecurity companies. This may have happened because people at more companies recognized the need for such products. After all, cybersecurity awareness training for employees is vital, but it can only go so far. Businesses must also invest in specialized tools for network monitoring and security.

However, those familiar with the content of the strategic cybersecurity program say not to expect uniform standards to apply across industries. Previous U.S. presidents have tried that without getting the desired effects. That means it’s best to wait and see Biden’s intentions before increasing cyber investments.

•Critical infrastructure revisions. Analysts also believe part of Biden’s strategy for cybersecurity will rewrite a policy from President Obama’s era that provides stipulations for keeping essential infrastructure secure. It may also include details about which types of companies fall into that category. If so, entities like cloud providers might need to take additional steps to maintain security. The same would likely be true for utility, telecommunications and transportation businesses.

Flynn

However, it’ll take a while to implement even once the Biden administration’s plan is officially published. That gives all affected companies time to make any necessary adjustments, regardless of whether they’re categorized as critical infrastructure providers.

People working at businesses highly likely to need stronger cybersecurity under the new strategy should consider consulting with cybersecurity experts. Those parties can advise them about where gaps remain and where the business is already following best practices for security.

Big changes lie ahead for U.S. cybersecurity policies and practices. The previewed content of cybersecurity plans from the Biden administration indicates people should expect significant shifts from what past leaders have tried. However, even once the details of this cybersecurity strategic plan are publicized, it’ll take a while before whatever’s different is widely adopted. Business leaders should be ready to act but refrain from making any relevant decisions before getting the details straight from the source.

About the essayist: Shannon Flynn is managing editor of ReHack Magazine. She writes about IoT, biztech, cybersecurity, cryptocurrency & blockchain, and trending news.

When a company announces layoffs, one of the last things most employees or even company owners worry about is data loss.

Related: The importance of preserving trust in 2023

Valuable or sensitive information stored on company systems is exposed to theft or compromise. This can happen due to intentional theft, human error, malware, or even physical destruction of servers. But it’s a real and growing risk to be aware of.

In 2020, Forbes reported that pandemic layoffs and remote work served to increase the risk of company data loss. Tesla, for example, suffered two cybersecurity events after layoffs back in 2018.

Data loss isn’t necessarily spiteful. Imagine an employee creates a spreadsheet showing all your clients and the main points of contact for each. She updates this sheet, but forgets to share it internally.

She gets laid off, and she takes the spreadsheet with her because she believes that the work she created at her job belongs to her. This may sound like an edge case, but a survey by Biscom found that 87 percent of employees took data that they themselves had created from their last job.

Data theft can also be deliberate and malicious. That same employee might use that spreadsheet as a bargaining chip in securing a new job with your competitor.

Data theft can also happen as a result of hackers. In the infamous 2014 Sony hack, an employee moving from Deloitte to Sony allegedly took sensitive data with him when he left. It is believed that the employee was storing employee information from both Sony and Deloitte on his computer, leading to the salaries of 30,000 Deloitte employees being leaked.

Data loss prevention is a concept that’s been around since the ‘90s, but in the age of AI, machine learning, natural language processing, and all those other fun new buzzwords, it’s taken on new relevance and significance.

With relaxed security measures due to remote work, disgruntled employees due to sudden mass layoffs, and logistical oversights due to reorganization, company data can fall through the cracks. To keep up, companies need to use technology to ensure their most important asset, their information, is safe.

Consolidated visibility

Eisdorfer

The first step is to know what you have. Then you can work on protecting it.

That’s why the first step in any layoff-proof data loss prevention strategy has to be the collection and categorization of all the company data that exists. This is both easier and harder thanks to a distributed system of information.

Data might be in spreadsheets, on Slack, on OneDrive, in custom databases, or any other number of off-premises cloud systems.

The best way to consolidate all that info is to use machine learning and artificial intelligence. First, identify all potential sources of data. You might also want to ensure you’re scanning all emails going in and out of the company.

Then, companies need to set up rules that determine how the AI classifies each kind of data. For example, one priority is identifying personally identifiable information of your customers. You don’t want that leaving your data warehouses.

Another example is any kind of proprietary algorithm or system. For instance, if you’re Equifax, you don’t want any employee able to leave with your credit score algorithm.

Using a combination of AI and ML, you should be able to put together a comprehensive catalog of all company data.
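A classification rule of the kind described above can be sketched with simple pattern matching. The two patterns below, for strings resembling U.S. Social Security numbers and email addresses, are illustrative assumptions; production classifiers layer ML models on top of rules like these to catch what regular expressions miss.

```python
# Sketch of rule-based data classification: tag text that contains
# patterns resembling personally identifiable information (PII).
# The patterns are illustrative; real classifiers combine rules with ML.
import re

RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def classify(text: str) -> set:
    """Return the set of PII categories detected in the text."""
    return {label for label, pattern in RULES.items() if pattern.search(text)}
```

A document containing "jane@example.com" and "123-45-6789" would be tagged with both categories and could then be routed into the restricted tier of the data catalog.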

Spotting anomalies

The next step is to train the AI to spot suspicious-looking behavior. For example, you might set it up so that when an employee starts downloading massive amounts of data, that gets flagged as suspicious.
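One simple way to flag "massive downloads" is to compare each user's daily download count to their own historical baseline. The three-standard-deviation threshold below is a common starting point, not a standard; real systems tune it per role and per data sensitivity.

```python
# Sketch of anomaly flagging: mark a download count as suspicious when
# it exceeds the user's historical mean by more than three standard
# deviations. The threshold choice is an assumption, not a standard.
from statistics import mean, stdev

def is_suspicious(history: list, today: int, sigmas: float = 3.0) -> bool:
    """Flag today's download count if it is an outlier vs. history."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return today > mu  # flat history: any increase stands out
    return today > mu + sigmas * sd
```

A user who normally pulls around ten files a day would not trip the flag at eleven, but would at five hundred, which is exactly the laid-off-employee pattern worth catching.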

You might also need to use technology that can use optical character recognition (OCR). For example, imagine instead of sharing that customer spreadsheet, our laid-off employee just takes a screenshot of it and emails it to herself.

Unless your data loss prevention strategy includes OCR to read what screenshots contain, you’d never know she walked off with that spreadsheet unless you manually went through every single one of her emails.

You also have to take steps to stop data loss from happening. For example, your system should include a rule to automatically log out any users downloading a high number of files. It should also limit access for any soon-to-be laid off employees to sensitive material.
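The response rules described above can be sketched as a small policy table mapping detected events to actions. The event names and actions here are hypothetical placeholders; a real DLP system would wire these into its identity and session-management layers.

```python
# Sketch of automated DLP responses: map detected events to actions.
# Event names and actions are hypothetical placeholders.

POLICIES = {
    "bulk_download": "force_logout",
    "flagged_for_layoff": "restrict_sensitive_access",
}

def respond(event: str, user: str) -> str:
    """Return the action a DLP system would take for this event."""
    action = POLICIES.get(event, "log_only")
    return f"{action}:{user}"
```

Keeping the policy as data rather than code means the rules can be reviewed and tightened during a layoff window without redeploying the system.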

And finally, in the case of non-malicious theft, you should be able to quickly scan any employee-generated data to ensure files like comprehensive customer databases don’t get lost just because nobody knows they exist.

One major component of data loss prevention is to map the organization’s critical information. With a map of who has access to what, the knowledge is less likely to get lost when employees move on. This enables companies to classify the information and prevent data loss, or at least educate employees not to take data with them to their next job.

You should also have set up your system to flag suspicious events, such as the mass downloading of files, laid-off employees sending lots of emails, or people logging in from unusual locations.

Your final step is to patch those holes. With AI on the case, it will auto-recognize suspicious events and take care of them. You can also be assured that important or sensitive information won’t fall through the cracks of mass layoffs.

Data loss is a real threat. Make sure your company is up to the job of handling it.

About the essayist: Guy Eisdorfer is the co-founder and CEO of Cognni, a supplier of AI-powered data classification systems and other security products to enterprises and SMBs.
