Social media giants have long held too much power over our digital identities.

Related: Google, Facebook promote third-party snooping

Today, no one is immune to these giants’ vicious cycle of collecting personal data, selling it to advertisers, and manipulating users with data-driven metrics. By making people feel like mere products, this exploitative digital environment fosters distrust among social media users.

With numerous incidents to cite, tech behemoths have time and again proven themselves inadequate at securely handling their users’ digital identities and data.

In recent years, Meta (previously Facebook) has faced a number of fines for violating user privacy. In 2019, the company was ordered to pay a record-breaking $5 billion penalty by the Federal Trade Commission (FTC) for violating consumers’ privacy rights.

The fine was the largest ever imposed on a social media company for privacy violations. Last month, Meta was again fined more than €1.2bn (£1bn) and ordered by an Irish regulator to suspend data transfers to the US over its handling of user information. This hefty penalty set a record for a breach of the EU’s General Data Protection Regulation (GDPR).

But these incidents aren’t limited to giants like Facebook. Even newer social networking sites like Clubhouse have allegedly had trouble protecting the data of millions of users in recent times.

That’s why there is a need for more comprehensive solutions addressing challenges of user control, privacy, and data security at their core.

Decentralizing identities

Decentralized identities are a newer approach that can help solve the issues at hand. A user can create their own decentralized identity that is controlled by a secret seed phrase and not reliant on a centralized platform for that identity to exist.

A user can then connect this decentralized identity to encrypted decentralized storage to hold their personal data (a minimal sketch of the idea follows). The data gets distributed across multiple nodes rather than stored in a central database. This direct shift from centralized authority to a decentralized landscape has several unique and necessary advantages.
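To make the idea concrete, here is a minimal Python sketch of the two steps above: deriving an identity keypair from a secret seed phrase, and encrypting personal data on the client before it is handed to storage nodes. The derivation scheme, the did:example prefix, and the helper names are illustrative assumptions, not Verida’s implementation or any formal DID method; the sketch relies on the widely used cryptography package.

```python
# Illustrative sketch only -- not Verida's implementation or a formal DID method.
import base64
import hashlib

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519


def identity_from_seed(seed_phrase: str):
    """Derive a deterministic Ed25519 keypair and a DID-like identifier
    from a secret seed phrase (hypothetical derivation scheme)."""
    seed = hashlib.sha256(seed_phrase.encode("utf-8")).digest()  # 32 bytes
    private_key = ed25519.Ed25519PrivateKey.from_private_bytes(seed)
    public_raw = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    did = "did:example:" + base64.urlsafe_b64encode(public_raw).decode().rstrip("=")
    return private_key, did


def encrypt_for_storage(seed_phrase: str, personal_data: bytes) -> bytes:
    """Encrypt data client-side; only ciphertext ever reaches storage nodes."""
    storage_key = hashlib.sha256(("storage:" + seed_phrase).encode()).digest()
    return Fernet(base64.urlsafe_b64encode(storage_key)).encrypt(personal_data)


if __name__ == "__main__":
    _, did = identity_from_seed("correct horse battery staple")
    blob = encrypt_for_storage("correct horse battery staple", b'{"name": "Alice"}')
    print(did, "->", len(blob), "encrypted bytes, ready to shard across nodes")
```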

Firstly, it enables individuals to take complete control over their data. Users can choose where their personal information is used and rightfully have the power to revoke that access at any time. Secondly, it adds two critical layers of security, making it far harder for hackers to steal user data.

For instance, to hack decentralized end-to-end encrypted data, a hacker must compromise multiple nodes on the storage network to gain access to the data. They must also compromise the user’s mobile device to access the seed phrase, or pull off some other sophisticated social engineering attack to obtain the secret seed phrase directly from the user. These steps are labor-intensive, technically difficult, and costly.

This radically changes the “economics” of hacking to all but eliminate the likelihood of stealing user data. A hacker must go through the time and effort of compromising multiple systems and devices to obtain the secret data of one person, rather than compromising a single system to obtain the data of millions of users.
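A rough back-of-the-envelope comparison makes the point. Every number below is invented purely for illustration; these are not measured attacker costs.

```python
# Hypothetical effort model -- every number below is invented for illustration.
CENTRAL_DB_EFFORT_HOURS = 400        # effort to compromise one central database
CENTRAL_DB_RECORDS = 50_000_000      # records exposed by that single breach

NODES_PER_USER = 3                   # storage nodes holding one user's shards
EFFORT_PER_NODE_HOURS = 150
SEED_PHRASE_EFFORT_HOURS = 40        # phishing or device compromise for one user

centralized_hours_per_record = CENTRAL_DB_EFFORT_HOURS / CENTRAL_DB_RECORDS
decentralized_hours_per_record = (
    NODES_PER_USER * EFFORT_PER_NODE_HOURS + SEED_PHRASE_EFFORT_HOURS
)  # all of that effort yields just one person's data

print(f"centralized:   {centralized_hours_per_record:.6f} attacker-hours per record")
print(f"decentralized: {decentralized_hours_per_record:.0f} attacker-hours per record")
```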

Thirdly, it can drastically enhance and improve the user experience. Take into account the tedious tasks of creating and managing usernames and passwords for different services across all platforms. This often tempts users to reuse their old credentials.

Decentralized identity allows users to sign in across multiple platforms with a single decentralized ID, providing a better user experience. Future enhancements to decentralized single sign-on will provide cryptographic proofs about the application being connected to, eliminating many types of phishing attacks.
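Here is a minimal sketch of how such a sign-on proof could work: the site presents a one-time challenge bound to its origin, and the user’s wallet signs it with the key behind their decentralized ID, so a look-alike phishing site cannot reuse the proof. The message format and flow are assumptions for illustration, not a published protocol.

```python
# Sketch of decentralized sign-on via challenge-response (illustrative only).
import json
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The site issues a one-time challenge bound to its origin.
challenge = {"origin": "https://app.example.com", "nonce": os.urandom(16).hex()}
message = json.dumps(challenge, sort_keys=True).encode()

# The user's wallet signs the challenge with the key behind their decentralized ID.
wallet_key = ed25519.Ed25519PrivateKey.generate()
signature = wallet_key.sign(message)

# The site verifies the signature against the public key resolved from the DID.
public_key = wallet_key.public_key()
try:
    public_key.verify(signature, message)
    print("login accepted: proof is bound to this origin and this nonce")
except InvalidSignature:
    print("login rejected")
```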

To power all this, interoperability plays a critical role in decentralized identity systems built on open standards, such as the DID-Core standard. It promotes cross-functionality between diverse systems and platforms, meaning users get to use their decentralized identities to access a wide range of applications without the trouble of creating a new account for each service. Building on this idea, decentralized social identities have massive potential to reshape the social media landscape.

Social media use case

By prioritizing user ownership, privacy, and interoperability, decentralized social identities change the way we interact online. Take, for instance, a scenario where a self-owned cryptographic identity puts control back in users’ hands, as opposed to being held by a centralized entity like Facebook or Twitter. Or think of a system where your social media accounts and email are certified by a blockchain-based decentralized social identity service for secure identity verification.

This transformation is driven by self-sovereignty and interoperability, which give users control over their data and allow them to own, manage, and use it across all web platforms. Users then have a single, trusted source of digital identity, which changes how they build trust, establish themselves, and cultivate their reputations on social media.

Over time, more and more user-centric initiatives like Verida are pushing the boundaries of decentralized social media by adopting a privacy-by-design approach and offering a full-stack development framework for building privacy-focused applications. By putting the user at the center, this approach fundamentally changes the power dynamics seen in traditional social media platforms.

The good news is that these efforts are not limited to decentralized social identities for social media. They are part of a broader vision of Web3-enabled applications, striving to make decentralized messaging, personal data storage, and single sign-on commonplace.

Web2 to Web3

Notably, there are stark, fundamental differences between Web2 and the current Web3 landscape. While Web2 is associated with sharing, Web3 emphasizes ownership. Today’s Web2 users have tools (rarely privacy-compliant) for displaying and sharing their activities and identity, but Web3 has yet to provide a robust solution to simply aggregate, share, and prove these existing social identities.

Solutions like Verida One allow users to import, verify, and link their Web2 identities and metadata to Web3 dApps. This bridge now paves the way for a user-controlled, privacy-focused social media landscape.

With the bitter experiences of history and the promising technology of the future, changing the current social media landscape is a critical step toward enhancing the trust and security of our online interactions. However, it can only be achieved if users start reclaiming control over their data and demanding better from the companies that profit off their private information.

The time has come to reject the status quo and push for a future where privacy is treated as a right, not a privilege. Holding tech giants accountable for their actions should be on every social media user’s agenda.

With newer, more transparent technologies hitting the market, users should feel empowered to seek an alternative way forward.

About the essayist: Chris Were is CEO of Verida. The Australia-based tech entrepreneur has spent more than 20 years developing innovative software solutions, most recently Verida, a decentralized, self-sovereign data network.

We all get spam emails, and while it’s annoying, it’s not usually anything to worry about. However, getting a huge influx of spam at once is a warning sign. People suddenly getting a lot of spam emails may be the target of a sophisticated cyber-attack.

Related: How AI can relieve security pros

What causes spam emails? Someone leaking, stealing or selling account information can cause a sudden influx of spam emails. It may also be a part of a more targeted attack. There are four main causes of spam emails:

•Sold email: Websites sometimes sell email address information to third parties.

•Spam interaction: Previous interactions with spam are a signal to scammers. They send more messages when they know the account is active and possibly interested.

•Leaked email: Companies or third-party vendors put email address security at risk when they experience data breaches.

•Mailing list: Signing up for a mailing list may trigger spam. Even without hitting enter, simply typing the information into a website is enough for them to get ahold of it.

While these aren’t the only reasons, they’re the most common. An email address’s connection to personal information is valuable, so scammers try to access it.

Wider harm

So why does it matter if someone has your email? Typically, scammers want to get ahold of an email because it’s a gold mine of information. They can use it to trace online activity, find attached accounts and uncover personal data. And when they do so, they can bombard people with countless spam messages to cover up malicious actions or get them to abandon their addresses.

Sometimes, they can access emails even without any action on their target’s part. Take the WhatsApp breach of 2019, in which attackers used spyware to put the personal data of the app’s roughly 1.5 billion users at risk. As long as that information exists on servers somewhere, it’s a security issue.

What does a sudden influx of spam emails mean? If someone is suddenly getting a lot of spam emails at once, they may be the victim of email bombing. It’s a type of distributed denial-of-service (DDoS) attack that uses a script to send messages automatically. Usually, it gets past spam filters by using legitimate websites, which makes it a significant cybersecurity risk for businesses and individuals alike.

Attackers can’t access someone’s information by sending many messages simultaneously, but they can use it as cover. For example, attackers may hope people won’t notice purchase confirmations or password change requests when intermingled with an enormous amount of spam. Additionally, a sudden massive increase in traffic can compromise servers. It’s a serious cybersecurity concern.
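As a concrete illustration, a mailbox-monitoring script might flag an email bombing attempt by comparing the recent inbound message rate with the account’s normal baseline. The thresholds and function name below are arbitrary placeholders for illustration, not a feature of any particular mail provider.

```python
# Illustrative detector for a sudden spam influx (email bombing).
from datetime import datetime, timedelta
from typing import List


def looks_email_bombed(timestamps: List[datetime],
                       window_minutes: int = 10,
                       baseline_per_hour: float = 5.0,
                       spike_factor: float = 20.0) -> bool:
    """Flag a mailbox when its recent message rate dwarfs its normal rate.
    The thresholds are placeholders to be tuned per mailbox."""
    window_start = datetime.utcnow() - timedelta(minutes=window_minutes)
    recent = sum(1 for t in timestamps if t >= window_start)
    recent_per_hour = recent * (60 / window_minutes)
    return recent_per_hour >= baseline_per_hour * spike_factor


# Example: 120 messages in the last two minutes vs. a ~5 per hour baseline.
burst = [datetime.utcnow() - timedelta(seconds=i) for i in range(120)]
print(looks_email_bombed(burst))  # True -> review accounts and recent purchases
```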

Wise response

What should you do if you get email bombed?

There are four immediate steps people should take if they get email bombed.

•Create a second email. Once scammers have the original account information, they can take steps to get more personal and financial data. A separate email can protect information and keep things more secure. For example, people could use one to sign up for things and another for sensitive records.

•Check your bank account. If someone suspects they’re a target of email bombing, they should check their bank account immediately. Reviewing recent and pending purchases reveals if anyone is attempting to use their credit card information. Ideally, they should turn off their card until they resolve the issue.

•Report and delete. People may find it tempting to abandon their accounts when they get a sudden influx of spam emails, but that’s not the best option. It can be frustrating, but reporting and deleting everything is the best approach. They shouldn’t attempt to unsubscribe, even though it may seem like a way to reduce future spam, because the link could install malware or direct them to an illegitimate website. Clicking anything during an attack is a cybersecurity concern.

•Change passwords. It’s virtually impossible to know the extent of leaked information. To be safe, someone experiencing email bombing should change their accounts’ passwords. If the level of spam they’re receiving makes that impossible, they should set up multi-factor authentication instead. Any additional protection can help secure their personal information.

Spam emails are a security concern. Suddenly getting a lot of spam emails may be a signal of a DDoS attack to obtain personal data, compromise servers or misuse financial information. Individuals should be aware of the potential cybersecurity issue and secure their accounts immediately.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

Computer chips have been part of cars for a long time, but no one really cares about them until they stop working or they are late to the production line.

Related: Rasing the bar of cyber safety for autos

However, the research within IDTechEx’s “Semiconductors for Autonomous and Electric Vehicles 2023-2033” report shows that trends within the automotive industry mean consumers will soon be caring far more about what chips are in their cars. IDTechEx expects that purchasing a new vehicle will soon feel like shopping for a new laptop.

What are the main concerns when buying a laptop? For most people, it will be things like how long the battery will last, how nice the screens are, and what computer chip it comes with.

Evaluating a vehicle’s worth based on the number of cylinders, horsepower, and miles per gallon will soon be irrelevant. We already know that electric vehicles will soon dominate the market, which settles the battery-life criterion, but what about the other two?

It has been hard to escape the screenification of car cabins over the past few years. Even the cheapest cars on the market are available with some kind of central display, while the top end of the market is heading towards pillar-to-pillar style screens in the front.

When IDTechEx attended CES, it was clear to see that the future cabin interior would be filled with screens. While this might sound exciting at the outset, consumers should be aware that for the automotive industry, this is a means to an end, where the end is extracting more money from drivers. The screens will facilitate more premium features in the vehicle and likely more subscriptions.

Like laptops, vehicles are also becoming gaming devices. Look at Tesla, for instance; it generated waves when it showed the Model S could run Steam on its infotainment system, demonstrating how The Witcher 3 can be played while the vehicle is stationary.

Mercedes went one step further; now that it has Level 3 certification in both the US and Germany, it is possible for the driver to play video games while the car drives itself, up to a speed of 40mph. However, where Tesla offers games one might expect of a modern gaming laptop, Mercedes offers Tetris, Sudoku, and some mobile gaming titles. Perhaps Mercedes will need to step up its game.

As cool as it might be, gaming in vehicles is perhaps a gimmick or novelty. The real computation power in modern and future vehicles will be used to drive autonomous systems. IDTechEx’s report found that this is where the most advanced semiconductor technologies are being used.

Most computer chips around the vehicle, the kind that are used to open windows or adjust wing mirrors, use quite mature technologies. These chips have node sizes normally above 40nm (nanometers). Modern smartphones, on the other hand, have chips that use a 4nm process, with smaller processes offering more computing power. However, the coming chips that will power autonomous driving systems will be much closer to the cutting edge, using technologies in the 1-5nm region.

This is where laptops come back in. A high-performance gaming laptop might use a graphics card from Nvidia, a giant in the gaming industry. The graphics card takes information from the CPU and turns it into an image for the screen.

One of the main computational tasks for an autonomous vehicle is to take data from each of the cameras, radar, LiDAR, etc., turn it into a 3D map of the environment, and identify all the vehicles and people around the vehicle. These two processes are rather similar, which is why Nvidia has been able to expand into the automotive space, offering top-end, high-performance computing for autonomous applications. Its recently announced Thor product, planned to deliver 2,000 TOPS of computing power, is an order of magnitude more powerful than most chips on the market today aimed at ADAS (advanced driver assistance systems) applications.

Automakers are already starting to market their models based on the performance of their autonomous features. Polestar, the electric branch of Swedish automaker Volvo, has recently added details about its front-sensing capabilities to its advertising campaign. It boasts that its front radar/camera combination has a range of 200m and a field of view of 45°.

This is fairly new, and it is hard to think of another car company advertising based on its sensing capabilities. That also makes it hard to benchmark. All electric cars are marketed based on the range their batteries can deliver, making it easy to compare.

But if only one automaker is advertising the range and field of view of its sensing, how does one know whether it is any good? Luckily, IDTechEx’s “Automotive Radar 2022-2042” and “Lidar 2023-2033: Technologies, Players, Markets & Forecasts” reports go into detail on what makes a good sensor for autonomous vehicles and provide benchmarking of products on the market.

In the future, automakers like Mercedes and BMW will sell vehicles with marketing like “Nvidia inside” or “Mobileye inside,” just as a laptop might be bought today because it says “Intel inside.” They will become system integrators, much as Lenovo buys CPUs from Intel or AMD, RAM from Micron, and screens from LG and puts them together in its own packaging. Automakers will buy an electric powertrain from one company and an autonomous system from another and bundle them into their branded packaging. This means consumers will choose cars based on how long the battery lasts, how nice the screens are, and what computer chip the car comes with.

For more information, including downloadable sample pages, please visit this web page.

This research forms part of the broader mobility research portfolio from IDTechEx, who track the adoption of autonomy, electric vehicles, Semiconductors for Autonomous and Electric Cars, battery trends, and demand across land, sea and air, helping you navigate whatever may be ahead. Find out more at www.IDTechEx.com/ElectricVehicles.

About IDTechEx: IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact research@IDTechEx.com or visit www.IDTechEx.com.

Media contact: Lucy Rogers, Sales and Marketing Administrator, press@IDTechEx.com, +44 (0)1223 812300

Accessing vital information to complete day-to-day tasks at our jobs still requires using a password-based system at most companies.

Related: Satya Nadella calls for facial recognition regulations

Historically, this relationship has been effective from both the user experience and host perspectives; passwords unlocked a world of possibilities, acted as an effective security measure, and were simple to remember. That all changed rather quickly.

Today, bad actors are ruthlessly skilled at cracking passwords – whether through phishing attacks, social engineering, brute force, or buying them on the dark web. In fact, according to Verizon’s most recent data breach report, approximately 80 percent of all breaches are caused by phishing and stolen credentials. Not only are passwords vulnerable to brute force attacks, but they can also be easily forgotten and reused across multiple accounts.

They are simply not good enough. The sudden inadequacy of passwords has prompted broad changes to how companies must create, store, and manage them. The problem is these changes have made the user experience more convoluted and complicated. In other words, we’ve lost the balance between ease-of-use and adequate security under the increasingly antiquated system of password-based access.

Under the current system, companies have two choices: subject employees to burdensome processes to access work servers or become low-hanging fruit for a cyber attack.

By choosing the former – which most companies do as a shortcut to compensate for weak passwords without having to adopt new and innovative solutions – end users must comply with unintuitive experiences such as creating complicated passwords and dealing with complex password reset procedures. I would say companies that take this shortcut are still low-hanging fruit on top of inconveniencing their employees.

Combining IDs, keys

What is the solution, then? The next big thing is passwordless authentication. Let’s remove that point of attack and start fixing the problem at the source. Many organizations have already begun to jump to passwordless, but adoption is slow, and solutions are still in their infancy.

On the consumer side, we see solutions that work now and are incredibly easy to use. For example, we have passwordless facial and fingerprint biometric logins on our mobile phones and the thousands of apps that we use, as well as on our laptops and similar portable devices. However, no clear passwordless solutions offer easy adoption, enterprise-grade security, and interoperability to our large corporations and critical organizations.

Security remains one of the significant issues that need to be addressed on the enterprise level. Solutions need to tackle this problem by establishing trust at the user level to the point that trust is unnecessary. That sounds counterintuitive, but that is what we need to protect organizations from the relentless attacks we are seeing.

What’s needed is a solution that combines biometric identification with device-bound cryptographic keys and interoperable global validation standards. By combining who the user is (through biometrics) with something they have (the device-bound cryptographic key), solutions can establish user identity with sufficient confidence at the enterprise level.
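As a rough illustration of that pairing, the sketch below signs a server challenge with a device-bound key that is only used after a biometric check passes. The local_biometric_check stand-in and the key handling are assumptions made for brevity; a real deployment would keep the key in a secure enclave or FIDO2 authenticator and call the platform’s biometric APIs.

```python
# Sketch of passwordless, device-bound challenge-response (illustrative only).
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def local_biometric_check() -> bool:
    """Stand-in for Touch ID / Windows Hello / fingerprint verification."""
    return True


# Enrollment: the device generates a keypair and registers only the public key.
device_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_key.public_key()

# Login: the server sends a fresh challenge; the device signs it only after the
# biometric check passes, so there is no shared secret to phish or replay.
challenge = os.urandom(32)
if local_biometric_check():
    signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Server side: verify the signature against the key registered at enrollment.
try:
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("user authenticated without a password")
except InvalidSignature:
    print("authentication failed")
```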

Some solutions do this today. However, security and interoperability remain an issue. First and foremost, most solutions rely on connected devices like mobile phones to authenticate users. This leaves the door open to phishing and man-in-the-middle attacks.

New standards needed

Alternatively, some organizations are adopting physical security measures to keep private keys secure and offline. However, these solutions are often criticized for their lack of ease of use, limited interoperability across organizations, and lack of support.

We must keep thinking ahead on security. Attackers will continue to find ways to breach our systems, and authentication cryptography will become increasingly vulnerable to attack. Finding new methods of validation that are resistant to quantum and AI attacks is critical. Our job is to create and implement better systems.

The bottom line is user authentication is vital for securing access to data and systems. To establish trust with the user, the future of secure authentication lies in new passwordless solutions. Emerging technology and innovation in cryptography, biometrics, and device-linked authentication will also be crucial for advancing authentication.

Furthermore, driving authentication forward in our digital ecosystem can be achieved by developing new standards, collaborating with industry peers, and raising awareness. For a system to be introduced and adopted at scale, ease of use is crucial, and security must be uncompromising. The time has come for passwordless systems that seamlessly integrate into businesses without significant user experience disruptions and provide a simple, intuitive, yet secure experience for all.

About the essayist: Thierry Gagnon is Co-Founder and Chief Technology Officer (CTO) at Kelvin Zero, a start-up redefining the way organizations interact with their users in a secure digital world. Kelvin Zero enables highly regulated enterprises to secure authentication and know who is on the other side of every transaction.

As the threat of cybercrime grows with each passing year, cybersecurity must begin utilizing artificial intelligence tools to better combat digital threats.

Related: A call to regulate facial recognition

Although AI has become a powerful weapon, there’s concern it might be too effective compared to human cybersecurity professionals — leading to layoffs and replacements.

However, the truth is that automated AI tools work best in the hands of cybersecurity professionals instead of replacing them. Rather than trying to use AI to get rid of your security team, seek to use automated tools in conjunction with your existing professionals to ensure the strongest cybersecurity defense.

AI breakthrough

The newest breakthroughs in artificial intelligence technology are machine learning and generative AI. Unlike traditional AI, machine learning can be taught to act on data sets and make accurate predictions instead of being limited to merely analyzing them.

Machine learning programs use highly complex algorithms to learn from data sets. In addition to analyzing data, they can use that data to observe patterns. Much like humans, they take what they have learned to “visualize” a model and take action based on it.

A program that can take data sets and act independently has enormous cybersecurity potential. Generative AI can look for patterns in code and identify the most common forms of cyberattacks. Instead of alerting a human administrator to handle the problem, the program can eliminate the threat itself.
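Here’s a minimal sketch of that detect-and-respond loop, using scikit-learn’s IsolationForest to learn a baseline of normal traffic and act on outliers. The features, thresholds, and block_ip helper are invented for illustration, and, as the next section argues, a real deployment would keep a human in the loop.

```python
# Illustrative anomaly detection with automated response (not production-ready).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per connection: [requests/min, bytes out, failed logins]
rng = np.random.default_rng(0)
normal_traffic = rng.normal([20, 5_000, 0.2], [5, 1_500, 0.4], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)  # learn what "normal" looks like from historical data


def block_ip(ip: str) -> None:
    print(f"[auto-response] blocking {ip} pending analyst review")


new_events = {"203.0.113.7": [400, 90_000, 12], "198.51.100.4": [22, 4_800, 0]}
for ip, features in new_events.items():
    if model.predict([features])[0] == -1:   # -1 means the model flags an outlier
        block_ip(ip)                         # act first, then escalate to a human
```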

The greatest strength of machine learning is its adaptability. The more data it collects, the more it learns and the more threats it can stop. However, that doesn’t mean this tech is infallible. The capabilities of machine learning programs depend on how much data is available.

Role for pros

That’s why the role of cybersecurity professionals is still important. Machine learning requires human operators who teach the programs how to use relevant data. The programs also require human supervision in case they make mistakes. Alone, machine learning is not yet strong enough to stop all determined hackers; but together, machine learning and human professionals can be a formidable force.

The benefits of machine learning programs for cybersecurity professionals are potentially enormous. Security programs that can act on threats themselves to an extent, instead of simply analyzing data, have the potential to cut down on workloads and give professionals breathing room.

While cybersecurity has become an essential part of everyday life, it can also be hard to keep up with all the latest trends, policies and programs. This is especially true for cybersecurity professionals — whose job is to remain vigilant for threats.

These professionals are constantly bombarded with alerts and information on possible security breaches. Some of these alerts may be false — for example, the system flagged something as a potential threat that was never confirmed, or the alert was simply an error.

Relieving fatigue

The only way to tell if an alert is false is for the professional to check every avenue related to the threat. This process can be long and time-consuming, only to end in a false alarm.

If not addressed, cybersecurity fatigue can lead to human error. Failing to check alerts properly risks an actual threat actor breaching the system. Machine learning and AI tools can help reduce that margin of error by automating mundane tasks.

Generative AI tools can be taught the most common causes of false alarms and how to confirm them. If such an alert appears, the AI tool can check the reason by itself and report it to the administrator. This process will significantly reduce cybersecurity professionals’ workload, giving them time to address more critical issues.
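A toy version of that triage step might look like the following, where the alert fields, the benign-cause checks, and the returned explanations are all invented for illustration; a generative model would learn such checks from historical dispositions rather than have them hard-coded.

```python
# Illustrative alert triage that explains why an alert looks like a false alarm.
from dataclasses import dataclass


@dataclass
class Alert:
    source_ip: str
    rule: str
    asset: str


KNOWN_SCANNERS = {"10.0.0.5"}            # approved internal vulnerability scanner
MAINTENANCE_ASSETS = {"build-server-3"}  # hosts in a planned patch window


def triage(alert: Alert) -> str:
    """Return a short, human-readable disposition for the administrator."""
    if alert.source_ip in KNOWN_SCANNERS:
        return "likely false alarm: traffic came from an approved internal scanner"
    if alert.asset in MAINTENANCE_ASSETS:
        return "likely false alarm: asset is in a scheduled maintenance window"
    return "escalate: no benign explanation found"


print(triage(Alert("10.0.0.5", "port-scan", "web-01")))
print(triage(Alert("203.0.113.9", "brute-force", "vpn-gw")))
```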

While machine learning tools are potent weapons against cyber threats, they need cybersecurity professionals to wield them properly. The power of generative AI tools in the hands of security experts can defeat any cyber attack.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

To be productive in an interconnected work environment, employees need immediate access to numerous platforms both on- and off-premises.

Related: Why SMBs need to do PAM well

Keeping track of user activity and effecting proper on- and off-boarding is becoming more and more difficult, even as unauthorized access via unused, expired, or otherwise compromised access credentials has become the number one cybersecurity threat vector.

Some nine out of ten cyberattacks are estimated to begin with a threat actor gaining unauthorized access to a computer system via poorly managed access credentials.

Sophisticated cyberattacks perpetrated through unused, old, expired, and otherwise mismanaged access credentials are increasing by the minute, even as it becomes more and more challenging to respond to these attacks in an organized and timely manner.

Context needed

Organizations that are used to workflow-based access systems or ticket-based systems, i.e. traditional Privileged Access Management (PAM), must now make a big cultural shift. PAM enables granular access and monitors, detects, and alerts on instances of unauthorized access through policy guardrails.

However, while PAM and other legacy access management systems do alert to unauthorized access, these warnings lack a clear picture of the user’s intent and the context behind the alert.

Today’s alert fatigue is not caused by the sheer number of alerts but by the poor quality of individual alerts.

SaaS platforms have led to very different types of user profiles over the last few years. Users are now dynamic; they move from platform to platform, and their need for access changes continuously.

Key variables

A modern access management system should handle the following:

•The sprawl of user roles and their privileges and activities, growing at the same rate as the infrastructure proliferation.

•The traditional Role-Based Access Control (RBAC) provides perpetual access based on a user’s roles – a methodology that has run its course. Even with the addition of zero-trust-based access on a granular level, RBAC is no longer enough.

•Today’s enterprise users wear multiple hats and use different software with varying privileges. The nature of these privileges has to be dynamic, or the access management system becomes a bottleneck.

•A user with a specific level of access may need to temporarily elevate their privileges because they need access to protected data to complete a task (a time-bound grant of this kind is sketched after this list). Scaling workflow-based systems to match larger teams’ needs is difficult and creates a chaotic situation, with many users simultaneously bombarding the security admins for approval.

•Some access monitoring solutions rely heavily on automated access controls, such as group policies or other sets of criteria, that will allow access requests to be processed automatically. Automation lacks the intelligence to adapt to changing user behaviors and entitlements.
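Here is a minimal sketch of the time-bound elevation idea, assuming an in-memory grant table and invented names; a real system would add approval context, audit logging, and persistent storage.

```python
# Illustrative just-in-time privilege elevation with automatic expiry.
import time
from dataclasses import dataclass
from typing import List


@dataclass
class Grant:
    user: str
    privilege: str        # e.g. "read:protected-data"
    expires_at: float     # epoch seconds


GRANTS: List[Grant] = []


def elevate(user: str, privilege: str, ttl_seconds: int = 900) -> Grant:
    """Grant a privilege for a short window instead of a perpetual role."""
    grant = Grant(user, privilege, time.time() + ttl_seconds)
    GRANTS.append(grant)
    return grant


def is_allowed(user: str, privilege: str) -> bool:
    now = time.time()
    GRANTS[:] = [g for g in GRANTS if g.expires_at > now]   # drop expired grants
    return any(g.user == user and g.privilege == privilege for g in GRANTS)


elevate("alice", "read:protected-data", ttl_seconds=600)
print(is_allowed("alice", "read:protected-data"))    # True for the next 10 minutes
print(is_allowed("alice", "delete:protected-data"))  # False
```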

Noisy ‘observability’

PAM and SIEM solutions are classic systems built on observability. But observability is no longer enough to keep your organization safe.

Observability systems work by alerting on unauthorized access, but they also create a lot of extra noise, and experience shows they are often not fully implemented. Another problem is that alerts come in after the fact, not in real time. Privileged access abuse is a here-and-now problem that must be addressed as it happens.

One of the functions of Inside-Out Defense – Automated Moving Target Defense SaaS – is that it can immediately remediate privileged user access abuse in-line. This is accomplished by determining the context and intent behind every user activity.

It provides customers, for the first time, an aggregated view of users, their profiles, and their activities across different environments, which addresses a big challenge enterprises face today. We provide a comprehensive 360-degree view of what every user is doing at any given time, along with an immutable forensic log, enabling enterprises to stay in compliance.

At Inside-Out Defense, we know that threat actors are constantly becoming more sophisticated as they work to find new avenues for disruption. Current solutions that focus on static threat signatures often miss cyber attackers’ sophisticated yet unknown behaviors. Customers need solutions like ours that work at scale and in real time to address some of the most persistent problems in network security.

About the essayist: Ravi Srivatsav is co-founder and CEO of Inside-Out Defense, which emerged from stealth in April 2023 with a solution that provides real-time detection and remediation for today’s most prolific attack vector: privileged access abuse.

Information privacy and information security are two different things.

Related: Tapping hidden pools of security talent

Information privacy is the ability to control who (or what) can view or access information that is collected about you or your customers.

Privacy controls allow you to say who or what can access a database of customer data or employee data.

The rules or policies you put in place to make sure information privacy is maintained are typically focused on unauthorized disclosure of personal information.

Controls need to be in place to protect individuals’ privacy rights, including,  often, their right to be forgotten and be deleted from your company database.

Here are a few examples of demographic data that, in combination with sensitive data, become Personally Identifiable Information (PII).

Demographic data:

•Customer names

•Address

•Phone number

•Email address, IP address

When you combine information like that with sensitive data like the examples below, you get data that is regulated.

•Social security number

•Passport number

•Driver’s license

•Credit card information

•Biometric data (fingerprint, eye scan, facial recognition data)

•Health records

When demographic information and sensitive information are combined and then inappropriately disclosed, you end up with a data disclosure incident or a data breach. A data breach typically means the company must notify customers and local law enforcement, and often government agencies like the FTC or Health and Human Services.

Companies like Google, Facebook, Experian, Entrust, and GoodRx track what you do online, what you buy, what credit cards you have, and what loans you’ve taken out. They take all this private information, and then they sell it.

That’s not a data breach, that is not broken security, or a lapse of their information security program, that’s how they make money.

Information security, on the other hand, refers to something else: it is the protection of computers, information systems, networks, and data from unauthorized access, use, or damage. Information security is focused on all three elements of the CIA triad: confidentiality, integrity and availability.

Information security involves using the appropriate controls, tools, and processes to prevent or mitigate attacks, minimize or eliminate threats, and reduce vulnerabilities.

Information security has a foundation of governance, in the form of acceptable use policies and many others, that direct and govern what people can and can’t do with the technology that is in place at an organization. Once you have a solid foundation of what people can and can’t do, then you can put in the processes, procedures, tools, and technologies to implement those controls.

Now let’s look at integrity and the policies, procedures, and tools that a company needs to have to ensure that the data in the system is correct.

Think about your bank account: it is very important for you to know that when you deposit a check, the right amount is credited to your account. It is also important to the bank that the amount is correct, so integrity is key.

The same would be true of the prices of your products for sale on Amazon, or your own website. Making sure that the data stored in your systems maintains its integrity is critical to your information security and the continued success of your business.
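One common integrity control is to store a keyed hash (HMAC) alongside each record so tampering is detectable. The sketch below is illustrative; the key handling is deliberately simplified and would live in a secrets manager in practice.

```python
# Illustrative integrity check: detect tampering with a stored record.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-keep-me-in-a-vault"   # placeholder key management


def sign_record(record: bytes) -> str:
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()


def verify_record(record: bytes, stored_mac: str) -> bool:
    return hmac.compare_digest(sign_record(record), stored_mac)


deposit = b'{"account": "1234", "amount_cents": 25000}'
mac = sign_record(deposit)

print(verify_record(deposit, mac))                                          # True
print(verify_record(b'{"account": "1234", "amount_cents": 2500000}', mac))  # False
```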

Availability gets a lot of attention these days, usually when the topic of ransomware comes up. Ransomware uses encryption (typically a good thing) to make your business information unavailable.

The criminals encrypt your data with a password or phrase that only they know, and then hold your data hostage until you pay a ransom. If you have a good security program in place, you have backups or other systems that protect your data from being encrypted and that keep it available in the event of some other incident (flood, power outage, etc.).

There are a lot more details to consider in an information security program and information privacy, but the way to think about information privacy compared to information security is to understand that information privacy is focused on protecting personal information, while information security is focused on safeguarding the computer, systems, data, and networks.

About the essayist: John Bruggeman is Consulting CISO at CBTS; he is a veteran technologist, CTO, and CISO with nearly 30 years of experience building and running enterprise IT and shepherding information security programs toward maturity.

Cyber threats have steadily intensified each year since I began writing about privacy and cybersecurity for USA TODAY in 2004.

Related: What China’s spy balloons portend

A stark reminder of this relentless malaise: the global cybersecurity market is on a steady path to swell to $376 billion by 2029, up from $156 billion in 2022, according to Fortune Business Insights.

Collectively, enterprises spend a king’s ransom many times over on cyber defense. Yet all too many companies and individual employees still lack a full appreciation of the significant risks they, and their organizations, face online. And as a result, many still do not practice essential cyber hygiene.

Perhaps someday in the not-too-distant future that may change. Our hope lies in leveraging machine learning and automation to create very smart and accurate security platforms that can impose resilient protection.

Until we get there – and it may be a decade away — the onus will remain squarely on each organization — and especially on individual employees —  to do the wise thing.

A good start would be to read Mobilizing the C-Suite: Waging War Against Cyberattacks, written by Frank Riccardi, a former privacy and compliance officer from the healthcare sector.

Riccardi engagingly chronicles how company leaders raced down the path of Internet-centric operations, and then cloud-centric operations, paying far too little attention to unintended data security consequences. Here are excerpts of my discussion with Riccardi, edited for clarity and length.

LW: Catastrophic infrastructure and supply chain breaches, not to mention spy balloons and TikTok exploits, have grabbed regulators’ attention. How does your main theme tie in?

Riccardi: My book discusses how the perception of cyberattacks shifted from being mere data breaches to having real-world consequences, especially after high-profile cases in 2021, like Colonial Pipeline and Schreiber Foods.

These attacks sparked public realization that cyber threats can disrupt daily life, leading to anger against corporations, not just cybercriminals, if they failed to implement basic cybersecurity measures. My book emphasizes the heightened responsibility of C-suite leaders, considering the increased public, media, and regulator scrutiny.

LW: You come from the private sector, so you know first-hand how cybersecurity is typically viewed as a cost center and an innovation dampener. Will that have to change?

Riccardi: Absolutely. Cybersecurity shouldn’t be seen as a mere cost but as an existential need. Cyberattacks are increasing, and viewing cybersecurity as a cost center is a dangerous mistake. Companies can leverage cybersecurity as a business enabler and a revenue generator, like Apple and Microsoft.

It’s crucial for companies to perceive cybersecurity as a competitive advantage rather than an innovation dampener.

LW: What must SMBs and mid-market enterprises focus on?

Riccardi:  SMBs face challenges when dealing with cybersecurity implications of software-enabled, cloud-based operations due to financial and skill limitations. Cyber risks from third-party vendors further complicate the situation.

To navigate this, SMBs need to conduct an enterprise risk assessment, implement basic cybersecurity controls, train their workforce, and consider outsourcing cybersecurity to a security-as-a-service provider.

LW: You discuss password management and MFA; how big a bang for the buck is adopting best practices in these areas?

Riccardi:  Basic cyber hygiene is 90 percent of what cybersecurity is all about.  Sure, you need state-of-the-art cybersecurity technology like firewalls, anti-virus software, and intrusion detection systems to keep cybercriminals on the back foot.

The law of large numbers favors the bad guys.  A company may have thousands of employees, but it only takes one phished employee for cybercriminals to bring the network to its knees.

Strong passwords can repel a brute force attack, but MFA is the extra layer of protection when a reused password is used in a credential stuffing attack.  And when strong passwords and MFA let you down, encryption can keep sensitive data from being accessed by cybercriminals.

LW: How important is effective cybersecurity awareness training?

Riccardi:  The human factor is the weakest link in cybersecurity, and that’s why cybercriminals zero in on the company’s employees to bypass cybersecurity defenses.

Companies can prevent social engineering attacks by steeping employees in cyber hygiene and warning them about the sneaky ways cybercriminals launch cyberattacks.  Unfortunately, many cybersecurity training initiatives nose-dive because they are too technical for non-geek employees to understand.

Boring check-the-box training leads to poor employee engagement and a workforce asleep at the switch when cybercriminals come knocking.  The way to avoid this is by taking into account the human factor when designing cybersecurity training; this means making training fun and engaging and helping employees understand their roles and responsibilities in cybersecurity.

LW: Given rising compliance, led by President Biden’s cybersecurity initiatives, where do you see things going in the next 2 to 5 years?

Riccardi: In the next 2 to 5 years, I expect strenuous efforts from the Biden administration to partner with private enterprise to beef up cybersecurity across all industries.  I suspect we’ll see a carrot-and-stick approach combining incentives with regulations to cajole SMBs into adopting cyber hygiene best practices, such as MFA.

Executive accountability and liability for cyberattacks will skyrocket as ransomware progresses as a national security threat and front-page news.

SMBs are likely in a jam, as companies without the means and expertise to build a decent cybersecurity program will struggle in this regulatory environment.  However, engaging a SaaS provider may be a cost-effective way for SMBs to obtain a world-class cybersecurity function that meets compliance requirements.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

Zero trust networking architecture (ZTNA) is a way of solving security challenges in a cloud-first world.

Related: The CMMC sea change

NIST SP 800-207A (SP 207A), the next installment of Zero Trust guidance from the National Institute of Standards and Technology (NIST), has been released for public review.

This special publication was written for security architects and infrastructure designers; it provides useful guidance when designing ZTNA for cloud-native application platforms, especially those in enterprises where applications are hosted in multi-cluster and multi-cloud deployments.

I co-authored SP 207A, and it’s a great blueprint for any organization working to implement a ZTNA, whether they’re working with the U.S. federal government or not.

The 4th Annual Multi-Cloud Conference and Workshop on ZTNA is an upcoming event for anyone interested in how the federal government is advancing standards in ZTNA. The event (May 24-25, in-person and virtual) is hosted by NIST and Tetrate.

Attendees will include cybersecurity professionals, policy makers, entrepreneurs and infrastructure engineers. Registration is free and open to the public.

Useful publications

We’ve collaborated with NIST over the past four years to produce security standards in this space, resulting in several useful publications anyone can access:

•(SP 800-204A) Building Secure Microservices-based Applications Using Service-Mesh Architecture,

•(SP 800-204B) Attribute-based Access Control for Microservices-based Applications using a Service Mesh,

•(SP 800-204C) Implementation of DevSecOps for a Microservices-based Application with Service Mesh,

•(SP 800-207A) A Zero Trust Architecture Model for Access Control in Cloud-Native Applications in Multi-Location Environments (in public review)

A 10,000-Foot View

Zero trust is an approach to cybersecurity that denies access by default, granting authenticated users, devices, and applications access only to the data, services, and systems they need to do their jobs. It’s designed around the assumption that bad actors are already in the network, so it’s focused on mitigating what an attacker inside the perimeter can do via controls you can implement at runtime.

To accomplish this, we map out five runtime checks—named Identity Based Segmentation in the paper—that are made at every hop in the network. These are a minimum you should be doing to mitigate what an attacker can do even if they’re inside your network perimeter. Let’s look at each of those five.

Encryption in transit provides eavesdropping protection and payload authenticity. We want encryption in transit so no one can read sensitive data from our network traffic. More importantly, it provides message authenticity: a bad actor cannot change the data or instructions being sent.

Authentication use cases

When two applications are communicating, we want to know what those applications are. Often, we’re going to implement this using something like SPIFFE for cryptographic workload identity (we also get to use that identity for mTLS, accomplishing encryption in transit at runtime too). A service mesh, like open source Istio, is a well-known way to accomplish encryption in transit and service authentication at the same time.

It’s not enough to know, for instance, that a user’s mobile phone banking app is calling their bank’s server. We must also verify that the action the mobile app is performing is allowed on the server. Through authorization policy, we bound what an attacker can do in space: we limit how they can pivot to continue an attack across the network.

It’s similarly not enough to know that the bank app running on the mobile device is allowed to talk to the bank server. We also need to know that the user is properly logged in to the application and has proven themselves to the system (they’ve authenticated themselves).

Authorization at every step

In the same way we want to authorize the banking app to call the banking server, we want to ensure that the user in session has permission to take the action they’re attempting in the app; we need to make sure each action they take through our infrastructure is authorized at each step. This further helps to bound attacks in space: not only does a bad actor need to compromise an application, they also need to steal valid end-user credentials with the correct capabilities to continue to pivot their attack through the network.

To bring it all together, a common case we see is for organizations to exchange an API key for a JWT at the front door, authenticating the user as part of the exchange. Implementing Identity Based Segmentation, you must assert that the JWT remains valid and has not been tampered with at every network hop.

You then use properties of that authenticated user principal to confirm that the user remains logged in, that the action the app is executing on the server is allowed for this user, and that this communication from app to server is allowed (and that, e.g., the app is not trying to talk to a backend or database it shouldn’t). At every single hop, we ensure the following (a minimal code sketch appears after the list):

•Communication is encrypted.

•The applications communicating are authenticated and allowed to communicate.

•The user in session has been authenticated and is allowed to execute the actions being taken.
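Here is a minimal sketch of those three checks as they might appear in application code. The policy table, claim names, and use of PyJWT with a shared HS256 secret are assumptions to keep the example short; in practice a service mesh like Istio enforces the mTLS and much of the policy in sidecar proxies, and tokens would be asymmetrically signed.

```python
# Illustrative per-hop checks; a service mesh enforces much of this in practice.
import jwt  # PyJWT, assumed available; HS256 keeps the sketch short

SHARED_SECRET = "demo-secret"

# Toy policy: which callers may reach which service, and with what actions.
SERVICE_POLICY = {("mobile-banking-app", "bank-api"): {"view_balance", "transfer"}}


def check_hop(peer_identity: str, destination: str, token: str, action: str) -> bool:
    # 1. Communication is encrypted and the peer is authenticated: the mesh
    #    hands us the caller's mTLS identity, which must be allowed in.
    if (peer_identity, destination) not in SERVICE_POLICY:
        return False
    # 2. The end user is authenticated: the JWT must be valid and untampered.
    try:
        claims = jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    # 3. The end user is authorized for this action at this hop.
    allowed_here = SERVICE_POLICY[(peer_identity, destination)]
    return action in allowed_here and action in claims.get("allowed_actions", [])


user_token = jwt.encode({"sub": "alice", "allowed_actions": ["view_balance"]},
                        SHARED_SECRET, algorithm="HS256")
print(check_hop("mobile-banking-app", "bank-api", user_token, "view_balance"))  # True
print(check_hop("mobile-banking-app", "bank-api", user_token, "transfer"))      # False
```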

 Multi-tier policies

Importantly, in addition to these five core principles, 207A introduces the concept of Multi-Tier Policies. These are at minimum network-tier policies, like firewall and WAF, and identity-tier policies, like those you can implement with a service mesh such as Istio. By relaxing network-tier policies in exchange for adding identity-tier policies, we can maintain a same-or-better security posture while increasing organizational agility because identity-tier policies are built to be dynamic, and are easier and faster to change.

Join us on May 24 and 25 to learn how getting started with zero trust need not be a long, complex process. In fact, you can get started rather quickly, deploying real improvements that deliver a measurable ROI. Given the need for security concepts that protect systems and data from attackers already inside the network, zero trust should be something every organization in a regulated or data-sensitive industry is taking steps toward embracing, sooner rather than later.

About the essayist: Zack Butcher is the founding engineer of Tetrate, which helps platform teams and developers safely and reliably transform their infrastructure for the modern, multi-cloud era.