Microsoft has announced an early access program for its LLM-based security chatbot assistant: Security Copilot.
I am curious whether this thing is actually useful.
By Jeff Chan, Vice President of Technology, MOXFIVE
If you haven’t experienced a ransomware attack, it’s likely only a matter of time. Adding insult to injury, you will receive no warning. One minute the team is working hard to end the day, the next, your SaaS apps stop working, network access disappears, and the phones of each member of the security team start ringing.
That’s when evening plans are canceled and the coffee gets brewed, because getting systems back in order will likely be an all-night affair. The urgency is only natural: every second of downtime cripples the business.
It is precisely when teams begin to scramble that mistakes are made. From my own experience, there are two critical missteps I see time and again. First, teams lose sight of the three key protocols that are critical to follow when responding to an incident—containment, forensics, and recovery.
Second, they take a siloed approach, as if containment, forensics, and recovery were all independent entities. For example, when an attack occurs, one group focuses solely on recovery, where the mantra is “recover at all costs.” In parallel, the remaining teams dive into forensics and containment, where the focus is keeping the data intact for the investigation. Operating on their own islands, each group conducts a damage assessment, determines the underlying causes, kicks off damage containment, and inevitably cuts off all outside communication.
This approach isn’t wrong. All these response activities are valid and essential. What’s missing is balance across these three primary functions. While it might seem counter-intuitive, combining the three will ultimately accelerate the process and help ensure a smoother resolution. The following aims to show why giving equal focus to each area is so vital, starting with containment.
Containment: For anyone who has never conducted a forensic investigation, the aim is to find Indicators of Compromise (IOCs), which are essentially evidence that malicious activity exists. These could take the form of unrecognized files on a system or unusual network traffic, and they guide containment measures designed to prevent further damage. One potential action is for the forensic team to deploy an Endpoint Detection and Response (EDR) solution that can determine what’s been affected. That team then shares its findings with the containment group, which gets to work. This process connects teams that may previously have been disjointed and delivers a more comprehensive response.
Recovery: To recover impacted systems, you need input from the containment team, specifically insight into its efforts, such as installing EDR on a restored system before putting it back into production. As the forensics team identifies IOCs, the containment team feeds them into the EDR solution and any other relevant applications. From there, the recovery team can restore systems with far less concern about reinfection, using the EDR to check whether any of these indicators trigger on a system before it returns to production. Without indicators, recovery is a lot riskier. On the other hand, if a business decision is made to collect all IOCs before systems go back online, it will take longer to get the IT infrastructure up and running, compounding the revenue loss.
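To make that IOC handoff concrete, here is a minimal sketch, deliberately not tied to any particular EDR product: it sweeps a restored host’s filesystem against a forensics-supplied list of SHA-256 hashes before the machine returns to production. The hash list and scan root are placeholders; in practice the containment team would load the IOCs into the EDR platform’s own sweep capability rather than run an ad hoc script.

```python
# Minimal sketch: check a restored host for files matching known IOC hashes
# before returning it to production. Hashes and paths are placeholders.
import hashlib
from pathlib import Path

# Hypothetical SHA-256 IOC hashes supplied by the forensics team
IOC_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sweep(root: Path) -> list[Path]:
    """Return every file under root whose hash matches a known IOC."""
    hits = []
    for p in root.rglob("*"):
        if not p.is_file():
            continue
        try:
            if sha256_of(p) in IOC_HASHES:
                hits.append(p)
        except OSError:
            continue  # unreadable file; a real sweep would log this
    return hits

if __name__ == "__main__":
    matches = sweep(Path("/restored-system"))
    print(f"{len(matches)} IOC hit(s)")
    for m in matches:
        print(m)
```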
Forensics: The recovery team must collect all forensic data before any system restoration efforts commence. This helps the forensics team identify any other IOCs that may be present and then work with the containment team to determine what occurred and how it started, so teams can take the necessary steps to tighten the perimeter.
The theme through this process is that each of these teams is connected, collaborating in an ongoing process where each area is equally balanced, and the process doesn’t stop until the incident is fully resolved. When one group takes precedence over the others, this process begins to break down, which can have a deleterious effect on the business.
The post Incident Management Chronicles: Striking The Right Balance appeared first on Cybersecurity Insiders.
New paper: “Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys”:
Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or to produce formal reports; and restrict access to any documents produced.
So, we’re not able to learn from these breaches because the attorneys are limiting what information becomes public. This is where we think about shielding companies from liability in exchange for making breach data public. It’s the sort of thing we do for airplane disasters.
EDITED TO ADD (6/13): A podcast interview with two of the authors.
News is breaking about a software supply chain attack on the 3CX voice and video conferencing software. 3CX, the company behind 3CXDesktopApp, says it has more than 600,000 customers and 12 million users in 190 countries. Notable names include American Express, BMW, Honda, Ikea, Pepsi, and Toyota.
Experts believe the attackers trojanized this popular phone and video conferencing software, shipping digitally signed but maliciously modified builds that sideload malicious code, in order to target 3CX’s downstream customers.
Known Details
Cybersecurity vendors have identified an active supply chain attack on the 3CX Desktop App, a voice and video conferencing software used by millions. SentinelOne researchers are tracking the malicious activity under the name SmoothOperator, which began as early as February 2022; the attack itself possibly commenced around March 22, 2023.
The trojanized 3CX desktop app serves as the first stage of a multi-stage attack chain that pulls ICO files appended with Base64 data from GitHub, ultimately leading to a third-stage infostealer DLL. The attack affects the Windows Electron client (versions 18.12.407 and 18.12.416) and macOS versions of the PBX phone system.
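To illustrate the delivery trick described above, here is a rough triage heuristic an analyst might use, assuming only the public ICO file layout: compute how large a well-formed icon file should be from its directory entries, then flag any bytes appended past that point and attempt a Base64 decode. This is a sketch for hunting, not a production detection.

```python
# Triage sketch: flag .ico files carrying data appended past the last image,
# the delivery trick reported in this campaign. Offsets follow the public
# ICO format; the Base64 check is a loose heuristic.
import base64
import binascii
import struct
import sys

def expected_ico_size(data: bytes) -> int:
    """Size a well-formed ICO should have, per its header and directory."""
    reserved, image_type, count = struct.unpack_from("<HHH", data, 0)
    if reserved != 0 or image_type != 1:
        raise ValueError("not an ICO file")
    end = 6 + 16 * count  # header plus directory entries
    for i in range(count):
        # bytes-in-resource and image-offset fields of each directory entry
        size, offset = struct.unpack_from("<II", data, 6 + 16 * i + 8)
        end = max(end, offset + size)
    return end

def trailing_payload(path: str) -> bytes | None:
    data = open(path, "rb").read()
    tail = data[expected_ico_size(data):]
    if not tail:
        return None  # nothing appended; file looks well-formed
    try:
        return base64.b64decode(tail.strip(), validate=True)
    except (binascii.Error, ValueError):
        return tail  # appended data that isn't clean Base64 is still odd

if __name__ == "__main__":
    payload = trailing_payload(sys.argv[1])
    print("clean" if payload is None else f"{len(payload)} trailing bytes")
```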
The final payload is an information stealer capable of gathering system information and sensitive data stored in Google Chrome, Microsoft Edge, Brave, and Mozilla Firefox browsers. The macOS sample carries a valid signature and is notarized by Apple, allowing it to run without the operating system blocking it.
Huntress reported 242,519 publicly exposed 3CX phone management systems. Symantec said the information gathered could allow attackers to gauge if the victim was a candidate for further compromise. CrowdStrike attributed the attack with high confidence to North Korean nation-state actor Labyrinth Chollima (aka Nickel Academy), a sub-cluster within the Lazarus Group.
3CX CEO Nick Galea stated the company is working on a new build and advises customers to uninstall the app and reinstall it or use the PWA client as a workaround. The Android and iOS versions are not affected.
3CX is urgently working to release a software update in response to the attack, and the GitHub repository hosting the malicious files has since been taken down. Further updates will be provided as new information emerges.
Expert Comments
Tyler Farrar, CISO, Exabeam
“Any adversary, regardless of whether it is a novice or the work of nation-state actors like the Lazarus Group, is going to go for the path of least resistance to meet their end goal. Weaknesses in the supply chain are one of the simplest, yet most successful, ways to do that. In the case of 3CX, the threat actors were likely not going after the company itself, but the data from its 12 million global customers. Rather than attempt to attack each of the customers individually, the adversaries figured it would be easier to break through 3CX — and they were correct.
Unfortunately, attacks like these are going to become more and more common, and I anticipate software supply chain attacks will be the No. 1 threat vector of 2023. As a result, I encourage organizations to create a thorough vendor risk management plan to vet third parties and require accountability, so they can remain vigilant and potentially stop devastating consequences when third parties are compromised.”
Anand Reservatti, CTO and co-founder, Lineaje
“The ‘trojanizing’ of 3CX’s VoIP software is the latest proof point of why companies need to know what’s in their software.
Companies are still suffering from the fallout of SolarWinds, and now another software supply chain attack is playing out, putting millions of software producers and consumers at risk. The 3CX CEO today asked customers to uninstall the application, but those who might have missed the notification, or who don’t know what’s in their software bill of materials (SBOM), risk destroying their brand and business.
It is critical to understand that not all software is created equal. The 3CX attack occurred when the Electron Windows app was compromised via an upstream library, and it is clear that 3CX had not deployed tools to accurately discover and manage its software supply chain. So, in order to protect the software supply chain, you have to shift to the “left of the shift-left mentality.” And because the software itself is malicious, rather than carrying straight malware, vulnerability and malware scans fall short as well.
This type of attack is particularly challenging for technologies such as vulnerability and malware scans or CI/CD to detect. You need a solution that can do the following:
1) Discover software components and create their entire genealogy, including all transitive dependencies
2) Establish integrity throughout the supply chain without relying on any external tooling and its assertions
3) Evaluate inherent risk by examining each component of the software
4) Remediate inherent risks strategically in order to address the most critical components based on the genealogy
Knowing what’s in your software comes only from knowing what’s in your software supply chain. That is why it is critical to work with solutions that can attest to the integrity of the supply chain behind all software built and bought. With more details surfacing, including possible ties to a nation-state hacking group, it is essential for software producers and consumers to be able to attest to what exactly is in their software to prevent devastating consequences.”
Kayla Underkoffler, Lead Security Technologist, HackerOne
“Cybersecurity professionals already face an uphill battle as defenders; our 2022 Attack Resistance Report found that about one-third of respondents monitor less than 75% of their attack surface, and almost 20% believe that over half of their attack surface is unknown or not observable. The complexity of attack surface monitoring compounds as attackers take the fight to a more granular level by targeting supply chain vulnerabilities.
And unfortunately, that’s exactly what we’re seeing. Malicious actors now strive to embed themselves more deeply within the enterprise tech stack because cybercriminals understand the potential impact of accessing the most sensitive areas of an organization’s network. This can be done through critical dependencies within the software supply chain or a seemingly unchecked corner of the environment.
That’s why it’s critical organizations understand what’s in their environment and how that software interacts with their critical business processes. It’s no longer enough to just document components and dependencies once in the development lifecycle and be done. Today, organizations must proactively consider new solutions to prevent attacks.
Examples of tools in use today for active monitoring of software include IBM’s recently developed SBOM Utility and License Scanner: two open-source tools that facilitate and standardize SBOM policies for organizations. These help build a living, breathing inventory of what’s in use in an organization’s current environment so organizations can respond quickly to software supply chain disruptions. Ethical hackers have also proven to be creative resources, skilled at identifying open source and software supply chain vulnerabilities, as well as undiscovered assets that may impact an organization’s software supply chain.”
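As a toy illustration of that living-inventory idea (not a substitute for dedicated SBOM tooling like the IBM utilities mentioned above), a few lines of Python can enumerate the packages installed in the current environment and emit a minimal SBOM-like JSON document:

```python
# Toy illustration: enumerate installed Python packages as a minimal,
# SBOM-like inventory. Real SBOM tools capture far more (licenses,
# hashes, transitive relationships) than this sketch.
import json
from datetime import datetime, timezone
from importlib.metadata import distributions

components = sorted(
    ({"name": d.metadata["Name"], "version": d.version} for d in distributions()),
    key=lambda c: (c["name"] or "").lower(),
)
inventory = {
    "generated": datetime.now(timezone.utc).isoformat(),
    "component_count": len(components),
    "components": components,
}
print(json.dumps(inventory, indent=2))
```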
The post 3CX Desktop App Supply Chain Attack Targets Millions – Known Facts and First Expert Comments appeared first on Cybersecurity Insiders.
By Mike Wilkinson
Mike Tyson famously said, “Everyone has a plan until they get punched in the mouth.” That applies to the world of boxing—and to the world of cyberattacks. Many companies have an Incident Response (IR) plan in place. But those plans don’t always hold up when an actual cyberattack occurs.
At Avertium, we carry out hundreds of IR engagements a year, so I’m highly familiar with what makes IR plans useful—and what doesn’t. Strong IR plans can help eliminate headaches and wasted time and help your organization more effectively respond in what is typically a very stressful situation. Here are six things you need to do to craft an effective incident response plan.
The list of contacts should appear in an appendix at the back of the plan, which makes it simple to consult in the heat of the moment, as well as easy to update. Elsewhere in the document, use generic titles rather than names so that you don’t have to refresh the entire document any time an employee or vendor changes.
Define how information and updates will be shared, to whom, and how often. Set the cadence up front so that expectations can be managed: For instance, a daily update call unless something critical is uncovered that requires action on the part of a larger group.
Keep it as simple and precise as possible: for X type of incident, Y is the response group and their responsibilities, and Z are the steps they take. Consider having a one- to two-page high-level policy that sets out your organization’s principles—the things the business is most concerned with.
These exercises can also be valuable ways of unearthing issues unrelated to the IR plan. For instance, in working through a ransomware scenario your IT team may realize there is sensitive information being stored on a system where it shouldn’t be, or that the data retention time isn’t adequate considering the amount of time that can pass between compromise and detection. It may highlight an opportunity to make a fix or fixes that will actually make you less vulnerable.
Being hit with a cyberattack can be a scary and confusing time; coming up with an IR plan shouldn’t be. If you let the above tips shape your process of creating or updating one, you’ll be in good shape.
Mike Wilkinson leads Avertium’s Cyber Response Unit, which is dedicated to helping clients investigate and recover from IT security incidents on a daily basis. He has been conducting digital investigations since joining Australia’s NSW Police Force, State Electronic Evidence Branch in 2003, where he led a team of civilians in one of the world’s largest digital forensic labs, and has led Incident Response teams in Asia, Europe, and the Americas.
The post 6 Ways to Create an Incident Response Plan That’s Actually Effective appeared first on Cybersecurity Insiders.
The time cost of incident response for security teams may be greater – and more complex – than we’ve been assuming. To see that in action, let’s look at a hypothetical scenario that should feel familiar to most cybersecurity analysts.
A security engineer, Casey, is tuning a SIEM to detect a specific threat that poses an increased risk to their organization. The project has been allotted a set amount of time for completion, and the research and testing Casey must do to get the query and tuning correct, accurate, and effective are essential to the business. This is one of many projects on their plate. They are getting into the research and starting to understand the attack well enough to begin drafting the first pieces of the alert, and then…
An employee forwards an email they believe to be phishy. Casey looks at the email and confirms it requires further investigation. First, though, they must walk the user through sending the email as an attachment so that its headers and other details can be examined for the artifacts of a malicious message. After that, Casey does the assessment and responds appropriately to the event.
Now, 25 minutes have passed. Casey returns to focus on tuning the alert but needs to go back over the research a bit more to confirm where they left off. Another 10 minutes pass before they are back where they were when the phishing alert came in. Now they are gathering the right information for the project and trying to get the right people involved, then…
An EDR alert comes in, this time from a director’s laptop. It takes priority: the director needs the laptop for a customer presentation and leaves for the airport in three hours. Casey steps away to analyze the alert, eradicate the malware, and begin a scan across the organization to determine whether the malware’s hash value is seen elsewhere. Thirty minutes go by, in part because an incident report needs to be added to the ticket. Casey sits back down and, for another 20 minutes, must recalibrate their thoughts to focus on the task at hand.
Scenarios like this are happening in almost every organization today. High-risk security projects are delayed because fires pop up and have to be put out. In the scenario we’ve just laid out, the engineer has lost an hour and 25 minutes of project work to incidents. Those incidents carry risk if not dealt with promptly, but the project the engineer keeps getting pulled from carries a high risk of impact if not completed.
Cal Newport, a computer science professor at Georgetown University, famously explained in his seminal book “Deep Work” that it takes each person a different amount of time to pivot from one task to another. It’s how our brains work. I’m calling that amount of time that it takes to pivot “grey time.” Grey time is not normally added into the time it takes to respond to incidents, but we should change that.
Whether it takes 30 seconds, 5 minutes, or 15 minutes to respond to an incident, you have to add 5 to 25 minutes of grey time to the process to pivot back to the work previously being performed. The longer the break from the task, the longer it may take to get back into the project fully. Grey time is just as detrimental to an organization as not responding to the incidents. There are quite a few statistics out there that help us quantify distractions and interruptions:
Incidents can be distractions or interruptions. The fact is that some of the events security professionals respond to are benign; they never trigger the incident response plan, yet they still prevent prioritized work from being completed.
Here is where Security Orchestration, Automation, and Response (SOAR) comes into play. Those manual tasks security professionals are doing that take time away from risk-informed projects to secure the business can be automated. If tasks cannot be automated fully, we can at least automate the process of pivoting from tool to tool. SOAR can eliminate the manual notation in a ticketing system and the documentation of an incident report. It can also reduce time to respond and help eliminate grey time.
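As one concrete example, consider the phishing triage step from Casey’s story earlier. Pulling a reported message’s headers apart is easy to script with Python’s standard library; a SOAR playbook would chain a step like this with enrichment and ticket updates automatically. The header list and file path below are illustrative:

```python
# Sketch of one step a SOAR playbook could automate: extract the headers
# an analyst would otherwise inspect by hand from a reported .eml file.
import sys
from email import policy
from email.parser import BytesParser

# Headers commonly checked during phishing triage (illustrative list)
HEADERS_OF_INTEREST = [
    "From", "Reply-To", "Return-Path", "Subject",
    "Authentication-Results", "Received-SPF",
]

def triage(eml_path: str) -> dict:
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    summary = {h: msg.get(h, "<missing>") for h in HEADERS_OF_INTEREST}
    # Each Received hop helps reconstruct the message's delivery path.
    summary["Received"] = msg.get_all("Received", [])
    return summary

if __name__ == "__main__":
    for header, value in triage(sys.argv[1]).items():
        print(f"{header}: {value}")
```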
In an industry where alert fatigue and employee attrition are pervasive issues, the need is high for SOAR’s extensive automation capabilities. Think about the tasks in your organization that you would automate if you could, because they are taking up more time than necessary. We can do some quick math to find your organization's annual cost of manual response for each of those tasks, including grey time.
I encourage you to do this for each playbook or process you have.
What we haven’t done here is add in the grey time. On average, it takes about 23 minutes and 15 seconds to regain focus on a task after a distraction. So, with that in mind, let's round out this post by quantifying our story from earlier.
Let’s say that Casey, our engineer, takes 30 minutes for each phishing email, and malware compromises take 15 minutes to contain and eradicate. Both incident reports take about 20 minutes. Let’s also say that the organization sees about 16 phishing instances per week (ti) and phishing with the reporting takes 50 minutes. Let’s add in the grey time at 20 minutes to make it 70 minutes (tm).
Using the national average salary of an entry-level incident and intrusion analyst, $88,226, we can break that down to an hourly rate of $42.41. Sixteen incidents per week × 70 minutes × 52 weeks ÷ 60 works out to 970.7 hours per year (ty). From there, 970.7 (ty) × $42.41 (hr) = $41,167.39.
That’s just over $41K spent on manual responses to phishing each year. What about the malware? I’ll shorthand it because I believe you get the picture. Let’s say malware incidents happen about 10 times a week.
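Here is the arithmetic spelled out as a short script: minutes per incident × incidents per week × 52 weeks, converted to hours, times the hourly rate. The phishing figures come straight from the numbers above; applying the same 20 minutes of grey time to malware incidents is my assumption, mirroring the phishing case.

```python
# Annual cost of a manual response process, including grey time.
# Phishing figures come from the article; the 20-minute grey time on
# malware mirrors the phishing assumption and is an estimate.
HOURLY_RATE = 42.41  # $88,226 / 2,080 working hours

def annual_cost(minutes_per_incident: float, incidents_per_week: float):
    hours_per_year = minutes_per_incident * incidents_per_week * 52 / 60  # ty
    return hours_per_year, hours_per_year * HOURLY_RATE

phish_hours, phish_cost = annual_cost(70, 16)      # 50 min handling + 20 grey
malware_hours, malware_cost = annual_cost(55, 10)  # 35 min handling + 20 grey

print(f"Phishing: {phish_hours:.1f} h/yr -> ${phish_cost:,.2f}")  # ~970.7 h, ~$41,167
print(f"Malware:  {malware_hours:.1f} h/yr -> ${malware_cost:,.2f}")
print(f"Total:    ${phish_cost + malware_cost:,.2f}")
```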
That’s nearly a full-time employee salary for just two manual processes!
SOAR is becoming increasingly needed within our information security programs. Not only are we wasting time on manual processes that could be automated, but we are adding grey time to our workday and decreasing the time we have to work on high-priority projects that are informed by business risk and necessary to protect revenue and business operations. With SOAR, you can refocus your efforts on risk-relevant tasks and limit manual task interruptions. You can also reduce grey time and increase the effectiveness of your security program. With SOAR, it’s all blue skies – and no grey time.
A growing number of regulations require organizations to report significant cybersecurity incidents. We've created a chart that summarizes 11 proposed and current cyber incident reporting regulations and breaks down their common elements, such as who must report, what cyber incidents must be reported, the deadline for reporting, and more.
This chart is intended as an educational tool to enhance the security community’s awareness of upcoming public policy actions, and provide a big picture look at how the incident reporting regulatory environment is unfolding. Please note, this chart is not comprehensive (there are even more incident reporting regulations out there!) and is only current as of August 8, 2022. Many of the regulations are subject to change.
This summary is for educational purposes only and nothing in this summary is intended as, or constitutes, legal advice.
Peter Woolverton led the research and initial drafting of this chart.
The SEC recently proposed a regulation to require all public companies to report cybersecurity incidents within four days of determining that the incident is material. While Rapid7 generally supports the proposed rule, we are concerned that the rule requires companies to publicly disclose a cyber incident before the incident has been contained or mitigated. This post explains why this is a problem and suggests a solution that still enables the SEC to drive companies toward disclosure.
(Terminology note: “Public companies” refers to companies that have stock traded on public US exchanges, and “material” means information that “there is a substantial likelihood that a reasonable shareholder would consider it important.” “Containment” aims to prevent a cyber incident from spreading. Containment is part of “mitigation,” which includes actions to reduce the severity of an event or the likelihood of a vulnerability being exploited, though may fall short of full remediation.)
In sum: The public disclosure of material cybersecurity incidents prior to containment or mitigation may cause greater harm to investors than a delay in public disclosure. We propose that the SEC provide an exemption to the proposed reporting requirements, enabling a company to delay public disclosure of an uncontained or unmitigated incident if certain conditions are met. Additionally, we explain why we believe other proposed solutions may not meet the SEC’s goals of transparency and avoidance of harm to investors.
The purpose of the SEC’s proposed rule is to help enable investors to make informed investment decisions. This is a reflection of the growing importance of cybersecurity to corporate governance, risk assessment, and other key factors that stockholders weigh when investing. With the exception of reporting unmitigated incidents, Rapid7 largely supports this perspective.
The SEC’s proposed rule would (among other things) require companies to disclose material cyber incidents on Form 8-K, which is publicly available via the EDGAR system. Crucially, the proposed rule makes no distinction between public disclosure of incidents that are contained or mitigated and incidents that are not. While the public-by-default nature of the disclosure creates new problems, it also aligns with the SEC’s purpose in proposing the rule.
In contrast to the SEC’s proposed rule, the purpose of most other incident reporting regulations is to strengthen cybersecurity – a top global policy priority. As such, most other cyber incident reporting regulators (such as CISA, NERC, FDIC, Fed. Reserve, OCC, NYDFS, etc.) do not typically make incident reports public in a way that identifies the affected organization. In fact, some regulations (such as CIRCIA and the 2021 TSA pipeline security directive) classify company incident reports as sensitive information exempt from FOIA.
Beyond regulations, established cyber incident response protocol is to avoid tipping off an attacker until the incident is contained and the risk of further damage has been mitigated. See, for example, CISA’s Incident Response Playbook (especially sections on opsec) and NIST’s Computer Security Incident Handling Guide (especially Section 2.3.4). For similar reasons, it is commonly the goal of coordinated vulnerability disclosure practices to avoid, when possible, public disclosure of a vulnerability until the vulnerability has been mitigated. See, for example, the CERT Guide to Coordinated Disclosure.
While it may be reasonable to require disclosure of a contained or mitigated incident within four days of determining its materiality, a strict requirement for public disclosure of an unmitigated or ongoing incident is likely to expose companies and investors to additional danger. Investors are not the only group that may act on a cyber incident report, and such information may be misused.
Cybercriminals often aim to embed themselves in corporate networks without the company knowing. Maintaining a low profile lets attackers steal data over time, quietly moving laterally across networks and steadily gaining greater access, sometimes over a period of years. But when the cover is blown and the company knows about its attacker? Forget secrecy; it’s smash-and-grab time.
Public disclosure of an unmitigated or uncontained cyber incident will likely lead to attacker behaviors that cause additional harm to investors. Note that such acts would be in reaction to the public disclosure of an unmitigated incident, and not a natural result of the original attack. For example:
In addition, requiring public disclosure of uncontained or unmitigated cyber incidents may result in mispricing the stock of the affected company. By contradicting best practices for cyber incident response and inviting new attacks, the premature public disclosure of an uncontained or unmitigated incident may provide investors with an inaccurate measure of the company’s true ability to respond to cybersecurity incidents. Moreover, a premature disclosure during the incident response process may result in investors receiving inaccurate information about the scope or impact of the incident.
Rapid7 is not opposed to public disclosure of unmitigated vulnerabilities or incidents in all circumstances, and our security researchers publicly disclose vulnerabilities when necessary. However, public disclosure of unmitigated vulnerabilities typically occurs after failure to mitigate (such as due to inability to engage the affected organization), or when users should take defensive measures before mitigation because ongoing exploitation of the vulnerability “in the wild” is actively harming users. By contrast, the SEC’s proposed rule would rely on a public disclosure requirement on a restrictive timeline in nearly all cases, creating the risk of additional harm to investors that can outweigh the benefits of public disclosure.
Below, we suggest a solution that we believe achieves the SEC’s ultimate goal of investor protection by requiring timely disclosure of cyber incidents while avoiding the unnecessary additional harm to investors that may result from premature public disclosure.
Specifically, we suggest that the proposed rule remain largely the same — i.e., the SEC continues to require that companies determine whether an incident is material as soon as practicable after discovery, and file a report on Form 8-K four days after the materiality determination under normal circumstances. However, we propose that the rule be revised to also provide companies with a temporary exemption from public disclosure if each of the below conditions is met:
The determination of the applicability of the aforementioned exemption may be made simultaneously with the determination of materiality. If the exemption applies, the company may delay public disclosure until the conditions no longer hold, at which point it must publicly disclose the cyber incident via Form 8-K no later than four days after the date on which the exemption ceased to apply. The 8-K disclosure could note that, prior to filing, the company relied on the exemption. Existing insider trading restrictions would, of course, continue to apply during the disclosure delay.
If an open-ended delay in public disclosure for containment or mitigation is unacceptable to the SEC, then we suggest that the exemption only be available for 30 days after the determination of materiality. In our experience, the vast majority of incidents can be contained and mitigated within that time frame. However, cybersecurity incidents can vary greatly, and there may nonetheless be rare outliers where the mitigation process exceeds 30 days.
Rapid7 is aware of other solutions being floated to address the problem of public disclosure of unmitigated cyber incidents. However, these carry drawbacks: they either do not align with the purpose of the SEC rule or simply do not make sense for cybersecurity. For example:
The SEC has an extensive list of material information that it requires companies to disclose publicly on 8-Ks – everything from bankruptcies to mine safety. However, public disclosure of any of these other items is not likely to prompt new criminal actions that bring additional harm to investors. Public disclosure of unmitigated cyber incidents poses unique risks compared with other disclosures and should be considered in that light.
The SEC has long been among the most forward-looking regulators on cybersecurity issues. We thank the Commission for acknowledging the significance of cybersecurity to corporate management and for taking the time to listen to feedback from the community. We agree on the usefulness of disclosing material cybersecurity incidents, but we encourage the SEC to ensure its public reporting requirement avoids undermining its own goals and handing attackers more opportunities.
GitOps is arguably the hottest trend in software development today. It is a new work model that is widely adopted due to its simplicity and the strong benefits it provides for development pipelines in terms of resilience, predictability, and auditability. Another important aspect of GitOps is that it makes security easier, especially in complex cloud […]
The post What Is GitOps and How Will it Impact Digital Forensics? appeared first on The State of Security.