Rapid7 Infuses Generative AI into the Insight Platform to Supercharge SecOps and Augment MDR Services

In the ever-evolving landscape of cybersecurity, staying ahead of threats is not just a goal—it's a necessity. At Rapid7, we are pioneering the infusion of artificial intelligence (AI) into our platform and service offerings, transforming the way security operations centers (SOCs) around the globe operate. We’ve been utilizing AI in our technologies for decades, establishing patented models to better and more efficiently solve customer challenges. Furthering this endeavor, we’re excited to announce we’ve extended the Rapid7 AI Engine to include new Generative AI capabilities being used by our internal SOC teams, transforming the way we deliver our MDR services.

A Thoughtful, Deliberate Approach to AI Model Deployment

At Rapid7, one of our core philosophical beliefs is that vendors - like ourselves - should not lean on customers to tune our models. This belief shapes our approach to deploying AI models: we first release them to our internal SOC teams to be trained and battle-tested before making them available to customers via in-product experiences.

Another core pillar of our AI development principles is that human supervision is essential and can’t be completely removed from the process. We believe wholeheartedly in the efficacy of our models, but the reality is that AI is not immune from making mistakes. At Rapid7, we have the advantage of working in lockstep with one of the world's leading SOC teams. With a continuous feedback loop in place between our frontline analysts and our AI and data science team, we’re constantly fine-tuning our models, and MDR customers benefit from knowing our teams are validating any AI-generated output for accuracy.

Intelligent Threat Detection and Continuous Alert Triage Validation

The first line of defense in any cybersecurity strategy is the ability to detect threats accurately and efficiently. The Rapid7 AI Engine leverages a massive volume of high-fidelity risk and threat data to enhance alert triage, and has been extended with a combination of traditional machine learning (ML) and Generative AI models to accurately label new security alerts as malicious or benign, ensuring analysts can focus on only the alerts that are truly malicious. This boosts the signal-to-noise ratio, enabling Rapid7 analysts to spend more time investigating the security signals that matter to our customers.
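
The idea of scoring an alert and routing it to one of a few triage outcomes can be sketched as follows. This is a minimal illustration, not Rapid7's actual engine: the feature names, weights, and thresholds are all hypothetical.

```python
# Illustrative alert-triage sketch (hypothetical features, weights, and
# thresholds): combine weighted signals into a score, then route the alert
# to one of three triage outcomes.

WEIGHTS = {
    "known_bad_ioc": 0.6,        # indicator matches threat intelligence
    "anomalous_process": 0.3,    # process rarely seen in this environment
    "off_hours_activity": 0.1,   # activity outside the user's normal hours
}

def triage_alert(features, malicious_threshold=0.5, benign_threshold=0.2):
    """Return 'malicious', 'needs_review', or 'benign' for a feature dict."""
    score = sum(WEIGHTS.get(name, 0.0) for name, present in features.items() if present)
    if score >= malicious_threshold:
        return "malicious"
    if score <= benign_threshold:
        return "benign"
    return "needs_review"

print(triage_alert({"known_bad_ioc": True, "off_hours_activity": True}))  # malicious
print(triage_alert({"off_hours_activity": True}))                         # benign
print(triage_alert({"anomalous_process": True}))                          # needs_review
```

In practice the "weights" would come from trained ML or generative models rather than a fixed table, and the middle band is exactly where human analyst review remains essential.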

Introducing Our AI-Powered SOC Assistant

Generative AI is not just a tool; it's a game-changer for SOC efficiency. Our AI-native SOC assistant empowers MDR analysts to quickly respond to security threats and proactively mitigate risks on behalf of our customers. Because we fundamentally believe AI should be trained by the knowledge of our teams and vetted processes, our SOC assistant utilizes our vast internal knowledge bases. Sources like the Rapid7 MDR Handbook - a resource amassed over decades of experience cultivated by our elite SOC team - enable the assistant to guide analysts through complex investigations and streamline response workflows, keeping our analysts a step ahead.

Rapid7 is further using generative AI to carefully automate the drafting of security reports for SOC analysts, typically a manual and time-intensive process. With more than 11,000 customers globally, the Rapid7 SOC triages a huge volume of activity each month, and its summaries are critical for keeping customers fully updated on what's happening in their environment and the actions performed on their behalf. Beyond providing expert guidance, the AI assistant can automatically generate incident reports once investigations are closed out, streamlining the process and ensuring we communicate updates to customers in a timely manner. While AI is a key tool for streamlining report building and delivery, every report generated by the Rapid7 AI Engine is reviewed and refined by our SOC teams, making certain every data point is accurate and actionable.

An Enabler for Secure AI/ML Application Development

We know we’re not alone in developing Generative AI solutions, and as such we’re also focused on delivering capabilities that allow our customers to implement and adhere to AI/ML development best practices. We continue to expand our support for Generative AI services from major cloud service providers (CSPs), including AWS Bedrock, Azure OpenAI service and GCP Vertex. These services can be continuously audited against best practices outlined in the Rapid7 AI/ML Security Best Practices compliance pack, which includes the mitigations outlined in the OWASP Top 10 for ML and large language models (LLMs). Our continuous auditing process, enriched by InsightCloudSec’s Layered Context, offers a comprehensive view of AI-related cloud risks, ensuring that our customers' AI-powered assets are secure.
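Continuous auditing of this kind boils down to evaluating an inventory of AI-related cloud resources against a rule set and surfacing the failures as findings. The sketch below is purely illustrative: the rule names and resource fields are hypothetical, not the actual Rapid7 AI/ML Security Best Practices compliance pack.

```python
# Hypothetical continuous-audit sketch: evaluate AI-related cloud resources
# against a small best-practice rule set and collect failures as findings.
# Rule names and resource fields are illustrative only.

RULES = [
    ("model-endpoint-not-public", lambda r: not r.get("publicly_accessible", False)),
    ("training-data-encrypted",   lambda r: r.get("encryption_at_rest", False)),
    ("prompt-logging-enabled",    lambda r: r.get("logging_enabled", False)),
]

def audit(resources):
    """Return a list of (resource_id, failed_rule) findings."""
    findings = []
    for resource in resources:
        for rule_name, check in RULES:
            if not check(resource):
                findings.append((resource["id"], rule_name))
    return findings

inventory = [
    {"id": "bedrock-ep-1", "publicly_accessible": True,
     "encryption_at_rest": True, "logging_enabled": True},
    {"id": "vertex-ep-2", "encryption_at_rest": True, "logging_enabled": True},
]
print(audit(inventory))  # only the publicly accessible endpoint fails a rule
```

Running a check set like this on every inventory refresh, rather than at point-in-time assessments, is what makes the auditing "continuous."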

The Future of MDR Services is Powered by AI

The integration of Generative AI into the Insight Platform is not just about helping our teams keep pace - it's about setting the pace. With unparalleled scalability and adaptability, Rapid7 is committed to maintaining a competitive edge in the market, particularly as it relates to leveraging AI to transform security operations. Our focus on operational efficiencies, cost reduction, and improved quality of service is unwavering. We're not just responding to the changing threat landscape – we're reshaping it.

The future of MDR services is here, and it's powered by the Rapid7 AI Engine.

AI Trust, Risk, and Security Management: Why Tackle Them Now?

Co-authored by Sabeen Malik and Laura Ellis

In the evolving world of artificial intelligence (AI), keeping our customers secure and maintaining their trust is our top priority. As AI technologies integrate more deeply into our daily operations and services, they bring a set of unique challenges that demand a robust management strategy:

  1. The Black Box Dilemma: AI models pose significant challenges in terms of transparency and predictability. This opaque nature can complicate efforts to diagnose and rectify issues, making predictability and reliability hard to achieve.
  2. Model Fragility: AI's performance is closely tied to the data it processes. Over time, subtle changes in data input—known as data drift—can degrade an AI system’s accuracy, necessitating constant monitoring and adjustments.
  3. Easy Access, Big Responsibility: The democratization of AI through cloud services means that powerful AI tools are just a few clicks away for developers. This ease of access underscores the need for rigorous security measures to prevent misuse and effectively manage vulnerabilities.
  4. Staying Ahead of the Curve: With AI regulation still in its formative stages, proactively developing self-regulatory frameworks like ours helps prepare us for future compliance requirements; most importantly, it builds trust among our customers. When thinking about AI’s promises and challenges, we know that trust is earned. That trust is also a concern for global policymakers, which is why we look forward to engaging with NIST on discussions related to the AI Risk Management, Cyber Security, and Privacy frameworks. It’s also why we were an inaugural signer of the CISA Secure by Design Pledge, demonstrating to government stakeholders and customers our commitment to building secure products and understanding the stakes at large.
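
The data drift mentioned in item 2 can be monitored with simple distribution comparisons. Below is a minimal sketch using the Population Stability Index (PSI), a common drift metric; the bucket edges and sample values are illustrative, and the usual 0.1/0.25 interpretation thresholds are industry rules of thumb, not Rapid7-specific settings.

```python
import math

# Data-drift sketch using the Population Stability Index (PSI): compare how a
# feature's values fall into fixed buckets for a baseline window vs. a recent
# window. PSI near 0 means the distributions agree; large PSI suggests drift.

def bucket_fractions(values, edges):
    counts = [0] * (len(edges) + 1)
    for v in values:
        i = sum(1 for e in edges if v >= e)  # index of the bucket v lands in
        counts[i] += 1
    total = len(values)
    return [max(c / total, 1e-6) for c in counts]  # clip to avoid log(0)

def psi(baseline, recent, edges=(0.25, 0.5, 0.75)):
    b = bucket_fractions(baseline, edges)
    r = bucket_fractions(recent, edges)
    return sum((rf - bf) * math.log(rf / bf) for bf, rf in zip(b, r))

baseline = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
stable   = [0.2, 0.1, 0.45, 0.6, 0.85, 0.9]   # same shape as baseline
shifted  = [0.7, 0.8, 0.85, 0.9, 0.95, 0.99]  # mass moved to the top bucket

print(psi(baseline, stable))   # small: distributions agree
print(psi(baseline, shifted))  # large: likely drift, investigate and retrain
```

In production this comparison would run continuously over model input features, feeding the "constant monitoring and adjustments" described above.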

Our TRiSM (Trust, Risk, and Security Management) framework isn’t merely a component of our operations—it’s a foundational strategy that guides us in navigating the intricate landscape of AI with confidence and security.

How We Approach AI Security at Rapid7

Rapid7 leverages the best available technology to protect our customers' attack surfaces. Our mission drives us to keep abreast of the latest AI advancements to deliver optimal value to customers while effectively managing the inherent risks of the technology.

Innovation and scientific excellence are key aspects of our AI strategy. We strive for continuous improvement, leveraging the latest technological innovations and scientific research. By engaging with thought leaders and adopting best practices, we aim to stay at the forefront of AI technology, ensuring our solutions are not only effective but also pioneering and thoughtful.

Our AI principles center on transparency, fairness, safety, security, privacy, and accountability. These principles are not just guidelines; they are integral to how we build, deploy, and manage our AI systems. Accountability is a cornerstone of our strategy, and we hold ourselves responsible for the proper functioning of our AI systems so we can ensure they respect and embody our principles throughout their lifecycle. This includes ongoing oversight, regular audits, and adjustments as needed based on feedback and evolving standards.

We have leveraged a number of AI risk management frameworks to inform our approach. Most notably, we have adopted the NIST AI Risk Management Framework and the Open Standard for Responsible AI. These frameworks help us comprehensively assess and manage AI risks, from the early stages of development through deployment and ongoing use. The NIST framework provides a thorough methodology for lifecycle risk management, while the Open Standard offers practical tools for evaluation and ensures that our AI systems are user-centric and responsible.

We are committed to ensuring that our AI deployments are not only technologically advanced but also adhere to the highest standards of security and ethical responsibility.

AI Integration in Action: Making It Work Day-to-Day

We take a practical approach to adhere to our AI TRiSM framework by integrating it into the daily operations of our existing technologies and processes, ensuring that AI enhances rather than complicates our security posture:

  1. Clear Rules: We have developed and implemented detailed enterprise-wide policies and operational procedures that govern the deployment and use of AI technologies. These guidelines ensure consistency and compliance across all departments and initiatives.
  2. Transparency Matters: We leverage our own InsightCloudSec tooling to gain comprehensive visibility into our AI deployments across various environments. This visibility is crucial to our security strategy, encapsulated by the philosophy, "You can’t protect what you can’t see." It allows us to monitor, evaluate, and adjust our AI resources proactively.
  3. Throughout the Development Lifecycle: We integrate rigorous AI evaluations at every phase of our software development lifecycle. From the initial development stages to production and through regular post-deployment assessments, our framework ensures that AI systems are safe, effective, and aligned with our ethical standards.
  4. Smart Governance: By embedding AI-specific governance protocols into our existing code and cloud configuration management systems, we maintain strict control over all AI-related activities. This integration ensures that our AI initiatives comply with established best practices and regulatory requirements.
  5. Empowering Our Team: We recognize the critical need for advanced AI skills in today’s tech landscape. To address this, we offer training programs and collaborative opportunities, which not only foster innovation but also ensure adherence to best practices. This approach empowers our teams to innovate confidently within a secure and supportive environment.

Integrating AI into our core processes enhances our operational security and underscores our commitment to ethical innovation. At Rapid7, we are dedicated to leading responsibly in the AI space, ensuring that our technological advancements positively contribute to our customers, company, and society.

Our AI TRiSM framework is not merely a set of policies—it's a proactive, strategic approach to securely and ethically harnessing new technologies. As we continue to innovate and push the boundaries of what’s possible with AI, we stay focused on setting a high bar for standards of responsible and secure AI usage, ensuring that our customers always receive the best technology solutions. Learn more here.

Rapid7 Takes Next Step in AI Innovation with New AI-Powered Threat Detections

Digital transformation has created immense opportunity to generate new revenue streams, better engage with customers and drive operational efficiency. A decades-long transition to cloud as the de-facto delivery model of choice has delivered undeniable value to the business landscape. But any change in operating model brings new challenges too.

The speed, scale, and complexity of modern IT environments leave security teams tasked with analyzing mountains of data to keep pace with the ever-expanding threat landscape. This dynamic puts security analysts on their heels, constantly reacting to incoming threat signals from tools that weren’t purpose-built for hybrid environments, creating coverage gaps and a need to swivel-chair between a multitude of point solutions. Making matters worse? Attackers have increasingly looked to weaponize AI technologies to launch sophisticated attacks, benefiting from increased scale, easy access to AI-generated malware packages, and more effective social engineering and phishing using generative AI.

To combat these challenges, we need to equip our security teams with modern solutions leveraging AI to cut through the noise and boost the signals that matter. Our AI algorithms alleviate alert and action fatigue by delivering visibility across your IT environment and intelligently prioritizing the most important risk signals.

Rapid7 Has Been an AI Innovator for Decades

There has been a groundswell of development and corresponding interest in generative AI, particularly in the last few years as mainstream adoption of large language models (LLMs) grows. Most notably, OpenAI’s ChatGPT has brought AI to the forefront of people’s minds. This buzz has resulted in a number of vendors in the security space launching their own intelligent assistants and working to incorporate AI/ML into their respective solutions to keep pace.

From our perspective, this is great news and a huge step forward in the data & AI space. Rapid7 is also accelerating investment, but we’re certainly not starting from scratch. In fact, Rapid7 was a pioneer in AI development for security use cases, dating all the way back to our earliest days with our VM Expert System in the early 2000s.

Built on decades of risk analysis and continuously trained by our expert SOC team, Rapid7 AI enables your team to focus on what matters most by proactively shrinking your attack surface and intelligently re-balancing the scales between signal and noise.

With visibility across your hybrid attack surface, the Insight Platform enables proactive prevention, leveraging a proprietary AI-based detection engine to spot threats faster than ever and automatically prioritize the signals that matter most based on likelihood of exploitation and potential business impact. Based on learnings from your own environment and security operations over time, the platform will intelligently recommend updates to detection rule settings in an effort to reduce excess noise and eliminate false positives.

By integrating our AI capabilities into the Rapid7 platform, customers benefit from:

  • World-class threat efficacy, with AI-driven detection of anomalous activity. With a vast amount of legitimate activity occurring across customer environments, our AI algorithms validate whether activity is actually malicious, allowing teams to spot unknown threats faster than ever.
  • Help cutting through the noise by identifying the signals that matter most. Our AI algorithms automatically prioritize risks and threats, intelligently suppressing benign alerts to eliminate the noise so analysts can focus on what matters most.
  • The confidence that they’re taking action on AI-generated insights they can trust, built on decades of risk and threat analysis and trained by a team of recognized innovators in AI-driven security.

Recent Innovations in AI-driven Threat Detection

We’ve recently announced two new AI/ML-powered threat detection capabilities aimed at enabling teams to detect unknown threats across a customer’s environment faster than ever before without introducing excess noise.

  • Cloud Anomaly Detection
    Cloud Anomaly Detection is an AI-powered, agentless detection capability designed to detect and prioritize anomalous activity within an organization’s own cloud environment. The proprietary AI engine goes beyond simply detecting suspicious behavior; it automatically suppresses benign signals to reduce noise, eliminate false positives, and enable teams to focus on investigating highly-probable active threats. For more information on Cloud Anomaly Detection, check out the launch blog here.
  • Intelligent Kerberoasting Detection
    We’ve expanded existing AI-driven detections for attack types such as data exfiltration, phishing and credential theft to include intelligent detection, validation and prioritization of Kerberoasting attacks. The platform goes beyond traditional tactics for detecting Kerberoasting by applying a deep understanding of typical user activity across the organization. With this context, SOC teams can respond with confidence knowing the signals they are receiving are actually indicative of a Kerberoasting attack.
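
The Kerberoasting detection described above pairs knowledge of the attack's signature with a baseline of typical user activity. A minimal heuristic sketch is below; this is illustrative, not Rapid7's actual detection logic, and the thresholds and event fields are hypothetical. Kerberoasting typically shows up as a burst of service-ticket (TGS) requests for many distinct service accounts, often with the weak RC4 encryption type (etype 0x17) that yields crackable ticket hashes.

```python
# Illustrative Kerberoasting heuristic (not Rapid7's actual detection logic):
# flag accounts that spike well above their baseline TGS request volume AND
# request tickets for many distinct services with the weak RC4 etype (0x17).

RC4_ETYPE = 0x17

def is_suspicious(events, baseline_per_hour, spike_factor=5, min_rc4=3):
    """events: list of (service_name, etype) TGS requests in the last hour."""
    rc4_requests = sum(1 for _, etype in events if etype == RC4_ETYPE)
    distinct_services = len({svc for svc, _ in events})
    spiked = len(events) > spike_factor * max(baseline_per_hour, 1)
    return spiked and rc4_requests >= min_rc4 and distinct_services >= min_rc4

# Typical user: a couple of AES-encrypted (etype 0x12) tickets.
normal = [("cifs/fileserver", 0x12), ("http/intranet", 0x12)]
# Kerberoasting pattern: burst of RC4 tickets for many distinct SPNs.
attack = [(f"svc{i}/host{i}", RC4_ETYPE) for i in range(20)]

print(is_suspicious(normal, baseline_per_hour=2))  # False
print(is_suspicious(attack, baseline_per_hour=2))  # True
```

The per-user baseline is what separates this from naive volume alerting: a service account that legitimately requests many tickets every hour won't trip the spike condition, which is how context about "typical user activity" suppresses false positives.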

Rapid7 continues to explore and invest in ways we can leverage AI/ML to better equip our customers to defend their organizations against the ever-expanding threat landscape. Keep an eye out in the near future for additional innovations in this space.

For now, be sure to stop by the Rapid7 booth (#1270) at AWS re:Invent, where we’ll be showcasing Cloud Anomaly Detection. Talk to us about how your team is thinking about utilizing AI.