Tuesday, 30 January 2024

Defining the Security Landscape of Large Language Models (LLMs) in the New Age of Cyber Threats

In an era of rapid technological advancements, the rise of Large Language Models (LLMs) has introduced unparalleled capabilities, yet it also opens new avenues for malicious activities. This cutting-edge technology, while proving its real-world applications, is not immune to exploitation. As LLMs become integral to direct customer interactions, the need for robust security measures becomes paramount.


What is LLM?

A Large Language Model (LLM) in Aritificial Intelligence is a type of Natual Language Processing program which can be trained to recognise & understand the existent content and generate accurate content with contextual relevance.




OWASP Top 10 for LLM Applications

Every three to four years this open community (with over 30k+ volunteers doing security assessment and research) compiles and releases a list of top 10 severe vulnerabilities that organisations can keep on priority lookout. It also provides tools, methodologies and guidelines on latest technologies

To address the vulnerabilities specific to LLM applications, it has compiled the OWASP Top 10 for LLM Applications. This comprehensive guide outlines the top threats and vulnerabilities associated with LLMs, offering detailed explanations, common examples, attack scenarios, and prevention mechanisms.

For detailed report refer here - OWASP Top 10 for LLM - 2023


Key Threats Unveiled:

LLM01: Prompt Injections 

Prompt Injection Vulnerabilities in LLMs involve crafty inputs leading to undetected manipulations. The impact ranges from data exposure to unauthorized actions, serving attacker's goals goal




LLM02: Insecure Output Handling 
These occur when plugins or apps accept LLM output without scrutiny, potentially leading to XSS, CSRF, SSRF, privilege escalation, remote code execution, and can enable agent hijacking attacks. 



LLM03: Training Data Poisoning 

LLMs learn from diverse text but risk training data poisoning, leading to user misinformation. Overreliance on AI is a concern. Key data sources include Common Crawl, WebText, OpenWebText, and books. 



LLM04: Denial of Service 

An attacker interacts with an LLM in a way that is particularly resource-consuming, causing quality of service to degrade for them and other users, or for high resource costs to be incurred. 




LLM05: Supply Chain 


LLM supply chains risk integrity due to vulnerabilities leading to biases, security breaches, or system failures. Issues arise from pre-trained models, crowdsourced data, and plugin extensions. 





LLM06: Permission Issues 

Lack of authorization tracking between plugins can enable indirect prompt injection or malicious plugin usage, leading to privilege escalation, confidentiality loss, and potential remote code execution. 



LLM07: Data Leakage 


Data leakage in LLMs can expose sensitive information or proprietary details, leading to privacy and security breaches. Proper data sanitization, and clear terms of use are crucial for prevention. 





LLM08: Excessive Agency 

When LLMs interface with other systems, unrestricted agency may lead to undesirable operations and actions. Like web-apps, LLMs should not self-police; controls must be embedded in APIs. 



LLM09: Overreliance 


Overreliance on LLMs can lead to misinformation or inappropriate content due to "hallucinations." Without proper oversight, this can result in legal issues and reputational damage. 





LLM10: Insecure Plugins 

Plugins connecting LLMs to external resources can be exploited if they accept free-form text inputs, enabling malicious requests that could lead to undesired behaviors or remote code execution. 




The simple guideline to build Secure GenAI Applications on Cloud hosting is to follow
a defense-in-depth approach for building secure GenAI resources, emphasizing governance, identification, protection, detection, response, and recovery. Analysts, Architects, CISOs, and developers are encouraged to explore their cloud services for secure GenAI application development.


In this dynamic landscape, the message is clear: keep building, but build securely. Understanding and mitigating these threats is crucial for harnessing the full potential of LLMs without compromising security.

Friday, 26 January 2024

Threat Modelling

What is Threat Modelling?

A threat modelling process can help you understand your organization's security posture. Typically encompasses a process of Asset identification, Threat intelligence, Risk assessment, Attack mapping and Mitigation capabilities. Over the years there are many threat models developed for threat identifitcaion, impact assessment,  

Examples of Threat Model frameworks:  

STRIDE

DREAD

PASTA

NIST 800-54??

OCTAVE

LINDDUN??


Threat Mitigation: 

Here are some mitigation suggestions for threat modeling: 

Mitigate: Take action to reduce the likelihood of a threat. For example, you can add checks or controls that reduce the risk impact.

Eliminate: Remove the feature or component that is causing the threat.

Transfer: Shift responsibility to another entity such as the customer.

Accept: Decide that the business impact is acceptable.


Part 1 - Application Description - Capture the application description as elaborate as possible with key focus on highlighting factors on these:-

Rationale

Main Applicability/Functionality

Proprietary/Open Source

Why it is developed?

How will it be used?

Who will be using it?

What Purpose it will serve or outcome of it?


Part 2 - User Interactive Questions that will focus on capturing inputs as part of the simple drop down, interactive queries to help tool generate a tailored model for the user specific requirements.

Simple Baseline information, 

High Level Risk Profile 

Business Impact inputs 


Part 3 - Generate a comprehensive result - 

Threat model output provides more relevant hypothetical scenarios and testing framework to improve the cyber security and trust in the defined business application.

Attack tree output provides a graphical diagram that outlines the logic of an attack. It aims to show the flow of how a malicious user might exploit the IT Asset/System from the perspective of a successful attack. Helps realise the risk impact and probability with the probable logical flow diagram.

Mitigation suggestions provide the options to help address the risks identified as an outcome of the threat model evaluation. The mitigation suggestions can further be implemented to mitigate, eliminate, transfer or accept the risk. 


Saturday, 20 January 2024

What's in the new SEC Rules - December 2023!!

The Securities and Exchange Commission (SEC) requires public companies to report data breaches and hacks within four business days of discovery. Companies must disclose cyber security incidents on a Form 8-K filing. 

The SEC also requires companies to disclose annual information about their Strategy, Governance and Risk Management. SEC directs companies to use the definition of materiality from securities law and it states that information is considered material if a reasonable investor would attach importance like in making an investment decision. 

The SEC's new rules are intended to help clarify the expectations around breach disclosure guidelines and its timelines. It helps to improve Cyber Security Incident disclosure, document Governance, Risk Management and Compliance. It empowers consumers to act quickly and build greater trust in businesses and also protect investors. 

  • New SEC rules effective in December 2023 require publicly-traded U.S. organisations to disclose material cybersecurity incidents and address management of cybersecurity risks annually.
  • The rules aim to enhance breach-related disclosures, requiring a Form 8-K report within four days of determining the materiality of an incident, detailing its nature, scope, timing, and material impact.
  • Organizations are not obligated to provide excessive technical details but must prioritise improved crisis communications for determining incident materiality without disclosing confidential Information.
  • These new rules must alert the organisations that do not have an incident response plan or reviewed it regularly.
  • Organizations can request a delay in reporting incidents to the SEC if the disclosure presents a significant risk to national security or public safety reasons, consulting the technical teams and referring to the guidelines of Department of Justice.
  • Engaging with CyberSecurity & Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) during such incidents will not trigger the four day rule and also aids business continuity, recovery and provides insights.
Compliance with SEC rules aligns with best practices, potentially making organisations less susceptible to cyber-incidents and more attractive to investors. Similar to SEC, the new upcoming Cyber Incident Reporting for Critical Infrastructure Act (CIRCA) will have a deadline of 72 hours for reporting the cyber security incidents impacting critical infrastructure. New SEC reporting complements other U.S. incident response regulations, emphasising the importance of taking security maturity and risk management seriously.

Saturday, 6 January 2024

Futuristic Data Recovery Process

 With huge cloud based adoptions not only the availability of data increases but the attack source also will increase thereby increasing the opportunities of data breach. 

The recovery plan must be updated to be 

- Real time Recovery

- Enhanced data protection and encryption mechanisms to delay the compromise. 

- Artificial Intelligence based data recovery through prediction models 

- Entrenched records with use of secure technologies like block chain 

- New approach to leverage edge computing technology to implement a distributed recovery system that would reduce impact and also losses. 

- Implement complex recovery tasks that would add to effective recovery plans.

Wednesday, 3 January 2024

Future Threats

 

Future threats are likely to shut down the internet with probable rogue AI algorithms, lack of regulations, frameworks and governing structures to the proliferation of the intelligent technologies. With the journey of seamless availability by adopting to global options like cloud, countries might have to build digital walls and perimeters to protect themselves from digital breakdown and economic disruption. Privacy of data will evidently be irrelevant as the advancing AI’s success is reliant on gathering humongous PII and human intelligence.

EchoLeak Vulnerability Exposes Microsoft 365 Copilot to Zero-Click Data Theft

🚨 Critical Alert: A wake-up call for AI security in enterprise environments Microsoft has just patched a critical vulnerability that shoul...