Articles tagged with: #ai Clear filter
Why LLMs Need Real Security

Why LLMs Need Real Security

cybersecurity www.reddit.com

In my second blog post, I revert back to the basics of LLM and a class of security problems that get introduced by using non-deterministic LLMs in deterministic systems. If you have any feedback, please feel free to reach out! https://artoodavid.substack.com/p/when-ai-starts-acting-why-llms-need submitted by /u/donkeybutt123 [link] [comments]

#ai
OpenAI ChatGPT Atlas Browse Jailbroken to Disguise Malicious Prompt as URLs

OpenAI ChatGPT Atlas Browse Jailbroken to Disguise Malicious Prompt as URLs

Cyber Security News cybersecuritynews.com

OpenAI's newly launched ChatGPT Atlas browser, designed to blend AI assistance with web navigation, faces a serious security flaw that allows attackers to jailbreak the system by disguising malicious prompts as harmless URLs. This vulnerability exploits the browser's omnibox, a combined address and search bar that interprets inputs as either navigation commands or natural-language prompts

AI-Powered Ransomware Is the Emerging Threat That Could Bring Down Your Organization

AI-Powered Ransomware Is the Emerging Threat That Could Bring Down Your Organization

Cyber Security News cybersecuritynews.com

The cybersecurity landscape has entered an unprecedented era of sophistication with the emergence of AI-powered ransomware attacks. Recent research from MIT Sloan and Safe Security reveals a shocking statistic: 80% of ransomware attacks now utilize artificial intelligence. This represents a fundamental shift from traditional malware operations to autonomous, adaptive threats that can evolve in real-time

AI+Cybersecurity Certification

AI+Cybersecurity Certification

cybersecurity www.reddit.com

Hi all, I'm looking into a new path for my cybersecurity career and was wondering if anyone here has explored AI Security certifications or learning paths. I'm almost done with my OSCP, and I've mostly been focused on offensive security so far (labs, Hack The Box, homelab work, etc.). Recently though, I've been really interested in how AI and cybersecurity intersect. I'm still junior in the field (less than 2 years of experience), so I don't qualify yet for management-level certs like AAISM or...

Your Org, Your Tools: Building a Custom MCP Catalog

Your Org, Your Tools: Building a Custom MCP Catalog

Docker www.docker.com

I'm Mike Coleman, a staff solutions architect at Docker. In this role, I spend a lot of time talking to enterprise customers about AI adoption. One thing I hear over and over again is that these companies want to ensure appropriate guardrails are in place when it comes to deploying AI tooling. For instance, many

Hidden Danger in the New ChatGPT Atlas Browser

Hidden Danger in the New ChatGPT Atlas Browser

cybersecurity www.reddit.com

So there is discussion about prompt injection attacks being worse inside the browser because of malicious sites , I just find it horrible to use right now . Any one else given it a test drive and have concerns from a cyber perspective ? submitted by /u/Red_One_101 [link] [comments]

#ai
NVIDIA GTC DC: Live Updates on What's Next in AI

NVIDIA GTC DC: Live Updates on What's Next in AI

NVIDIA Blog blogs.nvidia.com

Countdown to GTC DC: What to Watch Next Week 🔗 Next week, Washington, D.C., becomes the center of gravity for artificial intelligence. NVIDIA GTC Washington, D.C., lands at the Walter E. Washington Convention Center Oct. 27-29 - and for those who care about where computing is headed, this is the moment to pay attention. The Read Article

#ai
ChatGPT Atlas Stores OAuth Tokens Unencrypted Leads to Unauthorized Access to User Accounts

ChatGPT Atlas Stores OAuth Tokens Unencrypted Leads to Unauthorized Access to User Accounts

Cyber Security News cybersecuritynews.com

A significant vulnerability in OpenAI's newly released ChatGPT Atlas browser reveals that it stores unencrypted OAuth tokens in a SQLite database with overly permissive file settings on macOS, potentially allowing unauthorized access to user accounts. This flaw, discovered by Pete Johnson just days after the browser's October 21, 2025, launch, bypasses standard encryption practices used

AI Guide to the Galaxy: MCP Toolkit and Gateway, Explained

AI Guide to the Galaxy: MCP Toolkit and Gateway, Explained

Docker www.docker.com

This is an abridged version of the interview we had in AI Guide to the Galaxy, where host Oleg Šelajev spoke with Jim Clark, Principal Software Engineer at Docker, to unpack Docker's MCP Toolkit and MCP Gateway. TL;DR What they are: The MCP Toolkit helps you discover, run, and manage MCP servers; the MCP Gateway

Operant AI Reveals "Shadow Escape": First Zero-Click Agentic Attack

Operant AI Reveals "Shadow Escape": First Zero-Click Agentic Attack

Cyber Security - AI-Tech Park ai-techpark.com

Operant AI, the world's only Runtime AI Defense Platform, today disclosed the discovery of Shadow Escape, a powerful zero-click attack that exploits Model Context Protocol (MCP) and connected AI agents. The exploit enables data exfiltration via popular AI agents and assistants, including ChatGPT, Claude, Gemini, and other LLM-powered agents. As enterprises rapidly adopt agentic AI through...

Found a Jail Break for ChatGPT

Found a Jail Break for ChatGPT

For [Blue|Purple] Teams in Cyber Defence www.reddit.com

I recently found a jailbreak for ChatGPT via the character[.]ai platform i was able to get info on how to make some illicit substance and some dark web platform to sell on. I want to know how can i report this jailbreak and might get some recognition for doing so. submitted by /u/pontodes [link] [comments]

PhantomLint: Principled Detection of Hidden LLM Prompts in Structured Documents

PhantomLint: Principled Detection of Hidden LLM Prompts in Structured Documents

cs.CR updates on arXiv.org arxiv.org

arXiv:2508.17884v2 Announce Type: replace Abstract: Hidden LLM prompts have appeared in online documents with increasing frequency. Their goal is to trigger indirect prompt injection attacks while remaining undetected from human oversight, to manipulate LLM-powered automated document processing systems, against applications as diverse as r\'esum\'e screeners through to academic peer review processes. Detecting hidden LLM prompts is therefore important for ensuring trust in AI-assisted human...

Black Box Absorption: LLMs Undermining Innovative Ideas

Black Box Absorption: LLMs Undermining Innovative Ideas

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.20612v1 Announce Type: cross Abstract: Large Language Models are increasingly adopted as critical tools for accelerating innovation. This paper identifies and formalizes a systemic risk inherent in this paradigm: \textbf{Black Box Absorption}. We define this as the process by which the opaque internal architectures of LLM platforms, often operated by large-scale service providers, can internalize, generalize, and repurpose novel concepts contributed by users during interaction. This...

#ai
Adversary-Aware Private Inference over Wireless Channels

Adversary-Aware Private Inference over Wireless Channels

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.20518v1 Announce Type: cross Abstract: AI-based sensing at wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications, particularly for vision and perception tasks such as in autonomous driving and environmental monitoring. AI systems rely both on efficient model learning and inference. In the inference phase, features extracted from sensing data are utilized for prediction tasks (e.g., classification or regression). In edge networks,...

#ai
RAGRank: Using PageRank to Counter Poisoning in CTI LLM Pipelines

RAGRank: Using PageRank to Counter Poisoning in CTI LLM Pipelines

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.20768v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) has emerged as the dominant architectural pattern to operationalize Large Language Model (LLM) usage in Cyber Threat Intelligence (CTI) systems. However, this design is susceptible to poisoning attacks, and previously proposed defenses can fail for CTI contexts as cyber threat information is often completely new for emerging attacks, and sophisticated threat actors can mimic legitimate formats, terminology, and...

#ai
SecureInfer: Heterogeneous TEE-GPU Architecture for Privacy-Critical Tensors for Large Language Model Deployment

SecureInfer: Heterogeneous TEE-GPU Architecture for Privacy-Critical Tensors for Large Language Model Deployment

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.19979v1 Announce Type: new Abstract: With the increasing deployment of Large Language Models (LLMs) on mobile and edge platforms, securing them against model extraction attacks has become a pressing concern. However, protecting model privacy without sacrificing the performance benefits of untrusted AI accelerators, such as GPUs, presents a challenging trade-off. In this paper, we initiate the study of high-performance execution on LLMs and present SecureInfer, a hybrid framework that...

Model Context Contracts - MCP-Enabled Framework to Integrate LLMs With Blockchain Smart Contracts

Model Context Contracts - MCP-Enabled Framework to Integrate LLMs With Blockchain Smart Contracts

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.19856v1 Announce Type: new Abstract: In recent years, blockchain has experienced widespread adoption across various industries, becoming integral to numerous enterprise applications. Concurrently, the rise of generative AI and LLMs has transformed human-computer interactions, offering advanced capabilities in understanding and generating human-like text. The introduction of the MCP has further enhanced AI integration by standardizing communication between AI systems and external data...