Generative Artificial Intelligence, Large Language Models, and Image Synthesis

Should we assume that altruism is usually ineffective?

On the other hand, why are we measuring altruism like it’s public policy?

7 Likes

Akamai is still in business???

They used to charge so much for their services

5 Likes

Someone paid for a dramatic reading of the “I Will Fucking Piledrive You If You Mention AI Again” essay.

https://mccue.dev/pages/7-5-24-a-reading-i-will-fucking-piledrive-you-if-you-mention-ai-again

5 Likes

The voice reading the essay aloud matches the tone of the essay.

He sounds like an angry Gilbert Gottfried or similar

1 Like

It’s good, but I was hoping for something more like this:

Michael Shannon Reads the Insane Delta Gamma Sorority Letter (NSFW) https://www.youtube.com/watch?v=dngOH9G4UPw

4 Likes

However, the exec couldn’t shake the feeling that something fishy was going on, so they told the scammer that they needed to verify the caller’s identity. To do this, they asked what book Vigna had recently recommended to them. The title was “Decalogue of Complexity: Acting, Learning and Adapting in the Incessant Becoming of the World” by Alberto Felice De Toni. Of course, the scammer didn’t know that, so they promptly hung up.

It reads as if it were generated by a chatbot to promote the book.

2 Likes

An informative talk about how LLMs open new security holes (see the code sketch after the timecodes for one of the demoed issues):

Speaker: Shai Alon | Director of AI Innovation at Orca Security

Code from the demos on GitHub: https://github.com/shaialon/ai-securi…

Timecodes
0:00 Intro: Shai Alon takes the stage to discuss the hidden dangers of AI.
2:04 The AI Revolution: A look at the rapid adoption of AI across industries and the exciting possibilities it offers.
5:38 The Security Nightmare: Exposing the vulnerabilities of AI that most people ignore.
8:13 Demo 1: Headset Support Center Refund Bot - Witnessing the dangers of LLM01: Prompt Injection in action.
14:00 LLM03: Training Data Poisoning - How malicious data can poison AI models and lead to unexpected outcomes.
17:08 Demo 2: AI Data Scientist - Exploring the power and peril of agentic AI apps.
19:07 LLM02: Insecure Output Handling - AI’s ability to write and execute code creates a massive attack surface.
23:27 Authorization Bypass: AI agents can be tricked into granting access to sensitive data.
25:14 SQL Injection and Remote Code Execution: Witnessing the devastating impact of exploiting AI code generation.
29:49 Mitigation Strategies: Practical steps to protect your AI applications from these emerging threats.
32:51 OWASP LLM Top 10: A deep dive into the most critical vulnerabilities facing AI applications.
34:53 AI Tooling, Education, and Engineering
38:22 The Role of AI Firewalls: Evaluating the strengths and limitations of perimeter defenses.
39:49 Jailbreaking AI Models: Bypassing AI model guardrails and examining the shared responsibility model between AI providers and developers.
41:54 The Unprepared Cybersecurity Industry: A call to action for developers and security professionals.
42:29 Q&A: Shai Alon answers questions from the audience, addressing practical concerns about the combination of human expertise, AI regulation

3 Likes

“Moderators can use Perspective to quickly prioritize and review comments that have been reported and give feedback to commenters who post toxic comments,” Jigsaw brags about the features of its censorship software.

And then, as far as Google is concerned, democracy in Europe is in need of finding some “resilience.” To achieve it, Jigsaw will support “an open call for proposals with Google.org to help scale social impact initiatives promoting democratic resilience in Europe.”
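
For context, the “prioritize and review” flow boils down to an API call that returns a toxicity probability per comment. A minimal sketch of how a moderation queue might use it (my own, with a placeholder key and made-up comments):

```python
# Minimal sketch of scoring reported comments with Perspective's
# commentanalyzer endpoint; the API key and example comments are
# placeholders, and error handling is omitted for brevity.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY score (a probability in [0, 1])."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

reported = ["you absolute walnut", "I disagree with this policy"]
# Moderators would see the queue sorted most-toxic-first.
for score, text in sorted(((toxicity(t), t) for t in reported), reverse=True):
    print(f"{score:.2f}  {text}")
```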

1 Like

https://www.courtlistener.com/docket/69129096/united-states-v-smith/

6 Likes

I do not understand the nature of the crime. If the streaming services decided to pay SMITH in excess of the payments they were collecting for the streams provided by SMITH, it appears to be a problem of their pricing model, not SMITH’s malice. It appears that SMITH simply took advantage of the arbitrage opportunity. Am I missing anything?
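
To put the arbitrage framing in concrete terms (every number below is a made-up placeholder, not a figure from the indictment):

```python
# Back-of-the-envelope arbitrage: royalties are paid per stream, while the
# bot accounts generating the streams pay a flat subscription. All figures
# here are hypothetical placeholders, not numbers from the case.
ACCOUNT_FEE_PER_MONTH = 11.00        # hypothetical subscription cost
PAYOUT_PER_STREAM = 0.004            # hypothetical per-stream royalty
STREAMS_PER_ACCOUNT_MONTH = 20_000   # hypothetical bot volume

revenue = STREAMS_PER_ACCOUNT_MONTH * PAYOUT_PER_STREAM
profit = revenue - ACCOUNT_FEE_PER_MONTH
print(f"per account: ${revenue:.2f} in royalties - "
      f"${ACCOUNT_FEE_PER_MONTH:.2f} fee = ${profit:.2f}/month")
# per account: $80.00 in royalties - $11.00 fee = $69.00/month
```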

3 Likes

https://www.pcmag.com/news/amazon-limits-authors-to-self-publishing-3-books-per-day-amid-flood-of

5 Likes

This AI chatbot got conspiracy theorists to question their convictions

Large-language-model trial suggests facts and evidence really can change people’s minds.

https://www.nature.com/articles/d41586-024-02966-6

2 Likes

12 years ago, a very funny talk:

1 Like