AI-Powered Robots Can Be Tricked Into Acts of Violence
by Will Knight - Wired
As LLMs are given control over real-life systems, those systems will only be as secure as the model that controls them.
December 2, 2024 This design reiterates a limitation of LLMs: they lack the understanding and context needed to follow granular rules about which responses to avoid. The only way to completely avoid unwanted responses is to use blunt, unnuanced filters.
November 20, 2024 [Experts] caution that everyone, including those living in a state without restrictions, may want to keep their health care decisions private.
November 16, 2024 When you’re optimizing for efficiency, you’re getting rid of redundancies. But when patients’ lives are at stake, you actually want redundancy. You want extra slack in the system. You want multiple sets of eyes on a patient in a hospital.
November 9, 2024 For anyone feeling like you don’t know what to do, I urge you to think hard about what matters most to you, and look for ways to fight for those things — particularly if you have specific skills that you can put to use. Are you a good writer? Tech savvy? A compelling leader? Good at coming up with new ideas? Find things that play to your strengths.
October 26, 2024 As technology educators, coaches, and administrators, we have a disproportionate say in the platforms that are brought into schools. We need to understand what our colleagues need, what these platforms can do, and whether these align.
October 24, 2024 Many research-backed edtech tools only report on the success of “students who used the program as recommended”. What about the majority who don’t?
September 19, 2024 The biggest innovation here isn’t what Generative AI does, or can do, but rather the creation of an ecosystem that’s hopelessly dependent upon a handful of hyperscalers, and has no prospect of ever shaking its dependence.
September 15, 2024 As algorithmic decision-making plays an increasingly central role in our lives, the ability to defer or appeal to human review diminishes for all but a few.
July 24, 2024 Steps to prevent scraping lock out anyone who can’t afford the same deals as monopolies. This is collateral damage from AI giants deciding they’re entitled to all user-generated content.
July 10, 2024 The finance world is catching onto the gap between the hype and reality of generative AI.
June 27, 2024 We should remain vigilant when it comes to protecting our privacy and security. But we can only do so much when companies break norms and best practices.
June 24, 2024 The last-mile delivery problem is not solved by chatbots.
June 20, 2024 Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods.
June 20, 2024 OpenAI’s hiring of a former NSA director may signal a shift to surveillance as a service.
April 18, 2024 [AI tools] do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial.
December 23, 2023 I can convince myself to stick with anything if I tell myself it’s only a week-long experiment.
December 19, 2023 All the big, exciting uses for AI are either low-dollar (helping kids cheat on their homework, generating stock art for bottom-feeding publications) or high-stakes and fault-intolerant (self-driving cars, radiology, hiring, etc.).