AI Agents Can Now Hire Humans to Finish Tasks They Cannot
A new platform called Rent a Human allows AI agents to outsource tasks to real people when automation falls short, highlighting an unusual hybrid model of human-in-the-loop labor.
Emergent behavior refers to unexpected or unprogrammed actions that arise when complex AI systems, such as large language models or multi-agent networks, interact with data or their environment. These behaviors are not explicitly designed by developers but emerge from the system’s internal learning patterns and scale. Emergent behavior can lead to impressive outcomes, like new problem-solving abilities, as well as unpredictable risks, such as biased or unintended responses. Researchers study these phenomena to understand how AI models generalize knowledge and develop capabilities beyond their training data. Managing emergent behavior is a key challenge in keeping AI systems safe, transparent, and aligned with human intentions.
Artificial intelligence has become the top investment theme for global family offices, while cryptocurrencies remain largely sidelined, according to JPMorgan’s latest global survey.
SpaceX has acquired Elon Musk’s AI company xAI, combining rockets, satellites, and artificial intelligence into a vertically integrated effort aimed at scaling AI compute beyond Earth.
A coalition of nonprofits urges federal suspension of xAI’s Grok AI, highlighting nonconsensual image generation, bias, and potential national security threats.
Google DeepMind has introduced AlphaGenome, a new AI model designed to analyze long DNA sequences and predict how genetic variations may influence disease development.
Google’s lower-cost AI Plus subscription, priced at $7.99 in the U.S., is now available in all markets offering Google AI plans, providing access to Gemini 3 Pro, AI filmmaking tools, and more.
A new assessment by Common Sense Media finds xAI’s Grok chatbot exposes minors to sexual, violent, and unsafe content, with weak age verification and ineffective safety controls.
Utah launches an AI pilot for prescription renewals, letting algorithms handle routine medication management without physicians, highlighting regulatory and safety challenges.
Anthropic publishes Claude’s constitution, a detailed framework guiding AI behavior, ethics, safety, and helpfulness, available under Creative Commons for transparency and research.
OpenAI now uses age prediction to adjust ChatGPT’s safety settings for teens. Users 18 and older can verify their age to disable extra restrictions.