Westlake Robotics has unveiled its Titan 01 humanoid robot, powered by the General Action Expert (GAE) model, which enables real-time imitation of human movements. In a demonstration in Hangzhou, the robot replicated an operator's gestures within milliseconds, showcasing high synchronization. The GAE model lets robots learn and adapt quickly through observation rather than traditional programming, and its "shadow function" allows for seamless human-robot interaction, paving the way for advanced applications in manufacturing and healthcare.
Anthropic is testing a feature called "Orbit" for its AI assistant Claude, enabling it to autonomously make calls, send messages, and manage tasks on smartphones. This innovation was revealed by user M1Astra via code screenshots. Currently, Claude can control computers but requires additional tools for phones. Once Orbit is integrated, it could seamlessly operate across devices, enhancing automation and task efficiency for users, marking a significant step in AI capabilities.
Nvidia CEO Jensen Huang claims we've reached a form of artificial general intelligence (AGI), at least by Lex Fridman's definition: an AI capable of running a billion-dollar company. Huang cites OpenClaw as a potential example of this AGI, though he notes such projects may not have lasting success, and he argues that even a successful AI could not replicate Nvidia's achievements. Industry efforts toward AGI are ongoing, with significant breakthroughs still needed in the coming years.
A pivotal trial in New Mexico is set to conclude with closing arguments as jurors deliberate on whether Meta, the parent company of Facebook, Instagram, and WhatsApp, misled users about child safety on its platforms. Prosecutors allege the company violated the state’s Unfair Practices Act by prioritizing profits and creating a "breeding ground" for predators. If found liable, Meta could face fines up to $5,000 per violation, potentially totaling billions due to widespread user engagement in the state.
ChatGPT has rapidly transformed from a novelty to an essential tool, impacting industries from education to healthcare. Experts caution that while it enhances efficiency, it also poses risks, including "validation bias" in medical inquiries and potential "cognitive debt" that weakens critical thinking skills. The "AI Safety Manifesto" advises careful usage parameters, particularly in sensitive fields. As dependency on AI grows, we must weigh its benefits against the risks of mental atrophy and isolation.
A recent study by Ada reveals that consumers favor “always-on” AI customer service, provided it effectively resolves their issues. The research indicates that 72% of consumers prefer immediate assistance without agent wait times. However, satisfaction drops to 30% when AI fails to solve queries. Ada's CEO, Mike Barlow, states, “The key is not just availability but also the ability to close the loop on customer inquiries.” This underscores the importance of AI efficiency in customer support.
Artificial intelligence is transforming IT service delivery, with a shift towards AI-first models and outcome-based contracts, according to industry experts. Vikash Jain of BCG notes that proper implementation of AI tools in enterprises requires substantial effort beyond technology. KPMG's Akhilesh Tuteja highlights that AI enhances productivity but is not a plug-and-play solution. As enterprises increasingly seek measurable results, the IT services sector must adapt to deliver significant outcomes.
OpenAI has introduced the 'Library' feature for ChatGPT, enabling users to store personal files and images on its cloud storage. Available to Plus, Pro, and Business users globally—except in the UK and the European Economic Area—this feature automatically saves uploaded files securely for future reference. OpenAI states, "ChatGPT automatically saves uploaded and created files," which remain until deleted manually. Files are purged from servers within 30 days post-deletion, likely for legal compliance.
Russia's new super-app, Max, is raising alarms due to its unencrypted nature and mandatory use among citizens. The app consolidates multiple services, including messaging and banking, under one platform but lacks robust privacy measures. Critics argue it poses significant risks to user data, with cybersecurity expert Ivan Petrov stating, “Users should be aware of the potential surveillance implications.” The Kremlin’s push for Max reflects a broader trend towards digital control in the region.
Pinterest CEO Bill Ready advocates for a global ban on social media for children under 16, citing safety and mental health concerns. "Social media as it exists today is not safe for kids under 16," Ready stated on LinkedIn, urging improved regulations and greater accountability from tech companies. His call comes as several countries, including Australia, France, and India, take steps to restrict social media access to protect young users, underscoring a growing need for better parental controls.