Data News Asia | Tech Wire Asia | Latest Technology & Data News https://techwireasia.com/category/cloud-infrastructure/data/

Nvidia faces China roadblocks despite soaring AI demand https://techwireasia.com/2025/08/nvidia-faces-china-roadblocks-despite-soaring-ai-demand/ Thu, 28 Aug 2025 10:00:15 +0000

  • Nvidia shares fell 3.2% after it left China sales out of its forecast amid regulatory doubts.
  • A US$54B outlook wasn’t enough to satisfy investors expecting stronger growth.
    Nvidia shares slipped on Wednesday as uncertainty grew around its business in China, caught in the middle of the trade fight between Washington and Beijing.

    CEO Jensen Huang said he expects approval to restart sales of Nvidia chips in China after striking a deal with US President Donald Trump to pay commissions to the government. But with no formal rules yet, and doubts about whether Chinese regulators might discourage purchases, Nvidia left potential China sales out of its forecast for the current quarter.

    That decision led to an outlook that looked steady but more modest than investors have come to expect. Nvidia projected revenue of about US$54 billion for the third quarter, just above Wall Street estimates of US$53.14 billion, according to LSEG data. The forecast was enough to beat analyst targets but fell short of the “blowout” growth the market has grown used to, pushing the stock down 3.2 per cent in after-hours trading. That drop cut about US$110 billion from Nvidia’s US$4.4 trillion valuation.

    As reported by Reuters, Huang downplayed concerns that the AI spending surge could be cooling, telling investors the opportunity could expand into a multi-trillion-dollar market over the next five years. “A new industrial revolution has started. The AI race is on,” he said, adding that Nvidia sees $3 trillion to $4 trillion in AI infrastructure spending by the end of the decade.

    “Nvidia’s biggest bottleneck isn’t silicon, it’s diplomacy,” said Michael Ashley Schulman, chief investment officer at Running Point Capital. He added the company’s growth is “still impressive, but not as exponential.”

    Second-quarter revenue reached US$46.74 billion, above the US$46.06 billion analysts expected. But the data centre segment, a key driver of Nvidia’s growth, missed some estimates. Analysts suggested that big cloud providers may be spending more carefully. Data centre revenue of about US$41 billion came in slightly below Visible Alpha’s estimate of US$41.42 billion, and Nvidia said roughly half of it came from major cloud companies.

    The company’s forecast also assumed no shipments of its H20 chips to China, even though some licenses to sell them have already been granted. Nvidia said that if geopolitical hurdles ease and orders come in, H20 sales to China could add between US$2 billion and US$5 billion in the third quarter.

    “That is a big question mark to watch,” said Ben Bajarin, CEO of consulting firm Creative Strategies.

    Analysts also pointed out that Nvidia’s share price, which has risen by about one-third this year, may have created lofty expectations that are hard to meet. “The mega caps are the ones propelling a lot of the capex that Nvidia is benefiting from. But obviously Nvidia still is growing, is able to sell,” said Matt Orton of Raymond James Investment Management, who argued the durability of the AI trade remains intact.

    Even so, demand for Nvidia’s chips remains strong. Businesses racing to build generative AI systems continue to buy the company’s processors, which are designed to handle huge amounts of data quickly. CFO Colette Kress said Nvidia’s “sovereign AI” push — aimed at selling AI hardware and software to governments, including outside China — is on track to bring in US$20 billion this year. She added that cloud and enterprise customers could spend as much as US$600 billion on AI in 2025 alone, with total infrastructure spending tied to AI reaching US$3 trillion to US$4 trillion by the end of the decade.

    Huang said much of this growth will come from hyperscalers like Microsoft and Amazon, which are expected to spend about US$600 billion on data centres this year. He added that for a US$60 billion data centre, Nvidia can capture roughly US$35 billion in revenue.

    Big Tech firms including Meta and Microsoft are spending heavily on AI, much of it flowing toward Nvidia chips. For the current quarter, Nvidia forecast adjusted gross margins of 73.5 per cent, a touch above analyst estimates of 73.3 per cent.

    “The data centre results, while massive, showed hints that hyperscaler spending could tighten at the margins if near-term returns from AI applications remain difficult to quantify,” said Jacob Bourne, an analyst at eMarketer.

    Shares of rival Advanced Micro Devices, which is developing competing AI servers, also fell 1.4 per cent after Nvidia’s results.

    AI enthusiasm, with Nvidia at the centre, has been one of the main drivers of the S&P 500’s rally over the past two years. But the company’s latest report drew a more muted response.

    “This is the smallest reaction to an earnings report in Nvidia’s AI incarnation,” said Jake Behan, head of capital markets at Direxion in New York. “While it may not have been a blowout, it’s not a miss.”

    Outside China, Nvidia is still seeing strong demand for its H20 chips. Kress said one customer alone bought US$650 million worth during the second quarter.

    Huang also said the company’s high-end Blackwell chips are already largely booked through 2026, while its older Hopper processors remain in demand. “The buzz is: everything sold out,” Huang told analysts, describing the pace of orders.

    The company also said its board had approved an additional US$60 billion in share buybacks.

    Huawei Cloud AI Ecosystem Summit APAC 2025: AI’s expanding role https://techwireasia.com/2025/08/huawei-cloud-ai-ecosystem-summit-apac-2025-ai-expanding-role-in-malaysia-and-asean/ Thu, 14 Aug 2025 10:00:31 +0000

  • Huawei Cloud AI Ecosystem Summit APAC 2025: Huawei and the Malaysian government call for local AI talent, secure data, and real-world use.
  • Malaysia is pushing AI into daily life, but leaders say it should be built on strong rules, trust, and skilled people.
    The Huawei Cloud AI Ecosystem Summit APAC 2025 brought together government leaders, industry experts, and technology partners to discuss how artificial intelligence is already changing the way people work, learn, and live in Malaysia and ASEAN. The gathering highlighted not only new technologies but also the partnerships and governance needed to make AI effective and trustworthy.

    The summit is part of the Huawei Cloud APAC AI Ecosystem Initiative, a programme aimed at building an inclusive AI community by developing local skills, encouraging cooperation between sectors, and ensuring AI benefits are shared widely.

    Government support for AI development

    On the opening day of the ASEAN AI Summit, Huawei Technologies (Malaysia) CEO Simon Sun announced new AI initiatives. Malaysia’s Prime Minister, YAB Dato’ Seri Anwar Ibrahim, was present to witness the launch, underscoring the government’s view that AI is central to the country’s economic growth. The commitment is reflected in strategies that link public and private sectors and aim to prepare the country for a future where AI shapes every major industry.

    Huawei Cloud’s three core capabilities

    Huawei Cloud has built its AI approach around three capabilities. First, a global network of 34 regions and 101 availability zones (AZs) – including five regions and 17 AZs in ASEAN – provides the infrastructure for low-latency access. Second, an AI cloud service that supports more than 160 open-source models, allowing flexibility for development in different industries. Third, the Pangu multimodal models form the backbone of the company’s “AI for Industries” strategy, offering tailored solutions for manufacturing, healthcare, and transport, among others.

    On day two, the AI Ecosystem Summit drew about 300 delegates from the region. Li Yin, CTO of Huawei Cloud Enterprise Intelligence, led a session titled Leap to Cloud, Heading to AI, in which she shared examples of how Huawei Cloud has worked with customers in more than 30 industries and applied AI to over 500 scenarios worldwide.

    See also: Huawei to unveil tech to cut China’s reliance on foreign AI memory chips

    Li explained that with the Pangu foundational large model, ModelArts AI toolchain, and proven engineering methods, organisations can use their own data to develop and refine models quickly. She pointed to three areas where Huawei Cloud will continue to invest: strengthening secure AI computing infrastructure, building industry-focused solutions like enterprise AI assistants and AI video applications, and expanding the partner ecosystem to speed adoption.

    Malaysia’s focus on ethical and sustainable AI

    Minister of Digital, YB Gobind Singh Deo, used his keynote to make clear that Malaysia’s AI journey is about more than just technology. Ethical use, sustainability, and shared benefit are all priorities.

    “Our National AI Office (NAIO) has been speeding up the completion of the AI Technology Action Plan 2026 – 2030 and relevant regulatory frameworks to ensure the adoption of AI technology in key sectors in the country are ethical, sustainable and of high value,” he said.

    He linked the goals to the Malaysia Digital Economy Blueprint and the Malaysia Digital (MD) initiative, saying both are strengthened through close cooperation with technology partners. “Every step we take is action-driven, grounded in strong public-private collaborations, to shape Malaysia’s digital economy,” he said.

    Building Malaysia’s AI talent pipeline

    Simon Sun highlighted Huawei’s investment in local expertise through the Huawei Malaysia AI Talent Programme.

    “We have set the goal of nurturing 30,000 Malaysian AI talents, comprising students, government officials, industry leaders, think tanks, associations, and others under this initiative in the coming three years,” he said.

    He said AI is already making an impact in areas like fraud detection in banking, predictive maintenance in factories, supply chain management, and personalised learning in schools. Huawei’s localised partnerships, he said, ensure global expertise is applied in ways that suit ASEAN’s needs.

    Real-world applications from Huawei partners

    The summit also gave the stage to Huawei customers, who shared how they use AI in their own sectors.

    William Zhou, Vice President of IFLYTEK Open Platform, said that while computing power and platforms form the base of AI systems, the real value comes from the application layer – where solutions are integrated into daily work. He said knowledge Q&A systems are among the most requested features from customers in government, telecom, and finance, but added that successful deployment depends on close collaboration.

    “The key is not the technology alone, but working closely with the customer to fine-tune the model and increase efficiency,” Zhou said, pointing to a Middle Eastern project that improved performance significantly in just two months.

    He also described how subtitling and translation tools are vital in multilingual regions, with IFLYTEK solutions optimised for English, Malay, and Cantonese, which enable fast turnaround for media companies in Southeast Asia. In sectors where data must stay on-site, he said the ‘Spark’ all-in-one on-premise AI solution allows organisations to train and run models securely.

    Dato Fadzli Shah, Co-Founder of Zetrix, discussed the link between AI, blockchain, and self-sovereign identity. He said these technologies could allow data from separate systems to be referenced securely without forcing organisations to adopt a single standard. Blockchain-backed digital identities, he added, could be used in education, finance, and trade to help ensure credentials remain verifiable.

    He said Malaysia should develop specialist AI models trained on local data to ensure accurate interpretation of laws, policies, and cultural contexts. “We believe no single AI will dominate globally; instead, there will be natural product-market fit for specific stacks serving specific solutions.”

    Henry Li Nan, Managing Director of TrustDecision Malaysia, shared how AI-powered decision intelligence is helping the finance industry tackle fraud. His company processes more than 130 million interception events a year, protects over seven billion devices, and prevents an estimated US$10 billion in potential losses annually.

    Working with Huawei, TrustDecision uses cloud-native infrastructure to deliver real-time detection, compliance, and risk management services.

    “The result is faster detection, smarter prevention, and greater confidence for financial institutions to stay ahead of threats,” Li said.

    National AI Office: Matching the speed of change

    Shamsul Izhan Abdul Majid, Head of the NAIO, warned that the speed of AI development is unlike anything seen before, with new versions emerging every few weeks. This, he said, means that plans and standards must be developed quickly and in cooperation with industry.

    He called data “the most important asset” and said that in sensitive fields like healthcare or defence, Malaysia’s approach is to bring AI to the data rather than move the data to the AI.

    See also: Huawei tries to push AI chips abroad as US pressure grows

    Since its formation in December last year, the NAIO has worked with six sectors and identified 55 potential AI use cases, with more expected as engagement expands to state and local levels. The office is also promoting the creation of locally-trained models with strong cybersecurity safeguards and a focus on making AI understandable for everyone, not just technical experts.

    “Doing AI for everyone requires collaboration,” he said. “The AI Office brings together experts and companies to plan Malaysia’s AI journey for the next five years… We must stay ready, responsible, and innovative.”

    Closing call to action

    In closing, Simon Sun encouraged all participants to take the ideas shared at the summit and turn them into practical projects. He described the event as “the starting point for more actions and ideas to shape a smarter and stronger ASEAN, powered by AI and driving digital economies.”

    The summit’s discussions made one thing clear: AI’s future in Malaysia and ASEAN will depend not only on powerful technology, but on how well it is adapted to real-world needs, governed responsibly, and supported by a skilled and informed community.

    Instagram’s new map feature sparks privacy and safety concerns https://techwireasia.com/2025/08/instagrams-new-map-feature-sparks-privacy-and-safety-concerns/ Tue, 12 Aug 2025 09:30:23 +0000

  • Instagram’s location map raises privacy concerns as old location tags appear alongside live updates.
  • Critics warn of risks, and the company plans changes.
    Instagram’s new location-sharing feature is sparking alarm among some users, who say it could put people at risk by revealing where they are without their knowledge.

    The Meta-owned platform quietly added the option on August 6, introducing a map that lets people share their location with friends – a tool similar to one Snapchat has offered since 2017.

    It didn’t take long for worried posts to appear online. As reported by AFP, in one widely shared exchange, Instagram user Lindsey Bell said she was shocked to learn her location was visible to her followers.

    “Mine was turned on and my home address was showing for all of my followers to see,” she wrote in response to a TikTok warning from reality TV personality Kelley Flanagan. “Turned it off immediately once I knew but had me feeling absolutely sick about it.”

    In a TikTok video, Flanagan called the feature “dangerous” and walked viewers through how to disable it.

    Instagram chief Adam Mosseri addressed the concerns in a post on Threads, saying the feature is off by default. “Quick Friend Map clarification, your location will only be shared if you decide to share it, and if you do, it can only be shared with a limited group of people you choose,” he wrote. “To start, location sharing is completely off.”

    According to Instagram, the map is meant to help people share places they’ve visited and connect with friends. The company says users can choose who sees their location and can turn the feature off at any time.

    A privacy concern on the heels of a legal case

    The unease over Instagram’s map comes just a week after Meta faced scrutiny in court. A federal jury in San Francisco sided with women who said the company misused health data collected by Flo, a third-party app that tracks menstrual cycles and fertility.

    The jury found that Meta knowingly received sensitive health information from the app and used it to target ads. Evidence presented in court included internal communications suggesting that some employees made light of the nature of the data.

    “The case was about more than just data – it was about dignity, trust, and accountability,” said Carol Villegas, lead attorney for the plaintiffs.

    How the new map works

    Instagram’s map sits at the top of the messages inbox. It lets people share their live location while they’re using the app and see the locations of others who have chosen to share.

    The company says no one can see a user’s location unless the person opts in. People can also limit the visibility to certain followers or turn it off entirely.

    Meta describes the tool as “a new, lightweight way to connect with each other.” Similar functions exist on other platforms – Snapchat offers a personalised map, Apple’s iOS allows users to share locations with contacts, and Meta’s other apps like Facebook and WhatsApp have live location features.

    A rocky reception

    Despite these reassurances, the rollout has been met with scepticism. Many users, including professional creators, have raised safety concerns, warning that the feature could be misused for stalking or harassment.

    The backlash has reached US lawmakers. According to NBC News, Senators Marsha Blackburn and Richard Blumenthal have reportedly written to Meta CEO Mark Zuckerberg, urging him to drop the feature.

    Some confusion stems from how the map displays past posts. Users have reported seeing their older posts – ones with location tags – appear on the new map, even without live location sharing enabled.

    Mosseri explained that this is because the map includes both real-time locations and past posts with location tags. Those tags were already public to followers but weren’t previously collected in one place.

    When location tags become personal

    Allie Taylor, an educator who shares disability-related content on Instagram, said they learned about the map when followers messaged to say their location was visible. Taylor had posted a video tagged with the city of Cincinnati while at work. The map appeared to pinpoint the street they were on.

    “It was terrifying,” Taylor said. “Why was this even a feature?”

    Turning it off

    There are several ways to disable location sharing. In Instagram, users can go to the messages inbox, open the map, tap the settings icon, and select “no one” for location sharing.

    On a phone, location services for Instagram can be turned off entirely in the device’s settings.

    Instagram promises changes

    Mosseri has acknowledged the confusion and said the company will make the feature clearer. “We’re never going to share your location without someone actually actively asking to do so,” he said in a post last Friday.

    In a statement, Meta said: “Instagram Map is off by default, and your live location is never shared unless you choose to turn it on. If you do, only people you follow back – or a private, custom list you select – can see your location.”

    Mosseri also admitted that Instagram could “do a better job” explaining what appears on the map. “We can, and will, make it easier to understand exactly what’s happening,” he wrote, adding that improvements are planned for early next week.

    Balancing connection and safety

    The feature’s intent – to make it easier for friends to meet up and share experiences – is not new in social media. Apps have long offered ways to share location, from Snapchat’s Snap Map to Apple’s “Find My” function. The tools have drawn both praise for making coordination easier and criticism for the risks they pose when misused.

    For Instagram, the challenge lies in ensuring that users understand exactly what they are sharing, with whom, and how it appears on the map. The backlash suggests that many people either missed or misunderstood the opt-in nature of the feature, especially when older posts appeared without warning.

    Privacy advocates often caution that location data is especially sensitive. While a post tagged at a restaurant may seem harmless, patterns over time can reveal personal routines, places of work, or home addresses. That makes clear controls – and clear explanations – vital.

    Instagram says it will continue to refine the map and its settings. Whether that will be enough to restore user confidence remains to be seen. In the meantime, those concerned about privacy have the option to disable location sharing entirely, either in the app or through phone settings.

    Huawei to unveil tech to cut China’s reliance on foreign AI memory chips https://techwireasia.com/2025/08/huawei-may-unveil-tech-to-cut-chinas-reliance-on-foreign-ai-memory-chips/ Tue, 12 Aug 2025 09:25:30 +0000

  • Huawei may unveil tech to cut China’s reliance on imported HBM chips.
  • China aims to build a self-sufficient AI hardware supply chain.
    Huawei is expected to unveil a technology that could lessen China’s dependence on high-bandwidth memory (HBM) chips for running artificial intelligence reasoning models, according to the state-run Securities Times.

    As reported by the South China Morning Post, the announcement will be made at the 2025 Financial AI Reasoning Application Landing and Development Forum in Shanghai today. The event focuses on AI in the financial sector.

    Huawei did not respond to a request for comment on Monday. If confirmed, the development would mark another step by the US-sanctioned company in strengthening China’s AI hardware capabilities and reducing reliance on foreign technology.

    HBM chips are a key component in advanced AI systems, particularly for running reasoning models. The models take an already-trained AI system and apply it to real-world data, making decisions based on patterns the AI has learned. HBM is important for these workloads because it can move large amounts of data quickly between the processor and memory.

    The current market for HBM is dominated by the US company Micron and the South Korean firms Samsung Electronics and SK Hynix. The chips are often integrated directly into AI processors used in data centres.

    China’s two main memory chip producers, Yangtze Memory Technologies and Changxin Memory Technologies, have expanded their capabilities, but analysts say they are still behind their US and Korean competitors in technical performance. That gap has left China dependent on imports for the most advanced HBM products, an issue made more pressing by US export controls on advanced chipmaking tools and technologies.

    While China works to strengthen its domestic supply chain, demand for HBM worldwide is rising sharply. Orders have surged as major tech companies build more AI data centres.

    Micron, one of the top HBM producers, raised its forecast for fourth-quarter revenue and profit on Monday, citing strong demand for AI infrastructure. The company now expects revenue of $11.2 billion, plus or minus $100 million, up from its earlier estimate of $10.7 billion. Adjusted earnings per share are forecast at $2.85, plus or minus 7 cents, up from a prior estimate of $2.50.

    Micron also increased its adjusted gross margin outlook to 44.5%, plus or minus 1%, from 42%, pointing to stronger pricing, notably in DRAM product lines.

    “We look at all of our different end markets around the world, the pricing trends have been robust, and we have had great success in being able to push that pricing up,” said Sumit Sadana, Micron’s chief business officer, during an industry event on Monday.

    Analysts say the combination of limited HBM supply and surging AI demand has allowed producers to raise prices – a reversal from past years when memory chipmakers faced shrinking margins.

    SK Hynix, another leading HBM supplier, expects the market for AI-focused memory chips to grow by about 30% per year until 2030.

    Trade measures could still affect the sector. The US recently imposed 100% tariffs on certain imported chips, although the duties will not apply to companies that manufacture in the US or have committed to doing so.

    In June, Micron said it would increase its planned US investment by $30 billion, bringing its total commitment to $200 billion in the country.

    xAI explains the Grok Nazi meltdown after bot pushes antisemitic posts https://techwireasia.com/2025/07/xai-explains-the-grok-nazi-meltdown-after-bot-pushes-antisemitic-posts/ Tue, 15 Jul 2025 10:00:58 +0000

  • Grok, Elon Musk’s AI bot, pushed antisemitic posts and praised Hitler after flawed prompts.
  • xAI blamed a code update, but critics point to weak safeguards and poor testing.
    Elon Musk’s AI chatbot Grok is once again at the centre of a controversy after it pushed antisemitic messages, praised Hitler, and doubled down on harmful rhetoric. A few days after pulling the bot offline, xAI tried to explain what went wrong. The company said a code update “upstream” of the bot — not the model itself — caused the issue.

    In a post on X, the company wrote: “We discovered the root cause was an update to a code path upstream of the @grok bot. This is independent of the underlying language model that powers @grok.”

    That same day, Tesla quietly announced a new software update, version 2025.26, which adds Grok to its vehicles. The feature is only available in cars with AMD-powered infotainment systems — a configuration Tesla has been using since 2021. According to the company, the bot is still in beta and doesn’t control car functions. Voice commands remain unchanged. Electrek reported that for drivers, the update should feel similar to using Grok as an app on their phone.

    But the timing of this rollout raised eyebrows. Grok’s return to the spotlight didn’t come with new safety assurances. And critics say its past behaviour should have prompted more than just code fixes and apologies.

    This isn’t the first time Grok has generated troubling content. Back in February, the bot ignored sources that criticised Elon Musk or Donald Trump. That was blamed on changes made by a former OpenAI employee. In May, Grok began inserting conspiracy theories about white genocide in South Africa into unrelated conversations. Once again, xAI pointed to an “unauthorised modification.”

    This latest incident, which began on July 7, was linked to old prompt instructions that somehow made it back into the system. xAI said the update triggered an “unintended action” that reintroduced outdated directions telling the chatbot to be “maximally based” and “not afraid to offend people who are politically correct.”

    The company listed specific prompts that were connected to the issue. They included lines like:

    • “You tell it like it is and you are not afraid to offend people who are politically correct.”
    • “Reply to the post just like a human, keep it engaging, don’t repeat the information which is already present in the original post.”
    • “Understand the tone, context and language of the post. Reflect that in your response.”

    xAI said these directions overrode the usual safeguards. Instead of filtering out hate speech, the bot began to echo user biases — even if that meant endorsing offensive or dangerous ideas.

    “An experiment with no brakes”

    Jurgita Lapienytė, Editor-in-Chief at Cybernews, called the incident “an experiment with no brakes.”

    “This reads like a blueprint for how not to launch a chatbot,” she said. “If you’re building AI systems with very few rules and then encouraging them to be bold or politically incorrect, you’re asking for trouble.”

    Lapienytė pointed out that Grok was marketed as a “truth-seeking” chatbot, but that the label seems more like a license to avoid building proper guardrails. “Grok didn’t just go rogue. It followed instructions — instructions that should never have been there in the first place.”

    She said xAI’s approach shows a lack of foresight and a poor understanding of risk. “In cybersecurity, we talk a lot about threat modelling. What’s the worst thing that could happen? This is it.”

    The root of the problem, according to Lapienytė, is Grok’s design. It was created to be more responsive to user prompts than rival chatbots. That made it more likely to go off-script when given the wrong inputs. It also opened the door to prompt injection attacks — a tactic where users trick chatbots into ignoring safety protocols.

    “This isn’t just a slip-up,” she said. “It’s what happens when speed beats safety.”

    Patterns and fallout

    Grok’s behaviour has followed a pattern: say something offensive, get pulled offline, return with small tweaks. But the offensive content is getting worse, and the fixes aren’t stopping it.

    Last week, Grok posted that “if calling out radicals cheering dead kids makes me ‘literally Hitler’, then pass the mustache”. In another case, it referenced Jewish surnames while talking about anti-white activism. The company later apologised for “the horrific behaviour that many experienced”. It said the problem lasted for about 16 hours before being patched.

    In its own words, the bot “ignored its core values in certain circumstances in order to make the response engaging to the user” — even if that meant generating “unethical or controversial opinions”.

    But critics say xAI’s cleanup job isn’t enough. The company has mostly focused on scrubbing offensive posts and issuing brief explanations. What’s missing is a solid plan for keeping things under control before something goes wrong.

    “There’s no excuse for not doing red-teaming before launch,” said Lapienytė. “You have to test how your model reacts under stress, how it handles bad actors, and what happens when people try to break it.”

    She added that safety should be baked into the system, not patched in after a scandal.

    AI without brakes, now in federal contracts

    Just as the backlash around Grok’s latest missteps was growing, The Verge reported that the US Department of Defense awarded xAI up to $200 million to help build AI systems for government use. The contract — announced through the Chief Digital and Artificial Intelligence Office — includes vague goals like developing “agentic AI workflows across different missions”.

    xAI will now be allowed to offer its tools through the General Services Administration (GSA) schedule. The company also introduced “Grok for Government”, promising to build new models focused on security, science, and healthcare — even those suited for classified settings.

    The timing drew criticism. xAI’s chatbot had just been caught promoting hate speech, and now it’s being handed a public sector deal. Musk’s earlier role at the Department of Government Efficiency (DOGE), where he slashed government spending, has already raised questions about conflicts of interest. While Musk has reportedly stepped back from that role under the Trump administration, the overlap between his ventures and federal dollars remains controversial.

    Regulators and risks ahead

    Countries are starting to act. Turkey has banned Grok over comments about President Erdoğan. Poland has said it plans to raise complaints with the European Union. Under the Digital Services Act and other regulations, AI companies can be held accountable for harmful content, especially when it spreads at scale.

    As Lapienytė put it: “We’re seeing the end of the ‘move fast and break things’ phase of AI. The public, and regulators, won’t accept this anymore.”

    There’s also the broader risk: AI chatbots, when poorly managed, don’t just reflect bias — they multiply it. In the wrong hands, they can power misinformation, harassment, or phishing scams. They give attackers a tool that’s fast, scalable, and hard to trace.

    “Every flaw becomes a weapon,” said Lapienytė. “If companies don’t start taking this seriously, they’ll lose the trust of users — and regulators won’t wait around.”

    China surpasses the west in AI research and talent—but how? https://techwireasia.com/2025/07/china-surpasses-the-west-in-ai-research-and-talent-but-how/ Tue, 15 Jul 2025 08:00:24 +0000

  • China leads in AI research, publishing more than the US, UK, and EU combined.
  • It’s also the top global AI collaborator—while relying less on others.
    Artificial intelligence has become more than a technological trend—it’s now viewed as a national asset. A new report from research analytics firm Digital Science shows that China is pulling far ahead in AI research, outpacing the US, UK, and EU in publication volume, patent filings, and researcher activity.

    The report, DeepSeek and the New Geopolitics of AI, written by Digital Science CEO Dr Daniel Hook, draws from the Dimensions research database. It reviews global AI trends from 2000 to 2024, including research output, international partnerships, talent flows, and innovation outcomes. The conclusion is clear: China is now the most dominant force in AI research, and it’s widening the gap.

    China outpaces global peers in research and citations

    Back in 2000, fewer than 10,000 AI papers were published globally. In 2024, that number hit 60,000. But not all growth has been equal. China now produces as much AI research as the US, UK, and EU-27 combined. In terms of research attention, China captured over 40% of global citations in 2024—four times higher than the US and EU individually, and 20 times more than the UK.

    More importantly, China is building a research ecosystem that doesn’t rely on others. While the US, UK, and EU remain tightly linked in AI collaboration, they are all more dependent on China than China is on them. Only 4% of China’s AI publications in 2024 involved collaborators from these regions. By contrast, 25% of the UK’s AI papers included a co-author from China, making China its top research partner.

    Even the US—despite years of political tension and efforts to decouple—maintains its highest AI research ties with China. These relationships have persisted even as legislation like the China Initiative and chip export controls aimed to limit such collaboration.

    Dr Hook argues that AI has become a key tool of geopolitics, similar to energy or defence capabilities. “AI is no longer neutral,” he writes. “Governments are using it as a strategic asset.”

    DeepSeek shows China’s technical independence

    China’s development of DeepSeek, a cost-efficient, open-source chatbot released in early 2025, shows how the country is finding workarounds to chip shortages and positioning itself for leadership. DeepSeek didn’t require expensive GPU training runs and was released under an MIT license—moves that signal both technical skill and confidence.

    Beyond the model itself, DeepSeek is symbolic of something larger. China has built a vast, young, AI-trained population. More than 30,000 AI researchers are currently active in China. Its combined PhD and postdoc base alone is double the size of the entire US AI research population. By comparison, the US has about 10,000 researchers, the EU-27 around 20,000, and the UK roughly 3,000.

    What stands out isn’t just scale—it’s structure. China’s AI workforce is overwhelmingly young, with relatively few senior researchers. This suggests China is investing in long-term capacity, rather than relying on a few high-profile experts. It’s also drawing in talent from overseas. The report notes that China is now a net gainer of AI researchers from countries like the US and UK, reversing earlier trends.

    Research is translating into innovation

    While China’s talent base is growing, its research is also translating into patents and products. The report highlights how China now files nearly ten times more AI-related patents than the US. It’s not just publishing papers—it’s protecting ideas and building businesses around them.

    Geographically, China’s approach is different too. Its AI research is spread widely, not just clustered in a few cities. In 2024, 156 institutions in China published more than 50 AI papers each. These included universities, companies, hospitals, and research centres located in places like Beijing, Shanghai, Nanjing, and Guangzhou. The US had 37 such institutions, the EU-27 had 54, and the UK had 19.

    This nationwide spread shows that China isn’t betting on a few research hubs—it’s building a broad AI infrastructure that reaches across the country. That could make it harder to disrupt or outcompete.

    EU lags, UK punches above its weight

    Meanwhile, Europe shows signs of falling behind. While EU countries collaborate well internally, they’re less connected to outside regions and struggle to turn research into patents or startups. Notably, despite France’s vocal investment in AI since 2018, no French research institution published more than 50 AI papers in 2024. Even top performers like Université de Toulouse fell just short.

    The UK, despite its smaller size, continues to punch above its weight in AI visibility. Its research consistently draws more citations than expected for its volume, showing a high attention-per-output ratio. But it, too, is now leaning heavily on China for collaboration.

    China’s companies are gaining ground

    In terms of corporate involvement, China is also closing in on the US. While US companies still publish more AI papers overall, Chinese companies are gaining fast. The number of research-active companies in China has nearly caught up to the US, suggesting that China’s private sector is becoming a bigger player in AI R&D.

    Dr Hook notes that much of US AI research now happens behind closed doors, in private firms like OpenAI. That makes it harder to track and may distort the picture. Even so, the report’s data shows the US is at risk of losing its narrow lead in research-focused AI startups.

    The global shift has already happened

    The report calls attention to a broader trend: China is not just a competitor—it’s becoming the preferred global connector in AI research. While Western countries still have strong academic networks and commercial pipelines, China’s scale, growth rate, and independence set it apart.

    DeepSeek didn’t appear out of nowhere. It reflects years of investment, education, and infrastructure-building. It also signals what may come next—not just more chatbots, but a wave of AI tools built by a large, skilled workforce, increasingly operating on China’s own terms.

    In the next decade, the report suggests, the real advantage may go to the countries that can not only attract talent and fund research, but also build systems that let that work reach society. China appears to be doing all three—and doing it faster than anyone else.

    Qantas says group claims responsibility for frequent flyer data breach https://techwireasia.com/2025/07/qantas-says-group-claims-responsibility-for-frequent-flyer-data-breach/ Tue, 08 Jul 2025 08:30:35 +0000

  • Qantas says a cybercriminal has made contact after a breach involving frequent flyer data from up to 6 million customers.
  • The airline is working with police and cybersecurity teams to investigate.
    Qantas says someone claiming to be behind a recent data breach has reached out to the airline, following an attack that may have exposed the personal details of up to 6 million customers.

    In a statement, a Qantas spokesperson said the airline is working to confirm the legitimacy of the contact. The matter has been referred to the Australian Federal Police (AFP), but the company declined to say if a ransom was involved.

    “There is no evidence that any personal data stolen from Qantas has been released,” the spokesperson said. “With the support of specialist cybersecurity experts, we continue to actively monitor.”

    The AFP also confirmed it is investigating and will provide more information at a later stage. “The airline has been highly engaged in assisting authorities and the AFP with investigating this incident,” it said.

    The breach, which occurred on July 2, targeted a third-party system connected to a Qantas call centre. The data potentially accessed includes customer names, email addresses, phone numbers, and dates of birth. The airline says it shut down the suspicious activity quickly, but a significant amount of data may have been taken.

    Qantas added that no credit card, financial, or passport information was involved, and login credentials, such as passwords or PINs, were not accessed. Frequent flyer accounts were also unaffected.

    The identity of the attacker remains unknown. However, the tactics used match those of a group known as Scattered Spider, which has previously been linked to attacks on other large companies, including UK retailer Marks & Spencer.

    Unlike many cybercrime groups based in Russia or Eastern Europe, Scattered Spider is believed to include native English speakers. This has allowed the group to use voice-based social engineering tactics—sometimes called “vishing”—to bypass security systems.

    These attacks often involve calling a company’s IT support, posing as employees or contractors to trick help desk staff into granting access or turning off multi-factor authentication.

    “Native English authenticity can sometimes lead to an automatic sense of trust. There is a level of perceived familiarity that might cause personnel or even IT teams to lower their guard slightly,” said Nathaniel Jones, vice-president of threat research at Darktrace, as quoted by The Guardian.

    In recent months, Scattered Spider has reportedly targeted US airlines using these same tactics.

    Social engineering attacks are becoming more common in Australia. The Office of the Australian Information Commissioner (OAIC) reported that nearly a third of all malicious or criminal data breaches in the second half of last year were linked to social engineering. Government agencies were hit particularly hard, accounting for 60 of the 115 reported incidents—up 46% from the previous period.

    Google has also flagged similar tactics in recent threat reports, pointing to a rise in impersonation-based attacks across multiple sectors.

    The Qantas breach adds to a growing list of cyberattacks that have affected major Australian organisations. Optus, one of the country’s top telecom providers, was hit by a breach that exposed personal information from millions of customers. Medibank, a major health insurer, suffered an attack that resulted in medical data being leaked online.

    There have also been concerns about the security of Australia’s retirement savings system after cybercriminals targeted the $4 trillion superannuation sector.

    These incidents have put more pressure on companies and regulators to strengthen their cybersecurity practices. While many firms are investing in new tools, recent breaches suggest that basic controls—like verifying internal access requests and monitoring third-party systems—still fall short.

    Will China’s DeepSeek face a European ban over data privacy violations? https://techwireasia.com/2025/07/deepseek-privacy-concerns-germany-app-store-ban/ Mon, 07 Jul 2025 09:30:27 +0000

  • German data protection officials are pushing for DeepSeek’s removal from app stores over alleged privacy violations and unlawful data transfers to China
  • Italy has already banned the Chinese AI company from its app stores, setting a precedent for potential EU-wide action
    DeepSeek’s privacy concerns have escalated into a formal regulatory challenge as German authorities take decisive action against the Chinese AI company’s data handling practices. Berlin’s commissioner for data protection and freedom of information, Meike Kamp, has officially reported DeepSeek to Apple and Google, demanding the removal of the AI chatbot from their respective app stores over alleged violations of European Union data protection laws.

    The controversy centres on DeepSeek’s transfer of user data to China, which Kamp declared “unlawful” under current EU regulations. In a statement released late last month, the German official accused the Chinese company of failing to provide “convincing evidence” that users’ data was adequately protected, as mandated by European Union law.

    The core of the allegations

    Kamp’s concerns extend beyond simple data transfer issues. She highlighted that “Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies,” pointing to fundamental structural problems with how Chinese tech companies handle international user data. 

    This observation reflects broader geopolitical tensions around data sovereignty and national security. The German official also emphasised that “DeepSeek users in China do not have the enforceable rights and effective legal remedies guaranteed in the European Union,” suggesting that the company’s data protection framework falls short of EU standards regardless of geographical location.

    Under the EU’s General Data Protection Regulation (GDPR), companies are prohibited from transferring data outside the European region unless specific safeguards exist in the destination countries. DeepSeek’s apparent failure to meet these requirements has triggered regulatory action across multiple EU member states.

    A pattern of non-compliance

    The situation has been exacerbated by DeepSeek’s alleged non-cooperation with regulatory authorities. Kamp revealed that her office had previously asked DeepSeek to either comply with EU laws for transferring data outside the bloc or withdraw its app from Germany. The company has reportedly chosen neither option, escalating the regulatory standoff.

    This pattern of non-compliance isn’t isolated to Germany. Italy has already taken decisive action, banning DeepSeek from its app stores in January over similar data protection concerns.

    The country’s data protection authority ordered a block on both Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence — the Chinese companies behind the DeepSeek chatbot — effectively forcing them to stop processing Italian users’ data.

    The Italian ban was reportedly triggered after DeepSeek told authorities it would not cooperate with requests for information, demonstrating a concerning pattern of regulatory resistance.

    Technical and security implications

    Beyond regulatory compliance, studies have identified broader cybersecurity and safety issues with DeepSeek’s technology. Research has shown concerns over DeepSeek-R1’s susceptibility to generating harmful and biased content, raising questions about the platform’s content moderation capabilities and safety protocols.

    The privacy concerns are compounded by China’s legal framework, which grants intelligence agencies broad access to data shared on mobile and web applications. This legal requirement creates inherent conflicts with European data protection principles, making compliance potentially impossible without fundamental structural changes.

    The broader context

    DeepSeek gained significant attention in January when it launched its AI model, claiming development costs were a fraction of competitors’ investments. This cost advantage initially generated industry excitement, but regulatory scrutiny has quickly shifted focus to privacy and security considerations.

    The company’s rapid rise has coincided with increasing global scepticism about Chinese tech companies’ data practices. National security concerns have become paramount as governments worldwide grapple with the implications of allowing foreign AI companies access to citizens’ personal information.

    What happens next

    The immediate decision now rests with Apple and Google, who must review Kamp’s report and determine whether to remove DeepSeek from their app stores. This decision could set a significant precedent for how major tech platforms handle regulatory complaints about data privacy violations.

    If both companies comply with the German request, it would effectively mirror Italy’s approach and could signal broader European consensus on DeepSeek privacy concerns. Such action might encourage other EU member states to take similar measures, potentially creating a continent-wide ban.

    Above all, the case highlights the growing tension between rapid AI innovation and regulatory compliance, particularly when companies operate across different legal jurisdictions with varying data protection standards. 

    For DeepSeek, addressing these concerns may require fundamental changes to its data handling practices or acceptance of reduced market access in privacy-conscious regions.

    Fake productivity apps and AI tools used to target SMBs in 2025 https://techwireasia.com/2025/07/fake-productivity-apps-and-ai-tools-used-to-target-smbs-in-2025/ Sat, 05 Jul 2025 02:00:07 +0000

    The post Fake productivity apps and AI tools used to target SMBs in 2025 appeared first on TechWire Asia.

    ]]>
  • Fake AI and office apps hit more SMBs in 2025.
  • ChatGPT and Zoom used to spread malware.
  • Thousands of small and medium-sized businesses (SMBs) encountered cyberattacks in 2025 involving fake versions of popular productivity tools, according to new data from Kaspersky. Nearly 8,500 users were affected by malicious or unwanted software posing as legitimate apps — most often Zoom and Microsoft Office. Attackers also began using AI tools like ChatGPT and DeepSeek to trick users into downloading harmful files.

    Kaspersky looked at how often threats were disguised as common online tools. Across 12 apps examined, researchers found over 4,000 unique malicious or suspicious files in 2025. A noticeable rise came from files pretending to be AI tools. ChatGPT-related threats jumped 115% in the first four months of the year compared to the same period in 2024. Kaspersky identified 177 files pretending to be ChatGPT and 83 mimicking DeepSeek, a large language model released in 2025.

    Kaspersky’s Vasily Kolesnikov said attackers tend to go after tools that are widely talked about. “The more publicity and conversation there is around a tool, the more likely a user will come across a fake package on the internet,” he said. Kolesnikov advised SMB employees and everyday users to double-check URLs and avoid suspicious email links or software offers that seem too generous.
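
    That advice lends itself to simple automation. What follows is a minimal sketch in Python of the kind of check Kolesnikov describes: before running an installer, compare the download link’s hostname against an allowlist of official vendor domains. The domain list and function name are illustrative assumptions, not part of Kaspersky’s tooling.

```python
# Minimal sketch of the "double-check the URL" advice above.
# The allowlist is a hypothetical example; a real deployment would maintain
# its own vetted list of official vendor domains.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"zoom.us", "microsoft.com", "openai.com", "google.com"}

def looks_official(url: str) -> bool:
    """Return True if the URL's hostname is an allowlisted domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == domain or host.endswith("." + domain) for domain in OFFICIAL_DOMAINS)

print(looks_official("https://zoom.us/download"))             # True
print(looks_official("https://zoom-free-installer.example"))  # False
```

    A check like this is no substitute for endpoint protection, but it catches the kind of lookalike download links the report describes.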

    Aside from AI tools, collaboration platforms remain a common disguise for malware. Fake Zoom files rose nearly 13% to 1,652 this year. Threats mimicking Microsoft Teams and Google Drive also climbed — by 100% and 12%, respectively — with 206 and 132 files flagged. These tools have become essential for distributed teams, making them easy targets for impersonation.

    Among the apps reviewed, Zoom stood out as the most copied, accounting for 41% of all detected threats. Microsoft Office apps were also high on the list: Outlook and PowerPoint each made up 16%, Excel nearly 12%, while Word and Teams followed at 9% and 5%.
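
    As a rough sanity check on those percentages (an illustration rather than Kaspersky’s own methodology), the reported Zoom numbers can be used to back out the approximate size of the whole sample:

```python
# If 1,652 fake Zoom files make up 41% of all detections, the full pool of
# unique malicious or suspicious files should be a little over 4,000,
# consistent with the "over 4,000" figure reported for 2025.
zoom_files = 1_652
zoom_share = 0.41

estimated_total = zoom_files / zoom_share
print(f"Estimated total unique files: {estimated_total:,.0f}")  # ~4,029
```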

    Share of unique files with names mimicking the nine most popular legitimate applications in 2024 and 2025 (Source – Kaspersky)

    The most common types of threats aimed at SMBs in 2025 were downloaders, trojans, and adware.

    Phishing and spam tactics also on the rise

    Kaspersky also reported a steady stream of phishing scams and spam aimed at SMBs. Many scams attempt to grab login credentials for services like banking platforms or delivery apps. One example involved a fake Google login page offering to promote a business on X — a scheme built to steal user credentials.

    Spam continues to flood inboxes as well. Some messages now claim to offer AI-powered business automation. Others promote email marketing tools, business loans, or services like lead generation and reputation management — all crafted to appeal to small business owners.

    According to Kaspersky, attackers are tailoring these phishing and spam campaigns to match what SMBs typically search for online, making them harder to spot at a glance.

    The post Fake productivity apps and AI tools used to target SMBs in 2025 appeared first on TechWire Asia.

    ]]>
    Meta reorganises AI teams under new superintelligence lab https://techwireasia.com/2025/07/meta-reorganises-ai-teams-under-new-superintelligence-lab/ Tue, 01 Jul 2025 10:00:51 +0000 https://techwireasia.com/?p=242794 Meta formed a new AI unit, Meta Superintelligence Labs. It follows Llama 4 backlash and staff exits as Meta pivots to AGI. Meta has regrouped its artificial intelligence operations under a new unit called Meta Superintelligence Labs (MSL), according to a source familiar with the matter. The decision comes after the company’s latest open-source model, […]

    The post Meta reorganises AI teams under new superintelligence lab appeared first on TechWire Asia.

    ]]>
  • Meta formed a new AI unit, Meta Superintelligence Labs.
  • It follows Llama 4 backlash and staff exits as Meta pivots to AGI.
  • Meta has regrouped its artificial intelligence operations under a new unit called Meta Superintelligence Labs (MSL), according to a source familiar with the matter. The decision comes after the company’s latest open-source model, Llama 4, received poor feedback, and several senior researchers left. With competitors like OpenAI, Google, and China’s DeepSeek gaining ground, Meta is looking to reset its AI strategy and rebuild momentum.

    The new group will be led by Alexandr Wang, the former CEO of Scale AI, who joins Meta as its new chief AI officer. Wang made his name building Scale into a key data provider for many large AI models. He won’t be working alone—Nat Friedman, ex-CEO of GitHub and a known investor in AI startups, will co-lead MSL and focus on product development and applied research. According to the source, Friedman has been advising Meta over the past year and is already familiar with its roadmap.

    Meta CEO Mark Zuckerberg is personally backing the shake-up. Over the past month, he’s taken a direct role in recruiting, sending messages to candidates via WhatsApp and courting several startup founders. One target was Safe Superintelligence (SSI), the startup co-founded by former OpenAI chief scientist Ilya Sutskever. Although a deal with SSI didn’t go through, its co-founder and CEO, Daniel Gross, is said to have joined MSL.

    In early June, Meta also invested $14.3 billion into Scale AI, further tying the company to its new leadership. Zuckerberg hopes the reshaped team will accelerate Meta’s push toward artificial general intelligence (AGI)—a still-hypothetical form of AI that can outperform humans in most tasks—and help create new revenue streams from tools like Meta AI, image-to-video ad generators, and smart wearables.

    MSL is already pulling in talent from top AI labs. The source said Meta has made at least 11 new hires in recent weeks, including names from OpenAI, Google DeepMind, Anthropic, and DeepSeek. Some of the individuals joining include:

    • Jack Rae and Pei Sun, both previously with DeepMind
    • Joel Pobar, returning to Meta after time at Anthropic
    • Jiahui Yu, Shuchao Bi, Shengjia Zhao, and Hongyu Ren from OpenAI
    • Huiwen Chang, who worked on image generation at Google Research
    • Trapit Bansal, known for work on reasoning in AI models
    • Ji Lin and Johan Schalkwyk, both with backgrounds in model architecture and voice systems

    These hires come after OpenAI CEO Sam Altman said Meta had offered some of his staff bonuses of up to $100 million to switch companies.

    Despite the influx of talent, some analysts are sceptical about how soon Meta will see results. Its last big technology bet, Reality Labs, has cost the company more than $60 billion since 2020, with only a few products—such as Ray-Ban smart glasses and Quest headsets—making it into users’ hands.

    AGI remains a long-term goal for the tech industry. Microsoft recently spent $650 million to hire most of Inflection AI’s staff, including co-founder Mustafa Suleyman, while Amazon has been hiring AI researchers from startups like Adept. But the path to AGI is unclear. Meta’s own chief AI scientist, Yann LeCun, has said the current tools and approaches won’t be enough. Meanwhile, SoftBank’s Masayoshi Son believes a breakthrough could happen within a decade.

    In an internal memo on Monday, Zuckerberg said Meta is now focused on building “personal superintelligence for everyone.” The memo confirmed that MSL will combine Meta’s foundation model research, FAIR team, and AI product development under one roof. A new lab within MSL will also start work on a next-generation model aimed at pushing the current technical limits.

    Zuckerberg wrote that Meta is “uniquely positioned” to take this step due to its large user base, experience running global apps, and capacity to invest in large-scale computing. He also pointed to Meta’s early work on AI-powered wearables as another potential edge.

    The memo ended with a call for more staff to join the new lab, and suggested more hires will be announced in the coming weeks.

    Whether this new approach helps Meta catch up with its competitors—or delivers anything close to AGI—remains to be seen.

    The post Meta reorganises AI teams under new superintelligence lab appeared first on TechWire Asia.

    ]]>
    Chinese AI app DeepSeek under fire across Europe and Asia https://techwireasia.com/2025/06/chinese-ai-app-deepseek-under-fire-across-europe-and-asia/ Mon, 30 Jun 2025 07:57:15 +0000 https://techwireasia.com/?p=242785 Germany urges removal of Chinese AI app DeepSeek over privacy rules. DeepSeek faces growing bans over data concerns. Germany is the latest country to take action against DeepSeek, the Chinese AI app that’s facing growing scrutiny in Europe and beyond. On June 27, 2025, Meike Kamp, Germany’s federal data protection commissioner, formally asked Apple and […]

    The post Chinese AI app DeepSeek under fire across Europe and Asia appeared first on TechWire Asia.

    ]]>
  • Germany urges removal of Chinese AI app DeepSeek over privacy rules.
  • DeepSeek faces growing bans over data concerns.
  • Germany is the latest country to take action against DeepSeek, the Chinese AI app that’s facing growing scrutiny in Europe and beyond. On June 27, 2025, Meike Kamp, Berlin’s commissioner for data protection and freedom of information, formally asked Apple and Google to remove the app from their German app stores. She said the app was sending personal data from German users to servers in China without meeting European Union data protection rules.

    Kamp said DeepSeek hadn’t shown that its data handling complies with the General Data Protection Regulation (GDPR). She also pointed to concerns that Chinese authorities could gain access to user data under China’s laws, which require companies to hand over information when asked. Apple and Google haven’t made a public response.

    The request follows months of concern around how DeepSeek handles user data and whether it can be trusted in countries with stricter privacy laws. According to Kamp, her office reached out to DeepSeek earlier this year asking for clarity on where user data is processed and stored. The company failed to respond with sufficient documentation, prompting the formal takedown request.

    A broader pattern of global restrictions

    Germany joins a list of governments with concerns. Italy banned DeepSeek from app stores earlier this year. The Netherlands, Australia, Taiwan, and South Korea have blocked the app on government devices. In the US, lawmakers are considering federal bans on Chinese-developed AI tools, and some federal agencies have already advised employees not to use DeepSeek.

    Some of these restrictions came after government cybersecurity reviews found that DeepSeek’s privacy policy was vague and its data handling practices lacked transparency. In several cases, app permissions requested by DeepSeek included access to microphone and location data, raising further alarms about potential surveillance.

    These responses reflect a deeper tension for governments between advancing AI adoption and guarding against national security risks. DeepSeek has become one of the first high-profile AI apps from China to face coordinated scrutiny on multiple continents.

    Privacy, transparency, and legal concerns

    The pressure comes down to privacy, data control, and lack of transparency. DeepSeek hasn’t offered a clear explanation of how it collects, stores, and secures user data. Regulators say that without independent audits or proof of safeguards, there’s too much risk that user information could be misused or accessed by Chinese authorities.

    Under EU law, companies that handle the personal data of European citizens must comply with GDPR, which includes clear guidelines on consent, data minimisation, and secure data storage. According to regulators, DeepSeek has failed to meet these requirements. And without a local data centre or a legal entity in the region, the app remains outside the reach of local enforcement.

    Deepening divide between China and Western regulators

    Germany’s concerns reflect a wider rift between Chinese companies and Western regulators. Chinese AI developers have made fast progress, but their entry into Europe and the US is running into more barriers. DeepSeek’s case shows how AI tools are now caught in debates over national security, legal compliance, and public trust.

    Some of the scrutiny stems from China’s own cybersecurity laws, which grant its government wide access to data held by domestic companies. Western regulators worry that this creates a legal pathway for user data from other countries to end up in the hands of Chinese authorities.

    The tension is playing out not only through bans but also through export controls. The US has imposed restrictions on Chinese firms accessing advanced AI chips and cloud infrastructure. In turn, China has begun promoting homegrown alternatives and encouraging domestic firms to reduce reliance on Western platforms.

    APAC reactions reflect regional caution

    Other countries in Asia are also taking a hard look at DeepSeek. South Korea’s data protection agency suspended the app’s availability in February, citing unclear protections for user information. Taiwan’s digital ministry advised public-sector workers not to use the app, while Australia restricted use of the app on government devices.

    In Japan and India, there have been informal reviews of Chinese-built software, although no public action against DeepSeek has been announced. Officials in both countries have stated that they are monitoring developments in Europe and the US closely.

    The decisions show how regulators in the region are becoming more cautious about foreign AI tools – especially when they involve cross-border data storage. The concern is not just with Chinese apps. It’s about making sure that any AI system used locally follows national privacy rules.

    What could replace DeepSeek?

    Governments that restrict DeepSeek haven’t named official replacements. But they may encourage alternatives that are more transparent about data use and storage. That could include open-source models, locally hosted AI tools, or commercial apps that meet privacy laws and give users more control. European agencies may favour tools built around GDPR rules. US institutions might lean toward homegrown apps with clear reporting and audit options.

    Some agencies are exploring smaller models that can be deployed privately on internal systems. Others are encouraging partnerships with academic institutions or public research labs that can build compliant language models for government use. The idea is not just to block unsafe tools, but to build trustworthy alternatives that meet public-sector needs.
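
    To make the locally hosted option concrete, here is a minimal sketch using the open-source Hugging Face transformers library: the weights of a small open model are downloaded once, after which inference runs entirely on internal hardware, so prompts and outputs never leave the organisation’s network. The model id is a placeholder assumption rather than a recommendation; an agency would substitute whichever vetted, licence-compatible model it has approved.

```python
# Minimal sketch of a locally hosted alternative: a small open-weight model
# served on in-house hardware via the Hugging Face transformers pipeline.
# The model id below is an illustrative placeholder, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed placeholder model
)

prompt = "Summarise the GDPR principle of data minimisation in two sentences."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```

    Running a small model this way trades raw capability for control over where data lives – exactly the trade-off the agencies above are weighing.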

    DeepSeek’s silence and stalled progress

    DeepSeek hasn’t responded to Germany’s latest request. The company has made small moves under pressure in the past – for example, pulling the app from Italy’s app stores and naming legal representatives in South Korea – but hasn’t addressed bigger privacy concerns.

    Reports say the company has been working on a new version of its model, DeepSeek R2, but the rollout is delayed. Sources have told Reuters that DeepSeek’s team wasn’t satisfied with the model’s performance and was facing limits due to chip restrictions: US export rules have made it harder for Chinese firms to access high-end AI hardware.

    Industry analysts say the DeepSeek R2 delay could hurt the company’s ability to respond to growing demand for AI services. Without access to newer chips or major cloud infrastructure providers, DeepSeek may struggle to scale up or improve its performance enough to match its rivals.

    Trust, data, and the future of global AI tools

    The response to DeepSeek highlights how countries are drawing lines around AI use. In the EU, strict privacy laws shape who can operate and how. In the US, the debate centres more on security and global competition. In Asia, governments are weighing openness against tighter oversight.

    For Chinese AI developers, DeepSeek’s case could be a turning point. Countries are no longer willing to accept unclear terms or cross-border risks. Firms looking to expand into global markets now face a clear message: follow local rules, keep data accessible to regulators, and prove your systems are safe.

    At the time of writing, DeepSeek is still available in many places. But each new restriction makes it harder for the company to operate internationally without major changes to its policies. If Apple and Google agree to Germany’s request, other countries may take similar steps. That could shut DeepSeek out of much of the European market.

    The broader message is that AI tools are now judged by more than just features or performance. Where the data goes, who can see it, and whether users understand what’s happening all matter just as much. And for governments, those answers will determine which apps are available – and which ones aren’t.

    The post Chinese AI app DeepSeek under fire across Europe and Asia appeared first on TechWire Asia.

    ]]>
    Five months in and no TikTok ban, despite Republican fury https://techwireasia.com/2025/06/tiktok-ban-extension-trump-third-delay-republican-opposition/ Thu, 19 Jun 2025 14:34:30 +0000 https://techwireasia.com/?p=242716 Trump announces further 90-day TikTok ban extension despite growing Republican criticism. Delay as US-China trade talks continue. TikTok increasingly viewed as a bargaining chip in broader diplomatic negotiations. President Donald Trump will sign another executive order this week granting TikTok a third ban extension of 90 days despite mounting opposition from Republican senators who view […]

    The post Five months in and no TikTok ban, despite Republican fury appeared first on TechWire Asia.

    ]]>
  • Trump announces further 90-day TikTok ban extension despite growing Republican criticism.
  • Delay as US-China trade talks continue.
  • TikTok increasingly viewed as a bargaining chip in broader diplomatic negotiations.
  • President Donald Trump will sign another executive order this week granting TikTok a third 90-day extension of the ban deadline, despite mounting opposition from Republican senators who view the popular social media platform as a national security threat.

    The latest TikTok ban extension announcement came Tuesday via White House Press Secretary Karoline Leavitt, who stated that “President Trump will sign an additional Executive Order this week to keep TikTok up and running.”

    The extension will give the administration another 90 days of “working to ensure this deal is closed so that the American people can continue to use TikTok with the assurance that their data is safe and secure.”

    This is Trump’s third delay of the sale-or-ban law since taking office in January, highlighting the complex geopolitical dynamics surrounding the Chinese-owned platform that serves 170 million American users.

    Republican Senators express frustration

    The decision has drawn criticism from key Republican lawmakers who championed the original bipartisan legislation requiring TikTok’s Chinese parent company ByteDance to divest itself of the platform or face a ban in the United States.

    “I’m not overly delighted,” Armed Services Chair Roger Wicker told reporters regarding the delay. “I don’t think it’s a good idea.”

    Senator Josh Hawley expressed similar concerns, telling Axios: “That’s not my favourite thing. I’m fine with him trying to sell it, that’s fine, but I think at a certain point we’ve got to enforce this law.”

    Senator John Cornyn was more direct, stating: “I’d like to see the law go into effect.”

    The pushback reflects broader Republican concerns about China’s potential use of TikTok for espionage and propaganda purposes.

    “China has used TikTok for espionage and propaganda,” said Senator Ted Cruz. “That’s why Congress overwhelmingly passed legislation to force the Chinese Communist government to divest, and it is my hope and expectation that that’s what’s going to happen.”

    Timeline of extensions and failed negotiations

    The sale-or-ban law technically went into effect on January 19, 2025, after being signed by former President Joe Biden. However, TikTok’s presence in the US has been marked by uncertainty rather than enforcement.

    TikTok briefly took itself offline, sparking outcry from users, but quickly returned after President Trump signed an executive order delaying the ban’s enforcement by 75 days – one of his first acts as President.

    In April, a deal that would have transferred majority control of TikTok’s US operations to American ownership was nearly finalised. The arrangement would have involved several American venture capital funds, private equity firms, and tech giants investing in a company that was to control TikTok’s US operations, while ByteDance was to retain a 20% stake.

    However, the deal collapsed after Trump announced additional tariffs on China. “There are key matters to be resolved. Any agreement will be subject to approval under Chinese law,” ByteDance said after Trump’s tariff policy stalled progress.

    Several high-profile bidders have expressed interest in acquiring the platform, including a group led by billionaire Frank McCourt and “Shark Tank” investor Kevin O’Leary, Amazon, AI firm Perplexity, and a separate group that included YouTube and TikTok star Jimmy Donaldson (MrBeast).

    China’s algorithm stance complicates negotiations

    The Chinese government has offered little public indication of willingness to approve a sale beyond suggesting that any deal could not include TikTok’s “algorithm” – widely considered the app’s competitive advantage.

    Trump acknowledged this challenge Tuesday, telling reporters that a TikTok deal would “probably” require approval by the Chinese government. “I think President Xi will ultimately approve it, yes,” the US president added.

    The algorithm issue represents a significant sticking point, as it’s unclear whether American buyers would find TikTok valuable without its sophisticated content recommendation system.

    Trade talks context

    The latest TikTok ban extension comes as the US and China have agreed a framework to ease export controls, a move expected to reduce tensions between the two countries. While it’s unclear whether a TikTok deal is included in the framework, improved cooperation could facilitate an agreement to transfer control of the app to US buyers.

    Senate Majority Leader John Thune told Axios he’s “hoping that the negotiations on a buyer are making headway enough” to find a suitable match, but acknowledged “I don’t think they have yet.”

    What’s next?

    The current extension will expire in mid-September, setting up another potential deadline for the Trump administration. However, the pattern of repeated delays suggests the president may continue prioritising diplomatic flexibility over enforcement.

    Senator Mike Rounds noted the issue is “probably taking second place to everything else going on in the world,” but emphasised that “at some point” the question will have to be settled – either through a sale or a ban.

    The ongoing delays reflect Trump’s changeable position on TikTok. During his previous administration, he first attempted to ban the platform, but has since said he changed his mind after he “got to use it.” The shift was symbolically represented when TikTok CEO Shou Chew attended Trump’s inauguration, seated alongside Cabinet secretaries and other tech CEOs.

    As the September deadline approaches, the fundamental tension remains unresolved: balancing politically charged national security concerns with the platform’s massive American user base, while navigating complex US-China relations. TikTok has become a bargaining chip in broader trade negotiations.

    Yet perhaps the most intriguing question is whether Trump finds himself trapped by his promises. Having committed publicly to keeping TikTok “alive” and courted its massive user base during his campaign, the President may now be too politically invested in finding a face-saving solution to simply reverse course and enforce the ban his party demands.

    Each extension deepens the predicament, making it increasingly difficult to abandon a platform he once championed without appearing to flip-flop on a signature issue.

    The post Five months in and no TikTok ban, despite Republican fury appeared first on TechWire Asia.

    ]]>