AI & Intelligence | Tech Wire Asia | Latest Updates & Trends
https://techwireasia.com/category/artificial-intelligence/

NVIDIA introduces Rubin CPX GPU for long-context AI
https://techwireasia.com/2025/09/nvidia-introduces-rubin-cpx-gpu-for-long-context-ai/ (Wed, 10 Sep 2025)

  • NVIDIA Rubin GPU targets long-context AI, from million-token coding to video.
  • Packs 30 petaflops, 128GB memory, and 3x faster attention.
  • NVIDIA has introduced the Rubin CPX, a new type of GPU designed for long-context AI processing. The chip is built to handle workloads that require models to process millions of tokens at once – whether that’s generating code in entire software projects or working with video content an hour in length.

    Rubin CPX works alongside NVIDIA’s Vera CPUs and Rubin GPUs inside the Vera Rubin NVL144 CPX platform. A single rack delivers 8 exaflops of AI compute, with 7.5 times the performance of NVIDIA’s GB300 NVL72 system, along with 100 terabytes of fast memory and bandwidth of 1.7 petabytes per second. Customers can also adopt Rubin CPX in other system configurations, including options that reuse existing infrastructure.

    Jensen Huang, NVIDIA’s founder and CEO, described the launch as a turning point: “The Vera Rubin platform will mark another leap in the frontier of AI computing – introducing both the next-generation Rubin GPU and a new category of processors called CPX. Just as RTX revolutionised graphics and physical AI, Rubin CPX is the first CUDA GPU purpose-built for massive-context AI, where models reason in millions of tokens of knowledge at once.”

    From coding to video generation

    The demand for long-context processing is growing fast. Today’s AI coding assistants are limited to smaller code blocks, but Rubin CPX is designed to manage far larger projects, with the ability to scan and optimise entire repositories. In video, where an hour of content can take up to one million tokens, Rubin CPX combines video encoding, decoding, and inference processing in a single chip, making tasks like video search and generative editing more practical.
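    To give a rough sense of scale, the token count for an hour of video can be approximated from how often frames are sampled and how many tokens each frame becomes. The figures in the small Python sketch below are illustrative assumptions, not NVIDIA's published numbers.

```python
# Back-of-the-envelope: why an hour of video can approach a million tokens.
# Sampling rate and tokens-per-frame below are assumptions for illustration only.
frames_sampled_per_second = 1      # frames actually tokenised per second of video
tokens_per_frame = 275             # visual tokens produced per sampled frame
seconds_per_hour = 3600

tokens_per_hour = frames_sampled_per_second * tokens_per_frame * seconds_per_hour
print(f"~{tokens_per_hour:,} tokens for one hour of video")  # ~990,000
```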

    The GPU uses a monolithic die design based on the Rubin architecture, packed with NVFP4 compute resources for high efficiency. It delivers up to 30 petaflops of compute with NVFP4 precision and comes with 128GB of GDDR7 memory. Compared to NVIDIA’s GB300 NVL72 system, it provides triple the speed for attention mechanisms, helping models handle longer sequences without slowing down.

    Rubin CPX can be deployed with NVIDIA’s InfiniBand fabric or Spectrum-X Ethernet networking for scale-out computing. In the flagship NVL144 CPX system, NVIDIA says customers could generate $5 billion in token revenue for every $100 million invested.

    Beyond Rubin CPX itself, NVIDIA also highlighted results from the latest MLPerf Inference benchmarks. Blackwell Ultra set records on new reasoning benchmarks like DeepSeek R1 and Llama 3.1 405B, with NVIDIA being the only platform to submit results for the most demanding interactive scenarios.

    The company also set records on the new Llama 3.1 8B tests, Whisper speech-to-text, and graph neural networks. NVIDIA credited these wins partly to a technique called disaggregated serving, which separates the compute-heavy context phase of inference from the bandwidth-heavy generation phase. By optimising the two phases independently, throughput per GPU rose by nearly 50%.
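    As a rough illustration of the idea (a toy sketch under assumed numbers, not NVIDIA's implementation), a disaggregated scheduler routes each request's prefill work to a compute-dense context pool and its decode work to a bandwidth-rich generation pool, so the two can be sized and tuned independently.

```python
# Toy model of disaggregated serving: prefill (context) and decode (generation)
# phases of inference go to separate worker pools. All names and throughput
# figures are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int   # long context to ingest (compute-bound prefill)
    output_tokens: int   # tokens to generate (bandwidth-bound decode)

@dataclass
class WorkerPool:
    name: str
    workers: int
    tokens_per_sec_per_worker: float
    queued_tokens: int = 0

    def submit(self, tokens: int) -> None:
        self.queued_tokens += tokens

    def estimated_seconds(self) -> float:
        return self.queued_tokens / (self.workers * self.tokens_per_sec_per_worker)

def route(requests, context_pool: WorkerPool, generation_pool: WorkerPool) -> None:
    """Send prefill tokens to the context pool and decode tokens to the generation pool."""
    for r in requests:
        context_pool.submit(r.prompt_tokens)
        generation_pool.submit(r.output_tokens)

if __name__ == "__main__":
    # Hypothetical sizing: many compute-dense workers for long prompts,
    # fewer bandwidth-rich workers for token generation.
    context = WorkerPool("context", workers=8, tokens_per_sec_per_worker=50_000)
    generation = WorkerPool("generation", workers=4, tokens_per_sec_per_worker=5_000)

    workload = [Request(prompt_tokens=1_000_000, output_tokens=2_000) for _ in range(4)]
    route(workload, context, generation)

    print(f"queued context work:    ~{context.estimated_seconds():.1f}s")
    print(f"queued generation work: ~{generation.estimated_seconds():.1f}s")
```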

    NVIDIA tied these results directly to AI economics. For example, a free GPU with a quarter of Blackwell’s performance could generate about $8 million in token revenue over three years, while a $3 million investment in GB200 infrastructure could generate around $30 million. The company framed performance as the key lever for AI factory ROI.
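    Headline figures like these follow from simple arithmetic: tokens served over a period multiplied by the price charged per token. The throughput, utilisation, and price in the sketch below are illustrative assumptions, not NVIDIA's or any customer's actual numbers.

```python
# Back-of-the-envelope token-revenue calculation. Every input is an assumption
# chosen for illustration; none of these figures come from NVIDIA.
def token_revenue_usd(tokens_per_second: float,
                      price_per_million_tokens_usd: float,
                      utilisation: float,
                      years: float) -> float:
    """Revenue = tokens served over the period * price per token."""
    seconds = years * 365 * 24 * 3600
    tokens_served = tokens_per_second * utilisation * seconds
    return tokens_served / 1_000_000 * price_per_million_tokens_usd

if __name__ == "__main__":
    # Hypothetical system: 500k tokens/s, 60% utilisation, $2 per million output tokens.
    revenue = token_revenue_usd(500_000, 2.0, 0.6, years=3)
    print(f"Three-year token revenue: ${revenue / 1e6:.0f}M")
```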

    Rubin CPX is positioned as the first of a new class of “context GPUs” in NVIDIA’s roadmap. The company is also advancing the Rubin Ultra GPU, Vera CPUs, NVLink Switch, Spectrum-X Ethernet, and CX9 SuperNICs – all designed to work together in a one-year upgrade cycle. The roadmap emphasises NVIDIA’s push for full-stack solutions, not just standalone chips.

    Industry adoption

    Several AI companies are preparing to use Rubin CPX. The company behind Cursor, the AI-powered code editor, expects the GPU to support faster code generation and developer collaboration. “With NVIDIA Rubin CPX, Cursor will be able to deliver lightning-fast code generation and developer insights, transforming software creation,” said Michael Truell, CEO of Cursor.

    Runway, a generative AI company focused on video, sees Rubin CPX as central to its work on creative workflows. “Video generation is rapidly advancing toward longer context and more flexible, agent-driven creative workflows,” said Cristóbal Valenzuela, CEO of Runway. “We see Rubin CPX as a major leap in performance, supporting these demanding workloads to build more general, intelligent creative tools.”

    Magic, an AI company building foundation models for software agents, highlighted the ability to process larger contexts. “With a 100-million-token context window, our models can see a codebase, years of interaction history, documentation and libraries in context without fine-tuning,” said Eric Steinberger, CEO of Magic.

    Supported by NVIDIA’s AI stack

    Rubin CPX is integrated into NVIDIA’s broader AI ecosystem, including its software stack and developer tools. The GPU will support the company’s Nemotron multimodal models, delivered through NVIDIA AI Enterprise. It also connects with the Dynamo platform, which helps improve inference efficiency and reduce costs for production AI.

    Backed by CUDA-X libraries and NVIDIA’s community of millions of developers, Rubin CPX extends the company’s push into large-scale AI, offering new tools for industries where context size and performance define what’s possible.

    Alibaba’s trillion-parameter AI model challenges OpenAI and Google’s dominance
    https://techwireasia.com/2025/09/alibaba-ai-model-trillion-parameter-breakthrough/ (Tue, 09 Sep 2025)

  • Alibaba’s AI model breakthrough enters trillion-parameter territory, joining OpenAI and Google in elite AI competition
  • Premium pricing reflects computational complexity as China demonstrates growing AI capabilities against Western rivals
  • Alibaba Group Holding has unveiled its most ambitious artificial intelligence model to date, with the new Qwen-3-Max-Preview marking a significant milestone in China’s efforts to challenge Western dominance in the AI sector. The breakthrough is the company’s first entry into trillion-parameter territory, positioning it directly in competition with industry leaders OpenAI and Google DeepMind.
    Released on Friday through Alibaba’s cloud services platform and the OpenRouter marketplace, Qwen-3-Max-Preview boasts more than one trillion parameters—the variables that essentially encode an AI system’s intelligence and are fine-tuned during training. This represents a substantial leap from the company’s previous offerings in the Qwen3 series, which ranged from 600 million to 235 billion parameters when first launched in May.
    The scale of this achievement becomes clearer when viewed against the broader competitive landscape. While OpenAI’s GPT-4.5 is estimated to contain between five and seven trillion parameters, Alibaba’s entry into the trillion-parameter club signals China’s growing technical sophistication in AI development.
    For a company that has traditionally focused on e-commerce and cloud services, this represents a strategic pivot toward cutting-edge AI research. According to Alibaba’s internal testing, the new model outperforms its previous flagship, the Qwen3-235B-A22B-2507 released in July.
    More significantly, the company claims Qwen-3-Max-Preview has bested several international competitors across five benchmarks, including Moonshot AI’s Kimi K2, a non-reasoning version of Anthropic’s Claude Opus 4, and DeepSeek V3.1. However, these comparisons should be viewed with appropriate skepticism, as they were not published as part of an official technical report.
    The technical improvements are notable across multiple dimensions. “Qwen3-Max-Preview shows substantial gains … in overall capability, with significant enhancements in Chinese-English text understanding, complex instruction following, handling of subjective open-ended tasks, multilingual ability, and tool invocation,” Alibaba stated.
    Qwen’s official post confidently added that “scaling works – and the official release will surprise you even more.” From a business perspective, the pricing structure reveals the economic realities of operating such sophisticated models. At $0.861 per million input tokens and $3.441 per million output tokens, Qwen-3-Max-Preview commands premium rates compared to its predecessors.
    https://x.com/Alibaba_Qwen/status/1964004112149754091
    This represents roughly three times the cost of the previous Qwen3-235B-A22B-2507 model, reflecting the substantial computational resources required to run trillion-parameter systems. The strategic implications extend beyond Alibaba’s corporate ambitions. China’s AI sector has faced significant challenges, including export restrictions on advanced semiconductors and concerns about technological dependence on Western suppliers.
    The successful deployment of this Alibaba AI model breakthrough demonstrates that Chinese companies can develop sophisticated AI systems despite these constraints. Alibaba’s broader AI strategy appears increasingly aggressive. The company has committed 380 billion yuan (US$52 billion) to AI infrastructure investments over the next three years—an amount exceeding its total AI spending over the past decade.
    This investment is already showing returns, with AI-related products achieving triple-digit growth for eight consecutive quarters according to the company’s latest financial results. The success of Alibaba’s Qwen models in the open-source community provides additional context for this achievement.
    With more than 20 million downloads and 100,000 derivative models on Hugging Face, Qwen has established itself as a leading force in the global open-source AI ecosystem. However, Qwen-3-Max-Preview notably breaks from this open-source tradition, remaining proprietary and accessible only through official channels.
    Looking ahead, Alibaba AI engineer Binyuan Hui indicated that a “thinking” version of the model is “on the way,” suggesting further enhancements to reasoning capabilities are in development. This aligns with industry trends toward more sophisticated AI systems capable of complex reasoning and problem-solving.
    The broader implications for the global AI landscape are significant. As Chinese companies like Alibaba demonstrate increasing capability in developing frontier AI models, the competitive dynamics of the industry are shifting. While Western companies have maintained technological leadership, the gap appears to be narrowing, with potential implications for national security and economic competitiveness.
    For technology professionals and industry observers, Alibaba’s trillion-parameter milestone represents more than just another model release—it signals China’s growing maturation as an AI powerhouse capable of competing at the highest levels of technological sophistication.

    How UAE’s new AI curriculum compares to education initiatives worldwide
    https://techwireasia.com/2025/09/how-uaes-new-ai-curriculum-compares-to-education-initiatives-worldwide/ (Mon, 08 Sep 2025)

  • UAE creates AI education curriculum. Mandatory classes for students as young as four.
  • China, Estonia, and others take varied approaches. Success will depend on implementation quality.
  • The United Arab Emirates has announced plans to introduce AI education into the curriculum of all government schools, making AI a mandatory subject from kindergarten through to grade 12, starting next academic year.

    The initiative, announced by Sheikh Mohammed bin Rashid, Vice President and Ruler of Dubai, aims to provide children as young as four with understanding of AI technologies, principles, and ethical considerations.

    “Our goal is to teach our children a deep understanding of AI from a technical perspective, while also fostering their awareness of the ethics of this new technology, enhancing their understanding of its data, algorithms, applications, risks, and its connection to society and life,” Sheikh Mohammed stated on May 4.

    Global context: A competitive landscape

    The UAE’s announcement comes amid a global rush by multiple nations to integrate AI education into their curricula, with each country adopting distinctive approaches based on their educational priorities and technological aspirations.

    In Beijing, China, primary and secondary school pupils will receive a minimum of eight hours of AI-focused lessons each academic year beginning this autumn. Children as young as six will learn how to engage with AI-powered tools, gain a foundational understanding of the technology, and explore ethical considerations surrounding its use.

    The Beijing Municipal Education Commission recently announced that schools may integrate these lessons into existing subjects like information technology and science or offer them as standalone courses.

    Its plan includes developing a multi-year AI curriculum, establishing a structured education and training system, introducing support initiatives, and promoting awareness of the program. China’s approach is particularly notable as it has already taken concrete steps toward implementation.

    Last December, China’s Ministry of Education selected 184 schools to trial AI-focused curricula, forming the basis for future expansion. According to Minister Huai Jinpeng, AI represents a crucial component of China’s educational strategy.

    Estonia, with its already strong digital education foundation, is taking a different path. Estonia’s government recently partnered with OpenAI to introduce AI-driven education tools to secondary school pupils and teachers. From September, students in Years 10 and 11 will gain access to customised AI learning platforms, with additional support for educators in lesson planning and administrative tasks.

    Comparing approaches

    While the UAE curriculum appears comprehensive on paper, with seven key areas spanning from foundational concepts to community engagement, it’s important to note that there has been no announcement yet on whether private schools, which are regulated separately, will be instructed to roll out AI classes.

    This contrasts with China’s approach which appears to be moving toward broader implementation beyond pilot programmes. The UAE’s plan is ambitious in its age range, starting with four-year-olds, which is younger than many other programmes globally.

    The curriculum is broken into three cycles, with tailored units for each age group. Four-year-olds will engage in visual and interactive activities to discover AI through play, while older students progress to comparing machines to humans, designing their own AI systems, and eventually learning command engineering with real-world scenarios.

    South Korea and Canada have taken different approaches, incorporating AI into existing school curricula, offering AI-powered learning materials and classroom tools for teachers rather than creating entirely new subject areas. The integration model may prove more practical for educational systems that face challenges in adding new subjects to already crowded curricula.

    Critical assessment

    What distinguishes the various approaches is not necessarily which country is “leading,” but rather how each nation’s AI education strategy aligns with its broader technological and economic goals.

    For the UAE, the emphasis appears to be on creating a framework that starts from the earliest years of education. Sarah Al Amiri, Minister of Education, described the integration of AI in education as a “national imperative” that “supports economic growth, fosters sustainable development and significantly enhances individual capabilities.”

    However, experts might question whether starting AI education at age four is developmentally appropriate, or if the UAE’s education system has the necessary infrastructure and teacher training to deliver such an ambitious curriculum effectively. The practical considerations will determine the program’s success beyond the ambitious announcement.

    China’s approach benefits from the country’s established technological infrastructure and its significant investments in AI research and development, potentially giving it advantages in implementation. The selection of 184 schools for trial programmes demonstrates a methodical approach focused on gathering data before broader implementation.

    In the UK, the approach has been more fragmented with at least one private school launching an experimental learning space where students use virtual reality headsets and AI platforms instead of traditional teaching methods. This reflects a more market-driven approach compared to the centralised government initiatives seen in the UAE, China, and Estonia.

    Balancing technology and pedagogy

    All these initiatives face similar challenges regarding the balance between technological innovation and sound pedagogical approaches. While AI can transform education by making learning more accessible and personalised, education authorities worldwide remain cautious.

    The United Nations has highlighted the importance of responsible AI implementation, recommending clear guidelines, inclusivity, and a focus on human-centred learning.

    The rush to implement AI education also raises questions about equity and access. Will these programmes exacerbate existing digital divides between well-resourced and under-resourced schools? Will all teachers receive adequate training to deliver these curricula effectively?

    Looking forward

    Rather than crowning any nation as the definitive leader in AI education, it’s more accurate to observe a growing global recognition of AI literacy as a component of future education. Each country’s approach reflects its unique educational philosophy, technological capabilities, and strategic priorities.

    The UAE’s ambitious programme, China’s methodical implementation, Estonia’s partnership model, and other nations’ varying approaches will provide data on the results of AI education strategies. The true test will be in implementation, teacher training, curriculum quality, and ultimately, student outcomes.

    CreateAI’s CEO on how AI is changing China’s animation and gaming
    https://techwireasia.com/2025/09/createai-ceo-on-how-ai-is-changing-china-animation-and-gaming/ (Mon, 08 Sep 2025)

  • Chinese gaming and animation find global audiences.
  • CreateAI’s CEO says AI and culture will shape China’s entertainment role.
  • China’s gaming and animation industries are no longer just local success stories. With titles like Black Myth: Wukong and Ne Zha 2 drawing attention from global audiences, the country has shown its ability to produce content that travels well beyond its borders. At the same time, rapid adoption of artificial intelligence is reshaping how stories are made, blending machine efficiency with human creativity.

    Tech Wire Asia spoke with Cheng Lu, President and CEO of CreateAI, about how Chinese studios are redefining entertainment, the role of AI in production, and what the future holds for immersive storytelling.

    China’s growing influence in global gaming and animation

    Cheng Lu, President and CEO of CreateAI

    Asked about China’s role in shaping global entertainment over the next decade, Cheng points to the momentum already visible in the market. “China’s gaming and animation industries are gaining global dominance. Blockbusters like Black Myth: Wukong and Ne Zha 2 are showcasing our ability to captivate audiences worldwide,” he said.

    That influence is backed by strong numbers. According to the “2025 H1 China Game Industry Report” from the Game Publishing Committee, the industry generated RMB 168 billion (USD 23 billion) in sales revenue in the first half of 2025 – a 14% year-on-year increase. Chinese self-developed games earned USD 9.5 billion overseas, underscoring their reach in a global market worth around USD 250 billion.

    Cheng sees the next decade as one where Chinese studios can lead in immersive, AI-enhanced storytelling. “Fueled by rapid AI adoption and investment in creative production, Chinese studios are redefining entertainment with innovative technology and cultural storytelling,” he explains. “Over the next decade, China is likely to lead in immersive, AI-enhanced storytelling and cross-platform experiences, shaping global trends by blending technology with cultural heritage.”

    Balancing AI efficiency and human artistry

    AI has become more visible in creative production pipelines, raising the question of how to balance machine efficiency with human artistry. Cheng notes that animation, especially 2D, has long been labour-intensive. Scriptwriting, keyframing, and colouring often slow production, particularly when there’s a shortage of skilled animators.

    Here, AI can make a difference without displacing artists. “AI can help studios produce higher-quality content more efficiently without sidelining human creators,” Cheng says. Tools like Animon.ai can turn simple images into anime videos, reducing repetitive tasks and giving artists more time to focus on storytelling.

    The idea of synergy – AI handling technical tasks while humans guide the creative heart – runs through Cheng’s perspective. He points to the company’s recent release as an example: “Our Animon.ai Studio Version, launched in July, exemplifies this approach by providing creators with tools like high-quality 2K visual generation and consistent keyframe editing, enabling both professionals and beginners to streamline workflows while retaining full creative control.”

    Making local stories travel

    Chinese titles often draw from local myths and traditions but still manage to resonate with audiences around the world. Cheng sees lessons here for other markets.

    He highlights CreateAI’s game Heroes of Jin Yong, which reflects the role of chivalry in Chinese wuxia culture. “Chivalry is something that resonates in the world, but manifests uniquely in China with wuxia culture,” he explains. The broader lesson, he says, is to take cultural dynamics rooted in one place and show how they reflect universal human experience.

    “All markets can consider cultural dynamics inherent to themselves that are shared in the human experience, and show how those come to fruition in their unique culture,” Cheng says. Done well, this approach lets people outside the culture connect to familiar values while sparking curiosity about a new one.

    The future of immersive entertainment

    Advances in motion capture, real-time rendering, and AI animation are already changing the entertainment industry. Cheng expects those shifts to accelerate. He outlines two big trends:

    1. The gamification of everything. “Top content is becoming more immersive and interactive,” Cheng says.
    2. Cross-media synergies. “Blending between what is a video game and TV show, and video game IP are being made into anime shows, and vice versa.”

    At CreateAI, the team is exploring both directions. With the Three-Body Problem franchise, they are working on an anime feature film and a AAA video game, based on the second book of the series. The goal is to launch them side by side. “This generates maximum consumer exposure and greatly enhances fan experience,” Cheng says.

    As AI gains the ability to generate characters, voices, and entire worlds, questions about authenticity and ethics are unavoidable. Cheng is clear on this point: “We believe in ‘safe AI’ and will do our part to promote the generative AI industry to grow according to high ethical values and local regulations.”

    Asked to compare how AI and gaming innovation differ across regions, Cheng avoids making sweeping claims. “AI and gaming are truly global industries facing global competition,” he says. Still, he notes that companies succeed by excelling in three areas: creating compelling intellectual property, applying new technology for efficiency or storytelling, and using effective distribution channels.

    Skills the next generation of China’s gaming creators will need

    Looking ahead, Cheng sees a need for a new mix of skills among animators, developers, and storytellers. “The next generation of animators, game developers, and storytellers will need to blend technical proficiency with creative adaptability to thrive in an AI-driven industry,” he says.

    A key mindset is to see AI as a tool for expansion rather than replacement. “Viewing AI as an opportunity to create more content, rather than a replacement, is key,” Cheng stresses. He also underlines the importance of cultural sensitivity and narrative innovation – qualities that make stories resonate beyond their home market.

    Opportunities in China’s gaming and animation with AI

    When asked what excites him most about the future, Cheng points to AI’s potential to open creation to more people. “At CreateAI, what excites us most is AI’s potential to democratise creation and deliver deeply personalised, immersive experiences,” he says.

    “Our tools allow any fan to become a creator, freeing new opportunities in the creative economy, while adaptive narratives create immersive experiences,” Cheng explains. He sees breakthroughs coming from cross-platform projects, using global IPs like Heroes of Jin Yong and The Three-Body Problem to build both anime and AAA games.

    That approach, he says, creates communities of fans and creators who help redefine how stories are consumed. “We have a full pipeline of projects currently, but we are always opportunistic to work with strong IP holders, using our technology and development know-how to innovate and bring immersive content to a global audience.”

    Closing thoughts

    From China’s expanding influence in the global gaming market to the role of AI in reshaping production, Cheng Lu’s perspective underscores how technology and cultural storytelling are increasingly intertwined. The future of entertainment, he suggests, won’t be about choosing between AI or human creativity but about finding new ways for them to work together.

    Apple’s thinnest iPhone may face delay in China over eSIM readiness
    https://techwireasia.com/2025/09/apples-thinnest-iphone-may-face-delay-in-china/ (Thu, 04 Sep 2025)

  • Apple’s iPhone 17 Air may face delays in China.
  • China’s eSIM system still in testing, limited carrier support.
  • Apple’s thinnest iPhone may take longer to reach Chinese consumers as the country’s eSIM system is not yet ready for nationwide launch. The setback could delay availability of the iPhone 17 Air, a model expected to debut this month alongside the rest of the iPhone 17 lineup, according to the South China Morning Post.

    At two Apple-authorised resellers in Foshan, Guangdong province, shop assistants said they had not received training on how to support eSIM. By comparison, Apple resellers in the European Union were asked to complete an eSIM training course by last week, MacRumors reported. The difference highlights how China may not be ready to match the iPhone’s global rollout schedule.

    On Chinese social media, concerns about delays are gaining traction. Tech influencer Fixed Focus Digital, who has 2.3 million Weibo followers, said Wednesday that eSIM services in mainland China were “unlikely” to go live this month. He pointed out that mass production of the slimmer model, thought to be called the iPhone 17 Air, began later than the standard iPhone 17, the iPhone 17 Pro, and the iPhone 17 Pro Max. He also downplayed the impact, saying that later availability “isn’t problematic.”

    Apple is still expected to reveal the iPhone 17 series on September 9, an event that typically draws global attention and sets the tone for the company’s holiday season sales.

    China carriers send mixed signals on iPhone eSIM rollout

    Hints of preparation have emerged from China Unicom, one of the country’s three state-owned telecom operators. Weibo user ChillsYaya wrote last week that the carrier had told staff to offer eSIM support for Apple devices. But it was unclear if the directive covered smartphones, since China Unicom already supports eSIM for iPads and Apple Watches. Neither Apple nor China Unicom responded to requests for comment on Wednesday.

    In July, China Unicom updated its “5G AI terminal white paper” with eSIM phone specifications, a move seen as preparation for future iPhone support in China. Around the same time, users discovered a beta webpage for eSIM activation, though it only pointed customers to offline stores and gave no clear details on locations.

    China’s three major carriers – China Unicom, China Telecom, and China Mobile – have been cautious about smartphone eSIM services. According to a July report from the state-backed China Business Journal, the operators have focused their eSIM development on wearables for now, while smartphone integration remains on hold.

    Broader competition on thin designs

    The eSIM issue in China extends beyond Apple’s iPhone. Rival smartphone makers are also working on thinner devices that rely on eSIM technology. Xiaomi, for example, lists 16 smartphones with eSIM support in overseas markets, suggesting that competition around slimmer hardware is already under way.

    Apple’s September event will not just be about this year’s lineup. The company is beginning a three-year refresh cycle for its most important product. Reports suggest that next year may bring Apple’s first foldable iPhone, a move that would follow similar launches by Samsung Electronics and Google.

    Siri’s next upgrade

    Alongside hardware, Apple is also preparing major software changes.

    Bloomberg’s Mark Gurman reported that Apple is building an AI-powered search feature for Siri, internally known as “World Knowledge Answers,” a change that will affect how future iPhones function in China and elsewhere. The feature would pull information from the web and deliver AI-generated summaries, presented with text, photos, videos, and location details.

    To make this work, Apple may have to rely on third-party services. Google is currently the front-runner to provide an AI model – likely from its Gemini family – that would run on Apple’s servers. The two companies reached a “formal agreement” this week for Apple to test Google’s model for Siri summaries, according to the publication.

    The planned Siri upgrade is part of Apple’s broader but delayed effort to expand the voice assistant’s functions. The new system is expected to rely on three parts: a planner that interprets user prompts, a search engine that can scan personal data or the internet, and a ‘summariser’ that packages the results in a usable format.
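    A toy sketch of that three-stage flow is shown below. None of the function names, routing rules, or data sources are Apple's; they simply illustrate how a planner, a search step, and a summariser could hand off to one another.

```python
# Toy pipeline: planner -> search -> summariser. Hypothetical illustration only;
# this is not Apple's implementation or API.
from typing import Callable, Dict, List

def plan(prompt: str) -> Dict[str, str]:
    """Interpret the user prompt into a structured query and pick a data source."""
    needs_web = "latest" in prompt.lower() or "near me" in prompt.lower()
    return {"query": prompt, "source": "web" if needs_web else "personal"}

def search(planned: Dict[str, str],
           web_search: Callable[[str], List[str]],
           personal_search: Callable[[str], List[str]]) -> List[str]:
    """Route the query to the web or to on-device personal data."""
    fn = web_search if planned["source"] == "web" else personal_search
    return fn(planned["query"])

def summarise(results: List[str]) -> str:
    """Package results into a short answer (a real system would call an LLM here)."""
    return " / ".join(results[:3]) if results else "No results found."

if __name__ == "__main__":
    fake_web = lambda q: [f"web result for '{q}' #{i}" for i in range(3)]
    fake_personal = lambda q: [f"calendar entry matching '{q}'"]

    planned = plan("latest coffee shops near me")
    found = search(planned, fake_web, fake_personal)
    print(summarise(found))
```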

    Apple is still considering other partners. While its own AI models are expected to handle personal data searches, the company is evaluating Google’s Gemini and Anthropic’s Claude for tasks like planning.

    What’s next for the iPhone in China

    Even with the iPhone 17 launch scheduled for next week, the AI-enhanced Siri is not expected to arrive immediately. Bloomberg reported the new features will roll out with iOS 26.4, which could be released as early as March next year.

    For Apple, the timing matters. The iPhone remains its most important product in China, where competition from local brands is intense and regulatory conditions are complex. A delay in eSIM readiness may slow adoption of its thinnest model to date, but the company’s long-term plans – including new form factors and AI-powered software – show that it is preparing for more than just one launch cycle.

    OpenAI weighs building one of India’s largest AI data centres
    https://techwireasia.com/2025/09/openai-weighs-building-one-of-indias-largest-ai-data-centres/ (Tue, 02 Sep 2025)

  • OpenAI considers a 1-gigawatt AI data centre in India, part of the Stargate programme.
  • Project would extend AI infrastructure beyond the US.
  • OpenAI is considering a plan to set up a massive new data centre in India, a project that could become one of its largest investments in Asia under the US government’s Stargate initiative.

    As reported by Bloomberg, people familiar with the matter say the ChatGPT-maker is in talks with local partners about building a facility with at least a gigawatt of capacity. If completed, it would rank among the biggest in India, where Microsoft, Google, and billionaire Mukesh Ambani have already poured billions into similar sites to support cloud and AI growth.

    The exact location has not been finalised, and the timeline remains uncertain. Some sources suggest CEO Sam Altman may announce the project during an upcoming visit to India, though those plans are still shifting.

    OpenAI’s possible expansion in India comes as trade friction between Washington and New Delhi deepens. US President Donald Trump recently imposed a 50% tariff on Indian goods, saying it was a response to India’s trade barriers and purchases of Russian oil. The move disrupted years of US diplomacy aimed at strengthening ties with the South Asian nation. OpenAI declined to comment on whether its India plans might be affected by these tensions.

    Building AI infrastructure worldwide

    The company has embarked on an aggressive push to expand its infrastructure both in the US and overseas. The signature $500 billion Stargate project in the US is being developed with backing from SoftBank and Oracle. OpenAI recently raised its US capacity commitment by 4.5 gigawatts – a figure that rivals the energy demand of several million households. Trump has publicly praised the project, framing it as a win for US technology.

    Beyond its home market, OpenAI has also launched “OpenAI for Countries,” a global programme aimed at building AI infrastructure alongside governments that share what the US terms ‘democratic values’. OpenAI has pitched the plan as a way for the US and its partners to guide AI development in a way that balances China’s growing influence.

    More than 30 countries have expressed interest in joining, with OpenAI pursuing about 10 partnerships to date. Confirmed projects include a facility in Norway that could grow to provide 520 megawatts of capacity, and a five-gigawatt complex in Abu Dhabi. In the latter case, OpenAI will use about 1 gigawatt of computing power itself, leaving the rest available for other customers.

    Security questions abroad

    The Abu Dhabi venture has triggered debate in Washington. Some officials argue such projects are necessary to counter China’s global AI ambitions, while others worry about security risks tied to shipping thousands of Nvidia chips to countries with historic links to Beijing.

    Since 2023, the US has required approval for any AI chip exports to the UAE. India, however, is not subject to those restrictions. The Trump administration recently dropped a plan that would have expanded AI chip export controls worldwide, leaving India in a stronger position to attract high-performance computing investments.

    Why India is key for OpenAI’s AI strategy

    For OpenAI, India offers more than just market size. A large-scale data centre in the country could allow it to train and deploy models locally, easing concerns about sending users’ data abroad. It would also align with New Delhi’s $1.2 billion IndiaAI Mission, which aims to develop large and small language models tailored to Indian languages and contexts. OpenAI has already pledged support for the initiative.

    India is also OpenAI’s second-largest market by users. To strengthen its presence, the company plans to open an office in New Delhi, expand its hiring, and has launched a $5 monthly subscription plan designed for local customers. The moves suggest India is becoming central to OpenAI’s AI strategy.

    If the data centre plan goes ahead, it would mark one of the company’s most ambitious steps yet in its global buildout – and a sign of how AI is becoming deeply tied to trade policy, energy infrastructure, and geopolitics.

    Microsoft debuts its first in-house AI models
    https://techwireasia.com/2025/08/microsoft-debuts-its-first-in-house-ai-models/ (Fri, 29 Aug 2025)

  • Microsoft’s first in-house AI models hint at independence from OpenAI.
  • The two remain partners but are also turning into rivals.
  • Microsoft has introduced its first in-house AI models, a move that could reshape its position in the AI race. The company rolled out MAI-Voice-1, a speech model, and MAI-1-preview, a text-based model it calls a glimpse of what’s coming next inside Copilot.

    The MAI-Voice-1 model is built for speed. According to Microsoft, it can generate a full minute of audio in less than a second using just a single GPU. The model is already in use inside some of the company’s tools. For example, Copilot Daily uses it to deliver short news summaries through an AI voice host. It also helps produce podcast-style conversations that break down complex topics into easier explanations.

    The second release, MAI-1-preview, is designed for text tasks. Microsoft trained the model on roughly 15,000 Nvidia H100 GPUs, giving it the scale to handle instruction-following and natural Q&A. Users can already try it on Copilot Labs, where they can test its ability to respond to everyday queries. Microsoft says the model will soon support text-based use cases inside its Copilot assistant.

    Competition with OpenAI

    These launches come while Microsoft is still heavily tied to OpenAI, the maker of ChatGPT. Microsoft has invested more than $13 billion into the startup, which now has a valuation of about $500 billion. OpenAI continues to rely on Microsoft’s cloud infrastructure to run its own models, while Microsoft uses OpenAI’s systems inside Bing, Windows, and other products.

    At the same time, the two companies are drifting into competition. Last year, Microsoft added OpenAI to the list of rivals it names in its annual report, alongside Amazon, Apple, Google, and Meta. OpenAI has also been spreading its infrastructure needs across other providers such as CoreWeave, Google, and Oracle, as demand for ChatGPT climbs. The chatbot now draws about 700 million weekly users.

    Early results and rankings

    Performance comparisons show Microsoft’s work still trails some of its peers. On Thursday, the new MAI-1-preview ranked 13th for text workloads on LMArena, behind models from Anthropic, DeepSeek, Google, Mistral, OpenAI, and Elon Musk’s xAI. While not at the top, Microsoft has positioned MAI-1-preview as its first foundation model built entirely in-house.

    “MAI-1-preview represents our first foundation model trained end to end in house,” Mustafa Suleyman, Microsoft AI chief, wrote on X.

    A consumer focus

    Suleyman has been clear about the group’s direction. In an interview last year, he explained that Microsoft’s AI models are aimed at consumer use rather than the enterprise market. “My logic is that we have to create something that works extremely well for the consumer and really optimise for our use case,” he said. He pointed to Microsoft’s access to large amounts of consumer data—such as ad performance and telemetry—as a strength in training models for everyday companions.

    The company has also said that it does not plan to rely on one general-purpose model. Instead, it sees potential in offering multiple specialised models designed for different types of requests. “We believe that orchestrating a range of specialised models serving different user intents and use cases will unlock immense value,” Microsoft AI wrote in a blog post.

    Building an AI division

    MAI-1-preview builds on earlier small-scale models released under the Phi name. But this marks the first time Microsoft has trained a foundation model of this size from start to finish. The effort reflects how the company has been building out its AI group since hiring Suleyman and many of his former colleagues from the startup Inflection.

    Suleyman previously co-founded DeepMind, the research lab Google bought in 2014. In the past year, Microsoft has brought on about two dozen former DeepMind researchers to expand its internal team. The hires show how the company is drawing on talent with long experience in AI development to accelerate its own projects.

    For now, Microsoft is positioning its new models as additions to its Copilot ecosystem while it continues to rely on OpenAI for many core features. But the release of MAI-Voice-1 and MAI-1-preview signals a step toward more independence in model development. Analysts say it could also set up a new phase of competition between Microsoft and the company it helped make into an AI giant.

    The technical edge: How Verizon powers innovation in APAC
    https://techwireasia.com/2025/08/how-verizon-powers-innovation-in-apac/ (Fri, 29 Aug 2025)

    In the age of AI and automation, networks are not just infrastructure, but an enabler of innovation. As businesses accelerate digital transformation, the need for agile, secure, and intelligent networks has never been greater. Verizon is meeting this demand with next-generation network solutions designed for demanding technologies, supporting AI deployment with scalable infrastructure, security, and edge capabilities.

    Why legacy networks are holding innovation back

    Enterprise IT leaders face a complex and evolving digital landscape. Traditional networks can be rigid and fragmented, and struggle to support the real-time demands of AI, IoT, and data analytics. Legacy systems can delay the adoption of emerging technologies, creating bottlenecks that stifle innovation. When data flows across multiple systems, borders, and cloud environments without robust security controls, it increases exposure to security and compliance risks. Enterprises operating in the APAC region grapple with a lack of harmonisation in international regulatory requirements. The fragmented landscape makes it important to have a trusted security partner to help navigate compliance obligations without slowing innovation.

    The Verizon Deploying AI at Scale research report, produced in partnership with S&P Global, shows that early AI adopters often underestimated infrastructure demands, leading to delays in scaling initiatives. In a region where regulatory complexity compounds infrastructure challenges, organisations need a modern, flexible network approach to stay competitive; one that lets them innovate quickly and at scale while maintaining resilience, performance, and security.

    Verizon’s network solutions: Built for AI, edge and automation

    Verizon’s enterprise-grade network capabilities are built for the digital-first world. Offering a flexible architecture that supports hybrid environments, Verizon’s network solutions let businesses modernise infrastructure without disrupting operations. Whether integrating legacy systems or deploying next-gen applications, Verizon’s modular design enables phased upgrades that align with business priorities. Deploying AI at Scale finds modular networks are key as AI projects shift from pilots to multi-environment deployments requiring data movement between on-premises, edge, and cloud systems.

    Security needs to be embedded from core to edge. Verizon’s networks are built on zero-trust architecture, with features like data loss prevention (DLP) and intrusion detection systems (IDS) providing protection. Deploying AI at Scale [PDF] highlights security gaps in AI setups, where models like RAG may expose more data than intended. Control over network routing and data sovereignty strengthens compliance with regional regulations – an essential feature for APAC enterprises navigating the different legal frameworks.

    Performance and reliability are also important. Verizon uses real-time visibility and AI-driven analytics to optimise network performance and enable predictive maintenance. Its high-capacity private IP backbones and fibre connections are capable of handling hundreds of GB per second, and ensure AI workloads run efficiently in distributed environments, tackling the infrastructure concerns flagged in the study.

    Scalability for real-time technologies

    As AI, machine learning, and automation become central to business operations, networks must scale to support these technologies. Verizon’s infrastructure is designed to handle high-throughput, low-latency workloads, including video analytics, connected devices, and industrial automation. AI projects often fail due to cost issues, the Deploying AI at Scale research report notes, emphasising the need for scalable, efficient networks.

    Edge computing is another area where Verizon provides value to the modern enterprise network. By bringing compute power closer to where data is generated, edge solutions reduce latency and enable faster decision-making. This is particularly valuable in sectors like manufacturing, logistics, and healthcare, where real-time insights can drive operational efficiency and innovation. With only 2% of enterprises using edge AI, the report points to strong potential in sectors like healthcare and manufacturing, areas Verizon supports with its real-time, scalable solutions.

    Verizon also provides private 5G solutions that give businesses the flexibility to deploy secure, resilient networks and support applications. The private 5G and edge solutions are designed to move large AI workloads locally, enabling rapid model inferencing and retraining without routing traffic through distant cloud environments.

    Turning infrastructure into innovation

    For enterprises in APAC, Verizon’s solutions help innovation by removing the need to manage disparate systems, so organisations can focus on delivering value to customers. Businesses can launch AI and automation initiatives, confident that Verizon’s network solutions will accommodate the workloads. They can deploy edge computing to enable smarter operations and integrate cloud services securely and efficiently. Verizon’s hybrid-ready architecture simplifies AI workload orchestration in the cloud and on-prem by managing interconnectivity, bandwidth, and latency constraints, identified as key risks in the study.

    Verizon’s approach to scaling AI is grounded in strategic, ethical, and sustainable practices. The company outlines three key steps to enterprise AI success: setting high strategic standards, implementing a responsible AI framework, and embedding human oversight into AI systems. The principles align with Deploying AI at Scale’s call for better communication between leadership and implementers, to reduce risks and enhance project success. They also help businesses build trust, reduce risk, and unlock long-term value from AI investments. For more information, visit 3 steps to scale enterprise AI.

    Why APAC enterprises choose Verizon

    Verizon’s strength lies in its ability to combine global scale with deep regional experience. With infrastructure and operations in Australia, Singapore, Japan and beyond, Verizon is well-positioned to support organisations navigating the region’s complex and fast-evolving digital landscape. Deploying AI at Scale highlights the fact that APAC’s diverse regulatory environment can present challenges for businesses seeking to implement and scale AI solutions. In such an environment, a trusted partner with proven experience in markets is essential to help enterprises stay compliant and competitive.

    “Our customers in APAC need networks that do more than connect – they need networks that are adaptive, secure and intelligent enough to support evolving AI, edge and hybrid environments,” said Rob Le Busque, Regional Vice President, Verizon Asia Pacific. “We work closely with enterprises in the region to deliver future-ready infrastructure that aligns with their digital transformation goals.”

    Verizon’s expertise spans highly-regulated sectors like finance, public services, and healthcare, where compliance and security are of paramount importance. It offers co-managed services and support to extend internal IT teams, along with strategic partnerships that accelerate implementation and reduce risk. Verizon’s services support edge AI by providing robust network segmentation, monitoring, and routing strategies that help enterprises deploy and maintain AI workloads in geographically-distributed environments.

    Security and compliance are built into every layer of Verizon’s architecture. From governance controls to region-specific data handling, Verizon helps businesses meet evolving APAC regulations with confidence. Its security framework addresses new AI threats like data leaks and prompt attacks – which are part of the expanded threat model outlined in Deploying AI at Scale.

    The network behind innovation

    In today’s innovation economy, your network isn’t just a utility – it’s your advantage. Verizon’s next-generation solutions let APAC businesses harness AI and edge computing, providing the flexibility, security, and intelligence required to compete and thrive.

    To discover how Verizon can help your business remove constraints and realise digital opportunities, visit https://www.verizon.com/business/en-au/solutions/adaptive-networks/

    The post The technical edge: How Verizon powers innovation in APAC appeared first on TechWire Asia.

    Nvidia faces China roadblocks despite soaring AI demand https://techwireasia.com/2025/08/nvidia-faces-china-roadblocks-despite-soaring-ai-demand/ Thu, 28 Aug 2025 10:00:15 +0000 https://techwireasia.com/?p=243408 Nvidia shares fell 3.2% after it left China sales out of its forecast amid regulatory doubts. A US$54B outlook wasn’t enough to satisfy investors expecting stronger growth. Nvidia shares slipped on Wednesday as uncertainty grew around its business in China, caught in the middle of the trade fight between Washington and Beijing. CEO Jensen Huang […]

  • Nvidia shares fell 3.2% after it left China sales out of its forecast amid regulatory doubts.
  • A US$54B outlook wasn’t enough to satisfy investors expecting stronger growth.
    Nvidia shares slipped on Wednesday as uncertainty grew around its business in China, caught in the middle of the trade fight between Washington and Beijing.

    CEO Jensen Huang said he expects approval to restart sales of Nvidia chips in China after striking a deal with US President Donald Trump to pay commissions to the government. But with no formal rules yet, and doubts about whether Chinese regulators might discourage purchases, Nvidia left potential China sales out of its forecast for the current quarter.

    That decision led to an outlook that looked steady but less than what investors have come to expect. Nvidia projected revenue of about US$54 billion for the third quarter, just above Wall Street estimates of US$53.14 billion, according to LSEG data. The forecast was enough to beat analyst targets but fell short of the “blowout” growth the market has grown used to, pushing the stock down 3.2 per cent in after-hours trading. That drop cut about US$110 billion from Nvidia’s US$4.4 trillion valuation.

    As reported by Reuters, Huang downplayed concerns that the AI spending surge could be cooling, telling investors the opportunity could expand into a multi-trillion-dollar market over the next five years. “A new industrial revolution has started. The AI race is on,” he said, adding that Nvidia sees $3 trillion to $4 trillion in AI infrastructure spending by the end of the decade.

    “Nvidia’s biggest bottleneck isn’t silicon, it’s diplomacy,” said Michael Ashley Schulman, chief investment officer at Running Point Capital. He added the company’s growth is “still impressive, but not as exponential.”

    Second-quarter revenue reached US$46.74 billion, above the US$46.06 billion analysts expected. But the data centre segment, a key driver of Nvidia’s growth, missed some estimates, and analysts suggested that big cloud providers may be spending more carefully. Nvidia said around half of its US$41 billion in data centre revenue came from major cloud companies; the segment’s total came in slightly below Visible Alpha’s estimate of US$41.42 billion.

    The company’s forecast also assumed no shipments of its H20 chips to China, even though some licences to sell them have already been granted. Nvidia said that if geopolitical hurdles ease and orders come in, H20 sales to China could add between US$2 billion and US$5 billion in the third quarter.

    “That is a big question mark to watch,” said Ben Bajarin, CEO of consulting firm Creative Strategies.

    Analysts also pointed out that Nvidia’s share price, which has risen by about one-third this year, may have created lofty expectations that are hard to meet. “The mega caps are the ones propelling a lot of the capex that Nvidia is benefiting from. But obviously Nvidia still is growing, is able to sell,” said Matt Orton of Raymond James Investment Management, who argued the durability of the AI trade remains intact.

    Even so, demand for Nvidia’s chips remains strong. Businesses racing to build generative AI systems continue to buy the company’s processors, which are designed to handle huge amounts of data quickly. CFO Colette Kress said Nvidia’s “sovereign AI” push — aimed at selling AI hardware and software to governments, including outside China — is on track to bring in US$20 billion this year. She added that cloud and enterprise customers could spend as much as US$600 billion on AI in 2025 alone, with total infrastructure spending tied to AI reaching US$3 trillion to US$4 trillion by the end of the decade.

    Huang said much of this growth will come from hyperscalers like Microsoft and Amazon, which are expected to spend about US$600 billion on data centres this year. He added that for a US$60 billion data centre, Nvidia can capture roughly US$35 billion in revenue.
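    A quick back-of-the-envelope calculation, using only the figures quoted above, shows what that capture rate implies. It is an illustration of the arithmetic rather than guidance, and it assumes the same share would apply to all hyperscaler data-centre spending, which will not hold in practice.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
# Illustration, not guidance: it assumes the same capture rate applies to all
# hyperscaler data-centre spending.
datacentre_cost = 60e9   # Huang's example: a US$60 billion data centre
nvidia_capture  = 35e9   # revenue Nvidia says it can capture from such a build
capture_rate = nvidia_capture / datacentre_cost
print(f"Implied capture rate: {capture_rate:.0%}")            # ~58%

hyperscaler_spend = 600e9  # expected hyperscaler data-centre spend this year
print(f"Illustrative ceiling: US${capture_rate * hyperscaler_spend / 1e9:.0f}B")  # ~US$350B
```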

    Big Tech firms including Meta and Microsoft are spending heavily on AI, much of it flowing toward Nvidia chips. For the current quarter, Nvidia forecast adjusted gross margins of 73.5 per cent, a touch above analyst estimates of 73.3 per cent.

    “The data centre results, while massive, showed hints that hyperscaler spending could tighten at the margins if near-term returns from AI applications remain difficult to quantify,” said Jacob Bourne, an analyst at eMarketer.

    Shares of rival Advanced Micro Devices, which is developing competing AI chips, also fell 1.4 per cent after Nvidia’s results.

    AI enthusiasm, with Nvidia at the centre, has been one of the main drivers of the S&P 500’s rally over the past two years. But the company’s latest report drew a more muted response.

    “This is the smallest reaction to an earnings report in Nvidia’s AI incarnation,” said Jake Behan, head of capital markets at Direxion in New York. “While it may not have been a blowout, it’s not a miss.”

    Outside China, Nvidia is still seeing strong demand for its H20 chips. Kress said one customer alone bought US$650 million worth during the second quarter.

    Huang also said the company’s high-end Blackwell chips are already largely booked through 2026, while its older Hopper processors remain in demand. “The buzz is: everything sold out,” Huang told analysts, describing the pace of orders.

    The company also said its board had approved an additional US$60 billion in share buybacks.

    The post Nvidia faces China roadblocks despite soaring AI demand appeared first on TechWire Asia.

    Bangkok rises as Southeast Asia’s new data centre hub https://techwireasia.com/2025/08/bangkok-rises-as-southeast-asias-new-data-centre-hub/ Thu, 28 Aug 2025 08:00:23 +0000 https://techwireasia.com/?p=243385 Bangkok is Southeast Asia’s second-largest data centre market. Global operators shifting to large campuses in Bangkok and the EEC. Bangkok has quickly become one of Southeast Asia’s largest data centre hubs, according to a report from DC Byte. With IT capacity now above 2.5GW, it sits just behind Johor in regional capacity ranking. Growth is […]

  • Bangkok is Southeast Asia’s second-largest data centre market.
  • Global operators shifting to large campuses in Bangkok and the EEC.
    Bangkok has quickly become one of Southeast Asia’s largest data centre hubs, according to a report from DC Byte. With IT capacity now above 2.5GW, it sits just behind Johor in regional capacity ranking. Growth is powered by specific advantages: available land, steady power supply, and a location that connects East and West. These factors are drawing in major operators and global cloud companies.

    Until recently, Thailand’s data centre market was made up of smaller, retail-style sites. Over the past two years, that has shifted toward full-scale campuses and large, multi-building projects. New development is also spreading outside central Bangkok into the Eastern Economic Corridor, especially in Chonburi, which is emerging as a strategic zone for hyperscale builds.

    Global players and big investments

    Amazon Web Services, Google, Microsoft, and Chinese providers including Huawei, Alibaba, and Tencent have committed to major projects in Thailand. Examples include Bytedance’s $8.8 billion investment, AWS’s $5 billion expansion, Google’s $1 billion site in Chonburi, and Microsoft’s first Thailand cloud region.

    Operators like STT GDC, Equinix, DAMAC Digital, and Evolution Data Centres are also expanding. Between 2019 and 2024, Bangkok’s total IT capacity grew more than twenty-fold, with pipeline capacity rising at an average of about 40% each year.
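    As a rough sanity check, the compound annual growth rate implied by a twenty-fold rise over five years can be worked out from the figures above; the short calculation below is an illustration based only on the numbers reported here.

```python
# Quick sanity check on the reported growth, using only the numbers above.
growth_multiple = 20        # "more than twenty-fold" total IT capacity growth
years = 2024 - 2019         # five-year span

implied_cagr = growth_multiple ** (1 / years) - 1
print(f"Implied annual growth in total capacity: {implied_cagr:.0%}")  # ~82% a year

pipeline_cagr = 0.40        # pipeline capacity rising ~40% each year
print(f"Five years at 40% compounds to about {(1 + pipeline_cagr) ** years:.1f}x")  # ~5.4x
```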

    At present, Bangkok has around 120MW of live capacity, with more expected before year-end as sites under construction come online. The next two years are expected to see another surge, with large projects from Google, DayOne, and Edgnex Data Centres by DAMAC among those set to launch.

    Eastern Economic Corridor becomes Thailand’s data centre hub

    The Eastern Economic Corridor (EEC) is fast becoming Thailand’s centre for large-scale deployments. Chonburi and Rayong are attractive for hyperscalers because of cheaper land, existing infrastructure, and easy access to ports and industrial zones.

    Chonburi is now home to some of the country’s biggest upcoming builds. Projects like DayOne’s 120MW Tech Park and Bridge Data Centres’ planned 200MW campus will increase local capacity. Demand is so strong that both local and global operators are already locking in space before builds are complete.

    One of the biggest announcements came in 2024, when Doma Infrastructure Group revealed plans for 1.5GW of green data centre campuses in the EEC. That accounted for most of the 1.7GW increase in early-stage capacity during the year, marking the shift from smaller facilities to large, multi-site campuses.

    Cloud and AI power Thailand’s data centre demand

    Cloud services remain the main source of growth, making up about 38% of Thailand’s total capacity in early 2025. AI is quickly catching up, rising from 20% of demand in 2024 to 28% just a year later. Growth is being fuelled by AI training, large language models, and other data-heavy applications.

    To meet the need, operators are working with partners like Siam AI Corporation, an NVIDIA cloud partner, to design facilities built for high-density AI workloads. The efforts could help Thailand develop into both a regional cloud hub and a centre for AI innovation.

    Public and private efforts align

    Government support is also shaping the market. Partnerships with NVIDIA aim to build sovereign AI capabilities, while major operators like True IDC, Edgnex Data Centres by DAMAC, and GSA Data Center Company are teaming up with Siam AI Corporation on AI-focused cloud services.

    The mix of public and private investment is establishing the groundwork for next-generation infrastructure. With rising demand for AI and cloud services, Thailand is positioning itself to meet regional needs and strengthen its role in the wider Southeast Asian market.

    The post Bangkok rises as Southeast Asia’s new data centre hub appeared first on TechWire Asia.

    The Chinese chip company that’s making Nvidia sweat: Inside Cambricon’s meteoric rise https://techwireasia.com/2025/08/cambricon-technologies-record-profit-china-ai-chip-revolution/ Wed, 27 Aug 2025 03:00:11 +0000 https://techwireasia.com/?p=243392 Cambricon Technologies posted a record 1.03 billion yuan profit in 1H25, marking a dramatic turnaround from previous losses as China’s domestic AI chip demand soars The Chinese AI chip giant’s stock hit 1,384.93 yuan on Monday, up 11.6%, bringing its market value close to overtaking luxury liquor maker Kweichow Moutai as China’s most expensive stock […]

  • Cambricon Technologies posted a record 1.03 billion yuan profit in 1H25, marking a dramatic turnaround from previous losses as China’s domestic AI chip demand soars
  • The Chinese AI chip giant’s stock hit 1,384.93 yuan on Monday, up 11.6%, bringing its market value close to overtaking luxury liquor maker Kweichow Moutai as China’s most expensive stock
    Just three years ago, Cambricon Technologies was bleeding money, blacklisted by Washington, and fighting for survival in the shadow of Nvidia’s dominance. Today, the Chinese AI chipmaker has done something that seemed impossible: it turned a stunning US$144 million profit while US sanctions intended to cripple China’s tech ambitions appear to be backfiring spectacularly.

    Record-breaking financial performance

    The numbers tell the story of a company—and a country—defying expectations. Cambricon posted a 1.03 billion yuan profit versus a year-earlier loss of 533 million yuan, driven by a staggering 44-fold surge in revenue to 2.9 billion yuan for the first half of 2025. More than just a corporate comeback, this represents China’s most concrete proof yet that its domestic AI ecosystem can not only survive American restrictions—it can thrive because of them.

    Cambricon Technologies, which competes directly with Huawei Technologies in providing AI accelerators for developing and hosting AI models, has achieved what many analysts thought was impossible under US restrictions. The company’s earnings per share (EPS) of 2.48 yuan and a sustained profitability trajectory, which began with its first-ever quarterly profit in late 2024, signal a fundamental shift in China’s semiconductor capabilities.

    The transformation is all the more remarkable given the company’s recent struggles. Just months ago, Cambricon was grappling with years of persistent losses, while US sanctions severely limited its access to advanced manufacturing processes and cutting-edge technologies.

    Market euphoria and valuation surge

    The market’s response to Cambricon Technologies has been nothing short of euphoric. The stock closed up 11.4% on Monday at 1,384.93 yuan (US$191.07) per share, just shy of the fiery liquor-maker Kweichow Moutai, which closed at 1,490.33 yuan, and was trading at 1,329.00 yuan as of Aug 26, 2025.

    Investors are paying an extraordinary premium for a piece of China’s AI future—Cambricon’s price-to-earnings ratio has ballooned to 4,463 times, making Kweichow Moutai’s 20 times multiple look conservative by comparison. The rally shows no signs of slowing: after rising 383% in 2024 to become China’s best-performing stock, shares have more than doubled since mid-July, delivering a spectacular 562% return since September.

    The DeepSeek effect and China’s AI renaissance

    Behind Cambricon’s meteoric rise lies a game-changing development: DeepSeek, the Chinese AI startup that shocked Silicon Valley with its cost-effective approach to artificial intelligence. When DeepSeek revealed it could achieve a “theoretical” profit margin of 545%—more than five times its costs—it didn’t just demonstrate Chinese AI prowess, it created a gold rush for the domestic chips powering these breakthroughs.

    The ripple effect was immediate. As DeepSeek optimised its models for the “next generation of domestic chips,” investors suddenly grasped the full potential of China’s homegrown AI ecosystem. Beijing’s push for technological self-reliance wasn’t just about politics anymore—it was about profits, and companies like Cambricon were perfectly positioned to capitalise.

    When sanctions backfire

    The irony is impossible to ignore: the very restrictions designed to kneecap China’s AI ambitions have become Cambricon’s greatest competitive advantage. When Washington added the company to its Entity List in December 2022, cutting off access to advanced US technologies, it seemed like a death sentence. Instead, it became a business opportunity.

    Nvidia’s China-specific H20 chips—already a watered-down version designed to comply with export controls—were further restricted under the Trump administration’s latest regulations. The result? Chinese companies had no choice but to look inward, and Cambricon was ready with domestic alternatives. 

    In September 2024, when Beijing ramped up pressure on local firms to ditch American processors, Cambricon’s shares hit the 20% daily trading limit.

    The technical reality check

    But can Cambricon actually compete with Nvidia’s technological prowess? The company, founded by brothers Chen Yunji and Chen Tianshi from China’s elite “genius youth class,” is betting its future on the Siyuan 690 processor—a chip designed to rival Nvidia’s H100. 

    While specifications remain closely guarded, the China Academy of Information and Communications Technology has validated Cambricon as one of eight suppliers with DeepSeek-compatible hardware.

    The real test isn’t just raw performance, but ecosystem compatibility. Chinese AI companies increasingly need chips optimised for their specific algorithms and cost structures—something Nvidia’s export-restricted chips struggle to deliver.

    The 5 billion yuan bet

    Cambricon isn’t just riding the wave—it’s doubling down with a massive 5 billion yuan capital raise to fund large language model chip development. The allocation tells the story: 2.9 billion yuan for LLM chips, 1.6 billion yuan for software, and the rest for working capital. This isn’t incremental improvement; it’s an attempt to leapfrog generations of chip development.
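    Using only the figures above, the remainder earmarked for working capital can be worked out as follows; the residual is inferred from the stated allocation rather than separately disclosed.

```python
# The stated split of the planned raise; the working-capital figure is the
# inferred remainder, not a number disclosed in the allocation.
total_raise_bn_yuan = 5.0
llm_chips_bn_yuan   = 2.9
software_bn_yuan    = 1.6
working_capital = total_raise_bn_yuan - llm_chips_bn_yuan - software_bn_yuan
print(f"Remainder for working capital: ~{working_capital:.1f} billion yuan")  # ~0.5
```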

    Goldman Sachs’ bullish 1,835 yuan price target (roughly 40% above the Aug 26 price) reflects growing confidence that Chinese cloud giants like Tencent will fuel sustained demand. But with a 4,463x P/E ratio, there’s little room for execution missteps.

    The uncomfortable truth

    The sustainability question looms large, and it’s not just about valuation bubbles. Cambricon’s explosive growth masks dangerous client concentration risks—a few large customers departing could crater revenues overnight. More fundamentally, while China can design competitive AI chips, manufacturing them at scale without access to cutting-edge Western equipment remains an open question.

    The company likely relies on domestic foundries like SMIC or Hua Hong Semiconductor, which lag TSMC by several process generations. Physics doesn’t care about geopolitics, and advanced AI workloads demand the most efficient chips available.

    The new semiconductor reality

    Cambricon’s remarkable turnaround isn’t just a corporate success story—it’s a preview of the technology cold war’s next phase. For decades, the semiconductor industry thrived on global integration: American designs, Taiwanese manufacturing, Chinese assembly. That era is ending.

    What we’re witnessing isn’t just market fragmentation, but the birth of parallel technological universes. Chinese AI companies will increasingly optimise for domestic chips, while American firms double down on Western hardware. The result won’t be healthy competition—it will be technological tribalism that ultimately slows innovation for everyone.

    The real winners may not be the companies or countries involved, but the geopolitical rivals watching from the sidelines. While the US and China spend hundreds of billions building duplicate semiconductor ecosystems, Europe, India, and others are quietly developing their own capabilities without the baggage of a tech cold war.

    Overall, Cambricon’s success proves that China can build a domestically powered AI ecosystem. But success and optimality are different things. The world is about to find out how much innovation we’re willing to sacrifice on the altar of technological sovereignty. Based on Cambricon’s soaring stock price, investors think the answer is: quite a lot.

    The question isn’t whether China can build its own AI chip champions—Cambricon has already answered that. The question is whether a bifurcated global technology system will ultimately serve anyone’s interests, including China’s. On that, the jury is very much still out.

    The post The Chinese chip company that’s making Nvidia sweat: Inside Cambricon’s meteoric rise appeared first on TechWire Asia.

    Apple holds talks with Google about using Gemini for Siri https://techwireasia.com/2025/08/apple-holds-talks-with-google-about-using-gemini-for-siri/ Mon, 25 Aug 2025 11:00:04 +0000 https://techwireasia.com/?p=243380 Apple is in talks with Google to use Gemini for Siri. The move could reshape Apple’s AI tools and ties with Google. Apple is exploring a major change for Siri that could bring Google’s Gemini AI models into the iPhone. According to Bloomberg, people familiar with the talks say the two companies have held early […]

  • Apple is in talks with Google to use Gemini for Siri.
  • The move could reshape Apple’s AI tools and ties with Google.
    Apple is exploring a major change for Siri that could bring Google’s Gemini AI models into the iPhone. According to Bloomberg, people familiar with the talks say the two companies have held early discussions about building a custom system to run on Apple’s servers, though no deal is finalised.

    The move would mark a shift for Apple, which has long built its digital assistant around its own software. Facing delays in its internal work on generative AI, Apple is considering whether to stick with its homegrown technology or partner with outside firms. The company has also spoken with Anthropic and OpenAI about potential integrations.

    A push to catch up in AI

    Apple came late to the generative AI race and has struggled to close the gap with rivals. A long-promised Siri update — one that could use personal data to fulfil complex commands and allow users to control devices entirely by voice — was supposed to launch last spring. Engineering problems forced Apple to postpone the rollout by a year.

    [See also: Google Cloud expands AI security tools at 2025 Summit]

    That failure reshaped how the company manages the project. Siri development was taken away from AI chief John Giannandrea and placed under Craig Federighi, who oversees software, and Mike Rockwell, known for his work on the Vision Pro headset. Together with Adrian Perica, who runs Apple’s corporate development team, they began weighing outside help.

    At the same time, Apple is running a “bake-off.” One version of the new Siri, code-named Linwood, uses Apple’s own models. Another, Glenwood, relies on third-party systems.

    Apple weighs Google’s Gemini for Siri

    While Apple considers multiple options, Google’s Gemini stands out. The model is already being trained to work with Apple’s servers, according to people with knowledge of the talks.

    The two companies are rivals in smartphones and operating systems, but they already cooperate in search. Google pays billions each year to be the default search engine on Apple devices — a deal now under antitrust scrutiny. A Siri partnership would extend that uneasy alliance.

    Shares of both companies climbed after news of the discussions surfaced. Google rose nearly 3% in New York trading, while Apple gained more than 1%.

    A broader AI strategy

    Even if Apple brings in Gemini, the company isn’t walking away from its own AI research. Its Foundation Models team is developing new systems, including its first trillion-parameter model. For now, that work is limited to research rather than consumer use.

    Apple has also begun folding third-party tools into its devices. ChatGPT is available in iOS 26 for image generation, and the company has scrapped a project to build its own AI coding assistant in favour of using ChatGPT and Anthropic’s Claude.

    Still, Apple has tried to keep core AI features — such as Apple Intelligence tools for summarising text or creating custom emoji — under its own control to protect user privacy. If a Siri deal goes forward, any third-party models would run on Apple’s Private Cloud Compute servers, not directly on devices.

    Pressure inside Apple

    Inside Apple, the uncertainty has created strain. Several members of the Foundation Models team left this summer after chief architect Ruoming Pang departed for Meta, lured by a lucrative offer to join its Superintelligence Labs. Others have started looking for jobs elsewhere.

    [See also: iOS 26 vs Android 16: Apple and Google take smartphones in different directions]

    Some Apple managers have even floated replacing AI models used beyond Siri, though that idea is not currently in development.

    Meanwhile, Apple’s leadership is signalling urgency. At an internal meeting, CEO Tim Cook told employees the company must win in AI and is increasing its investment. On a recent earnings call, he declined to say whether Apple would use third-party models, but his refusal to rule it out suggested the company is seriously considering the option.

    What’s next for Apple and Siri

    The talks with Google remain preliminary, with no commercial terms in place. But Apple’s willingness to consider outside help shows how far it has fallen behind in AI — and how determined it is to catch up.

    For now, Apple is testing whether its own models or those from partners like Google, Anthropic, or OpenAI can finally deliver the smarter Siri it has long promised.

    The post Apple holds talks with Google about using Gemini for Siri appeared first on TechWire Asia.
