Google Unveils Workspace Intelligence and Next-Gen TPU Chips at Cloud Next 2026 to Boost AI and Productivity

At Cloud Next 2026, Google introduced Workspace Intelligence, a system designed to provide highly accurate, personalized context across all Workspace applications. This innovation leverages Google’s advanced search capabilities and the Gemini reasoning engine to enhance user productivity.

Key Features of Workspace Intelligence:

– Information Gathering: The system autonomously collects relevant information, breaking down contextual barriers to ensure users have immediate access to necessary data.

– Situational Awareness: Utilizing advanced reasoning, Workspace Intelligence identifies and prioritizes critical tasks, helping ensure important action items are not overlooked.

– Personalization: By analyzing past work and communication patterns, the system adapts to individual work styles, voices, and formatting preferences, delivering outputs that resonate authentically with each user.

This intelligence layer integrates seamlessly with features like AI Inbox and AI Overviews in Gmail. Notably, the Ask Gemini feature in Google Chat acts as a unified command line, allowing users to state goals and receive completed results directly within the chat interface. Tasks such as document and slide generation, file searches based on descriptions, and scheduling meetings are streamlined through this feature.

In Google Docs, Workspace Intelligence enables the creation of infographics grounded in business data and facilitates simultaneous image editing to maintain visual consistency. It also assists in triaging and responding to comments, as well as editing documents based on feedback.

In Google Slides, the system ensures generated slide decks adhere to company templates and visual styles. In Google Sheets, it supports conversational building and editing of spreadsheets, transforming ideas into professionally formatted drafts that align with individual voices, brands, styles, and company templates.

Additionally, Google announced the eighth generation of Tensor Processing Units (TPUs), introducing two distinct architectures: TPU 8t for training and TPU 8i for inference.

TPU 8t Highlights:

– Massive Scale: A single TPU 8t superpod scales to 9,600 chips and two petabytes of shared high-bandwidth memory, delivering 121 ExaFlops of compute.

– Maximum Utilization: By integrating 10x faster storage access and TPUDirect, TPU 8t keeps its chips supplied with data, sustaining high system utilization.

– Near-Linear Scaling: The new Virgo Network, combined with JAX and Pathways software, provides near-linear scaling for up to a million chips in a single logical cluster.
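A quick back-of-the-envelope check of the superpod figures above (treating "two petabytes" as 2,000 TB in decimal units and the chip count as exact; the derived per-chip numbers are estimates, not published specs):

```python
# Back-of-the-envelope: derive per-chip figures from the announced
# TPU 8t superpod totals (9,600 chips, 2 PB shared HBM, 121 ExaFlops).
# Derived values are rough estimates, not published per-chip specs.

chips = 9_600
total_hbm_gb = 2_000_000          # 2 PB, decimal units
total_exaflops = 121

hbm_per_chip_gb = total_hbm_gb / chips            # ~208 GB per chip
pflops_per_chip = total_exaflops * 1_000 / chips  # ~12.6 PFLOPS per chip

print(f"HBM per chip: ~{hbm_per_chip_gb:.0f} GB")
print(f"Compute per chip: ~{pflops_per_chip:.1f} PFLOPS")
```

At roughly 208 GB of HBM and 12.6 PFLOPS per chip, the headline pod-level numbers are consistent with a very large per-chip memory footprint.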

TPU 8i Highlights:

– Breaking the Memory Wall: TPU 8i pairs 288 GB of high-bandwidth memory with 384 MB of on-chip SRAM, keeping a model’s active working set entirely on-chip.

– Axion-Powered Efficiency: By doubling the physical CPU hosts per server and using custom Axion Arm-based CPUs, TPU 8i gives each accelerator more host compute for efficient inference serving.

– Scaling MoE Models: TPU 8i doubles interconnect bandwidth to 19.2 Tb/s, and the new Boardfly architecture reduces network diameter by over 50%, ensuring cohesive, low-latency operation.

– Eliminating Lag: The on-chip Collectives Acceleration Engine (CAE) offloads global collective operations, reducing their latency by up to 5x.
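To put the TPU 8i memory hierarchy in perspective, a rough sizing sketch against the announced figures (288 GB HBM, 384 MB on-chip SRAM); the model size and precision below are illustrative assumptions, not Google figures:

```python
# Illustrative sizing against TPU 8i's announced memory hierarchy:
# 288 GB of HBM per chip and 384 MB of on-chip SRAM.
# The model size and precision are assumptions for illustration only.

hbm_gb = 288
sram_mb = 384

# A hypothetical 70B-parameter model quantized to 8-bit precision:
params_b = 70
weight_gb = params_b * 1  # 1 byte per parameter at int8 -> 70 GB

fits_in_hbm = weight_gb <= hbm_gb          # weights fit in a single chip's HBM
sram_fraction = sram_mb / (hbm_gb * 1024)  # SRAM as a share of HBM capacity

print(f"70B int8 weights: {weight_gb} GB (fits in HBM: {fits_in_hbm})")
print(f"On-chip SRAM is {sram_fraction:.2%} of HBM capacity")
```

The SRAM is a tiny fraction of HBM capacity, which is why keeping only the model's active working set (rather than the full weights) on-chip is the relevant design goal.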

These advancements underscore Google’s commitment to enhancing AI capabilities and user productivity within the Workspace ecosystem.