Data center cooling innovations
Edition #209 | 31 October 2025
Switching social media management tools doesn’t have to be a headache. See how teams move to Agorapulse without the chaos in this quick live product tour.
Join our live 30-minute product tour and see how to simplify social media planning, publishing, reporting, and listening in one platform. Learn why teams are leaving outdated tools with shifting prices for a calmer, clearer workflow with Agorapulse.
Save Your Seat →
Tsinghua University Achieves 12.5 GHz Processing Speed with Light-Based OFE2 Chip
Plus: Microsoft tests microfluidic cooling that achieves 3x better heat removal in AI data centers, OpenAI and AMD seal a multi-billion-dollar deal for a 6-gigawatt infrastructure deployment, and Google unveils autonomous web-browsing capabilities in Gemini 2.5
Today’s Quick Wins
What happened: Researchers at Tsinghua University unveiled the Optical Feature Extraction Engine (OFE2), a photonic computing system that processes AI workloads at 12.5 GHz with sub-251-picosecond latency, using light instead of electricity. The system demonstrated improved accuracy in medical imaging segmentation and enabled profitable high-frequency trading decisions with near-zero latency.
Why it matters: Traditional electronic processors are hitting physical limits as AI workloads demand exponentially more throughput. By moving computation from electrons to photons, OFE2 completes core operations roughly a million times faster than millisecond-range electronic feature extraction while generating significantly less heat, opening pathways to real-time AI applications previously constrained by latency.
The takeaway: Optical computing just moved from research labs to practical deployment—teams building latency-sensitive applications should start exploring hybrid architectures that offload feature extraction to photonic preprocessors.
When Photons Replace Electrons: Inside the 250-Picosecond Revolution
The bottleneck in modern AI isn’t the algorithms—it’s the hardware waiting for electrons to shuffle through silicon. That changed this week.
The Problem: High-speed AI applications such as robotic surgery and real-time financial trading must extract key features from massive data streams within microseconds, but traditional digital processors are running into the physical limits of electron-based computation, which drive up both latency and heat generation.
The Solution: Professor Hongwei Chen’s team engineered a fully integrated optical computing system that performs matrix-vector multiplication—the fundamental operation in neural networks—using coherent light waves.
Integrated Data Preparation Module: The system uses tunable on-chip power splitters and precision delay lines to deserialize input signals into multiple synchronized parallel optical channels, solving the phase perturbation problem that plagued fiber-based approaches.
Diffraction-Based Computing: Optical waves pass through a diffraction operator that mathematically models matrix-vector multiplication by focusing light into bright spots that deflect based on input signal variations, enabling feature extraction at light speed.
Reconfigurable Architecture: An adjustable integrated phase array allows OFE2 to be dynamically reconfigured for different computational tasks without hardware changes.
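The three components above can be caricatured in a few lines of NumPy: treat the diffraction operator as a complex-valued matrix (amplitudes standing in for the tunable power splitters, phases for the adjustable phase array) and feature extraction as the matrix-vector product followed by intensity detection. This is a toy numerical model, not the OFE2 design; every shape and name here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a coherent optical field passes through a diffraction operator,
# which acts as a complex-valued matrix-vector multiplication. The phases are
# the reconfigurable part (the "adjustable integrated phase array"); the
# amplitudes loosely model the tunable on-chip power splitters.
n_inputs, n_features = 8, 4

amplitudes = rng.uniform(0.5, 1.0, size=(n_features, n_inputs))
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_features, n_inputs))
diffraction_operator = amplitudes * np.exp(1j * phases)

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Matrix-vector product of the operator with the input field, followed
    by intensity detection (photodetectors measure |field|^2)."""
    field_out = diffraction_operator @ signal.astype(complex)
    return np.abs(field_out) ** 2

# The "deserialized" parallel optical channels become one input vector.
sample = rng.uniform(0.0, 1.0, size=n_inputs)
features = extract_features(sample)
print(features.shape)  # (4,)
```

Reconfiguring the task then amounts to writing new values into `phases`, with no hardware change, which is the point of the phase-array design.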
The Results Speak for Themselves:
Baseline: Traditional electronic feature extraction with millisecond-range latency
After Optimization: Single matrix-vector multiplication completed in under 250.5 picoseconds at 12.5 GHz—the shortest latency among similar optical implementations
Business Impact: Medical imaging networks using OFE2 required fewer electronic parameters than baseline AI models while achieving increased pixel accuracy in CT scan organ identification; trading systems converted live market data directly into profitable buy/sell decisions with near-zero execution delay
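A quick back-of-envelope check of the headline numbers, assuming a 1 ms electronic baseline (the low end of "millisecond-range"):

```python
# Sanity-check: one matrix-vector multiplication in under 250.5 ps
# vs. millisecond-range electronic feature extraction.
latency_optical_s = 250.5e-12
latency_electronic_s = 1e-3          # assumed "millisecond-range" baseline

speedup = latency_electronic_s / latency_optical_s
print(f"{speedup:.2e}")              # 3.99e+06 -> millions of times faster

# At a 12.5 GHz clock, a new result can be produced every 80 ps:
clock_hz = 12.5e9
cycle_s = 1.0 / clock_hz             # 8e-11 s = 80 ps per cycle
print(cycle_s)
```

So even against the most favorable electronic baseline, the claimed gap is about six orders of magnitude in per-operation latency.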
What We’re Testing This Week
Practical technical topic: Agentic AI Workflows in Commerce Analytics
Data professionals: your analytics work rarely ends at “insight delivered”—the next frontier is “action triggered and executed.”
RAG + Agent Chains:
Build a retrieval-augmented generation (RAG) pipeline that takes structured product/consumer metadata, passes it through an LLM agent chain (discovery → decision → transaction), and outputs an API call to a payment or order system.
Tip: Measure conversion lift (transactions per 1,000 sessions) and fraud-rate drop, comparing the traditional funnel vs. the agentic funnel.
Metadata Enrichment + GEO:
Enrich your product catalogue with structured metadata and descriptive embeddings, and link it to consumer interest profiles to optimise for model-driven discovery (GEO). Then test whether agent prompts return higher-relevance leads.
Tip: Track click-throughs, agent-recommendation conversion, and feedback loop quality (e.g., abandoned agent flows).
Recommended Tools
Apache Spark 4.0 with Photon Engine - Native GPU acceleration for DataFrame operations with 5-8x speedups on aggregation workloads.
LangSmith Observability Suite - End-to-end tracing for LLM applications with token-level cost tracking and latency heatmaps.
DuckDB 1.2 with Arrow Integration - In-process analytics with zero-copy data exchange and 40GB/s scan speeds on compressed Parquet.
Lightning Round
3 Things to Know Before Signing Off
• PwC & Stripe’s collaboration showed 11.9% revenue lift in agentic commerce workflows.
Read More
• Gartner’s “Hype Cycle for Sales Transformation, 2025” highlights AI agents for sales are now in the Peak of Inflated Expectations.
Read More
• NVIDIA and Nokia announced a US$1B investment/partnership to build AI-native RAN for 6G, showing AI’s reach beyond traditional IT into telecom infrastructure.
Read More
Follow Us:
LinkedIn | X (formerly Twitter) | Facebook | Instagram
Please like this edition and share your thoughts in the comments.