Researchers at Tsinghua University and Z.ai built IndexCache to eliminate redundant computation in sparse attention models.
Within 24 hours of the release, community members began porting the algorithm to popular local AI libraries such as MLX, Apple's framework for running models on Apple silicon.
Google Research recently revealed TurboQuant, a compression algorithm that reduces the memory footprint of large language models, alongside companion work such as PolarQuant aimed at vector search. According to Google, TurboQuant compresses LLM key-value caches down to 3 bits with no loss of accuracy; memory and storage stocks, including Micron (MU), Western Digital (WDC), Seagate (STX), and Sandisk (SNDK), fell shortly after the announcement.
The biggest memory burden in serving LLMs is the key-value (KV) cache, which stores the conversational context that accumulates as users interact with AI assistants.
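To see why the cache dominates, note that every generated token stores one key and one value vector in every layer, so the footprint grows linearly with conversation length. Below is a back-of-the-envelope estimator; the model dimensions are illustrative assumptions for a 7B-class transformer, not figures taken from any system named above.

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, batch: int = 1, bytes_per_elem: int = 2) -> int:
    """Bytes needed for the KV cache of a decoder-only transformer.

    The factor of 2 covers the separate key and value tensors;
    bytes_per_elem=2 assumes fp16/bf16 storage.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem


# Assumed 7B-class configuration: 32 layers, 32 KV heads,
# head dimension 128, and a single 32k-token conversation.
full = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128, seq_len=32_768)
print(f"fp16 KV cache: {full / 2**30:.1f} GiB")  # prints: fp16 KV cache: 16.0 GiB
```

At fp16, that single 32k-token conversation already rivals the roughly 14 GB of weights in a 7B model, which is why cache precision, not just weight precision, moves the memory needle.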
That much was clear in 2025, when we first saw China's DeepSeek, a slimmer, lighter LLM that required far less data-center capacity.
Google's published claim for TurboQuant is that its KV cache compression cuts LLM memory usage by 6x with zero accuracy loss.
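For a sense of the arithmetic behind the 3-bit and 6x figures, here is a minimal sketch of generic symmetric round-to-nearest quantization with per-group scales; the function names, group size, and method are illustrative assumptions, not the TurboQuant transform, which these briefs do not describe. A bare 16-bit to 3-bit payload is roughly a 5.3x reduction before scale overhead, so the headline 6x presumably reflects additional savings not modeled here.

```python
import numpy as np


def quantize_3bit(x: np.ndarray, group: int = 64):
    """Symmetric round-to-nearest 3-bit quantization, one fp16 scale per group.

    A generic low-bit baseline for the memory arithmetic only; this is
    NOT the TurboQuant algorithm. `group` must divide x.size.
    """
    g = x.astype(np.float32).reshape(-1, group)
    scale = np.abs(g).max(axis=1, keepdims=True) / 3.0 + 1e-12  # levels -3..3
    q = np.clip(np.rint(g / scale), -3, 3).astype(np.int8)  # int8 here; a real
    return q, scale.astype(np.float16)                      # kernel would bit-pack


def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).ravel()


rng = np.random.default_rng(0)
kv = rng.normal(size=(4096, 128)).astype(np.float16)  # e.g. one layer's keys

q, scale = quantize_3bit(kv)
err = np.abs(dequantize(q, scale) - kv.ravel().astype(np.float32)).mean()

bits = 3 + 16 / 64  # 3-bit payload plus amortized fp16 scale per group of 64
print(f"effective bits/elem: {bits:.2f} (vs 16) -> {16 / bits:.1f}x smaller")
print(f"mean abs reconstruction error: {err:.4f}")
```

Per-group scales bound the damage any single outlier value can do; shrinking the group improves accuracy at the cost of amortized scale bits (at group size 64, the fp16 scale adds 16/64 = 0.25 bits per element).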