Tag: GPU Systems
Posts tagged with GPU Systems.
4.5 times faster Hugging Face transformer inference by modifying some Python AST
Posted on: December 29, 2021. Recently, the 🤗 Hugging Face team released a commercial product called Infinity to perform inference with very high performance (i.e., very fast compared to a PyTorch + FastAPI deployment). Unfortunately it's a paid p…
Divide Hugging Face Transformers training time by 2 or more with dynamic padding and uniform length batching
Posted on: May 20, 2020. Reducing training time lets you iterate more within a fixed time budget and thus achieve better results.
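For context on what the two tricks mean: dynamic padding pads each batch only to its own longest sequence instead of a fixed maximum length, and uniform length batching groups samples of similar length so little padding is needed at all. A minimal PyTorch/🤗 sketch of both, with illustrative names rather than the post's actual code:

```python
# Illustrative sketch of dynamic padding + uniform length batching.
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = ["short example", "a somewhat longer example sentence", "tiny"]  # your data

# Uniform length batching: sort samples by token length so each batch
# holds similar-length sequences (in practice you then shuffle batches,
# not individual samples, to keep some randomness).
texts = sorted(texts, key=lambda t: len(tokenizer.tokenize(t)))

def collate_dynamic(batch):
    # Dynamic padding: pad only to the longest sample in THIS batch,
    # not to a global max_length such as 512.
    return tokenizer(batch, padding="longest", return_tensors="pt")

loader = DataLoader(texts, batch_size=2, collate_fn=collate_dynamic)
for batch in loader:
    print(batch["input_ids"].shape)  # sequence length varies per batch
```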
Optimization of Hugging Face Transformer models to get Inference < 1 Millisecond Latency + deployment on production ready inference server
Posted on: November 5, 2021. Hi, I just released a project showing how to optimize big NLP models and deploy them on the Nvidia Triton inference server.
Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
Posted on: November 24, 2021. We just launched a new open source Python library to help optimize Transformer model inference and prepare deployment in production. It's a follow-up of a proof of concept shared earlier. Scripts have been conve…
FlashAttention: paper vs. Triton
Posted on: September 6, 2022. A quick note on the loop-order mismatch between the FlashAttention paper and common Triton-style kernels, and why making ownership explicit avoids races on O.
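To make the loop-order point concrete: in the Triton-style order the outer loop walks query blocks, so each iteration exclusively owns its rows of O and nothing else ever writes them, whereas the paper's pseudocode nests the loops the other way. A minimal NumPy sketch of the Triton-style order (illustrative only, not the post's kernel):

```python
# NumPy sketch of flash-attention tiling with query blocks OUTERMOST,
# so each outer iteration is the sole writer of its O rows.
import numpy as np

def flash_attention_ref(Q, K, V, block_q=16, block_k=16):
    n, d = Q.shape
    O = np.zeros((n, d))
    for qs in range(0, n, block_q):          # outer: query blocks, owns O[qs:qe]
        qe = min(qs + block_q, n)
        q = Q[qs:qe]
        m = np.full(qe - qs, -np.inf)        # running row max (online softmax)
        l = np.zeros(qe - qs)                # running softmax denominator
        acc = np.zeros((qe - qs, d))         # unnormalized output accumulator
        for ks in range(0, n, block_k):      # inner: key/value blocks
            ke = min(ks + block_k, n)
            s = q @ K[ks:ke].T / np.sqrt(d)  # attention scores for this tile
            m_new = np.maximum(m, s.max(axis=1))
            p = np.exp(s - m_new[:, None])
            scale = np.exp(m - m_new)        # rescale old stats to the new max
            l = l * scale + p.sum(axis=1)
            acc = acc * scale[:, None] + p @ V[ks:ke]
            m = m_new
        O[qs:qe] = acc / l[:, None]          # only this iteration writes these rows
    return O
```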
OpenAI cites Kernl in Triton slides
Posted on: December 12, 2022. A short note to say how pleasant it is to see our work on Kernl cited in an OpenAI Research Acceleration Team slide deck about Triton. Thank you to the team for the nod and for building such an empowering tool.
Meeting Michael Lightstone, VP of AI Computing at NVIDIA
Posted on: November 16, 2022. A short, amused note on Kernl's unexpected visibility and a chat with NVIDIA's Michael Lightstone (yes, from a legal publisher).
Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
Posted on: October 26, 2022. We are releasing Kernl under the Apache 2 license, a library to make PyTorch model inference significantly faster. With 1 line of code we applied the optimizations and made Bert up to 12X faster than the Hugging Face baseline…
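As a hedged sketch of what that one line looks like in practice, assuming the library's `optimize_model` entry point (check the repository README for the current API):

```python
# Hedged sketch: assumes kernl exposes optimize_model(); verify the
# import path against the repository README before relying on it.
import torch
from transformers import AutoModel, AutoTokenizer
from kernl.model_optimization import optimize_model  # assumed entry point

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()

optimize_model(model)  # the advertised single line: swaps in fused Triton kernels

inputs = tokenizer("hello world", return_tensors="pt").to("cuda")
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.float16):
    out = model(**inputs)
```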
What we learned by benchmarking TorchDynamo (PyTorch team), ONNX Runtime and TensorRT on transformers model (inference)
Posted on: August 3, 2022. TL;DR: TorchDynamo (a prototype from the PyTorch team) with the TensorRT (from Nvidia) backend makes Bert inference (the tool is model agnostic) on PyTorch > 3X faster most of the time (it depends on input shape) by just adding a single lin…
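The prototype API benchmarked in the post has since been folded into PyTorch proper as `torch.compile`; a hedged sketch of that single added line in the modern spelling (the `"tensorrt"` backend assumes the torch-tensorrt package is installed):

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()

# The single added line. backend="tensorrt" requires torch-tensorrt;
# without it, the built-in backend="inductor" is the usual fallback.
model = torch.compile(model, backend="tensorrt")
```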
What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
Posted on: May 24, 2022. We made autoregressive models like T5 2X faster than 🤗 Hugging Face PyTorch with 3 simple tricks:
Hugging Face Transformer Inference Under 1 Millisecond Latency
Posted on: November 5, 2021. Go to production with Microsoft and Nvidia open source tooling.
Deep Dive into Kernel Fusion: Accelerating Inference in Llama V2
Posted on: July 20, 2023. The code is available online. Llama, the most widely discussed machine learning model of 2023, recently received an upgrade with the release of Llama V2. Its new licensing terms have sparked significant excitement…
Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
Posted on: February 9, 2023. We are happy to announce support for the OpenAI Whisper model (ASR task) on Kernl. We focused on high quality transcription in a latency sensitive scenario, meaning: whisper-large-v2 weights, beam search 5 (as recomm…
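Setting Kernl's optimized kernels aside, the configuration the post targets (whisper-large-v2 weights with beam search 5) can be sketched in plain 🤗 Transformers; the silent audio array is a placeholder:

```python
# Baseline sketch of the post's setting: whisper-large-v2 + beam search 5.
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2"
).eval().cuda()

audio = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt").to("cuda")

with torch.inference_mode():
    ids = model.generate(inputs.input_features, num_beams=5)  # beam search 5
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```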
Upstreamed: Kernl’s Triton “debugger” lands in OpenAI Triton
Posted on: October 12, 2023. In May 2023 we upstreamed our Python-level interpreter/debugger for Triton kernels to the OpenAI Triton project; here's what it is, how to use it, and where it helps.
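A minimal sketch of how the upstreamed interpreter is enabled today, assuming current Triton still honors the `TRITON_INTERPRET` environment variable:

```python
import os
os.environ["TRITON_INTERPRET"] = "1"  # must be set before importing triton

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    # In interpreter mode this runs as plain Python: a pdb breakpoint here
    # shows x and y as concrete arrays rather than symbolic values.
    tl.store(out_ptr + offs, x + y, mask=mask)

x, y = torch.randn(64), torch.randn(64)
out = torch.empty(64)
add_kernel[(1,)](x, y, out, 64, BLOCK=64)  # CPU tensors are fine when interpreting
```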