Introducing Lightning Transformers, a new library that seamlessly integrates PyTorch Lightning, HuggingFace Transformers and Hydra, to scale up deep learning research across multiple modalities.

Machine learning metrics that make evaluating distributed PyTorch models clean and simple.

Figuring out which metrics you need to evaluate is key to deep learning. There are various metrics with which we can evaluate the performance of ML algorithms. TorchMetrics is a collection of PyTorch metric implementations, originally part of the PyTorch Lightning framework for high-performance deep learning. This article will go over how you can use TorchMetrics to evaluate your deep learning models and even create your own metric with a simple-to-use API.

What is TorchMetrics?

TorchMetrics is an open-source PyTorch native collection of functional and module-wise metrics for simple performance evaluations. You can use out-of-the-box implementations for common metrics such as…
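Module-wise metrics follow a stateful update/compute pattern: statistics accumulate batch by batch, and the final value is aggregated at the end. A minimal plain-Python sketch of that pattern (the real TorchMetrics modules accumulate torch tensors and synchronize state across distributed processes, which this sketch omits):

```python
class Accuracy:
    """Plain-Python sketch of the update()/compute() pattern
    used by module-wise TorchMetrics (no torch dependency here)."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        # Accumulate sufficient statistics batch by batch.
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        # Aggregate accumulated state into the final metric value.
        return self.correct / self.total


metric = Accuracy()
metric.update([0, 1, 1], [0, 1, 0])  # batch 1: 2 of 3 correct
metric.update([1, 0], [1, 0])        # batch 2: 2 of 2 correct
print(metric.compute())              # 4/5 = 0.8
```

Because the state lives on the metric object rather than in a running average, the result is exact over the whole dataset regardless of batch size.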

Lightning 1.4 Release adds TPU pods, IPU Hardware, DeepSpeed Infinity, Fully Sharded Data-Parallel and More.

Today we are excited to announce Lightning 1.4, introducing support for TPU pods, XLA profiling, IPUs, and new plugins to reach 10+ billion parameters, including DeepSpeed Infinity, Fully Sharded Data-Parallel and more!

TPU Pod Training

A guide to open-source tools for efficient dataset and model development and analysis for video understanding with FiftyOne, PyTorch Lightning, and PyTorch Video

A visualization of Lightning Flash and PyTorchVideo predictions in FiftyOne (Image by author)

Video understanding, while a widely popular and ever-growing field of computer vision, is often held back by the lack of video support in many tools. Hundreds of tools exist to expedite nearly all aspects of the computer vision lifecycle, but they generally only support image data. In recent months, open-source tools have begun to tackle the tooling issues for video-based computer vision.

Recent collaborations between these open-source tools are making it easier than ever to execute your video workflows.


PyTorchVideo is a deep learning library with a focus on video understanding work. PyTorchVideo provides reusable, modular, and efficient components needed…

8 New Flash Tasks

Lightning Flash is a library from the creators of PyTorch Lightning to enable quick baselining and experimentation with state-of-the-art models for popular Deep Learning tasks.

We are excited to announce the release of Flash v0.3 which has been primarily focused on the design of a modular API to make it easier for developers to contribute and expand tasks.

In addition to that, we have included 8 new tasks across the Computer Vision and NLP domains, visualization tools to help with debugging, and an API to facilitate the use of existing pre-trained state-of-the-art Deep Learning models.

New Out-of-the-Box Flash Tasks

Flash now includes 10 tasks…

PyTorch profiler integration, predict and validate trainer steps, and more

Today we are excited to announce Lightning 1.3, containing highly anticipated new features including a new Lightning CLI, improved TPU support, integrations such as PyTorch profiler, new early stopping strategies, predict and validate trainer routines, and more.

In addition, we are standardizing our release schedule. We will be launching a new minor release (1.X.0) every quarter, where we will build new features for 8–10 weeks, and then freeze new additions (except bug fixes) for 2 weeks prior to each minor release. Between these launches we will continue to maintain weekly bug-fix releases, as we do now.

Overview of New PyTorch Lightning 1.3 Features

New Early Stopping Strategies

New release includes a full set of metrics for information retrieval and other metrics requested by the community

This post was co-written by Nicki Skafte Detlefsen and Luca Di Liello

TorchMetrics v0.3.0 includes 6 new metrics for evaluating information retrieval

We are happy to announce TorchMetrics v0.3.0 is now publicly available. It brings some general improvements to the library; the most prominent new feature is a set of metrics for information retrieval.

Information Retrieval

Information retrieval (IR) metrics are used to evaluate how well a system is retrieving information from a database or from a collection of documents. …
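One of the simplest IR metrics is mean reciprocal rank (MRR): for each query, take the reciprocal of the rank at which the first relevant document appears, then average over queries. A plain-Python sketch (the TorchMetrics implementations operate on torch tensors of predictions, targets, and query indexes, which this sketch simplifies away):

```python
def reciprocal_rank(relevance):
    """Reciprocal rank of the first relevant document.

    `relevance` is a list of 0/1 flags ordered by the
    system's ranking (best-scored document first).
    """
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0  # no relevant document retrieved


def mean_reciprocal_rank(queries):
    # Average the reciprocal rank over all queries.
    return sum(reciprocal_rank(q) for q in queries) / len(queries)


# Query 1: first relevant doc at rank 2 -> 1/2
# Query 2: first relevant doc at rank 1 -> 1/1
print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]]))  # 0.75
```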

Maximum parameter size when training the same MinGPT model on the same Lambda Labs A100 server, with and without DeepSpeed, with less than a 3-line code difference

TLDR; This post introduces the PyTorch Lightning and DeepSpeed integration demonstrating how to scale models to billions of parameters with just a few lines of code.
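As a rough configuration sketch of what that "few lines of code" difference looks like (assuming a multi-GPU machine with the `deepspeed` package installed; `MyLightningModule` is a hypothetical model, and the exact plugin string may vary between Lightning versions):

```python
import pytorch_lightning as pl

model = MyLightningModule()  # hypothetical LightningModule

# Baseline trainer:
trainer = pl.Trainer(gpus=4, precision=16)

# DeepSpeed-enabled trainer: the model code itself is unchanged,
# only the Trainer configuration differs.
trainer = pl.Trainer(gpus=4, precision=16, plugins="deepspeed")
trainer.fit(model)
```

The point of the integration is that sharding and offloading happen inside the plugin, so the LightningModule needs no DeepSpeed-specific code.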

What is PyTorch Lightning?

New release including many new PyTorch integrations, DeepSpeed model parallelism, and more.

We are happy to announce PyTorch Lightning V1.2.0 is now publicly available. It is packed with new integrations for anticipated features such as:

Continue reading to learn more about what’s available. As always, feel free to reach out on Slack or discussions for any questions you might have or issues you are facing.

PyTorch Profiler [BETA]

PyTorch Autograd provides a profiler that lets you inspect the cost of different operations inside your model — both on the CPU and GPU (read more about the profiler in the PyTorch…
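A minimal illustration of the autograd profiler on its own, independent of Lightning (CPU-only here; profiling GPU ops additionally requires `use_cuda=True` and a CUDA device):

```python
import torch

x = torch.randn(256, 256)

# The context manager records the time spent in each
# underlying operator executed inside the block.
with torch.autograd.profiler.profile() as prof:
    y = torch.matmul(x, x)

# Summarize the recorded ops, slowest first.
print(prof.key_averages().table(sort_by="cpu_time_total"))
```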

Flash is a collection of tasks for fast prototyping, baselining and fine-tuning scalable Deep Learning models, built on PyTorch Lightning.

Whether you are new to deep learning, or an experienced researcher, Flash offers a seamless experience from baseline experiments to state-of-the-art research. It allows you to build models without being overwhelmed by all the details, and then seamlessly override and experiment with Lightning for full flexibility. Continue reading to learn how to use Flash tasks to get state-of-the-art results in a flash.

Why Flash?

1. The power of Lightning, without the prerequisites

Over the past year, PyTorch Lightning has received an enthusiastic response from the community for decoupling research from…

PyTorch Lightning team

PyTorch Lightning is a deep learning research framework to run complex models without the boilerplate.
