Introducing Lightning Transformers, a new library that seamlessly integrates PyTorch Lightning, HuggingFace Transformers and Hydra, to scale up deep learning research across multiple modalities.


Machine learning metrics that make evaluating distributed PyTorch models clean and simple.

Figuring out which metrics to evaluate is key to deep learning: there are many metrics with which we can measure the performance of an ML algorithm. TorchMetrics is a collection of PyTorch metric implementations, originally part of the PyTorch Lightning framework for high-performance deep learning. In this article, we will go over how you can use TorchMetrics to evaluate your deep learning models and even create your own metrics with a simple-to-use API.

What is TorchMetrics?

TorchMetrics is an open-source, PyTorch-native collection of functional and module-wise metrics for simple performance evaluation. You can use out-of-the-box implementations for common metrics…
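The module-wise metrics follow a stateful update/compute pattern: you feed each batch to `update()` and read the aggregated result from `compute()`. Here is that idea sketched in plain Python, with no torch dependency — the class name and logic are illustrative, not the library's actual implementation (real metrics subclass `torchmetrics.Metric` and work on tensors, syncing state across processes).

```python
class Accuracy:
    """Toy stand-in for a TorchMetrics-style metric: accumulate state
    with update(), read the aggregated value with compute()."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        # Accumulate across batches (the real library also syncs
        # this state across distributed processes).
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        return self.correct / self.total


metric = Accuracy()
metric.update([1, 0, 1], [1, 1, 1])  # batch 1: 2/3 correct
metric.update([0, 0], [0, 1])        # batch 2: 1/2 correct
print(metric.compute())              # 3/5 = 0.6
```

Because the state lives on the metric object rather than in a per-batch list, the same pattern works unchanged whether batches arrive from one process or many.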


8 New Flash Tasks

Lightning Flash is a library from the creators of PyTorch Lightning to enable quick baselining and experimentation with state-of-the-art models for popular Deep Learning tasks.

We are excited to announce the release of Flash v0.3 which has been primarily focused on the design of a modular API to make it easier for developers to contribute and expand tasks.

In addition to that, we have included 8 new tasks across the Computer Vision and NLP domains, visualization tools to help with debugging, and an API to facilitate the use of existing pre-trained state-of-the-art Deep Learning models.

New Out-of-the-Box Flash Tasks

Flash now includes 10 tasks…


PyTorch profiler integration, predict and validate trainer steps, and more

Today we are excited to announce Lightning 1.3, containing highly anticipated new features including a new Lightning CLI, improved TPU support, integrations such as PyTorch profiler, new early stopping strategies, predict and validate trainer routines, and more.

In addition, we are standardizing our release schedule. We will launch a new minor release (1.X.0) every quarter: we will build new features for 8–10 weeks, then freeze new additions (except bug fixes) for the 2 weeks prior to each minor release. Between these launches we will continue to maintain weekly bug-fix releases, as we do now.

Overview of New PyTorch Lightning 1.3 Features

New Early Stopping Strategies
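Early stopping halts training once a monitored metric stops improving. The core patience logic can be sketched in a few lines of plain Python — this is an illustrative toy, not Lightning's `EarlyStopping` callback, and the names (`EarlyStopper`, `step`) are hypothetical:

```python
class EarlyStopper:
    """Stop training when the monitored value has not improved
    by at least `min_delta` for `patience` consecutive checks."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, val_loss):
        """Return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience


stopper = EarlyStopper(patience=2)
losses = [1.0, 0.8, 0.9, 0.85]  # improves twice, then stalls twice
print([stopper.step(l) for l in losses])  # [False, False, False, True]
```

The counter resets on any sufficient improvement, so a single noisy validation epoch does not end the run; only a sustained plateau does.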


New release includes a full set of metrics for information retrieval and other metrics requested by the community

This post was co-written by Nicki Skafte Detlefsen and Luca Di Liello

TorchMetrics v0.3.0 includes 6 new metrics for evaluating information retrieval

We are happy to announce that TorchMetrics v0.3.0 is now publicly available. It brings some general improvements to the library; the most prominent new feature is a set of metrics for information retrieval.

Information Retrieval

Information retrieval (IR) metrics are used to evaluate how well a system is retrieving information from a database or from a collection of documents. …
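A classic IR metric of this kind is mean average precision (MAP), which averages per-query precision at each relevant result. As a concrete illustration of what such a metric computes, here is a minimal pure-Python sketch — function names are my own, and this is not the TorchMetrics implementation, which operates on tensors grouped by query indexes:

```python
def average_precision(relevance):
    """Average precision for one query: `relevance` is the list of
    0/1 relevance labels of the retrieved documents, in ranked order."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / rank  # precision@rank at each relevant hit
    return score / hits if hits else 0.0


def mean_average_precision(queries):
    """MAP: mean of per-query average precision."""
    return sum(average_precision(q) for q in queries) / len(queries)


# Two queries, each a ranked result list with relevance labels.
print(average_precision([1, 0, 1, 0]))             # (1/1 + 2/3) / 2 ≈ 0.833
print(mean_average_precision([[1, 0, 1, 0], [0, 1]]))
```

Note that AP rewards ranking relevant documents early: `[1, 0]` scores 1.0 while `[0, 1]` scores only 0.5, even though both lists contain one relevant result.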


Maximum parameter size when training the same MinGPT model on the same Lambda Labs A100 server, with and without DeepSpeed, with less than 3 lines of code difference

TLDR; This post introduces the PyTorch Lightning and DeepSpeed integration demonstrating how to scale models to billions of parameters with just a few lines of code.

What is PyTorch Lightning?


New release including many new PyTorch integrations, DeepSpeed model parallelism, and more.

We are happy to announce that PyTorch Lightning v1.2.0 is now publicly available. It is packed with new integrations for anticipated features.

Continue reading to learn more about what’s available. As always, feel free to reach out on Slack or discussions for any questions you might have or issues you are facing.

PyTorch Profiler [BETA]

PyTorch Autograd provides a profiler that lets you inspect the cost of different operations inside your model, both on the CPU and GPU (read more about the profiler in the PyTorch…
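The bookkeeping such a profiler does — attributing wall time to named operations and aggregating over calls — can be sketched in plain Python with a context manager. This toy is only an illustration of the mechanism; the real tool is `torch.autograd.profiler`, which hooks into operator dispatch rather than requiring manual regions:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Aggregate wall time and call counts per labeled region.
totals = defaultdict(float)
counts = defaultdict(int)

@contextmanager
def record(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        totals[name] += time.perf_counter() - start
        counts[name] += 1

for _ in range(3):
    with record("forward"):
        sum(i * i for i in range(10_000))  # stand-in for real work
    with record("backward"):
        sum(i for i in range(10_000))

for name in totals:
    print(f"{name:10s} calls={counts[name]} total={totals[name]:.6f}s")
```

Aggregating by name rather than logging every call is what keeps the report readable when an operation runs thousands of times per step.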


Flash is a collection of tasks for fast prototyping, baselining and fine-tuning scalable Deep Learning models, built on PyTorch Lightning.

Whether you are new to deep learning, or an experienced researcher, Flash offers a seamless experience from baseline experiments to state-of-the-art research. It allows you to build models without being overwhelmed by all the details, and then seamlessly override and experiment with Lightning for full flexibility. Continue reading to learn how to use Flash tasks to get state-of-the-art results in a flash.

Why Flash?

1. The power of lightning, without the prerequisites

Over the past year, PyTorch Lightning has received an enthusiastic response from the community for decoupling research from…



Lightning 1.1 is now available with some exciting new features. Since the launch of the V1.0.0 stable release, we have hit some incredible milestones: 10K GitHub stars, 350 contributors, and many new members in our Slack community! A few highlights include:

  • Sharded model training: save up to 55% of memory without losing speed
  • Sequential Model Parallelism
  • Automatic logging for callbacks and any LightningModule hook
  • Lightning Bolts 0.2.6 release

Sharded model training [BETA]

We're thrilled to introduce the beta version of our new sharded model training plugin, in collaboration with FairScale by Facebook. Sharded Training utilizes Data-Parallel Training under the hood, but optimizer states and gradients…
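The core idea — each data-parallel worker keeps the full model but owns only a slice of the optimizer state — comes down to partitioning parameters across ranks. Here is a toy sketch of that partitioning in plain Python; the function name and round-robin scheme are illustrative, not FairScale's actual algorithm:

```python
def shard(items, world_size):
    """Round-robin partition of parameters across workers, so each
    rank stores optimizer state only for its own shard."""
    shards = [[] for _ in range(world_size)]
    for i, item in enumerate(items):
        shards[i % world_size].append(item)
    return shards


params = [f"param{i}" for i in range(7)]
for rank, owned in enumerate(shard(params, world_size=4)):
    print(f"rank {rank} owns {owned}")
# Per-rank optimizer memory drops by roughly 1/world_size; after each
# step, ranks exchange their updated shards so every replica stays in sync.
```

Because optimizer states (e.g. Adam's moment buffers) can take several times the memory of the weights themselves, sharding just that state yields large savings without changing the data-parallel training loop.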


Lightning reveals the final API, a new website, and a sneak peek into our new native platform for training models at scale on the cloud.

We have been hard at work over the last couple of months fine-tuning our API, polishing our docs, and recording tutorials, and it's finally time to share V1.0.0 of PyTorch Lightning with you all. Want the lightning answer to scaling models on the cloud? Continue reading.

The Lightning DNA

AI research has evolved much faster than any single framework can keep up with. The field of deep learning is constantly evolving, mostly in complexity and scale. …

PyTorch Lightning team

PyTorch Lightning is a deep learning research framework for running complex models without the boilerplate.
