
Toying around with Hugging Face transformers and PDPC decisions

The default Hugging Face transformer pipelines are given a run on three samples of PDPC decisions: named entity recognition (NER), summarisation and question answering (QA).

For most natural language processing projects, the steps are quite straightforward. First, get a data set. Next, train a model for the task you want and fine-tune it. Once you’re happy, deploy and demo the product! In recent years, increasingly powerful transformers like BERT and GPT-3 have made these steps more manageable, but you’re still doing the same things.

However, many libraries have default settings which allow you to play with their capabilities quickly. I did this once with spaCy, and while it was not perfect, it was still fascinating.

Today’s experiment is with Hugging Face’s transformers library, the most exciting place on the internet for natural language processing. It’s a great place to experience cutting-edge NLP models like BERT, RoBERTa and GPT-2. It’s also open source! (Read: free)
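Running the library's default models takes only a few lines via its `pipeline` API. A minimal sketch of the three tasks mentioned above — the sample text and question here are my own illustrations, not excerpts from an actual PDPC decision:

```python
from transformers import pipeline

# Each pipeline downloads a default pretrained model for its task on first use.
# The sample "decision" text below is a hypothetical stand-in for a real PDPC decision.
ner = pipeline("ner", aggregation_strategy="simple")
summarizer = pipeline("summarization")
qa = pipeline("question-answering")

decision = (
    "The Personal Data Protection Commission found that the organisation "
    "had breached section 24 of the PDPA and imposed a financial penalty."
)

entities = ner(decision)
summary = summarizer(decision, max_length=30, min_length=5)[0]["summary_text"]
answer = qa(question="What did the organisation breach?", context=decision)["answer"]

print(entities)
print(summary)
print(answer)
```

The appeal of the defaults is exactly this: no training loop, no model selection — just pick a task string and feed it text.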

I will play with my own data and hopefully also provide an introduction to the various NLP tasks available.
