PyABSA - Open Framework for Aspect-based Sentiment Analysis


Hi there! Please star this repo if it helps you: every star helps PyABSA go further. Many thanks!

Try our demos on Hugging Face Spaces

Try our demos on Hugging Face Spaces via API

Aspect Polarity Classification (APC):

```python
import requests

# Aspect terms are wrapped in [ASP] tags; the reference labels follow the "!sent!" marker.
r = requests.post(url='https://hf.space/embed/yangheng/PyABSA-APC/+/api/predict/',
                  json={"data": ["I have had my [ASP]computer[ASP] for 2 weeks already and it [ASP]works[ASP] perfectly . !sent!  Positive, Positive"]})
r.json()
```

Aspect Term Extraction and Polarity Classification (ATEPC):

```python
import requests

r = requests.post(
    url='https://hf.space/embed/yangheng/PyABSA-ATEPC/+/api/predict/',
    json={"data": ['The wine list is incredible and extensive and diverse , '
                   'the food is all incredible and the staff was all very nice , '
                   'good at their jobs and cultured .']})
r.json()
```

Chinese ATEPC:

```python
import requests

# Chinese ATEPC demo; the input roughly translates to:
# "This phone is really thin, but the color is not very nice. Overall I am quite satisfied."
r = requests.post(url='https://hf.space/embed/yangheng/PyABSA-ATEPC-Chinese/+/api/predict/',
                  json={"data": ["这款手机真的很薄,但是颜色不太好看,总体上我很满意啦。"]})
r.json()
```
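These Spaces are Gradio apps, so the JSON response typically wraps the model outputs in a `data` field. A minimal sketch of reading the first prediction, assuming the standard Gradio API response shape:

```python
# A minimal sketch, assuming the standard Gradio API response shape:
# outputs are returned as a list under the "data" key.
response = r.json()
prediction = response["data"][0]
print(prediction)
```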

Develop & Research based on PyABSA

Use Our Models via the Hugging Face Model Hub

If you do not need the best APC or ATEPC models, you can simply try our pretrained models to save time!

To facilitate ABSA research and application, we trained a fast-lcf-bert model on all the English datasets provided by ABSADatasets; the resulting checkpoint is available at https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1. If your model is built on transformers, you can use **yangheng/deberta-v3-base-absa** to improve it easily, e.g.:

Use Our Pretrained Model to Classify Sentiments

The yangheng/deberta-v3-base-absa-v1.1 and yangheng/deberta-v3-large-absa-v1.1 models are fine-tuned on the English datasets (30k+ examples) from ABSADatasets and include a classification output layer, so they can be used directly with the sentiment-analysis pipeline on the Hugging Face Hub.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
# model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-large-absa-v1.1")

# The sentence and the aspect term are packed together as "sentence [SEP] aspect"
inputs = tokenizer("[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP]", return_tensors="pt")
outputs = model(**inputs)

# Decode the predicted sentiment label from the logits
print(model.config.id2label[outputs.logits.argmax(dim=-1).item()])
```
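Since the v1.1 checkpoints ship with a classification head, they can also be loaded through the pipeline API. A minimal sketch; the text/text_pair packing is an assumption based on the sentence-aspect input format above:

```python
from transformers import pipeline

# A minimal sketch: load the fine-tuned checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="yangheng/deberta-v3-base-absa-v1.1")

# Assumed input packing: pass the sentence and the aspect term as a text/text_pair dict.
print(classifier({"text": "when tables opened up, the manager sat another party before us.",
                  "text_pair": "manager"}))
```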

Use Our Pretrained Model as a Backbone Model

The yangheng/deberta-v3-base-absa and yangheng/deberta-v3-large-absa models are fine-tuned on the English datasets (including augmentation data, 180k+ examples) from ABSADatasets and have no output layer. They are more effective than the v1.1 models when used as backbone models.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa")
model = AutoModel.from_pretrained("yangheng/deberta-v3-base-absa")
# model = AutoModel.from_pretrained("yangheng/deberta-v3-large-absa")

inputs = tokenizer("good product especially video and audio quality fantastic.", return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state holds the contextual embeddings
```
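Because these checkpoints expose only hidden states, you attach your own task head on top. A minimal sketch of one possible design, assuming a simple first-token-pooled linear classifier; the `ABSAClassifier` head, its pooling choice, and the label count are illustrative, not part of the released models:

```python
import torch
from transformers import AutoModel, AutoTokenizer

class ABSAClassifier(torch.nn.Module):
    """Illustrative (hypothetical) classifier: ABSA backbone + a fresh linear head."""

    def __init__(self, num_labels=3):
        super().__init__()
        self.backbone = AutoModel.from_pretrained("yangheng/deberta-v3-base-absa")
        # Hypothetical head: project the pooled representation to the labels.
        self.head = torch.nn.Linear(self.backbone.config.hidden_size, num_labels)

    def forward(self, **inputs):
        hidden = self.backbone(**inputs).last_hidden_state
        return self.head(hidden[:, 0])  # pool with the first ([CLS]) token

tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa")
model = ABSAClassifier()
logits = model(**tokenizer("the staff was very nice", return_tensors="pt"))
```

The head here is randomly initialized, so it still needs to be fine-tuned on your own ABSA data; only the backbone weights come pretrained.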