Instructions for using predibase/hellaswag_processed with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use predibase/hellaswag_processed with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base_model, "predibase/hellaswag_processed")
- Notebooks
- Google Colab
- Kaggle
Description: Sentence completion
Original dataset: https://huggingface.co/datasets/Rowan/hellaswag
---
Try querying this adapter for free in Lora Land at https://predibase.com/lora-land!
Adapter category: Other. Adapter name: Open-Ended Sentence Completion (hellaswag).
---
Sample input: You are provided with an incomplete passage below. Please read the passage and then finish it with an appropriate response. For example:\n\n### Passage: My friend and I think alike. We\n\n### Ending: often finish each other's sentences.\n\nNow please finish the following passage:\n\n### Passage: Numerous people are watching others on a field. Trainers are playing frisbee with their dogs. the dogs\n\n### Ending:
---
Sample output: are running around the field.
---
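The sample input above follows a fixed prompt template: a worked example passage and ending, then the passage to complete. A minimal sketch of assembling that template in Python (the `build_prompt` helper is an assumption for illustration, not part of the adapter or dataset):

```python
# Hypothetical helper reproducing the prompt template shown in the sample
# input above; the function name and structure are assumptions, not an
# official API of this adapter.
def build_prompt(passage: str) -> str:
    """Format an incomplete passage into the adapter's expected prompt."""
    return (
        "You are provided with an incomplete passage below. "
        "Please read the passage and then finish it with an appropriate "
        "response. For example:\n\n"
        "### Passage: My friend and I think alike. We\n\n"
        "### Ending: often finish each other's sentences.\n\n"
        "Now please finish the following passage:\n\n"
        f"### Passage: {passage}\n\n"
        "### Ending:"
    )

prompt = build_prompt(
    "Numerous people are watching others on a field. "
    "Trainers are playing frisbee with their dogs. the dogs"
)
```

The resulting string can be tokenized and passed to the model, whose generated continuation plays the role of the `### Ending:` text.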
Try using this adapter yourself!
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/hellaswag_processed"
# Load the tokenizer and base model, then attach the adapter weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
Model tree for predibase/hellaswag_processed
Base model
mistralai/Mistral-7B-v0.1