Instructions for using FredZhang7/distilgpt2-stable-diffusion with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use FredZhang7/distilgpt2-stable-diffusion with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="FredZhang7/distilgpt2-stable-diffusion")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("FredZhang7/distilgpt2-stable-diffusion")
model = AutoModelForCausalLM.from_pretrained("FredZhang7/distilgpt2-stable-diffusion")
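For example, a minimal sketch of generating prompt candidates with the pipeline loaded above (the starting text and generation settings are illustrative, not from the model card):

# Hypothetical example: expand a short phrase into Stable Diffusion prompt candidates
results = pipe("a beautiful city", max_length=80, do_sample=True, num_return_sequences=3)
for r in results:
    print(r["generated_text"])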
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use FredZhang7/distilgpt2-stable-diffusion with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "FredZhang7/distilgpt2-stable-diffusion"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "FredZhang7/distilgpt2-stable-diffusion",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
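Because the vLLM server exposes an OpenAI-compatible API, the same request can be made from Python. A minimal sketch using the openai client package (the client package and the dummy API key are assumptions, not part of the model card):

from openai import OpenAI

# point the client at the local vLLM server; the API key is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="FredZhang7/distilgpt2-stable-diffusion",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)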
- SGLang
How to use FredZhang7/distilgpt2-stable-diffusion with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "FredZhang7/distilgpt2-stable-diffusion" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "FredZhang7/distilgpt2-stable-diffusion",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "FredZhang7/distilgpt2-stable-diffusion" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "FredZhang7/distilgpt2-stable-diffusion",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
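The SGLang server's OpenAI-compatible completions endpoint can also be called from Python. A minimal sketch that mirrors the curl command above using the requests package (an assumption, not part of the model card):

import requests

# mirror the curl call above against the local SGLang server
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "FredZhang7/distilgpt2-stable-diffusion",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])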
- Docker Model Runner
How to use FredZhang7/distilgpt2-stable-diffusion with Docker Model Runner:
docker model run hf.co/FredZhang7/distilgpt2-stable-diffusion
DistilGPT2 Stable Diffusion Model Card
DistilGPT2 Stable Diffusion is a text generation model used to generate creative and coherent prompts for text-to-image models, given any text. This model was fine-tuned on 2.03 million descriptive Stable Diffusion prompts from the Stable Diffusion Discord, Lexica.art, and (my hand-picked) Krea.ai. I filtered the hand-picked prompts based on the output results from Stable Diffusion v1.4.
Compared to other GPT2-based prompt generation models, this one runs with 50% faster forward propagation and requires 40% less disk space and RAM.
PyTorch
pip install --upgrade transformers
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# load the pretrained tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.model_max_length = 512

# load the fine-tuned model and resize its embeddings for the added [PAD] token
model = GPT2LMHeadModel.from_pretrained('FredZhang7/distilgpt2-stable-diffusion')
model.resize_token_embeddings(len(tokenizer))

# generate text using the fine-tuned model
from transformers import pipeline
nlp = pipeline('text-generation', model=model, tokenizer=tokenizer)
ins = "a beautiful city"

# generate 10 samples with sampling enabled
outs = nlp(ins, max_length=80, num_return_sequences=10, do_sample=True)

# print the input followed by the 10 samples, collapsing repeated whitespace
for i in range(len(outs)):
    outs[i] = ' '.join(str(outs[i]['generated_text']).split())
print('\033[96m' + ins + '\033[0m')
print('\033[93m' + '\n\n'.join(outs) + '\033[0m')
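The generated prompts are intended to be fed to a text-to-image model. A minimal sketch of rendering the first sample with Stable Diffusion v1.4 via the diffusers library (the diffusers dependency, GPU use, and file name are assumptions, not part of the model card):

import torch
from diffusers import StableDiffusionPipeline

# load Stable Diffusion v1.4, the model the hand-picked prompts were filtered against
sd = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# render the first generated prompt and save the image
image = sd(outs[0]).images[0]
image.save("generated.png")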
