Instructions to use peteromallet/Qwen-Image-Edit-InSubject with libraries, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use peteromallet/Qwen-Image-Edit-InSubject with Diffusers (a follow-up sketch with common inference options appears after the links below):

```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image-Edit", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("peteromallet/Qwen-Image-Edit-InSubject")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
image = pipe(image=input_image, prompt=prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
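Following up on the Diffusers snippet above, here is a hedged sketch of common inference options. It continues from the variables defined there (`pipe`, `prompt`, `input_image`); the step count and seed are illustrative choices, not values recommended by the model author.

```python
import torch

# Fix a random seed so the same prompt reproduces the same edit
# (the seed value 42 is arbitrary)
generator = torch.Generator(device="cuda").manual_seed(42)

# num_inference_steps trades speed for quality; 50 is a common default
image = pipe(
    image=input_image,
    prompt=prompt,
    num_inference_steps=50,
    generator=generator,
).images[0]

# The pipeline returns a PIL image, so it can be saved directly
image.save("edited.png")
```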
Training code
Hello, great work! Could you please provide the training code used for the LoRA, even if it's not polished?
Very sorry for the delay @Aleksandar! This was trained with @ostris's AI-Toolkit: https://github.com/ostris/ai-toolkit
He has a great tutorial: https://www.youtube.com/watch?v=d_b3GFFaui0
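For anyone wanting to reproduce the setup, here is a rough sketch of how an AI-Toolkit LoRA run is typically launched, based on the repo's README. The example config filename and the name of the edited copy are assumptions; start from whichever example config in config/examples matches your model, and see the tutorial above for the details.

```bash
# Clone AI-Toolkit and install its dependencies
git clone https://github.com/ostris/ai-toolkit
cd ai-toolkit
pip install -r requirements.txt

# Copy an example config and edit it for your model, dataset, and LoRA settings
# ("my_lora.yaml" is a hypothetical name for your edited copy)
cp config/examples/train_lora_flux_24gb.yaml config/my_lora.yaml

# Launch training with the edited config
python run.py config/my_lora.yaml
```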
Could you share the training data and training parameters?
Hello, is there a training recipe for this? I've seen your training set; does it include the training parameters?
Uploaded the data here: https://huggingface.co/datasets/peteromallet/InSubject-Dataset
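If you want to inspect the data programmatically, here is a minimal sketch assuming the repo loads as a standard Hugging Face dataset; the "train" split name and the column layout are assumptions, so check the dataset card first.

```python
from datasets import load_dataset

# Load the InSubject training data from the Hub
# (assumes a standard dataset layout; the "train" split is an assumption)
ds = load_dataset("peteromallet/InSubject-Dataset", split="train")

print(ds)        # column names and row count
example = ds[0]  # one record; inspect its fields for images/captions
```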