Luis Catacora (lucataco)
@sayakpaul
sayakpaul / inference.md
Last active February 5, 2025 14:13
(Not so rigorously tested) example showing how to use `bitsandbytes`, `peft`, etc. to LoRA fine-tune Flux.1 Dev.

When loading LoRA params that were obtained on a quantized base model and merging them into the base model, it is recommended to first dequantize the base model, merge the LoRA params into it, and then quantize the model again. This is because merging directly into a 4-bit quantized model can lead to rounding errors. Below, we provide an end-to-end example:

  1. First, load the original model and merge the LoRA params into it:
from diffusers import FluxPipeline 
import torch 

ckpt_id = "black-forest-labs/FLUX.1-dev"
# torch_dtype here is an assumption; bfloat16 is the usual choice for Flux
pipeline = FluxPipeline.from_pretrained(ckpt_id, torch_dtype=torch.bfloat16)
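The preview stops at the model load. A minimal sketch of how the merge and re-quantization could continue is given below, using the standard diffusers LoRA APIs and the diffusers bitsandbytes integration; the LoRA path, 4-bit settings, and output directory are placeholders, not values from the gist.

pipeline.load_lora_weights("path/to/your/flux-lora")  # placeholder LoRA path
pipeline.fuse_lora()
pipeline.unload_lora_weights()

# Re-quantize the fused transformer to 4-bit (requires a diffusers release
# with bitsandbytes quantization support) and reattach it to the pipeline.
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

pipeline.transformer.save_pretrained("fused-transformer")  # placeholder dir
quantized_transformer = FluxTransformer2DModel.from_pretrained(
    "fused-transformer",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.bfloat16,
)
pipeline.transformer = quantized_transformer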
@cubiq
cubiq / FLUX_Latent_Detailer.json
Last active March 3, 2025 09:44
FLUX dev Latent Space Detailer
{
  "last_node_id": 469,
  "last_link_id": 1401,
  "nodes": [
    {
      "id": 16,
      "type": "KSamplerSelect",
      "pos": [
        -280,
        20
@lucataco
lucataco / predict.py
Last active January 31, 2025 20:28
Flux Schnell locally on MPS
# conda create -n flux python=3.11
# conda activate flux
# pip install torch==2.3.1
# pip install diffusers==0.30.0 transformers==4.43.3
# pip install sentencepiece==0.2.0 accelerate==0.33.0 protobuf==5.27.3
import torch
from diffusers import FluxPipeline
import diffusers
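Only the header of predict.py is visible here. As a hedged sketch, this is how FLUX.1-schnell is typically run on the MPS backend with diffusers; the model ID, prompt, and sampler settings follow the standard diffusers example and are assumptions rather than the gist's actual code.

# bfloat16 is assumed; float16 may be needed on some torch/macOS combinations
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe = pipe.to("mps")  # Apple Silicon GPU backend

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=0.0,     # schnell is guidance-distilled
    num_inference_steps=4,  # schnell is tuned for ~4 steps
    max_sequence_length=256,
).images[0]
image.save("flux-schnell.png")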
@lucataco
lucataco / UbuntuMLsetup.sh
Last active September 22, 2024 03:52
Clean Ubuntu Install - Machine Learning setup
# Install Ubuntu 22.04
sudo apt-get update
sudo apt-get upgrade -y
# Install ssh, curl, git, htop
sudo apt install -y openssh-server
sudo apt install -y curl git htop zstd
# Install CUDA toolkit 12.4 drivers
# Follow the instructions at https://developer.nvidia.com/cuda-downloads
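# One way to finish the CUDA 12.4 install via NVIDIA's network repo on Ubuntu 22.04;
# these commands are not part of the original snippet, so verify them against the page above first:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install -y cuda-toolkit-12-4
sudo apt-get install -y cuda-drivers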