Phi-4 Reasoning: How to Run & Fine-tune

Learn to run & fine-tune Phi-4 reasoning models locally with Unsloth + our Dynamic 2.0 quants

Microsoft's new Phi-4 reasoning models are now supported in Unsloth. The 'plus' variant performs on par with OpenAI's o1-mini, o3-mini and Anthropic's Claude 3.7 Sonnet. The 'plus' and standard reasoning models have 14B parameters, while the 'mini' variant has 4B. All of our Phi-4 reasoning uploads use our Unsloth Dynamic 2.0 methodology.

Phi-4 reasoning - Unsloth Dynamic 2.0 uploads:

Dynamic 2.0 GGUF (to run)
Dynamic 4-bit Safetensors (to fine-tune/deploy)

🖥️ Running Phi-4 reasoning

According to Microsoft, these are the recommended settings for inference:

  • Temperature = 0.8

  • Top_P = 0.95
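
These map directly onto the sampling parameters of most inference stacks. As a minimal sketch, here is how you might apply them with Hugging Face transformers (the repo id and prompt are illustrative assumptions, and device_map="auto" requires the accelerate package):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Phi-4-mini-reasoning"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt with the model's own chat template (see the next section)
messages = [{"role": "user", "content": "How to solve 3*x^2+4*x+5=1?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.8,  # recommended by Microsoft
    top_p=0.95,       # recommended by Microsoft
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))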

Phi-4 reasoning Chat templates

Please ensure you use the correct chat template as the 'mini' variant has a different one.

Phi-4-mini:

<|system|>Your name is Phi, an AI math expert developed by Microsoft.<|end|><|user|>How to solve 3*x^2+4*x+5=1?<|end|><|assistant|>

Phi-4-reasoning and Phi-4-reasoning-plus:

This format is used for general conversation and instructions:

<|im_start|>system<|im_sep|>You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:<|im_end|><|im_start|>user<|im_sep|>What is 1+1?<|im_end|><|im_start|>assistant<|im_sep|>

Yes, the chat template/prompt format is this long!
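
You don't need to paste these templates by hand. Assuming the model repos ship their chat template (as our uploads do), the tokenizer renders the correct one for whichever variant you load. A minimal sketch:

from transformers import AutoTokenizer

for model_id in ("unsloth/Phi-4-mini-reasoning", "unsloth/Phi-4-reasoning-plus"):
    tok = AutoTokenizer.from_pretrained(model_id)
    prompt = tok.apply_chat_template(
        [{"role": "user", "content": "What is 1+1?"}],
        tokenize = False,
        add_generation_prompt = True,
    )
    print(model_id, "->", prompt[:80], "...")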

🦙 Ollama: Run Phi-4 reasoning Tutorial

  1. Install ollama if you haven't already!

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh
  2. Run the model! Note you can call ollama serve in another terminal if it fails. We include all our fixes and suggested parameters (temperature etc.) in params in our Hugging Face upload.

ollama run hf.co/unsloth/Phi-4-mini-reasoning-GGUF:Q4_K_XL
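
Once the model is running, Ollama also exposes a local REST API on its default port 11434, so you can query it programmatically. A minimal sketch, assuming the same model tag as above:

import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/unsloth/Phi-4-mini-reasoning-GGUF:Q4_K_XL",
        "messages": [{"role": "user", "content": "What is 1+1?"}],
        "stream": False,
        # Microsoft's recommended sampling settings
        "options": {"temperature": 0.8, "top_p": 0.95},
    },
)
print(resp.json()["message"]["content"])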

📖 Llama.cpp: Run Phi-4 reasoning Tutorial

  1. Obtain the latest llama.cpp from GitHub at github.com/ggml-org/llama.cpp. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
  2. Download the model (after installing the packages via pip install huggingface_hub hf_transfer). You can choose Q4_K_M, or other quantized versions.

# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # enable faster Hub downloads
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/Phi-4-mini-reasoning-GGUF",
    local_dir = "unsloth/Phi-4-mini-reasoning-GGUF",
    allow_patterns = ["*UD-Q4_K_XL*"],  # e.g. "*Q4_K_M*" for other quants
)
  3. Run the model in conversational mode in llama.cpp. You must use --jinja in llama.cpp to enable reasoning for the models. However, this is not needed if you're using the 'mini' variant.

./llama.cpp/llama-cli \
    --model unsloth/Phi-4-mini-reasoning-GGUF/Phi-4-mini-reasoning-UD-Q4_K_XL.gguf \
    --threads -1 \
    --n-gpu-layers 99 \
    --prio 3 \
    --temp 0.8 \
    --top-p 0.95 \
    --jinja \
    --min-p 0.00 \
    --ctx-size 32768 \
    --seed 3407
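
The reasoning models wrap their chain of thought in <think> ... </think> before the final answer. If you are post-processing outputs, here is a small sketch for keeping only the Solution section (the helper name and sample string are illustrative):

import re

def extract_solution(text: str) -> str:
    # Drop the <think>...</think> Thought section and keep what follows
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think> 1+1 means adding one and one ... </think> The answer is 2."
print(extract_solution(raw))  # -> The answer is 2.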

🦥 Fine-tuning Phi-4 with Unsloth

Fine-tuning the Phi-4 reasoning models is also now supported in Unsloth. To fine-tune for free on Google Colab, just change the model_name 'unsloth/Phi-4' to 'unsloth/Phi-4-mini-reasoning' etc., as in the sketch below.
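
A minimal sketch of loading the model for LoRA fine-tuning with Unsloth; the hyperparameters below are illustrative defaults, not tuned recommendations:

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Phi-4-mini-reasoning",  # was "unsloth/Phi-4"
    max_seq_length = 4096,
    load_in_4bit = True,  # uses our Dynamic 4-bit quants
)

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)
# From here, pass model and tokenizer to a TRL SFTTrainer as usual.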
