I built llama.cpp with Vulkan support on my Raspberry Pi 5, as described at https://github.com/ggml-org/llama.cpp/b ... .md#vulkan.
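In case the build details matter: this is roughly how I configured it, paraphrasing the linked page rather than copying it verbatim (the -j is just my addition for a parallel build):

Code:
cd ~/llama.cpp
# Configure with the Vulkan backend enabled, then build
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

When I run the following Bash script, I get the error shown in the output below it: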
Code:
#!/usr/bin/env bash
set -euo pipefail

# Usage: ./run.sh <model-id> <gpu_layers> <ctx-size> <predict-tokens>
if [ $# -ne 4 ]; then
    echo "Usage: $0 <model-id> <gpu_layers> <ctx-size> <predict-tokens>"
    exit 1
fi

MODEL="$1"
GPU_LAYERS="$2"
CTX_SIZE="$3"
PREDICT_TOKENS="$4"

LLAMA_BIN="$HOME/llama.cpp/build/bin/llama-server"

if [ ! -x "$LLAMA_BIN" ]; then
    echo "Error: llama-server binary not found at $LLAMA_BIN"
    echo "Make sure llama.cpp is built with Vulkan support."
    exit 1
fi

"$LLAMA_BIN" \
    -hf "$MODEL" \
    --host 0.0.0.0 \
    --port 8080 \
    --ctx-size "$CTX_SIZE" \
    -n "$PREDICT_TOKENS" \
    --n-gpu-layers "$GPU_LAYERS" \
    --webui
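One data point before the output: the device line in the log below reports shared memory: 16384, which I assume corresponds to the V3D GPU's maxComputeSharedMemorySize limit. That limit can be inspected directly with vulkaninfo (from the vulkan-tools package on Raspberry Pi OS):

Code:
# Summary of detected Vulkan devices
vulkaninfo --summary
# The compute shared-memory limit the error message presumably refers to
vulkaninfo | grep -i maxComputeSharedMemorySize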
Here's the output of running the script:

Code:
user@raspberry-pi-5:~/Local-LLMs $ ./run_vl.sh unsloth/gemma-3-4b-it-GGUF:UD-Q4_K_XL 34 4096 3172
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = V3D 7.1.7.0 (V3DV Mesa) | uma: 1 | fp16: 0 | bf16: 0 | warp size: 16 | shared memory: 16384 | int dot: 0 | matrix cores: none
common_download_file_single_online: using cached file: /home/user/.cache/llama.cpp/unsloth_gemma-3-4b-it-GGUF_gemma-3-4b-it-UD-Q4_K_XL.gguf
common_download_file_single_online: using cached file: /home/user/.cache/llama.cpp/unsloth_gemma-3-4b-it-GGUF_mmproj-F16.gguf
main: n_parallel is set to auto, using n_parallel = 4 and kv_unified = true
build: 7650 (68b4d516c) with GNU 14.2.0 for Linux aarch64
system info: n_threads = 4, n_threads_batch = 4, total_threads = 4
system_info: n_threads = 4 (n_threads_batch = 4) / 4 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
init: using 6 threads for HTTP server
start: binding port with default address family
main: loading model
srv  load_model: loading model '/home/user/.cache/llama.cpp/unsloth_gemma-3-4b-it-GGUF_gemma-3-4b-it-UD-Q4_K_XL.gguf'
common_init_result: fitting params to device memory, for bugs during this step try to reproduce them with -fit off, or provide --verbose logs if the bug only occurs with -fit on
ggml_vulkan: Error: Shared memory size too small for matrix multiplication.
llama_model_load: error loading model: Shared memory size too small for matrix multiplication.
llama_model_load_from_file_impl: failed to load model
llama_params_fit: encountered an error while trying to fit params to free device memory: failed to load model
llama_params_fit: fitting params to free memory took 0.67 seconds
llama_model_load_from_file_impl: using device Vulkan0 (V3D 7.1.7.0) (unknown id) - 4096 MiB free
llama_model_loader: loaded meta data with 40 key-value pairs and 444 tensors from /home/user/.cache/llama.cpp/unsloth_gemma-3-4b-it-GGUF_gemma-3-4b-it-UD-Q4_K_XL.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma3
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Gemma-3-4B-It
llama_model_loader: - kv 3: general.finetune str = it
llama_model_loader: - kv 4: general.basename str = Gemma-3-4B-It
llama_model_loader: - kv 5: general.quantized_by str = Unsloth
llama_model_loader: - kv 6: general.size_label str = 4B
llama_model_loader: - kv 7: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 8: gemma3.context_length u32 = 131072
llama_model_loader: - kv 9: gemma3.embedding_length u32 = 2560
llama_model_loader: - kv 10: gemma3.block_count u32 = 34
llama_model_loader: - kv 11: gemma3.feed_forward_length u32 = 10240
llama_model_loader: - kv 12: gemma3.attention.head_count u32 = 8
llama_model_loader: - kv 13: gemma3.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 14: gemma3.attention.key_length u32 = 256
llama_model_loader: - kv 15: gemma3.attention.value_length u32 = 256
llama_model_loader: - kv 16: gemma3.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 17: gemma3.attention.sliding_window u32 = 1024
llama_model_loader: - kv 18: gemma3.attention.head_count_kv u32 = 4
llama_model_loader: - kv 19: gemma3.rope.scaling.type str = linear
llama_model_loader: - kv 20: gemma3.rope.scaling.factor f32 = 8.000000
llama_model_loader: - kv 21: tokenizer.ggml.model str = llama
llama_model_loader: - kv 22: tokenizer.ggml.pre str = default
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,262208] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 24: tokenizer.ggml.scores arr[f32,262208] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,262208] = [3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 106
llama_model_loader: - kv 28: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 31: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {{ bos_token }}\n{%- if messages[0]['r...
llama_model_loader: - kv 33: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 34: general.quantization_version u32 = 2
llama_model_loader: - kv 35: general.file_type u32 = 15
llama_model_loader: - kv 36: quantize.imatrix.file str = gemma-3-4b-it-GGUF/imatrix_unsloth.dat
llama_model_loader: - kv 37: quantize.imatrix.dataset str = unsloth_calibration_gemma-3-4b-it.txt
llama_model_loader: - kv 38: quantize.imatrix.entries_count i32 = 238
llama_model_loader: - kv 39: quantize.imatrix.chunks_count i32 = 663
llama_model_loader: - type f32: 205 tensors
llama_model_loader: - type q4_K: 142 tensors
llama_model_loader: - type q5_K: 30 tensors
llama_model_loader: - type q6_K: 47 tensors
llama_model_loader: - type iq4_xs: 20 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 2.36 GiB (5.23 BPW)
load: 6242 unused tokens
load: printing all EOG tokens:
load: - 106 ('<end_of_turn>')
load: special tokens cache size = 6415
load: token to piece cache size = 1.9446 MB
print_info: arch = gemma3
print_info: vocab_only = 0
print_info: no_alloc = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 2560
print_info: n_embd_inp = 2560
print_info: n_layer = 34
print_info: n_head = 8
print_info: n_head_kv = 4
print_info: n_rot = 256
print_info: n_swa = 1024
print_info: is_swa_any = 1
print_info: n_embd_head_k = 256
print_info: n_embd_head_v = 256
print_info: n_gqa = 2
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 6.2e-02
print_info: n_ff = 10240
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: n_expert_groups = 0
print_info: n_group_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 0.125
print_info: freq_base_swa = 10000.0
print_info: freq_scale_swa = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_yarn_log_mul = 0.0000
print_info: rope_finetuned = unknown
print_info: model type = 4B
print_info: model params = 3.88 B
print_info: general.name = Gemma-3-4B-It
print_info: vocab type = SPM
print_info: n_vocab = 262208
print_info: n_merges = 0
print_info: BOS token = 2 '<bos>'
print_info: EOS token = 106 '<end_of_turn>'
./run_vl.sh: line 31: 1436 Segmentation fault      "$LLAMA_BIN" -hf "$MODEL" --host 0.0.0.0 --port 8080 --ctx-size "$CTX_SIZE" -n "$PREDICT_TOKENS" --n-gpu-layers "$GPU_LAYERS" --webui
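To narrow down whether this is specific to the Vulkan backend, the next thing I plan to try is the same run with 0 GPU layers, which should keep everything on the CPU (not yet tested, so no results to report):

Code:
# Same arguments, but no layers offloaded to the V3D GPU
./run_vl.sh unsloth/gemma-3-4b-it-GGUF:UD-Q4_K_XL 0 4096 3172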