Description
Describe the bug
I'm trying to run the example code without a GPU. The console shows the following error:

```
llama_model_load_from_file_impl: invalid value for main_gpu: 0 (available devices: 0)
```

Using LLamaSharp directly, I can work around this error by setting the `GpuLayerCount` property of `ModelParams` to `0`.
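For reference, this is a minimal sketch of the direct-LLamaSharp workaround described above, assuming LLamaSharp 0.23.0; the model path is a placeholder for the GGUF file downloaded in the repro below:

```csharp
using LLama;
using LLama.Common;

// Placeholder path — point this at the downloaded GGUF file.
var parameters = new ModelParams("thespis-13b-v0.5.Q2_K.gguf")
{
    // Offload zero layers to the GPU, so llama.cpp never tries to pick a GPU device.
    GpuLayerCount = 0
};

// Loading succeeds on a CPU-only machine with these parameters.
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
```

The equivalent setting does not appear to take effect (or is not exposed) when going through `LLamaSharpConfiguration` in the LangChain provider, which is the subject of this report.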
Steps to reproduce the bug
I've tried the following code:
```csharp
// get model path
var modelPath = HuggingFaceModelDownloader.GetModelAsync(
    repository: "TheBloke/Thespis-13B-v0.5-GGUF",
    fileName: "thespis-13b-v0.5.Q2_K.gguf",
    version: "main");
modelPath.Wait();

// load model
var model = new LLamaSharpModelInstruction(new LLamaSharpConfiguration
{
    // MainGpu = -1,
    PathToModelFile = modelPath.Result,
    Temperature = 0,
    GpuLayerCount = 0
}).UseConsoleForDebug();

// build a chain
var prompt = @"
You are an AI assistant that greets the world.
World: Hello, Assistant!
Assistant:";

var chain =
    Set(prompt, outputKey: "prompt")
    | LLM(model, inputKey: "prompt");

chain.RunAsync().Wait();
```
Expected behavior
The code should also run without a GPU.
Screenshots
No response
NuGet package version

```xml
<PackageReference Include="LLamaSharp" Version="0.23.0" />
<PackageReference Include="LLamaSharp.Backend.Cpu" Version="0.23.0" />
<PackageReference Include="LLamaSharp.semantic-kernel" Version="0.23.0" />
<PackageReference Include="LLamaSharp.kernel-memory" Version="0.23.0" />
<PackageReference Include="LangChain" Version="0.17.0" />
<PackageReference Include="LangChain.Providers.LLamaSharp" Version="0.17.0" />
```
Additional context
No response