PiCarX Connecting to Ollama online

When I add the base URL to the code as instructed, I get an error stating multiple values for a keyword argument.

from picarx.llm import OpenAI
from secret import OPENAI_API_KEY

INSTRUCTIONS = "You are a helpful assistant."
WELCOME = "Hello, I am a helpful assistant. How can I help you?"

llm = OpenAI(
    base_url="https://api.openai.com/v1",
    api_key=OPENAI_API_KEY,
    model="gpt-4o",
)

# Set how many messages to keep
llm.set_max_messages(20)

# Set instructions
llm.set_instructions(INSTRUCTIONS)

# Set welcome message
llm.set_welcome(WELCOME)

print(WELCOME)

rk@pi:~/picar-x/example $ python3 18.online_llm_test.py
Traceback (most recent call last):
  File "/home/rk/picar-x/example/18.online_llm_test.py", line 7, in <module>
    llm = OpenAI(
        base_url="https://api.openai.com/v1",
        api_key=OPENAI_API_KEY,
        model="gpt-4o",
    )
  File "/usr/local/lib/python3.13/dist-packages/sunfounder_voice_assistant/llm/__init__.py", line 142, in __init__
    super().__init__(*args,
        base_url="https://api.openai.com/v1",
        **kwargs)
TypeError: sunfounder_voice_assistant.llm.llm.LLM.__init__() got multiple values for keyword argument 'base_url'
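The failure mode in that traceback can be reproduced in isolation. This is a minimal sketch with stand-in classes (not the real sunfounder code): the wrapper subclass hard-codes base_url when calling super().__init__(), so a caller-supplied base_url arrives a second time via **kwargs and collides.

```python
# Stand-in classes illustrating the error; LLM/OpenAI here are NOT the
# real sunfounder_voice_assistant classes, just hypothetical equivalents.
class LLM:
    def __init__(self, base_url=None, api_key=None, model=None):
        self.base_url = base_url
        self.api_key = api_key
        self.model = model

class OpenAI(LLM):
    def __init__(self, *args, **kwargs):
        # base_url is hard-coded here; if the caller also passes base_url,
        # it is still inside **kwargs and collides with this one.
        super().__init__(*args, base_url="https://api.openai.com/v1", **kwargs)

try:
    OpenAI(base_url="https://api.openai.com/v1", api_key="sk-...", model="gpt-4o")
except TypeError as e:
    print(e)  # ... got multiple values for keyword argument 'base_url'
```

Dropping base_url from the call site (or using a base class that actually accepts it) avoids the collision.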

AFAIK you should be using LLM, not OpenAI, for Ollama:

from picarx.llm import LLM  # not OpenAI

Etc. ...

llm = LLM(  # not OpenAI
    base_url="https://api.example.com/v1",  # edit to your provider's URL
    api_key=API_KEY,
    model="your-model-name-here",
)

The OpenAI wrapper doesn't accept a base_url parameter.

Can you provide an example of what to use?

I'm not sure what more to add. The code I posted above is an example: just swap your equivalent lines for mine and fill in your provider's URL and your chosen model. I do not know what those are.

Or just use the final example directly from Sunfounder's site, copied below:

from picarx.llm import LLM
from secret import API_KEY

INSTRUCTIONS = "You are a helpful assistant."
WELCOME = "Hello, I am a helpful assistant. How can I help you?"

llm = LLM(
    base_url="https://api.example.com/v1",  # fill in your provider's base_url
    api_key=API_KEY,
    model="your-model-name-here",           # choose a model from your provider
)

I have tried using Gemini and it works. But when I get to the last stage of adding a base URL, I get the same error:

TypeError: sunfounder_voice_assistant.llm.llm.LLM.__init__() got multiple values for keyword argument 'base_url'

Adding the base_url line causes an error with whichever LLM I choose. That parameter is evidently set somewhere else in the library, and the duplicate is throwing these errors.

Post your complete code for the LLM models you've tried, including the chosen class, URL, and model. You say you've tried multiple ones, but just post a couple, to check for any common theme.

I know there is another forum user here using Ollama successfully on a Raspberry Pi. I've never used it myself, but I've had no problem with a couple of the other models (although the API could have changed since I tried them). If the other user sees this thread, they may be able to post their code; otherwise Sunfounder will be more helpful than I am once back in the office.

Sorry I couldn't really help.

I'm assuming it's user error, because the same result happens each time. Here is my code, written by following the directions. Any help is greatly appreciated.

secret.py

# Store secrets here. Never commit this file to Git.
GEMINI_API_KEY = "MY KEY HERE"

from picarx.llm import Gemini
from secret import GEMINI_API_KEY

INSTRUCTIONS = "You are a helpful assistant."
WELCOME = "Hello, I am a helpful assistant. How can I help you?"

llm = Gemini(
    base_url="https://generativelanguage.googleapis.com",
    api_key=GEMINI_API_KEY,
    model="gemini-2.5-flash",
)

rk@pi:~/picar-x/example $ sudo python3 18.online_llm_test.py
Traceback (most recent call last):
  File "/home/rk/picar-x/example/18.online_llm_test.py", line 7, in <module>
    llm = Gemini(
        base_url="https://generativelanguage.googleapis.com",
        api_key=GEMINI_API_KEY,
        model="gemini-2.5-flash",
    )
  File "/usr/local/lib/python3.13/dist-packages/sunfounder_voice_assistant/llm/__init__.py", line 213, in __init__
    super().__init__(*args,
        base_url="https://generativelanguage.googleapis.com/v1beta/openai",
        **kwargs)
TypeError: sunfounder_voice_assistant.llm.llm.LLM.__init__() got multiple values for keyword argument 'base_url'

Check 1: Gemini API Key Validity

Ensure your key is obtained from Google AI Studio and is not expired or over its quota.

The key format should be: AIzaSyXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX (no extra spaces or characters).

You need to save the correctly formatted Gemini API key in the secret.py file.

Check 2: Base URL Validity
We support the OpenAI API format and any compatible APIs. Each API provider has its own API specification and base_url. Please refer to their documentation.
Since you are using the Gemini model, please verify the accuracy and correctness of the URL you have configured.

I did try OpenAI first. I have a key that works, and it successfully ran the modules in the earlier sections. When I added the OpenAI base URL as the instructions showed, I got that error message as well. I used https://api.openai.com/v1 as the address. The Gemini key is valid too, and its base URL is correct. From my research it appears that base_url is being set somewhere else in the library. I chose Gemini to confirm my earlier findings.

Here: https://github.com/sunfounder/sunfounder-voice-assistant/blob/main/sunfounder_voice_assistant/llm/__init__.py

The base_url has been pre-configured for you. Therefore, when you select libraries such as Ollama, OpenAI, or Gemini, you do not need to configure the base_url. You only need to configure the API Key and model. Please refer to this example:
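The intended call pattern can be sketched with stand-in classes (again, not the real package; in the real library the pre-configured wrappers bake in base_url the same way): pass only the key and model, and let the wrapper supply the endpoint.

```python
# Stand-in classes; hypothetical equivalents of the pre-configured wrappers.
class LLM:
    def __init__(self, base_url=None, api_key=None, model=None):
        self.base_url, self.api_key, self.model = base_url, api_key, model

class Gemini(LLM):
    def __init__(self, *args, **kwargs):
        # The wrapper already knows the endpoint; callers must not repeat it.
        super().__init__(
            *args,
            base_url="https://generativelanguage.googleapis.com/v1beta/openai",
            **kwargs,
        )

llm = Gemini(api_key="MY_KEY", model="gemini-2.5-flash")  # no base_url here
print(llm.base_url)  # the wrapper's pre-configured endpoint
```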

If you do not want to use our pre-configured OpenAI setup, you can directly use the LLM class as follows:

from picarx.llm import LLM

llm = LLM(
    base_url="example.com/",
    api_key=OPENAI_API_KEY,
    model="gpt-4o",
)

This allows you to specify your own base_url.
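For Ollama specifically, that LLM-class route might look like the following sketch (the LLM class here is a stand-in that just records its arguments; http://localhost:11434/v1 is Ollama's OpenAI-compatible endpoint, and the model name is whatever you have pulled locally).

```python
# Stand-in for a direct LLM class that accepts base_url (hypothetical).
class LLM:
    def __init__(self, base_url=None, api_key=None, model=None):
        self.base_url, self.api_key, self.model = base_url, api_key, model

# Ollama serves an OpenAI-compatible API at /v1 on port 11434 and does not
# validate the API key, so a placeholder string works.
llm = LLM(
    base_url="http://localhost:11434/v1",
    api_key="ollama",    # placeholder; Ollama ignores it
    model="llama3.2",    # a model you have pulled with `ollama pull`
)
print(llm.base_url)
```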