Picar X won't connect to Ollama

I’ve managed to get everything working on my Picar X up until Section 17 - Ollama

When I try the "2. Test Ollama" section, I get this error:

Hello, I am a helpful assistant. How can I help you?

>>> hello

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 198, in _new_conn
    sock = connection.create_connection(
        (self._dns_host, self.port),
        ...<2 lines>...
        socket_options=self.socket_options,
    )
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
    ~~~~~~~~~~~~^^^^
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 787, in urlopen
    response = self._make_request(
        conn,
        ...<10 lines>...
        **response_kw,
    )
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 493, in _make_request
    conn.request(
    ~~~~~~~~~~~~^
        method,
        ^^^^^^^
        ...<6 lines>...
        enforce_content_length=enforce_content_length,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 445, in request
    self.endheaders()
    ~~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/http/client.py", line 1333, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
    ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/http/client.py", line 1093, in _send_output
    self.send(msg)
    ~~~~~~~~~^^^^^
  File "/usr/lib/python3.13/http/client.py", line 1037, in send
    self.connect()
    ~~~~~~~~~~~~^^
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 276, in connect
    self.sock = self._new_conn()
                ~~~~~~~~~~~~~~^^
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 213, in _new_conn
    raise NewConnectionError(
        self, f"Failed to establish a new connection: {e}"
    ) from e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f7006dbe0>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 667, in send
    resp = conn.urlopen(
        method=request.method,
        ...<9 lines>...
        chunked=chunked,
    )
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 841, in urlopen
    retries = retries.increment(
        method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
    )
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 519, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='192.168.50.20', port=11434): Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7006dbe0>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/greglchapman/picar-x/example/17.text_vision_talk.py", line 48, in <module>
    response = llm.prompt(input_text, stream=True, image_path=img_path)
  File "/usr/local/lib/python3.13/dist-packages/sunfounder_voice_assistant/llm/llm.py", line 236, in prompt
    response = self.chat(stream, **kwargs)
  File "/usr/local/lib/python3.13/dist-packages/sunfounder_voice_assistant/llm/llm.py", line 199, in chat
    response = requests.post(self.url, headers=headers, data=json.dumps(data), stream=stream)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
           ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 700, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.50.20', port=11434): Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7006dbe0>: Failed to establish a new connection: [Errno 111] Connection refused'))

What’s going on?

The code cannot connect to Ollama (connection refused).
Please check the following:

  1. Ensure Ollama is running (or the desktop application is open). Run ollama serve.
  2. If using a remote computer, enable “Expose to network” in the Ollama settings.
  3. Double-check that the IP address in the code (ip="...") matches the correct LAN IP address.
  4. Confirm that both devices are connected to the same local network.
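A quick way to confirm item 1 programmatically is to check whether anything is accepting connections on Ollama's default port. This is a minimal sketch, assuming only the default port 11434; run it on the Pi with the same IP the code uses:

```python
import socket

def ollama_reachable(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if something accepts TCP connections on host:port.

    11434 is Ollama's default API port. A 'connection refused' like the
    one in the traceback means this check fails for that host.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If `ollama_reachable("192.168.50.20")` returns False, the problem is on the server side (Ollama not running, not exposed to the network, or a wrong IP), not in the Python code on the Pi.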

Done all this, still not working

We suggest you switch to a different network and then try running the test again.

Is your Ollama model installed on the Raspberry Pi system, or on a PC?

If it’s installed on the Raspberry Pi system, please follow the tutorial and run the commands below:

# Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

# Pull a lightweight model (good for testing)

ollama pull llama3.2:3b

# Quick run test (type 'hi' and press Enter)

ollama run llama3.2:3b

Then, in the test_llm_ollama.py code, modify it to:

llm = Ollama(ip="localhost", model="llama3.2:3b")  # you can replace with any model

Save the changes and run the code again.
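If the model runs in the terminal but you want to sanity-check what the library sends, the /api/chat endpoint from the traceback takes a JSON body along these lines. This is a minimal sketch of Ollama's chat request shape; the exact extra fields the sunfounder library adds may differ:

```python
import json

def build_chat_request(model: str, prompt: str, stream: bool = True):
    """Build the URL and JSON body for Ollama's /api/chat endpoint,
    the same endpoint the traceback shows the library POSTing to."""
    url = "http://localhost:11434/api/chat"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }
    return url, json.dumps(body)
```

With `ip="localhost"` no key or network configuration is involved; once `ollama serve` is up, you can POST this with `requests.post(url, data=body)` to test the endpoint directly.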

Not sure what you mean by change network? I only have one wifi network. Also, all of the other AI models work fine, so I don’t think it is the network

My Ollama model was originally installed on the PC, but now I have tried installing on the Raspberry Pi (4).

It installs okay, but the quick run test doesn’t work - it just hangs with the cursor rotating, until it times out with Error 500

Cheers, Greg C

That's quite a big model. Check that you're not running out of memory/RAM. How much RAM does your Pi have, and what size is the SD card? Try increasing swap, using zram tools, etc.

SPF650 is correct.

What is the current specification of the SD card you are using?
We recommend switching to an SD card with 32GB or more capacity.
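To put a number on the available storage rather than eyeballing it, a quick check from Python (this reports SD card space, which is a separate constraint from RAM):

```python
import shutil

def free_space_gib(path: str = "/") -> float:
    """Free space (in GiB) on the filesystem containing `path`.

    On a Raspberry Pi this is usually the SD card's root partition.
    """
    return shutil.disk_usage(path).free / 2**30
```

Bear in mind that a 3B model is roughly a 2 GB download, so a 32 GB card has room; running out of RAM while the model executes is the more common failure.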

I already have a 32GB SD card - looks like plenty of memory available

Just wondering - do I need a private key for Ollama? ChatGPT and Gemini both needed keys.

Ollama does not require you to configure an API key. Unlike ChatGPT and Gemini, it runs locally, so you can run or call it directly without any credentials.

We recommend reinstalling the Ollama model:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:3b
Then run a test:
ollama run llama3.2:3b

Did you check the RAM while running the Ollama quick test, or separately? I would expect to see it drop to close to zero. In any case, increasing swap to, say, 8 GB should tell you.

I use the vim editor in the following, but swap in your favourite text editor.

# Increase swap space (allocate more if needed) and enable zram compression

sudo apt install zram-tools

sudo dphys-swapfile swapoff

# Edit CONF_SWAPSIZE to increase swap as needed; I use 8g on a 4GB Pi 4

sudo vim /etc/dphys-swapfile

# And raise the same limit (CONF_MAXSWAP) in this file; again I use 8g

sudo vim /sbin/dphys-swapfile

sudo dphys-swapfile setup

sudo dphys-swapfile swapon

sudo reboot now

Tried this, didn’t work - looks like some sort of error

I tried reinstalling - no difference. The test just hangs and nothing happens until I manually terminate or the connection resets.

And yes, I tried checking the RAM while running the quick test - maybe this is the problem:

It hung at the end

So I guess I need to work out how to increase memory - as per previous message, the zram thing didn’t work

Any help appreciated

We recommend trying a smaller model to see if it resolves the issue.

ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b

We also suggest using an SD card with more than 32GB of storage to see if it allows the model to function properly.
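For context on why the smaller model helps: a back-of-envelope estimate of what a quantized model's weights alone occupy. The 0.5 bytes-per-parameter figure assumes roughly 4-bit quantization, typical of Ollama's default tags; real usage is higher once context and runtime overhead are added:

```python
def model_weights_gib(params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Rough RAM needed just for a quantized model's weights.

    Q4 quantization is about 0.5 bytes per parameter. Treat the result
    as a lower bound: KV cache and runtime overhead come on top.
    """
    return params_billion * 1e9 * bytes_per_param / 2**30
```

By this estimate llama3.2:3b wants ~1.4 GiB for weights alone and deepseek-r1:1.5b about half that, which is why the 1.5B model fits a 4 GB Pi 4 with headroom while the 3B model leaves little room for anything else.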

Post deleted as I didn't fully read all your messages. Seems like you have a few issues here. I don't have the skills to address them remotely without a lot more debug information; for example, depending on the OS version, swap may be managed by systemd services, etc.

Okay that seemed to work - it is super slow though. So must be a memory issue trying to run the models on a Raspberry Pi 4

Thanks for your help - looks like a memory issue after all.


Thank you for closing the loop. It can really help others experiencing similar issues.

The slow AI interaction speed is likely related to the Raspberry Pi model being used.

For AI model interaction, we recommend using a more powerful Raspberry Pi 5 for optimal performance.