The PIDOG project is fantastic, and the build has been smooth.
However, I’m disappointed with the latency on ChatGPT. When I asked, “Who are you?” the response took 25 seconds.
I see no noticeable difference in latency between GPT-4o-mini and GPT-3.5-turbo.
I’m using a Raspberry Pi 3 on LAN, with no firewall or VPN.
I’m wondering what steps I can take to improve the latency.
What results can I expect with a Raspberry Pi 4 and a USB 3 SSD?
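If it helps, here is roughly how I've been timing things to see which stage dominates. The `fake_chat` function below is just a hypothetical stand-in for the real API call; on the PiDog I wrap the OpenAI request and the TTS step the same way:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print its wall-clock time, and return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result

# Hypothetical placeholder for the real ChatGPT request; swap in the
# actual API call (and the TTS step) to see where the 25 seconds go.
def fake_chat(prompt):
    time.sleep(0.1)  # stands in for network + model time
    return f"echo: {prompt}"

reply = timed("chat", fake_chat, "Who are you?")
```

Timing each stage separately (network round trip, model response, speech synthesis) should show whether a Pi 4 would actually help, or whether the wait is mostly on the API side.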
Please share your experience.
Thank you!