New Scripts for PiDog

I’ve been noodling with some custom Python scripts for the PiDog. I have quite a few now, so I thought I’d share them with anyone who’s interested. Some of them are just proofs of concept, so I’d really love some help fleshing them out, even if it’s just testing and giving feedback.

I’ve included a description of each module and my future plans for tweaks and upgrades.

The one I consider most complete is Voice Patrol. It runs a patrol routine that makes the PiDog wander, map its environment, and avoid obstacles dynamically. It also has voice control, with a growing list of commands, that stays active while the PiDog patrols. Voice control proved a little tricky to set up, and I’ve yet to write a proper installation guide for how I got it working, but stay tuned. For now, the other patrol modules offer the same functionality without voice commands, each with a different approach to navigation and obstacle detection.
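
Here’s a stripped-down sketch of the core patrol loop, using the stock pidog library’s `do_action()`, `read_distance()` (ultrasonic), and `wait_all_done()` calls. The 15 cm threshold, step counts, and speeds are just placeholder values I’ve been experimenting with, so tune them for your own setup:

```python
from pidog import Pidog
import time

my_dog = Pidog()
OBSTACLE_CM = 15  # placeholder threshold; tune for your room

try:
    while True:
        distance = my_dog.read_distance()  # ultrasonic range in cm (-1 on timeout)
        if 0 < distance < OBSTACLE_CM:
            # Something ahead: back off, then turn before continuing
            my_dog.do_action('backward', step_count=1, speed=80)
            my_dog.do_action('turn_left', step_count=2, speed=80)
        else:
            my_dog.do_action('forward', step_count=2, speed=80)
        my_dog.wait_all_done()
        time.sleep(0.1)
except KeyboardInterrupt:
    my_dog.close()
```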

There are other modules for facial and object recognition, still very much in the early stages.

Anyone who wants to, give them a look and tell me what you think. If you have any suggestions for other modules, please let me know and I’ll add them, or you can push your own commit if you like. My ultimate goal is to eventually have a master script that imports and calls the other modules as needed, for a truly autonomous, full-featured PiDog.
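
For that master script, I’m picturing a thin dispatcher that lazy-imports each behaviour module. The module names and the `main()` convention below are hypothetical until I settle on a layout:

```python
import importlib

# Hypothetical module names -- substitute the actual files from the repo
BEHAVIOURS = {
    'patrol': 'voice_patrol',
    'faces': 'face_recognition_module',
    'objects': 'object_recognition_module',
}

def run_behaviour(name, **kwargs):
    """Import a behaviour module on demand and hand control to it."""
    module = importlib.import_module(BEHAVIOURS[name])
    module.main(**kwargs)  # assumes each module exposes a main() entry point

if __name__ == '__main__':
    run_behaviour('patrol')
```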

Wanna help?

Wow, amazing!

You can refer to PiDog’s GPT example, where it patrols while listening for voice commands. Upon detecting an instruction, it stops, performs the corresponding action and gives its response, then resumes patrolling.
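
That pattern boils down to a patrol loop gated by an event flag: the listener sets the flag, the dog stops and handles the command, then the flag clears and patrol resumes. A minimal sketch, where `perform()` is a placeholder for whatever command handling you use:

```python
import time
import threading

pause_patrol = threading.Event()

def patrol_loop(dog):
    while True:
        if pause_patrol.is_set():
            time.sleep(0.1)  # idle while a command is being handled
            continue
        dog.do_action('forward', step_count=1, speed=80)
        dog.wait_all_done()

def on_voice_command(dog, command):
    pause_patrol.set()      # pause patrolling
    dog.body_stop()         # halt any in-flight action
    perform(dog, command)   # placeholder: act and respond
    pause_patrol.clear()    # resume patrolling
```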

Additionally, to avoid interference from servo noise, you may need to adjust the sound detection threshold.
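
If the listening side uses the `speech_recognition` library (the GPT example may handle this differently), the relevant knob is `energy_threshold`:

```python
import speech_recognition as sr

r = sr.Recognizer()
r.dynamic_energy_threshold = False  # stop it re-calibrating to servo noise
r.energy_threshold = 3000           # raise until servo noise is ignored (default 300)

with sr.Microphone() as source:
    audio = r.listen(source)
```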

Check the example here:
👉 pidog/gpt_examples at master · sunfounder/pidog · GitHub

Brilliant, thanks for sharing.
I’ve shared some of my own code and scripts on this forum, for voice control etc.

Demonstrated here…

As with your own software, the joystick-control demo uses the IMU for gait adjustment to improve stability.
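
For anyone curious, the tilt estimate itself is cheap. Something like this works, assuming the stock library’s `accData` attribute (axis signs depend on how the IMU is mounted):

```python
import math
from pidog import Pidog

my_dog = Pidog()

def body_pitch(dog):
    """Estimate body pitch in degrees from the onboard accelerometer.

    Assumes dog.accData is the [ax, ay, az] list the pidog library
    keeps updated; swap axes/signs to match your IMU orientation.
    """
    ax, ay, az = dog.accData
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

pitch = body_pitch(my_dog)
print(f"pitch: {pitch:.1f} deg")  # feed this into your gait correction
```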

There’s also autonomous behaviour using ORB-SLAM3 plus AI object/hazard recognition. These data are fused with the camera extrinsic matrix to create a real-world (scaled) occupancy map, e.g. green = safe floor, red = hazard.

PiDog then uses this map for autonomous safe-route planning.
It’s partially shared on GitHub, with the full integration demonstrated on Facebook.
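
For anyone reading along, the core fusion step (as I understand this approach) is back-projecting each labelled pixel through the camera model onto the floor plane and marking the corresponding occupancy cell. A rough numpy sketch with placeholder calibration values — substitute your real intrinsics/extrinsics:

```python
import math
import numpy as np

# Placeholder calibration -- use your real calibrated values
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])  # intrinsic matrix
theta = math.radians(20)         # downward camera tilt
R = np.array([[0.0, -math.sin(theta),  math.cos(theta)],
              [-1.0, 0.0, 0.0],
              [0.0, -math.cos(theta), -math.sin(theta)]])  # camera-to-world rotation
t = np.array([0.0, 0.0, 0.12])   # camera position in metres (12 cm up)

def pixel_to_floor(u, v):
    """Back-project pixel (u, v) onto the z=0 floor plane; world coords in metres."""
    ray = R @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    s = -t[2] / ray[2]           # scale so the ray reaches z = 0
    return t + s * ray

# Coarse occupancy grid: 10 m x 10 m at 10 cm cells, 1 = hazard, 0 = free
grid = np.zeros((100, 100), dtype=np.uint8)
x, y, _ = pixel_to_floor(320, 400)  # e.g. a pixel the detector flagged as hazard
grid[int(y / 0.1) + 50, int(x / 0.1) + 50] = 1
```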

I’ll download yours and have a play around to compare notes and read through your methodologies! Again many thanks.

Awesome! I love your work. I’m not familiar with ROS at all but I’ll take a look at this today. Vision-based SLAM is on my wishlist, thanks for sharing.

Your ChatGPT method requires paying for OpenAI credits, lol. It’s much more satisfying to find a free method, or to host your own chatbot. I don’t understand why the only method shown in the documentation for AI integration is also the most expensive option out there.
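
For the self-hosted route, something like Ollama gives you a local HTTP endpoint with no credits involved. A minimal sketch, assuming Ollama is installed and a model has been pulled (expect it to be slow on a Pi with anything but small models):

```python
import requests

def ask_local_llm(prompt, model='llama3.2'):
    """Query a locally hosted model via Ollama's REST API -- no cloud credits."""
    resp = requests.post(
        'http://localhost:11434/api/generate',
        json={'model': model, 'prompt': prompt, 'stream': False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()['response']

print(ask_local_llm('Give PiDog a one-line greeting.'))
```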
