Hi,
I have been having problems with the quality of the camera included in the kit. Its low-light performance has been poor even in well-lit areas: everything it views looks dark and shadowy. I tried taking it outside and that helped a little, but the main issue is that it keeps faces dark, making it increasingly difficult for facial recognition to work properly. Is this a defect, or something that can be tweaked with code?
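To show what I mean by "tweaked with code", this is the kind of adjustment I was hoping is possible. It is just a sketch driving the camera directly with Picamera2; I don't know whether vilib sets the camera up this way, so run it on its own (with vilib stopped) and treat the control values as guesses:

# Sketch: ask the camera for a brighter picture via libcamera controls.
# Run standalone; another program holding the camera will conflict.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()

picam2.set_controls({
    "AeEnable": True,       # keep auto-exposure on
    "ExposureValue": 1.0,   # +1 EV: ask auto-exposure for a brighter target
    "Brightness": 0.2,      # post-processing lift, range -1.0 to 1.0
})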
My second problem is with the fps. When I run:
Vilib.camera_start(vflip=False, hflip=False)
Vilib.display(local=True, web=True)
The most fps I have gotten out of it ranges from 3 to 15 fps. However, when I view the camera through the app, the playback is near perfect.
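To try to separate the capture rate from the display, I pieced together a rough fps check with plain OpenCV (a sketch; cv2.VideoCapture(0) assumes the camera shows up as /dev/video0, which I haven't verified on this stack):

# Sketch: measure raw capture fps with no display and no detection.
import time
import cv2

cap = cv2.VideoCapture(0)   # assumes the camera is /dev/video0
frames = 0
start = time.time()
while time.time() - start < 5.0:   # sample for 5 seconds
    ok, _ = cap.read()
    if ok:
        frames += 1
cap.release()
print(f"capture only: {frames / 5.0:.1f} fps")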
Is there a way to fix this issue or is this also potentially related to a defect in the camera?
I am using a Raspberry Pi 3B+ and was on the legacy OS, but switched to the 64-bit version because mediapipe would not install. This change did not seem to affect the quality or the fps.
I am new to Raspberry Pi, robots built on Raspberry Pi, and Python. I purchased this robot after seeing reviews say it was for beginners and experienced users alike. I am the beginner, and I need help understanding these problems, not just the answers themselves.
Thank you for your time,
ScienceKid41
Assuming you're somehow remote-accessing the PiDog? Then I'd take a guess (please, anyone with more knowledge, correct me!): the app probably runs peer to peer, so it is only limited by your local intranet speed. Free VNC variants usually go via the supplier's systems, not direct, so they will also depend on your external internet speeds. Remote access methods like ssh -X are local, but very inefficient at graphics. If you can plug a keyboard and monitor directly into the PiDog, you may then get full fps. Just my best guesses. RealVNC, for example, has a paid version which allows peer to peer; maybe you can get a free trial version just to check?
Thank you for replying!
I tried plugging a keyboard and monitor into the PiDog like you mentioned, and it improved the display fps by a lot! I then tried running several other programs that use the camera, but to my dismay I ran into the same problem as before: the highest fps achieved was 2.7 fps.
This was running vilib's hands_detect.py, pose_detect.py, and object_detect.py programs.
I suppose my problem relates to the programs and models being run?
Does anybody know any solutions (and explanations) to this problem?
Thanks,
ScienceKid41
Those frame rates sound about right for those programs; they are intensive. What is your requirement? 2.7 fps is plenty for many real-world applications. We'd all like more, of course…
I looked around online a bit, and it seems 30 fps can be reached with a Raspberry Pi 3 running mediapipe. I'm not sure how the code behind the vilib examples works (if anybody knows how to access or view the code that the Vilib switches run, that would be awesome), but if there is a way to play with that code to reach a higher fps, I would be very interested.
Also, machine vision applications that do things similar to what the PiDog is capable of all seem to reach much higher fps.
Could you possibly leave links to applications that run at around 2-3 fps?
Here are some sites I found that mention much higher fps (at least 8-10 fps):
I am completely new to Raspberry Pi, building robots with Raspberry Pi, and Python, so I could be completely wrong about the fps that can be achieved on a Raspberry Pi. But from what I did find, machine vision at higher fps does seem possible on one. I don't know how the Vilib library works internally; I only know how to run basic examples from it. Any details on understanding the Vilib library beyond the already provided docs would be much appreciated.
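In case it helps anyone answer: I did find that Python can at least tell you where an installed package lives, so its source can be read directly (this works for any installed package, so I assume it works for vilib too):

# Sketch: locate the installed vilib source on disk so the code
# behind the switch functions can be opened and read.
import vilib
print(vilib.__file__)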
Thank you for your help,
ScienceKid41
I agree that the camera thread itself should be able to hit 30 fps. My own vision code hits that fps for the camera thread (I don't use vilib); however, my analysis software takes about 200 ms to execute, so my net frame-analysis rate is about 5 fps, i.e. I only use about every 6th frame from the thread (see the sketch below). That's just for my own code; I don't know about the linked examples. Please also note that my robot doesn't need to view the images on screen, that's just for me! So my max fps is only achieved without human visualisation. I'm also using ROS on the PiDog, which slows things down too.
This is of course just my own experience; there are lots of people out there with more experience than I.
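In case it's useful, the pattern I described looks roughly like this (a sketch, not my actual code; analyse() is a stand-in for whatever detection you run, and cv2.VideoCapture(0) assumes the camera appears as /dev/video0):

# Sketch: a capture thread grabs frames at full rate, while the main
# loop analyses only the most recent frame and drops the rest.
import threading
import time
import cv2

latest = {"frame": None}
lock = threading.Lock()

def capture_loop():
    cap = cv2.VideoCapture(0)   # assumes /dev/video0
    while True:
        ok, frame = cap.read()
        if ok:
            with lock:
                latest["frame"] = frame

def analyse(frame):
    time.sleep(0.2)   # stand-in for ~200 ms of real detection work

threading.Thread(target=capture_loop, daemon=True).start()

while True:   # Ctrl-C to stop
    with lock:
        frame = latest["frame"]
    if frame is not None:
        analyse(frame)   # net rate ~5 fps; intermediate frames are skipped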
Currently, the vilib program runs on a single core. You can improve the frame rate by using a thread pool to run frame processing across multiple cores. However, data exchange between cores incurs overhead, so the actual increase in frame rate might not be significant; the longer a single inference takes, the more pronounced the benefit of parallelising becomes. I have submitted a partially tested program to the multiPool branch; the code is not yet fully refined, but you can take a look here: multiPool branch.
Currently, the program has only undergone partial modifications and testing, and frame skipping has not been implemented: line 167.
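As a rough illustration of the thread-pool idea only (this is not the multiPool code; infer() here is a placeholder that just sleeps for 200 ms to imitate one inference):

# Sketch: several workers run inference concurrently. This only helps
# if the inference call releases the GIL, which the native OpenCV and
# mediapipe code does; the sleep below also releases it, so the demo
# shows the effect.
from concurrent.futures import ThreadPoolExecutor
import time

def infer(frame):
    time.sleep(0.2)   # placeholder for one ~200 ms model inference
    return frame

frames = range(30)    # placeholder for frames from the camera

with ThreadPoolExecutor(max_workers=4) as pool:
    start = time.time()
    results = list(pool.map(infer, frames))
print(f"{len(results) / (time.time() - start):.1f} fps with 4 workers")

Note that this queues every frame, so when inference is slower than capture the backlog grows; that is the frame skipping mentioned above that still needs to be implemented.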