A better walkthrough of the training data collection process? #127

CT83 opened this issue Apr 16, 2018 · 3 comments

CT83 commented Apr 16, 2018

Could you please give me a clearer overview of how the training data was collected?
A brief rundown of your tech stack for streaming, and of the Python files that were used, would greatly help.
I am currently using

```sh
raspivid -n -t 0 -rot 270 -w 960 -h 720 -fps 30 -b 6000000 -o - | gst-launch-1.0 -e -vvvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! udpsink host=192.168.1.2 port=5000
```

to stream video.

Then I view it using GStreamer:

```sh
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false
```

I am also trying Hamuchiwa's approach, with no luck yet. Which collection method would be better?

The streaming itself seems to work fine, but I don't know how I would pipe/send it to OpenCV.
A clearer explanation of your method would greatly help me.
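
For reference, this is the kind of thing I have been trying in order to get the stream into OpenCV (a rough, untested sketch: it assumes OpenCV was built with GStreamer support, and the pipeline string just mirrors the gst-launch viewing command above):

```python
import cv2

# Rough sketch: pull the RTP/H.264 stream into OpenCV through a
# GStreamer pipeline. Assumes OpenCV was compiled with GStreamer
# support; the pipeline mirrors the gst-launch-1.0 viewing command
# above but ends in an appsink so the frames land in Python.
pipeline = (
    "udpsrc port=5000 ! application/x-rtp, payload=96 ! "
    "rtpjitterbuffer ! rtph264depay ! avdec_h264 ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink sync=false"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Could not open GStreamer pipeline")

while True:
    ok, frame = cap.read()  # frame is a regular numpy BGR image
    if not ok:
        break
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```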

RyanZotti (Owner) commented
It’s late at night on the West Coast right now, so I’m just going to post a short comment for now.

Very short answer: my data collection process really sucks (because it’s unintuitive) and I’m currently refactoring it based on best practices from here: http://docs.donkeycar.com/guide/get_driving/. If you scroll down about halfway on that page you’ll see a really slick mobile web UI that you can use to gather training data. The full refactoring will most likely take another 5-6 weeks. I highly recommend looking at the donkeycar repo in general.

The long answer is sort of in the main readme of my repo, but if that’s insufficient then a better answer will take me too long to type on my phone right now. I’ll reply back to this issue with better instructions once I’ve finished the refactoring.

CT83 (Author) commented Apr 16, 2018

Okay, I am now reading more into the code and the readme. Respond with a thumbs up so I know I am going in the right direction.

  1. We install FFmpeg to stream video from the RPi to the PC.
  2. We use stream_mjpeg_video.py and display the stream with the OpenCV streamer from util.py to confirm it is working.
  3. Then save_streaming_video_data.py saves the streamed video on the local PC while the server logs the keystrokes.
  4. We copy them to a single location.
  5. Then we cross-reference the saved images' timestamps with the saved keystrokes' timestamps to make clean training data (see the sketch after this list).
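
In case it helps clarify what I mean by step 5, here is a rough sketch of the cross-referencing I have in mind (the function name, input format, and the 100 ms tolerance are my own assumptions, not code from the repo):

```python
import bisect

# Hypothetical sketch of step 5: pair each saved frame with the
# keystroke whose timestamp is closest, within a tolerance.
def match_frames_to_keys(frame_times, key_events, tolerance=0.1):
    """frame_times: sorted list of frame timestamps (seconds).
    key_events: time-sorted list of (timestamp, key) tuples.
    Returns (frame_time, key) pairs no more than `tolerance` apart."""
    key_times = [t for t, _ in key_events]
    pairs = []
    for ft in frame_times:
        i = bisect.bisect_left(key_times, ft)
        # The nearest keystroke is one of the two neighbors of the
        # insertion point; compare whichever of them exist.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(key_times)]
        if not candidates:
            continue
        best = min(candidates, key=lambda j: abs(key_times[j] - ft))
        if abs(key_times[best] - ft) <= tolerance:
            pairs.append((ft, key_events[best][1]))
    return pairs

# Example: frames at 0.00/0.05/0.10 s, a 'w' keypress logged at 0.06 s
print(match_frames_to_keys([0.00, 0.05, 0.10], [(0.06, 'w')]))
# -> [(0.0, 'w'), (0.05, 'w'), (0.1, 'w')]
```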

CT83 (Author) commented Apr 17, 2018

Not really relevant here, but adding it in case someone else stumbles on this problem: I was using the code here to capture the frames and stream them back to the server.
I had used

```python
for _ in camera.capture_continuous(stream, 'jpeg'):
```

instead of

```python
for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
```

That was the source of the problem! Getting a smooth 20-30 FPS now!
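
For anyone who lands here, this is roughly what the fixed loop looks like in context (a minimal sketch of my setup; the resolution and framerate are just the values from my streaming command above):

```python
import io
import picamera

# Minimal sketch of the fixed capture loop. use_video_port=True makes
# picamera pull JPEG frames from the camera's video port (fast, suited
# to 20-30 FPS) instead of the slow still-image port.
with picamera.PiCamera(resolution=(960, 720), framerate=30) as camera:
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
        frame = stream.getvalue()  # one complete JPEG frame
        # ... send `frame` over the socket to the server here ...
        stream.seek(0)
        stream.truncate()
```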

Thanks a bunch for all your help, @RyanZotti!
