
Model used #1

Open
linux-devil opened this issue May 24, 2018 · 6 comments

Comments

@linux-devil

Which model is used for traffic light perception?

@YannZyl
Owner

YannZyl commented May 27, 2018

A ResNet50-based R-FCN is used for traffic light detection in the Rectifier. In the Recognizer, for each traffic light detected by the Rectifier, an AlexNet-like CNN (5 convs + 2 FCs) is used to classify the traffic light type/color.
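For illustration only, here is a minimal sketch of such an AlexNet-like recognizer in Python/PyTorch, assuming 5 conv layers followed by 2 fully connected layers. The layer widths, input resolution, and class set are guesses for the sketch, not Apollo's actual values.

```python
import torch
import torch.nn as nn

class TrafficLightRecognizer(nn.Module):
    """AlexNet-like classifier: 5 convs + 2 FCs over a cropped traffic-light patch."""
    def __init__(self, num_classes: int = 4):  # e.g. red / yellow / green / off (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(inplace=True),  # assumes a 64x64 input crop
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Classify a single (dummy) 64x64 crop.
logits = TrafficLightRecognizer()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 4])
```

In the pipeline described above, the input would be a traffic-light crop produced by the R-FCN detector in the Rectifier stage.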

@linux-devil
Author

I am also confused about their lane prediction step. From what I understand going through the code, they use CNN segmentation, pass road patches for lane detection, and then use curve fitting to find and extend the lane lines. Could you clarify the lane detection and prediction steps?

@YannZyl
Owner

YannZyl commented May 27, 2018

Actually, CNN segmentation is only used for obstacle segmentation in obstacle perception. Lane detection and prediction in the prediction and planning modules are achieved by querying the HD map. The HD map stores all lane information, such as the lane id, the lane's accumulated length (accumulated_s), neighbor lanes, the left boundary, and the right boundary. A lane is split into n small straight segments, and each segment contains a start point, start accumulated_s, end point, end accumulated_s, unit direction, and so on.
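For illustration only, a minimal Python sketch of the lane record described above; the field names are simplified assumptions, and the real HD map schema (Apollo's protobuf definitions) carries many more attributes.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaneSegment:
    start_point: Tuple[float, float]     # segment start (x, y)
    end_point: Tuple[float, float]       # segment end (x, y)
    start_s: float                       # accumulated_s at segment start
    end_s: float                         # accumulated_s at segment end
    unit_direction: Tuple[float, float]  # normalized heading of the segment

@dataclass
class Lane:
    lane_id: str
    total_length: float                  # accumulated_s at the lane's end
    segments: List[LaneSegment] = field(default_factory=list)
    successor_ids: List[str] = field(default_factory=list)
    left_neighbor_ids: List[str] = field(default_factory=list)
    right_neighbor_ids: List[str] = field(default_factory=list)
    # Left/right boundaries could be stored as point lists; omitted for brevity.
```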

Here is an example from the prediction module. If we want to predict an obstacle's trajectory into the future (e.g. the next 100 m), we can do the following steps:
Step 1: According to the obstacle's position, query the HD map for the information of the lane the obstacle is currently in (e.g. it is now in 'lane_A'; the projection point's coordinates in 'lane_A'; the accumulated_s of the projection point; the relative distance to the projection point).
Step 2: Calculate the remaining length (lane_A's total length minus the projection point's accumulated_s). If the remaining length is greater than or equal to 100 m, the next trajectory is [accumulated_s, accumulated_s + 100], and we can also sample some path points within this lane interval. If the remaining length is less than 100 m, we find lane_A's successor lane and clip a second interval with length 100 m minus (lane_A's total length minus the projection point's accumulated_s).

So lane detection, extension, and prediction all depend tightly on the HD map.
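To make the two steps concrete, here is a minimal Python sketch that reuses the hypothetical Lane objects from the snippet above. `hdmap_lookup` and `predict_lane_intervals` are illustrative names standing in for an HD map query, not actual Apollo APIs; the 100 m horizon matches the example.

```python
from typing import Callable, List, Tuple

def predict_lane_intervals(hdmap_lookup: Callable[[str], "Lane"],
                           lane_id: str, s0: float,
                           horizon: float = 100.0) -> List[Tuple[str, float, float]]:
    intervals = []                 # list of (lane_id, start_s, end_s) to follow
    remaining = horizon
    current_id, s = lane_id, s0    # step 1: start from the projected point on lane_A
    while remaining > 0.0 and current_id is not None:
        lane = hdmap_lookup(current_id)        # query lane information from the HD map
        available = lane.total_length - s      # step 2: remaining length on this lane
        take = min(available, remaining)
        intervals.append((current_id, s, s + take))
        remaining -= take
        # If this lane is shorter than the horizon, continue on a successor lane
        # from accumulated_s = 0; stop if there is no successor.
        current_id = lane.successor_ids[0] if remaining > 0.0 and lane.successor_ids else None
        s = 0.0
    return intervals
```

Each returned tuple is a (lane_id, start_s, end_s) interval along which path points can then be sampled.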

@linux-devil
Author

Thank you this is really helpful!

@linux-devil
Author

Have you gone through this video from the recent Apollo meetup: https://www.youtube.com/watch?v=jiZhSIrmODk&t=7s ? They explain how a deep net is used for lane detection. Any thoughts?

@YannZyl
Owner

YannZyl commented May 30, 2018

Thank you very much for the reminder. I have noticed some improvements in version 2.5; almost all of the changes are in the perception module. Right now I am reading the planning module code; after finishing that, I'll review the perception module and update my repository. I'll be very happy to share my reading notes with you then.
