This is a Udacity Self-Driving Car NanoDegree project submission that uses the following to detect road lanes:
- Lens calibration
- Distortion correction
- Edge detection
- Colorspace transformation
- Perspective transformation
- Pixel detection and line fitting
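The final step above, pixel detection and line fitting, can be sketched in plain NumPy. This is a minimal illustration under assumed conventions, not the project's actual code (the function name and window bounds are hypothetical): lane pixels are assumed to already be isolated in a binary, perspective-warped image, and each near-vertical lane line is fit with a second-order polynomial `x = a*y**2 + b*y + c`.

```python
import numpy as np

def fit_lane_line(binary_warped, x_min, x_max):
    """Fit a second-order polynomial x = a*y^2 + b*y + c to the
    nonzero pixels of a binary warped image within an x window.

    binary_warped: 2D array where candidate lane pixels are nonzero.
    x_min, x_max: horizontal bounds of the search window.
    Returns the polynomial coefficients (a, b, c).
    """
    ys, xs = binary_warped.nonzero()           # coordinates of candidate pixels
    in_window = (xs >= x_min) & (xs < x_max)   # keep pixels inside the window
    # Fit x as a function of y, since lane lines are near-vertical
    # in the bird's-eye (perspective-transformed) view.
    return np.polyfit(ys[in_window], xs[in_window], 2)

# Synthetic example: a slightly curved lane line in a 720x1280 binary image.
img = np.zeros((720, 1280), dtype=np.uint8)
y = np.arange(720)
x = (2e-4 * y**2 + 0.05 * y + 300).astype(int)
img[y, x] = 1

a, b, c = fit_lane_line(img, 0, 640)
```

In the project pipeline a windowed search like this would run once per lane line, with the window either swept upward (sliding window) or centered on the previous frame's fit (linear window).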
Clone or fork this repository. The intended user is the Udacity evaluator for this project. The code is meant to be run in a Jupyter Notebook; open alf.ipynb for usage examples.
- `writeup.md` : writeup of the project for the Udacity evaluator; includes images of pipeline stages
- `output_images/project_video.mp4` : project video for submission
- `alf_*.py` files: Python code for the project
  - `alf_con.py` : main controller for sequencing pipeline stages
  - `alf_cam.py` : camera for calibration and distortion correction
  - `alf_enh.py` : enhancer for edge detection and HSV color transformation
  - `alf_war.py` : warper for perspective transformation
  - `alf_llg.py` : lane finders, sliding and linear window search areas, and lines to find and annotate lanes
  - `alf_hud.py` : heads-up display (simulated) for composing the final image with the lane area
  - `alf_utils.py` : logging and demonstration functions
- `alf.ipynb` : Jupyter notebook on project usage and demonstrations of important functions
- `output_images` folder:
  - `wup_*.png` : images used in the writeup
  - `project_video.mp4` : project video for submission
  - `challenge_video.mp4` : pipeline worked on the challenge video too
  - `harder_challenge_video.mp4` : pipeline failed on this harder challenge video
  - `out_*_video_.mp4` : output of videos processed in the `alf.ipynb` examples; prevents overwriting project videos for submission
  - `straight_lines1-undist.jpg` : image used to find and plot src points for the perspective transformation
- `sandbox.ipynb` : Jupyter notebook for testing code
- `sketch.drawio` : UML-ish sketch of component collaborations; requires diagrams.net to view; does not reflect the current state of the components
- `adv_lane_fine.log` : debug log of the project
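The perspective transformation driven by those src points is typically performed with OpenCV's `cv2.getPerspectiveTransform` and `cv2.warpPerspective`; the sketch below shows the underlying math in plain NumPy, solving for the 3x3 homography that maps four hand-picked source points to a rectangle for a bird's-eye view. All point values here are hypothetical examples, not the project's actual calibration.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping 4 src points to 4 dst
    points, with H[2,2] fixed to 1 (same idea as
    cv2.getPerspectiveTransform)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Two linear equations per point correspondence.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Hypothetical src points picked from an undistorted straight-lane image,
# mapped to a rectangle so the lane lines become vertical.
src = [(585, 460), (695, 460), (1127, 720), (203, 720)]
dst = [(320, 0), (960, 0), (960, 720), (320, 720)]

H = homography_from_points(src, dst)
```

Warping the whole image with `H` (or its inverse, to map detected lanes back onto the road) is what the warper component would then do frame by frame.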