LAV

Original repo: https://github.com/dotchen/LAV

Original paper: Learning from All Vehicles (CVPR 2022), a minimalistic stack for joint perception, prediction, planning, and control for end-to-end self-driving.

This fork serves only as reference code for comparing methods. For more discussion, please see the discussions section: https://github.com/Kin-Zhang/carla-expert/discussions

Getting Started

  • To run CARLA and train the models, make sure you are using a machine with at least a mid-range GPU.

  • Please follow INSTALL.md to setup the environment.

    1. Clone the repo (with submodules) and install git-lfs

      git clone --recurse-submodules git@github.com:Kin-Zhang/LAV.git
      curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
      sudo apt-get install git-lfs
      cd LAV && git lfs pull   # fetch LFS-tracked files such as model weights
    2. Create the conda environment and install PyTorch with CUDA 11.3 (see the sanity check after this list)

      conda env create -f environment.yaml
      conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
      conda install pytorch-scatter -c pyg
  • We also release our LAV dataset. Download the dataset HERE. [The data-collection code has not been released yet. I had planned to collect the labels myself, but there are far too many of them, so it is better to wait for the authors to release theirs...]
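
A quick way to verify the environment before training (a minimal sketch; the file name and checks are illustrative, not part of the repo):

    # sanity_check.py -- hypothetical helper to verify the conda environment
    import torch
    import torch_scatter  # installed via `conda install pytorch-scatter -c pyg`

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())  # should print True with cudatoolkit=11.3
    print("torch-scatter:", torch_scatter.__version__)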

Training

We adopt an LBC-style staged privileged distillation framework. Please refer to TRAINING.md for more details.

A condensed version of the training stages:

  1. Privileged Motion Planning

    python -m lav.train_bev
  2. Semantic Segmentation

    python -m lav.train_seg
  3. RGB Braking Prediction

    python -m lav.train_bra
  4. Point Painting: use the segmentation model trained above to augment the dataset with painted semantics (see the sketch after this list)

    python -m lav.data_paint
  5. Perception Pre-training

    python -m lav.train_full --perceive-only
  6. End-to-end Training

    python -m lav.train_full
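
For intuition, point painting projects each LiDAR point into the camera image and appends the segmentation scores at that pixel to the point features. Below is a minimal NumPy sketch of the idea; the function and argument names are illustrative, not the actual lav.data_paint implementation:

    import numpy as np

    def paint_points(points, seg_scores, intrinsics, lidar_to_cam):
        """Append per-point semantic scores to LiDAR points (illustrative sketch).

        points:       (N, 4) LiDAR points [x, y, z, intensity]
        seg_scores:   (C, H, W) per-class scores from the segmentation net
        intrinsics:   (3, 3) camera matrix
        lidar_to_cam: (4, 4) LiDAR-to-camera transform
        """
        C, H, W = seg_scores.shape
        # Transform points into the camera frame (homogeneous coordinates).
        xyz1 = np.concatenate([points[:, :3], np.ones((len(points), 1))], axis=1)
        cam = (lidar_to_cam @ xyz1.T)[:3]                     # (3, N)
        # Project onto the image plane.
        uvw = intrinsics @ cam
        z = uvw[2].clip(min=1e-6)                             # avoid division by zero
        u = (uvw[0] / z).astype(int)
        v = (uvw[1] / z).astype(int)
        # Keep points in front of the camera that land inside the image.
        valid = (cam[2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        painted = np.zeros((len(points), C), dtype=np.float32)
        painted[valid] = seg_scores[:, v[valid], u[valid]].T  # (M, C)
        return np.concatenate([points, painted], axis=1)      # (N, 4 + C)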

Evaluation

We additionally provide example trained weights in the weights folder if you would like to directly evaluate. They are trained on Town01, 03, 04, and 06. Make sure you are launching CARLA with the -vulkan flag.
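
For example, a typical server launch (assuming CARLA 0.9.11 is installed at the CARLA_ROOT configured in the script below):

    cd ${CARLA_ROOT}
    ./CarlaUE4.sh -vulkan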

Run ./leaderboard/scripts/run_evaluation.sh; its contents can be modified as follows:

#!/bin/bash

# Change these two paths =====
export CARLA_ROOT=/home/kin/CARLA
export LAV=/home/kin/lav

export LEADERBOARD_ROOT=${LAV}/leaderboard
export SCENARIO_RUNNER_ROOT=${LAV}/scenario_runner
export PYTHONPATH=$PYTHONPATH:"${CARLA_ROOT}/PythonAPI/carla/":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.11-py3.7-linux-x86_64.egg
export TEAM_AGENT=${LAV}/team_code/lav_agent.py
export TEAM_CONFIG=${LAV}/team_code/config.yaml

export SCENARIOS=${LEADERBOARD_ROOT}/data/all_towns_traffic_scenarios_public.json
export ROUTES=${LEADERBOARD_ROOT}/data/routes_devtest.xml
export REPETITIONS=1
export CHECKPOINT_ENDPOINT=results.json
export DEBUG_CHALLENGE=0
export CHALLENGE_TRACK_CODENAME=SENSORS
export RECORD_PATH=   # optional: recording output path; leave empty to disable
export RESUME=        # optional: set to resume an interrupted evaluation

python3 ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py \
--scenarios=${SCENARIOS}  \
--routes=${ROUTES} \
--repetitions=${REPETITIONS} \
--track=${CHALLENGE_TRACK_CODENAME} \
--checkpoint=${CHECKPOINT_ENDPOINT} \
--agent=${TEAM_AGENT} \
--agent-config=${TEAM_CONFIG} \
--debug=${DEBUG_CHALLENGE} \
--record=${RECORD_PATH} \
--resume=${RESUME}

Use ROUTES=assets/routes_lav_valid.xml to run our ablation routes, or ROUTES=leaderboard/data/routes_valid.xml for the validation routes provided by the leaderboard.
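
Note that the script exports ROUTES itself, so change that line in run_evaluation.sh rather than overriding it from the shell. For example, to run the ablation routes (the checkpoint name here is arbitrary, chosen so earlier results are not overwritten):

    export ROUTES=${LAV}/assets/routes_lav_valid.xml
    export CHECKPOINT_ENDPOINT=results_ablation.json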

Acknowledgements

We thank Tianwei Yin for the pillar generation code. The ERFNet code is taken from the official ERFNet repo.

License

This repo is released under the Apache 2.0 License (please refer to the LICENSE file for details).
