[Dataset] Autonomous Driving Open Dataset: KITTI Dataset (Visual Odometry/SLAM, 3D Object Detection)
Study: Artificial Intelligence(AI) / AI: Data Pipeline
2023. 12. 26. 18:19

💡 This post is a summary of the 'Autonomous Driving Open Dataset: KITTI Dataset (Visual Odometry, 3D Object Detection)'.
It covers the KITTI Dataset, one of the open sensor datasets for autonomous driving, so please refer to it as needed.

Introduction to the KITTI Dataset

Dataset Overview (KITTI dataset)

  • Datasets collected with a car driven around the mid-size city of Karlsruhe, in rural areas and on highways.
  • The dataset consists of data collected with a LiDAR and several cameras, together with labels for part of that data.
  • Created by: KIT (Karlsruhe Institute of Technology)
  • Official dataset link: https://www.cvlibs.net/datasets/kitti/

๋ฐ์ดํ„ฐ์…‹ ์†Œ๊ฐœ(KITTI Sensor)

์‚ฌ์šฉ ์„ผ์„œ ๋ชจ๋ธ๋ช…

  • Inertial Navigation System (GPS/IMU): OXTS RT 3003 (250Hz์˜ ๋น ๋ฅธ ์ฃผ๊ธฐ, cm๋‹จ์œ„์˜ ์˜ค์ฐจ๊ฐ€ ๋ฐœ์ƒํ•˜๊ธฐ์— ๋†’์€ ์„ฑ๋Šฅ)
  • Laserscanner: Velodyne HDL-64E (10๋งŒ๊ฐœ points/1sec ์ธก์ • ๊ฐ€๋Šฅ)
  • Camera (์ตœ๋Œ€ ์…”ํ„ฐ ์Šคํ”ผ๋“œ : 2ms Lidar์˜ ํŠธ๋ฆฌ๊ฑฐ์— ๋งž์ถ”์–ด ์ดฌ์˜)
    • Grayscale cameras, 1.4 Megapixels: Point Grey Flea 2 (FL2-14S3M-C)
    • Color cameras, 1.4 Megapixels: Point Grey Flea 2 (FL2-14S3C-C)
    • Varifocal lenses, 4-8 mm: Edmund Optics NT59-917 

๋ฐ์ดํ„ฐ์…‹ ๊ตฌ์„ฑ

  • Raw (unsynced+unrectified) and processed (synced+rectified) grayscale stereo sequences (0.5 Megapixels, stored in png format)
  • Raw (unsynced+unrectified) and processed (synced+rectified) color stereo sequences (0.5 Megapixels, stored in png format)
  • 3D Velodyne point clouds (100k points per frame, stored as binary float matrix)
  • 3D GPS/IMU data (location, speed, acceleration, meta information, stored as text file)
  • Calibration (Camera, Camera-to-GPS/IMU, Camera-to-Velodyne, stored as text file)
  • 3D object tracklet labels (cars, trucks, trams, pedestrians, cyclists, stored as xml file)
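
As a quick illustration of the storage formats listed above, each Velodyne scan can be read directly with NumPy, since it is a flat binary matrix of float32 values with four entries (x, y, z, reflectance) per point. The file path below is only an example; this is a minimal sketch, not part of the official devkit:

import numpy as np

# Each KITTI Velodyne scan is a flat binary file of float32 values,
# four per point: x, y, z, reflectance (in the LiDAR coordinate frame).
scan_path = "dataset/sequences/00/velodyne/000000.bin"  # example path
scan = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)

points = scan[:, :3]      # (N, 3) xyz coordinates in meters
reflectance = scan[:, 3]  # (N,) intensity values
print(points.shape, reflectance.min(), reflectance.max())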

KITTI Dataset Benchmarks

Visual Odometry / SLAM Evaluation 2012

22 stereo sequences with a total length of 39.2 km.

Ground truth: obtained by projecting the output of the GPS/IMU localization unit. Ground-truth poses are publicly provided for sequences 00-10, while sequences 11-21 are used for benchmark evaluation.

Directory structure

├── dataset
│   ├── poses
│   └── sequences
│       ├── 00
│       │   ├── image_0
│       │   ├── image_1
│       │   ├── image_2
│       │   ├── image_3
│       │   └── velodyne
│       ├── 01
│       │   ├── image_0
│       │   ├── image_1
│       │   ├── image_2
│       │   ├── image_3
│       │   └── velodyne
│       ...
│       └── 21
│           ├── image_0
│           ├── image_1
│           ├── image_2
│           ├── image_3
│           └── velodyne
└── devkit
    └── cpp

๋ฐ์ดํ„ฐ์…‹ ๋‹ค์šด๋กœ๋“œ

Visual Odometry / SLAM Evaluation 2012 https://www.cvlibs.net/datasets/kitti/eval_odometry.php

#!/bin/bash
# Download every archive of the KITTI odometry benchmark, then extract them in place.
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/data_odometry_gray.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/data_odometry_color.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/data_odometry_velodyne.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/data_odometry_calib.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/data_odometry_poses.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/devkit_odometry.zip

# The quoted glob lets unzip expand '*.zip' itself and extract all archives.
unzip '*.zip'
  • Download odometry data set (grayscale, 22 GB)
  • Download odometry data set (color, 65 GB)
  • Download odometry data set (velodyne laser data, 80 GB)
  • Download odometry data set (calibration files, 1 MB)
  • Download odometry ground truth poses (4 MB)
  • Download odometry development kit (1 MB)
  • Lee Clement and his group (University of Toronto) have written some python tools for loading and parsing the KITTI raw and odometry datasets
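
The ground-truth pose files downloaded above are plain text: each line holds the 12 entries of a 3x4 [R|t] matrix (row-major), giving the pose of the left camera relative to the first frame of the sequence. A minimal loading sketch (the path is only an example):

import numpy as np

def load_poses(pose_file):
    """Read a KITTI odometry pose file into a list of 4x4 matrices."""
    poses = []
    with open(pose_file) as f:
        for line in f:
            vals = np.array(line.split(), dtype=np.float64)  # 12 floats per line
            T = np.eye(4)
            T[:3, :4] = vals.reshape(3, 4)                   # row-major [R|t]
            poses.append(T)
    return poses

poses = load_poses("dataset/poses/00.txt")                   # example path
positions = np.array([T[:3, 3] for T in poses])              # camera positions per frame
print(len(poses), "poses; approx. trajectory length:",
      np.linalg.norm(np.diff(positions, axis=0), axis=1).sum(), "m")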

์ถ”๊ฐ€ ์ž๋ฃŒ

  • [Blog] Exploring KITTI Dataset: Visual Odometry for autonomous vehicles: https://medium.com/@jaimin-k/exploring-kitti-visual-ododmetry-dataset-8ac588246cdc

3D Object Detection

3D bounding boxes are provided for objects such as cars, vans, trucks, pedestrians, cyclists, and trams.

Because the objects were annotated in the 3D point clouds produced by the Velodyne system and then projected into the images, the labels are accurate.

Directory structure

kitti
├── ImageSets
│   ├── train.txt
│   └── val.txt
├── training
│   ├── calib
│   ├── image_2
│   ├── image_3
│   ├── label_2
│   └── velodyne
└── testing
    ├── calib
    ├── image_2
    ├── image_3
    └── velodyne
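
Once downloaded, the calib and velodyne folders above can be combined to project LiDAR points into the left color image (camera 2) using the P2, R0_rect, and Tr_velo_to_cam entries of the per-frame calibration files. The paths below are only examples; a minimal sketch:

import numpy as np

def read_object_calib(path):
    """Parse a KITTI object calibration file into P2, R0_rect, Tr_velo_to_cam."""
    calib = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, vals = line.split(":", 1)
            calib[key] = np.array(vals.split(), dtype=np.float64)
    P2 = calib["P2"].reshape(3, 4)                  # projection matrix of camera 2
    R0 = np.eye(4)
    R0[:3, :3] = calib["R0_rect"].reshape(3, 3)     # rectifying rotation
    Tr = np.eye(4)
    Tr[:3, :4] = calib["Tr_velo_to_cam"].reshape(3, 4)  # LiDAR -> camera transform
    return P2, R0, Tr

P2, R0, Tr = read_object_calib("kitti/training/calib/000000.txt")   # example path
pts = np.fromfile("kitti/training/velodyne/000000.bin",             # example path
                  dtype=np.float32).reshape(-1, 4)[:, :3]
pts_h = np.hstack([pts, np.ones((len(pts), 1))])     # homogeneous LiDAR coordinates
img_pts = (P2 @ R0 @ Tr @ pts_h.T).T                 # (N, 3) projective image coordinates
in_front = img_pts[:, 2] > 0                         # drop points behind the camera
uv = img_pts[in_front, :2] / img_pts[in_front, 2:3]  # pixel coordinates (u, v)
print(uv.shape)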

๋ฐ์ดํ„ฐ์…‹ ๋‹ค์šด๋กœ๋“œ

3D Object Detection Evaluation 2017: https://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d

#!/bin/bash
# Download the KITTI 3D object detection archives into "$1"/kitti.
BASE_DIR="$1"/kitti

mkdir -p "$BASE_DIR"

url_velodyne="https://s3.eu-central-1.amazonaws.com/avg-kitti/data_object_velodyne.zip"
url_calib="https://s3.eu-central-1.amazonaws.com/avg-kitti/data_object_calib.zip"
url_label="https://s3.eu-central-1.amazonaws.com/avg-kitti/data_object_label_2.zip"

# -c resumes interrupted downloads; -O writes each archive to an explicit path.
wget -c -O "$BASE_DIR"/data_object_velodyne.zip "$url_velodyne"
wget -c -O "$BASE_DIR"/data_object_calib.zip "$url_calib"
wget -c -O "$BASE_DIR"/data_object_label_2.zip "$url_label"
  • Download left color images of object data set (12 GB)
  • Download right color images, if you want to use stereo information (12 GB)
  • Download the 3 temporally preceding frames (left color) (36 GB)
  • Download the 3 temporally preceding frames (right color) (36 GB)
  • Download Velodyne point clouds, if you want to use laser information (29 GB)
  • Download camera calibration matrices of object data set (16 MB)
  • Download training labels of object data set (5 MB)
  • Download object development kit (1 MB) (including 3D object detection and bird's eye view evaluation code)
  • Download pre-trained LSVM baseline models (5 MB) used in Joint 3D Estimation of Objects and Scene Layout (NIPS 2011). These models are referred to as LSVM-MDPM-sv (supervised version) and LSVM-MDPM-us (unsupervised version) on the KITTI result tables.
  • Download reference detections (L-SVM) for training and test set (800 MB)
  • Qianli Liao (NYU) has put together code to convert from KITTI to PASCAL VOC file format (documentation included, requires Emacs).
  • Karl Rosaen (U.Mich) has released code to convert between KITTI, KITTI tracking, Pascal VOC, Udacity, CrowdAI and AUTTI formats.
  • Jonas Heylen (TRACE vzw) has released pixel accurate instance segmentations for all 7481 training images.
  • We thank David Stutz and Bo Li for developing the 3D object detection benchmark.
  • Koray Koca (TUM) has released conversion scripts to export LIDAR data to Tensorflow records.
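
For reference, each training label file (training/label_2/xxxxxx.txt) has one object per line with 15 space-separated fields: class, truncation, occlusion, alpha, the 2D box (left, top, right, bottom), the 3D box dimensions (height, width, length), the 3D location (x, y, z) in camera coordinates, and rotation_y. A minimal parsing sketch (the path is only an example):

def parse_kitti_label(path):
    """Parse a KITTI object label file into a list of dictionaries."""
    objects = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            objects.append({
                "type": fields[0],                               # e.g. Car, Pedestrian, Cyclist
                "truncated": float(fields[1]),
                "occluded": int(float(fields[2])),
                "alpha": float(fields[3]),                       # observation angle
                "bbox_2d": [float(v) for v in fields[4:8]],      # left, top, right, bottom (px)
                "dimensions": [float(v) for v in fields[8:11]],  # height, width, length (m)
                "location": [float(v) for v in fields[11:14]],   # x, y, z in camera coords (m)
                "rotation_y": float(fields[14]),
            })
    return objects

labels = parse_kitti_label("kitti/training/label_2/000000.txt")  # example path
print([obj["type"] for obj in labels])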

Calibration

Calibration Camera to Camera 

์นด๋ฉ”๋ผ Calibration ๊ด€๋ จ Metrix๋Š” ํ•ด๋‹น txt ํŒŒ์ผ(calib_cam_to_cam.txt)์„ ์ฐธ๊ณ ํ•˜์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค.

The file calib_cam_to_cam.txt contains parameters for 3 cameras.

  • S_0x: is the image size. You do not really need it for anything.
  • K_0x: is the intrinsics matrix. You can use it to create a cameraParameters object in MATLAB, but you have to transpose it, and add 1 to the camera center, because of MATLAB's 1-based indexing.
  • D_0x: are the distortion coefficients in the form [k1, k2, p1, p2, k3]. k1, k2, and k3 are the radial coefficients, and p1 and p2 are the tangential distortion coefficients.
  • R_0x and T_0x are the camera extrinsics. They seem to be a transformation from a common world coordinate system into each of the cameras' coordinate system.
  • S_rect_0x, R_rect_0x, and P_rect_0x are the parameters of the rectified images.

Given all that, here's what you should do.

  • Pick two cameras of the three, that you want to use.
  • Create a cameraParameters object for each camera using the intrinsics and the distortion parameters (K_0x and D_0x). Don't forget to transpose K and adjust for 1-based indexing.
  • From the extrinsics of the two cameras (R_0x's and T_0x's) compute the rotation and translation between the two cameras. (R and t).
  • Use the two cameraParameters objects together with R and t to create a stereoParameters object.
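
As a sketch of the last two steps (assuming, as described above, that R_0x and T_0x map points from the common reference frame into camera x), the relative rotation and translation between two cameras can be composed as follows. The R_02/T_02 values are taken from the calib_cam_to_cam.txt excerpt in the code section below, and camera 0 is used as the reference with identity extrinsics purely for illustration:

import numpy as np

# Extrinsics of camera 0 (reference, identity) and camera 2 (from calib_cam_to_cam.txt).
R_00, T_00 = np.eye(3), np.zeros(3)
R_02 = np.array([[9.999758e-01, -5.267463e-03, -4.552439e-03],
                 [5.251945e-03,  9.999804e-01, -3.413835e-03],
                 [4.570332e-03,  3.389843e-03,  9.999838e-01]])
T_02 = np.array([5.956621e-02, 2.900141e-04, 2.577209e-03])

def relative_pose(R_a, T_a, R_b, T_b):
    """Rotation/translation that maps camera-a coordinates into camera-b,
    given world-to-camera extrinsics of both cameras."""
    R_ab = R_b @ R_a.T
    t_ab = T_b - R_ab @ T_a
    return R_ab, t_ab

R_rel, t_rel = relative_pose(R_00, T_00, R_02, T_02)
print(R_rel)   # rotation between the two cameras
print(t_rel)   # translation (baseline roughly 6 cm along x)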

Calibration Camera to Camera (Code)

import numpy as np

# Values for camera 2 from KITTI calib_cam_to_cam.txt
# S_02: 1.392000e+03 5.120000e+02
# K_02: 9.597910e+02 0.000000e+00 6.960217e+02 0.000000e+00 9.569251e+02 2.241806e+02 0.000000e+00 0.000000e+00 1.000000e+00
# D_02: -3.691481e-01 1.968681e-01 1.353473e-03 5.677587e-04 -6.770705e-02
# R_02: 9.999758e-01 -5.267463e-03 -4.552439e-03 5.251945e-03 9.999804e-01 -3.413835e-03 4.570332e-03 3.389843e-03 9.999838e-01
# T_02: 5.956621e-02 2.900141e-04 2.577209e-03

# Intrinsics K_02 (3x3 camera matrix)
calibration_matrix = np.array([[9.597910e+02, 0.000000e+00, 6.960217e+02],
                               [0.000000e+00, 9.569251e+02, 2.241806e+02],
                               [0.000000e+00, 0.000000e+00, 1.000000e+00]])

# Extrinsic rotation R_02 (3x3) and translation T_02 (3,)
R = np.array([[9.999758e-01, -5.267463e-03, -4.552439e-03],
              [5.251945e-03,  9.999804e-01, -3.413835e-03],
              [4.570332e-03,  3.389843e-03,  9.999838e-01]])

T = np.array([5.956621e-02, 2.900141e-04, 2.577209e-03])

# 3x4 [R|t] matrix combining rotation and translation
R_T = np.hstack([R, T.reshape(3, 1)])

# Focal lengths and principal point taken from the intrinsics
fx = calibration_matrix[0, 0]
ox = calibration_matrix[0, 2]
fy = calibration_matrix[1, 1]
oy = calibration_matrix[1, 2]
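
As a short usage note, the intrinsics extracted above follow the pinhole model, so a 3D point (X, Y, Z) in the camera frame projects to u = fx*X/Z + ox, v = fy*Y/Z + oy. The example point below is made up for illustration:

# Project an example 3D point (camera coordinates, meters) with the intrinsics above.
X, Y, Z = 2.0, 1.0, 10.0   # made-up point in front of the camera
u = fx * X / Z + ox
v = fy * Y / Z + oy
print(u, v)                # pixel coordinates on the image plane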

References

  • [Blog] How to download the KITTI dataset for SLAM (KITTI Odometry) quickly: https://www.cv-learn.com/20240304-fast-kitti-download/
  • [Blog] 3D Object Detection with Open3D-ML and PyTorch Backend: https://medium.com/@kidargueta/3d-object-detection-with-open3d-ml-and-pytorch-backend-b0870c6f8a85
๋ฐ˜์‘ํ˜•
์ €์ž‘์žํ‘œ์‹œ ๋น„์˜๋ฆฌ ๋ณ€๊ฒฝ๊ธˆ์ง€

'Study: Artificial Intelligence(AI) > AI: Data Pipeline' ์นดํ…Œ๊ณ ๋ฆฌ์˜ ๋‹ค๋ฅธ ๊ธ€

[Data] Python ์ด๋ฏธ์ง€ ์—ฌ๋ฐฑ ์ง€์šฐ๊ธฐ (numpy, mask, ...)  (0) 2024.04.30
[Data] Segmentation ๋ฐ์ดํ„ฐ ์••์ถ• ์•Œ๊ณ ๋ฆฌ์ฆ˜: Run Length Encoding(RLE) - coco mask to rle์™€ rle to mask ๊ฒ€์ฆ๊นŒ์ง€  (0) 2024.02.28
[Deploy] ONNX: ๋‹ค๋ฅธ DNN ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„ ๋ชจ๋ธ ํ˜ธํ™˜ ํฌ๋ฉง(pytorch, tensorflow, TensorRT, ...)  (1) 2024.02.14
[Dataset] Autonomous Driving Open Dataset: nuScenes Dataset(+ nuImages, nuPlan, Occupancy, nuReality)  (1) 2023.12.26
[Dataset] Autonomous Driving Open Dataset: Various Datasets  (0) 2023.09.09
    'Study: Artificial Intelligence(AI)/AI: Data Pipeline' ์นดํ…Œ๊ณ ๋ฆฌ์˜ ๋‹ค๋ฅธ ๊ธ€
    • [Data] Segmentation ๋ฐ์ดํ„ฐ ์••์ถ• ์•Œ๊ณ ๋ฆฌ์ฆ˜: Run Length Encoding(RLE) - coco mask to rle์™€ rle to mask ๊ฒ€์ฆ๊นŒ์ง€
    • [Deploy] ONNX: ๋‹ค๋ฅธ DNN ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„ ๋ชจ๋ธ ํ˜ธํ™˜ ํฌ๋ฉง(pytorch, tensorflow, TensorRT, ...)
    • [Dataset] Autonomous Driving Open Dataset: nuScenes Dataset(+ nuImages, nuPlan, Occupancy, nuReality)
    • [Dataset] Autonomous Driving Open Dataset: Various Datasets
    DrawingProcess
    DrawingProcess
    ๊ณผ์ •์„ ๊ทธ๋ฆฌ์ž!

    ํ‹ฐ์Šคํ† ๋ฆฌํˆด๋ฐ”