CARLA: a car simulator set in 3D outdoor towns. A well-documented package; see the main docs.

Installation

  • Download the latest release from the documentation site
  • Alternatively, build from source (less convenient)
  • Additionally, build ImageConverter from source to convert the depth/segmentation images
  • Libraries required for the build:
sudo apt-get install libboost-all-dev
sudo apt-get install libpng-dev libtiff-dev libomp-dev
sudo apt-get install clang-3.8
sudo ln -s /usr/bin/clang++-3.8 /usr/bin/clang++

Usage

CARLA is a server <-> client based framework.

Manual simulator

Try out the simulator with keyboard control:

./CarlaUE4.sh /Game/Maps/Town01 or ./CarlaUE4.sh /Game/Maps/Town02

Automatic generation of images

  • Start the CARLA server: ./CarlaUE4.sh /Game/Maps/Town01 -carla-server -benchmark -fps=15 -windowed -ResX=640 -ResY=480 --carla-settings="~/sims/carla/Example.CarlaSettings.ini"
  • -carla-server to start server mode
    • -benchmark -fps=15 to synchronize client and server for data retrieval (RGB, depth, segmentation)
  • -windowed -ResX=640 -ResY=480 to specify the display window (affects only the window, not the camera resolution of the saved images)
  • -carla-settings=...ini to specify the configuration file:
  • Server-based configurations: host, port, synchronous mode
  • Level-based configurations: number of vehicles, pedestrians, weather, etc.
  • Sensor-based configurations: cameras and their sensor data (together with camera parameters such as ImageSizeX, ImageSizeY); see the Python sketch after this list
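These settings can also be set programmatically through the Python CarlaSettings wrapper used by client_example.py (full listing further below). A minimal sketch covering the server-, level- and sensor-based groups; the camera name, image size and placement values are chosen purely for illustration:

from carla.sensor import Camera
from carla.settings import CarlaSettings

# Server- and level-based settings: synchronization, traffic density, weather preset.
settings = CarlaSettings()
settings.set(
    SynchronousMode=True,
    SendNonPlayerAgentsInfo=True,
    NumberOfVehicles=20,
    NumberOfPedestrians=40,
    WeatherId=1)

# Sensor-based settings: one RGB camera attached to the player vehicle.
camera = Camera('CameraRGB')        # name is illustrative
camera.set_image_size(720, 512)     # pixels
camera.set_position(170, 0, 150)    # centimeters relative to the car
settings.add_sensor(camera)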

Example from stereo_rgb_depth_segmentation.ini

[CARLA/SceneCapture]
PostProcessing=SceneFinal
Cameras=CameraStereoLeft/RGB,CameraStereoLeft/Depth,CameraStereoLeft/SemanticSegmentation,CameraStereoRight/RGB,CameraStereoRight/Depth,CameraStereoRight/SemanticSegmentation
ImageSizeX=720
ImageSizeY=512
CameraFOV=90
[CARLA/SceneCapture/CameraStereoLeft]
CameraPositionX=170
CameraPositionY=-30
CameraPositionZ=150
CameraRotationPitch=0
CameraRotationRoll=0
CameraRotationYaw=0
[CARLA/SceneCapture/CameraStereoLeft/RGB]
PostProcessing=SceneFinal
[CARLA/SceneCapture/CameraStereoLeft/Depth]
PostProcessing=Depth
; Add semantics
[CARLA/SceneCapture/CameraStereoLeft/SemanticSegmentation]
PostProcessing=SemanticSegmentation
[CARLA/SceneCapture/CameraStereoRight]
CameraPositionX=170
CameraPositionY=30
CameraPositionZ=150
CameraRotationPitch=0
CameraRotationRoll=0
CameraRotationYaw=0
[CARLA/SceneCapture/CameraStereoRight/RGB]
PostProcessing=SceneFinal
[CARLA/SceneCapture/CameraStereoRight/Depth]
PostProcessing=Depth
; Add semantics
[CARLA/SceneCapture/CameraStereoRight/SemanticSegmentation]
PostProcessing=SemanticSegmentation
  • Start the CARLA client: ./client_example.py --images-to-disk --carla-settings ~/sims/carla/Example.CarlaSettings.ini --autopilot
  • -i, --images-to-disk to save the sensor images to disk
  • --carla-settings=...ini to specify the configuration file. Use the same file as for the server.
  • --autopilot to enable preconfigured on-road navigation
  • -h, --help to list all parameters (a condensed sketch of the client flow follows below)
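For orientation, here is the client flow condensed from client_example.py (full listing below). This is only a sketch: error handling, measurement printing and the random choice of start spot are omitted, and the output path is simplified.

from carla.client import make_carla_client

with make_carla_client('localhost', 2000) as client:
    # Load the same .ini file that was passed to the server.
    with open('Example.CarlaSettings.ini', 'r') as fp:
        scene = client.load_settings(fp.read())
    client.start_episode(0)  # index of the chosen player start spot
    for frame in range(300):
        measurements, sensor_data = client.read_data()
        for name, image in sensor_data.items():
            image.save_to_disk('_images/{:s}/image_{:0>5d}.png'.format(name, frame))
        # Autopilot mode: echo back the control computed by the in-game autopilot.
        client.send_control(measurements.player_measurements.autopilot_control)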

Resulting images

Images are saved to a folder following a predefined filename pattern (see client_example.py).
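The pattern is the image_filename_format string passed to run_carla_client in main() of client_example.py; for example:

image_filename_format = '_images/episode_{:0>3d}/{:s}/image_{:0>5d}.png'
print(image_filename_format.format(0, 'CameraStereoLeft/RGB', 127))
# _images/episode_000/CameraStereoLeft/RGB/image_00127.png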

RGB

(image: image_00127_rgb, 720x512)

Depth (encoded)

(image: image_00127_depth, 720x512)

Semantic Segmentation (encoded)

(image: image_00127segmentation, 720x512)

Converted images

Use ./bin/image_converter to convert the depth and segmentation images: ./bin/image_converter -c semseg -i ~/sims/carla/PythonClient/_images/episode_000/CameraStereoLeft/SemanticSegmentation/ -o ~/sims/carla/PythonClient/_images/episode_000/CameraStereoLeft/SegmConverted

./bin/image_converter -c logdepth -i ~/sims/carla/PythonClient/_images/episode_000/CameraStereoLeft/Depth/ -o ~/sims/carla/PythonClient/_images/episode_000/CameraStereoLeft/DepthConverted/
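The depth images can also be decoded directly in Python instead of using image_converter. A minimal sketch, assuming CARLA's standard depth encoding (R + G*256 + B*256^2) / (256^3 - 1) with a 1000 m far plane; the log scaling used for visualization is an illustrative choice and need not match image_converter's exact output:

import numpy as np
from PIL import Image

def decode_depth(path):
    """Return per-pixel depth in meters from an encoded CARLA depth PNG."""
    rgb = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    normalized = (r + g * 256.0 + b * 256.0 * 256.0) / (256.0 ** 3 - 1)
    return 1000.0 * normalized  # far plane assumed at 1000 m

def to_log_grayscale(depth_m):
    """Map depth in meters to an 8-bit log-scaled grayscale image for viewing."""
    log_depth = np.log1p(depth_m)
    log_depth = 255.0 * log_depth / log_depth.max()
    return Image.fromarray(log_depth.astype(np.uint8))

# Example (hypothetical file names):
# to_log_grayscale(decode_depth('image_00127.png')).save('image_00127_logdepth.png')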

RGB

(image: image_00127_rgb, 720x512)

Depth (logDepth from meters)

(image: image_00127, 720x512)

Semantic Segmentation (labeled)

(image: image_00127, 720x512)

#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
"""Basic CARLA client example."""
from __future__ import print_function
import argparse
import logging
import random
import time
from carla.client import make_carla_client
from carla.sensor import Camera
from carla.settings import CarlaSettings
from carla.tcp import TCPConnectionError
from carla.util import print_over_same_line


def run_carla_client(host, port, autopilot_on, save_images_to_disk, image_filename_format, settings_filepath):
    # Here we will run 3 episodes with 300 frames each.
    number_of_episodes = 3
    frames_per_episode = 300

    # We assume the CARLA server is already waiting for a client to connect at
    # host:port. To create a connection we can use the `make_carla_client`
    # context manager, it creates a CARLA client object and starts the
    # connection. It will throw an exception if something goes wrong. The
    # context manager makes sure the connection is always cleaned up on exit.
    with make_carla_client(host, port) as client:
        print('CarlaClient connected')

        for episode in range(0, number_of_episodes):
            # Start a new episode.

            if settings_filepath is None:
                # Create a CarlaSettings object. This object is a wrapper around
                # the CarlaSettings.ini file. Here we set the configuration we
                # want for the new episode.
                settings = CarlaSettings()
                settings.set(
                    SynchronousMode=True,
                    SendNonPlayerAgentsInfo=True,
                    NumberOfVehicles=20,
                    NumberOfPedestrians=40,
                    WeatherId=random.choice([1, 3, 7, 8, 14]))
                settings.randomize_seeds()

                # Now we want to add a couple of cameras to the player vehicle.
                # We will collect the images produced by these cameras every
                # frame.

                # The default camera captures RGB images of the scene.
                camera0 = Camera('CameraRGB')
                # Set image resolution in pixels.
                camera0.set_image_size(800, 600)
                # Set its position relative to the car in centimeters.
                camera0.set_position(30, 0, 130)
                settings.add_sensor(camera0)

                # Let's add another camera producing ground-truth depth.
                camera1 = Camera('CameraDepth', PostProcessing='Depth')
                camera1.set_image_size(800, 600)
                camera1.set_position(30, 0, 130)
                settings.add_sensor(camera1)
            else:
                # Alternatively, we can load these settings from a file.
                with open(settings_filepath, 'r') as fp:
                    settings = fp.read()

            # Now we load these settings into the server. The server replies
            # with a scene description containing the available start spots for
            # the player. Here we can provide a CarlaSettings object or a
            # CarlaSettings.ini file as string.
            scene = client.load_settings(settings)

            # Choose one player start at random.
            number_of_player_starts = len(scene.player_start_spots)
            player_start = random.randint(0, max(0, number_of_player_starts - 1))

            # Notify the server that we want to start the episode at the
            # player_start index. This function blocks until the server is ready
            # to start the episode.
            print('Starting new episode...')
            client.start_episode(player_start)

            # Iterate every frame in the episode.
            for frame in range(0, frames_per_episode):
                # Read the data produced by the server this frame.
                measurements, sensor_data = client.read_data()

                # Print some of the measurements.
                print_measurements(measurements)

                # Save the images to disk if requested.
                if save_images_to_disk:
                    for name, image in sensor_data.items():
                        image.save_to_disk(image_filename_format.format(episode, name, frame))

                # We can access the encoded data of a given image as numpy
                # array using its "data" property. For instance, to get the
                # depth value (normalized) at pixel X, Y
                #
                #     depth_array = sensor_data['CameraDepth'].data
                #     value_at_pixel = depth_array[Y, X]
                #
                # Now we have to send the instructions to control the vehicle.
                # If we are in synchronous mode the server will pause the
                # simulation until we send this control.
                if not autopilot_on:
                    client.send_control(
                        steer=random.uniform(-1.0, 1.0),
                        throttle=0.5,
                        brake=0.0,
                        hand_brake=False,
                        reverse=False)
                else:
                    # Together with the measurements, the server has sent the
                    # control that the in-game autopilot would do this frame. We
                    # can enable autopilot by sending back this control to the
                    # server. We can modify it if wanted, here for instance we
                    # will add some noise to the steer.
                    control = measurements.player_measurements.autopilot_control
                    control.steer += random.uniform(-0.1, 0.1)
                    client.send_control(control)


def print_measurements(measurements):
    number_of_agents = len(measurements.non_player_agents)
    player_measurements = measurements.player_measurements
    message = 'Vehicle at ({pos_x:.1f}, {pos_y:.1f}), '
    message += '{speed:.2f} km/h, '
    message += 'Collision: {{vehicles={col_cars:.0f}, pedestrians={col_ped:.0f}, other={col_other:.0f}}}, '
    message += '{other_lane:.0f}% other lane, {offroad:.0f}% off-road, '
    message += '({agents_num:d} non-player agents in the scene)'
    message = message.format(
        pos_x=player_measurements.transform.location.x / 100,  # cm -> m
        pos_y=player_measurements.transform.location.y / 100,
        speed=player_measurements.forward_speed,
        col_cars=player_measurements.collision_vehicles,
        col_ped=player_measurements.collision_pedestrians,
        col_other=player_measurements.collision_other,
        other_lane=100 * player_measurements.intersection_otherlane,
        offroad=100 * player_measurements.intersection_offroad,
        agents_num=number_of_agents)
    print_over_same_line(message)


def main():
    argparser = argparse.ArgumentParser(description=__doc__)
    argparser.add_argument(
        '-v', '--verbose',
        action='store_true',
        dest='debug',
        help='print debug information')
    argparser.add_argument(
        '--host',
        metavar='H',
        default='localhost',
        help='IP of the host server (default: localhost)')
    argparser.add_argument(
        '-p', '--port',
        metavar='P',
        default=2000,
        type=int,
        help='TCP port to listen to (default: 2000)')
    argparser.add_argument(
        '-a', '--autopilot',
        action='store_true',
        help='enable autopilot')
    argparser.add_argument(
        '-i', '--images-to-disk',
        action='store_true',
        help='save images to disk')
    argparser.add_argument(
        '-c', '--carla-settings',
        metavar='PATH',
        default=None,
        help='Path to a "CarlaSettings.ini" file')
    args = argparser.parse_args()

    log_level = logging.DEBUG if args.debug else logging.INFO
    logging.basicConfig(format='%(levelname)s: %(message)s', level=log_level)
    logging.info('listening to server %s:%s', args.host, args.port)

    while True:
        try:
            run_carla_client(
                host=args.host,
                port=args.port,
                autopilot_on=args.autopilot,
                save_images_to_disk=args.images_to_disk,
                image_filename_format='_images/episode_{:0>3d}/{:s}/image_{:0>5d}.png',
                settings_filepath=args.carla_settings)
            print('Done.')
        except TCPConnectionError as error:
            logging.error(error)
            time.sleep(1)


if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print('\nCancelled by user. Bye!')
; Example of settings file for CARLA.
;
; Use it with `./CarlaUE4.sh -carla-settings=Path/To/This/File`.
[CARLA/Server]
; If set to false, a mock controller will be used instead of waiting for a real
; client to connect.
UseNetworking=true
; Ports to use for the server-client communication. This can be overridden by
; the command-line switch `-world-port=N`; write and read ports will be set to
; N+1 and N+2 respectively.
WorldPort=2000
; Time-out in milliseconds for the networking operations.
ServerTimeOut=10000
; In synchronous mode, CARLA waits every frame until the control from the client
; is received.
SynchronousMode=true
; Send info about every non-player agent in the scene every frame, the
; information is attached to the measurements message. This includes other
; vehicles, pedestrians and traffic signs. Disabled by default to improve
; performance.
SendNonPlayerAgentsInfo=false
[CARLA/LevelSettings]
; Path of the vehicle class to be used for the player. Leave empty for default.
; Paths follow the pattern "/Game/Blueprints/Vehicles/Mustang/Mustang.Mustang_C"
PlayerVehicle=
; Number of non-player vehicles to be spawned into the level.
NumberOfVehicles=15
; Number of non-player pedestrians to be spawned into the level.
NumberOfPedestrians=30
; Index of the weather/lighting presets to use. If negative, the default presets
; of the map will be used.
WeatherId=1
; Seeds for the pseudo-random number generators.
SeedVehicles=123456789
SeedPedestrians=123456789
; [CARLA/SceneCapture]
; Names of the cameras to be attached to the player, comma-separated, each of
; them should be defined in its own subsection. E.g., Uncomment next line to add
; a camera called MyCamera to the vehicle
; Cameras=MyCamera
; Now, every camera we added needs to be defined in its own subsection.
; [CARLA/SceneCapture/MyCamera]
; Post-processing effect to be applied. Valid values:
; * None No effects applied.
; * SceneFinal Post-processing present at scene (bloom, fog, etc).
; * Depth Depth map ground-truth only.
; * SemanticSegmentation Semantic segmentation ground-truth only.
; PostProcessing=SceneFinal
; Size of the captured image in pixels.
; ImageSizeX=800
; ImageSizeY=600
; Camera (horizontal) field of view in degrees.
; CameraFOV=90
; Position of the camera relative to the car in centimeters.
; CameraPositionX=15
; CameraPositionY=0
; CameraPositionZ=123
; Rotation of the camera relative to the car in degrees.
; CameraRotationPitch=8
; CameraRotationRoll=0
; CameraRotationYaw=0
; Stereo setup example:
;
[CARLA/SceneCapture]
PostProcessing=SceneFinal
Cameras=CameraStereoLeft/RGB,CameraStereoLeft/Depth,CameraStereoLeft/SemanticSegmentation,CameraStereoRight/RGB,CameraStereoRight/Depth,CameraStereoRight/SemanticSegmentation
ImageSizeX=720
ImageSizeY=512
CameraFOV=90
[CARLA/SceneCapture/CameraStereoLeft]
CameraPositionX=170
CameraPositionY=-30
CameraPositionZ=150
CameraRotationPitch=0
CameraRotationRoll=0
CameraRotationYaw=0
[CARLA/SceneCapture/CameraStereoLeft/RGB]
PostProcessing=SceneFinal
[CARLA/SceneCapture/CameraStereoLeft/Depth]
PostProcessing=Depth
; Add semantics
[CARLA/SceneCapture/CameraStereoLeft/SemanticSegmentation]
PostProcessing=SemanticSegmentation
[CARLA/SceneCapture/CameraStereoRight]
CameraPositionX=170
CameraPositionY=30
CameraPositionZ=150
CameraRotationPitch=0
CameraRotationRoll=0
CameraRotationYaw=0
[CARLA/SceneCapture/CameraStereoRight/RGB]
PostProcessing=SceneFinal
[CARLA/SceneCapture/CameraStereoRight/Depth]
PostProcessing=Depth
; Add semantics
[CARLA/SceneCapture/CameraStereoRight/SemanticSegmentation]
PostProcessing=SemanticSegmentation