Select the camera type:
'PICAM': Raspberry Pi Camera (CSI)
'WEBCAM': USB camera
'CVCAM': OpenCV camera (often the same as WEBCAM)
'CSIC': High-speed CSI camera (e.g. Arducam)
'D435': Intel RealSense D435
'OAKD': Luxonis OAK-D
'MOCK': Simulation/testing, or when using GPS path following
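In myconfig.py this is a single assignment. A minimal sketch, assuming the standard Donkeycar setting names (CAMERA_TYPE plus the usual image-size settings, which may be named differently in your template):

```python
# myconfig.py camera selection -- names assumed from the standard Donkeycar template
CAMERA_TYPE = "PICAM"   # one of: PICAM | WEBCAM | CVCAM | CSIC | D435 | OAKD | MOCK

# Typical companion settings (example values; adjust to your camera):
IMAGE_W = 160           # capture width in pixels
IMAGE_H = 120           # capture height in pixels
IMAGE_DEPTH = 3         # 3 channels (RGB)
```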

Are you using a PCA9685 servo driver board?
Enable the SSD1306 OLED display (small screen on the car).

The type of controller being used. Options: 'ps3', 'ps4', 'xbox', 'nimbus', 'wiiu', 'F710', 'rc3', 'MM1' (use for the RC Hat), 'custom'

These options specify which chassis and motor setup you are using. See the Actuators documentation https://docs.donkeycar.com/parts/actuators/ for a detailed explanation of each drive train type and its configuration. Choose one of the following and then update the related configuration section:
"PWM_STEERING_THROTTLE" uses two PWM output pins to control a steering servo and an ESC, as in a standard RC car.
"MM1" Robo HAT MM1 board or the Donkeycar RC Hat (https://www.diyrobocars.com/product/rc-hat/)
"SERVO_HBRIDGE_2PIN" servo for steering and an H-bridge motor driver in 2-pin mode for the motor.
"SERVO_HBRIDGE_3PIN" servo for steering and an H-bridge motor driver in 3-pin mode for the motor.
"DC_STEER_THROTTLE" uses H-bridge PWM to control one steering DC motor and one drive-wheel motor.
"DC_TWO_WHEEL" uses an H-bridge in 2-pin mode to control two drive motors, one on the left and one on the right.
"DC_TWO_WHEEL_L298N" uses an H-bridge in 3-pin mode to control two drive motors, one on the left and one on the right.
"MOCK" no drive train. This can be used to test other features in a test rig.
"VESC" VESC motor controller to set servo angle and duty cycle.
Select your drive train configuration
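For example, a standard two-PWM-pin RC car setup might look like this in myconfig.py. This is a sketch: the PWM_STEERING_THROTTLE sub-settings and pin specifiers mirror the standard Donkeycar template but are assumptions here, and the pulse values must come from your own calibration:

```python
DRIVE_TRAIN_TYPE = "PWM_STEERING_THROTTLE"

# Sub-configuration for PWM_STEERING_THROTTLE (hypothetical pins/values; calibrate for your car)
PWM_STEERING_THROTTLE = {
    "PWM_STEERING_PIN": "PCA9685.1:40.1",   # steering servo channel
    "PWM_STEERING_SCALE": 1.0,
    "PWM_STEERING_INVERTED": False,
    "PWM_THROTTLE_PIN": "PCA9685.1:40.0",   # ESC channel
    "PWM_THROTTLE_SCALE": 1.0,
    "PWM_THROTTLE_INVERTED": False,
    "STEERING_LEFT_PWM": 460,               # pulse for full left (from calibration)
    "STEERING_RIGHT_PWM": 290,              # pulse for full right
    "THROTTLE_FORWARD_PWM": 500,            # pulse for max forward
    "THROTTLE_STOPPED_PWM": 370,            # pulse for zero throttle
    "THROTTLE_REVERSE_PWM": 220,            # pulse for max reverse
}
```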

The default AI framework to use
The DEFAULT_MODEL_TYPE will choose which model will be created at training time. This chooses between different neural network designs. You can override this setting by passing the command line parameter --type to the python manage.py train and drive commands.

--'linear': Standard regression (predicts steering/throttle floats).
--'categorical': Classification (bins steering/throttle into categories).
--'resnet18': A heavier PyTorch ResNet-18 model.

TensorFlow models: (linear|categorical|tflite_linear|tensorrt_linear)
PyTorch models: (resnet18)
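A minimal sketch of choosing the framework and model type, plus the command-line override described above (DEFAULT_AI_FRAMEWORK is an assumed name based on "The default AI framework to use"):

```python
DEFAULT_AI_FRAMEWORK = "tensorflow"   # assumed name; or "pytorch"
DEFAULT_MODEL_TYPE = "linear"         # linear | categorical | tflite_linear | tensorrt_linear | resnet18

# Or override at train time without editing the config, e.g.:
#   python manage.py train --type categorical ...
```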
How many records to use when doing one pass of gradient descent. Use a smaller number if your GPU is running out of memory.
What percent of records to use for training. The remaining used for validation.
How many times to visit all records of your data (the number of epochs)
Would you like to see a pop up display of final loss?
Would you like to see a progress bar with text during training?
Would you like to stop training early if the fit stops improving?
How many epochs to wait without improvement before stopping
The minimum change in loss required for early stopping to count it as an improvement
Print layers and weights to stdout
'adam', 'sgd', 'rmsprop', etc.; None uses the model's default
Only used when OPTIMIZER specified
Only used when OPTIMIZER specified
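The training settings above can be sketched as a block in myconfig.py. The names mirror the standard Donkeycar config template but should be verified against your own file; the values are typical defaults, not recommendations:

```python
BATCH_SIZE = 128            # records per gradient-descent pass; lower if the GPU runs out of memory
TRAIN_TEST_SPLIT = 0.8      # 80% of records for training, the rest for validation
MAX_EPOCHS = 100            # how many times to visit all records
SHOW_PLOT = True            # pop up a display of final loss
VERBOSE_TRAIN = True        # progress bar with text during training
USE_EARLY_STOP = True       # stop if the fit is not improving
EARLY_STOP_PATIENCE = 5     # epochs to wait without improvement
MIN_DELTA = 0.0005          # loss must change this much to count as improved
PRINT_MODEL_SUMMARY = True  # print layers and weights to stdout
OPTIMIZER = None            # 'adam', 'sgd', 'rmsprop'; None uses the model default
LEARNING_RATE = 0.001       # only used when OPTIMIZER is set
LEARNING_RATE_DECAY = 0.0   # only used when OPTIMIZER is set
```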
How to store images during training: 'ARRAY' (faster), 'BINARY', or 'NOCACHE' (saves RAM).
Automatically create TFLite model for faster inference on Pi.
Automatically create a TensorRT model during training.
Whether the legacy Keras format should be used instead of SavedModel.
Set to True to automatically send the best model to the car during training.
Enable model pruning to remove weights and increase inference speed.

For the categorical model, this limits the upper bound of the learned throttle. It is very IMPORTANT that this value matches between the training PC's config.py and the robot's config, and ideally it should not change once set.
Number of images in a sequence for RNN/3D models.
Model transfer options: when copying weights during a model transfer operation, should we freeze a certain number of layers at the incoming weights and not allow them to change during training?
The default, False, allows all layers to be modified by training
When freezing layers, how many of the final layers should remain trainable?
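A sketch of the transfer-learning options, assuming the standard setting names (FREEZE_LAYERS, NUM_LAST_LAYERS_TO_TRAIN; verify against your template):

```python
FREEZE_LAYERS = False          # default False: all layers may be modified by training
NUM_LAST_LAYERS_TO_TRAIN = 7   # when freezing, how many of the final layers stay trainable
```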
Settings for brightness and blur: Use 'MULTIPLY' and/or 'BLUR' in AUGMENTATIONS
This is interpreted as [-0.2, 0.2]
Blur range for augmentation (kernel size).
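A sketch of enabling these augmentations. AUGMENTATIONS, 'MULTIPLY', and 'BLUR' come from the text above; the range setting names and value formats are assumptions and differ between Donkeycar versions, so treat the values as placeholders:

```python
AUGMENTATIONS = ["MULTIPLY", "BLUR"]   # brightness and blur augmentation
AUG_MULTIPLY_RANGE = (0.5, 1.5)        # placeholder brightness-multiplier range
AUG_BLUR_RANGE = (0.0, 3.0)            # placeholder blur kernel-size range
```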
The number of rows of pixels to ignore on the top of the image
The number of rows of pixels to ignore on the bottom of the image
The number of columns of pixels to ignore on the right of the image
The number of columns of pixels to ignore on the left of the image
Pixel positions
Pixel positions
Pixel positions
Pixel positions
Pixel positions
Pixel positions
Canny edge detection low threshold value of intensity gradient
Canny edge detection high threshold value of intensity gradient
Canny edge detect aperture in pixels, must be odd; choices=[3, 5, 7]
Blur kernel horizontal size in pixels
Blur kernel vertical size in pixels or None for square kernel
Blur is gaussian if True, simple if False
Horizontal size in pixels
Vertical size in pixels
Horizontal scale factor
Vertical scale factor or None to maintain aspect ratio
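The crop settings above can be sketched as a block of pixel counts. The setting names here (TRANSFORMATIONS, ROI_CROP_*) are assumptions modeled on the standard Donkeycar template; the values are examples:

```python
TRANSFORMATIONS = ["CROP"]   # which image transformations to apply, e.g. CROP, CANNY, BLUR, RESIZE, SCALE
ROI_CROP_TOP = 45      # rows of pixels ignored at the top of the image
ROI_CROP_BOTTOM = 0    # rows ignored at the bottom
ROI_CROP_RIGHT = 0     # columns ignored on the right
ROI_CROP_LEFT = 0      # columns ignored on the left
```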

ODOMETRY: Set to True if you have an encoder/odometer installed.
LIDAR: Set to True if you have a LIDAR (RPLidar or YDLidar).
TFMINI: Short range laser radar.
IMU: Inertial Measurement Unit (e.g. MPU6050).
SOMBRERO HAT: Enable if using the Sombrero Hat.
LEDS: RGB Status LED configuration.

The vehicle loop will pause if it runs faster than this rate (Hz).
The vehicle loop can abort after this many iterations, when given a positive integer.

Show the image the pilot sees (with overlays) in the web UI.
Scale all AI throttle output by this multiplier.
When racing, configure these values to give the AI a boost.
The AI will output throttle for this many seconds
The AI will output this throttle value
This keypress arms the boost. It must be armed before each use to prevent accidental triggering.
When False (the default) you must press the AI_LAUNCH_ENABLE_BUTTON before each use; this is safest. When True, the boost is active on every trip into "local" AI mode.
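A sketch of the AI throttle and launch settings. AI_LAUNCH_ENABLE_BUTTON is named in the text; the other names mirror the standard template and the values are placeholders:

```python
AI_THROTTLE_MULT = 1.0            # scale all AI throttle output by this multiplier
AI_LAUNCH_DURATION = 1.0          # seconds the boost throttle is held (placeholder)
AI_LAUNCH_THROTTLE = 2.0          # throttle value during the boost (placeholder)
AI_LAUNCH_ENABLE_BUTTON = "R2"    # keypress that arms the boost (hypothetical button name)
AI_LAUNCH_KEEP_ENABLED = False    # False: re-arm before each use (safest)
```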
When training the behavioral neural network model, make a list of the behaviors, set TRAIN_BEHAVIORS = True, and use BEHAVIOR_LED_COLORS to give each behavior a color.
Behavior Cloning: Train different driving behaviors (e.g. lanes) based on state.
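A sketch of the behavior settings. TRAIN_BEHAVIORS and BEHAVIOR_LED_COLORS are named in the text above; BEHAVIOR_LIST is an assumed name, and the behavior names and colors are hypothetical examples:

```python
TRAIN_BEHAVIORS = True
BEHAVIOR_LIST = ["Left_Lane", "Right_Lane"]      # hypothetical behavior names, e.g. one per lane
BEHAVIOR_LED_COLORS = [(0, 10, 0), (10, 0, 0)]   # one RGB color per behavior
```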
Localizer
The localizer is a neural network that can learn to predict its location on the track. This is an experimental feature that needs more development, but it can currently be used to predict which segment of the course the car is on, where the course is divided into NUM_LOCATIONS segments.
Localizer: Experimental location prediction.

The path will be saved to this filename
The path display will be scaled by this factor in the web page
(255, 255) is the center of the map. This offset controls where the origin is displayed.
After travelling this distance (m), save a path point
Proportional multiplier for the PID path follower
Integral multiplier for the PID path follower
Derivative multiplier for the PID path follower
Constant throttle value during path following
Whether to use the constant throttle or the variable throttle captured during path recording
Joystick button to save path
Joystick button to press to move car back to origin
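The path-following settings above can be sketched as a block. The names mirror the standard Donkeycar template but are assumptions here; the PID gains and button names are placeholders that must be tuned and mapped to your controller:

```python
PATH_FILENAME = "donkey_path.pkl"   # hypothetical filename for the saved path
PATH_SCALE = 5.0                    # display scale factor in the web page
PATH_OFFSET = (255, 255)            # origin offset; (255, 255) is the map center
PATH_MIN_DIST = 0.3                 # meters traveled between saved path points
PID_P = 0.8                         # proportional gain (placeholder; tune on your track)
PID_I = 0.0                         # integral gain (placeholder)
PID_D = 0.2                         # derivative gain (placeholder)
PID_THROTTLE = 0.2                  # constant throttle during path following
USE_CONSTANT_THROTTLE = False       # False: replay the throttle captured while recording
SAVE_PATH_BTN = "cross"             # joystick button to save the path (hypothetical)
RESET_ORIGIN_BTN = "triangle"       # joystick button to reset the car to origin (hypothetical)
```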

Type: boolean

Automatically record data when throttle is > 0 (Standard training data collection).
Normally we do not record during AI mode. Set this to True to record image and steering data while the AI drives. Be careful not to train on these records.
Create a new tub directory (tub_YY_MM_DD) when recording, or append records directly to the data directory
Console logging settings.
(Python logging level) 'NOTSET' / 'DEBUG' / 'INFO' / 'WARNING' / 'ERROR' / 'FATAL' / 'CRITICAL'
(Python logging format - https://docs.python.org/3/library/logging.html#formatter-objects)
Type: boolean
Type: boolean
Type: text_input

Type: boolean

Only on Ubuntu Linux, you can use the simulator as a virtual donkey: issue the same python manage.py drive command as usual, but have it control a virtual car. This setting enables that and sets the path to the simulator and the environment. Download the simulator binary from DonkeySimLinux.zip, extract it, and modify DONKEY_SIM_PATH.
Settings for connecting to the Donkey Gym Unity simulator.
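A sketch of the simulator settings. DONKEY_SIM_PATH is named in the text; DONKEY_GYM and the environment name are assumed names, and the path is a placeholder pointing at wherever you extracted DonkeySimLinux.zip:

```python
DONKEY_GYM = True                                      # drive the virtual car instead of hardware
DONKEY_SIM_PATH = "/home/pi/DonkeySimLinux/donkey_sim.x86_64"   # placeholder path to the extracted binary
DONKEY_GYM_ENV_NAME = "donkey-generated-track-v0"      # hypothetical gym environment name
```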

Type: text_input
Type: text_input

The port for the web server (default 8887).
Initial mode on startup:
'user': Human control
'local_angle': AI steering, human throttle
'local': AI steering and throttle
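A sketch of the web control settings, assuming the standard names (WEB_CONTROL_PORT, WEB_INIT_MODE; verify against your template):

```python
WEB_CONTROL_PORT = 8887   # port for the web server
WEB_INIT_MODE = "user"    # 'user' | 'local_angle' | 'local'
```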

Upload Existing Configuration

Upload your existing myconfig.py to populate the form with your current settings