Final Project: Autonomous Driving with Duckietown
Introduction
The main goal of this lab project is to perform autonomous lane following that adapts to the conditions of the road (pedestrians crossing, obstacles in the way, etc.) and to the goals of the driver (which turns to make, which parking stall to park in).
Overall, working with the Duckiebots is quite slow. The bots run many Docker containers, which greatly inhibits their performance. We found that many of these containers did not serve a particularly useful purpose for us, so we shut down the ones we did not need; for example, we shut off the container running Portainer. Then, instead of using the Duckietown Shell (dts) to build our programs, we simply used ssh to access the bot and developed our programs directly in a Docker container using catkin.
Furthermore, due to the significant overhead imposed by Python, we wrote this project in C++, which significantly improved the performance of the program running on the bot. C++ also provides more efficient, lower-level tools for controlling the Duckiebot than Python does. The libraries we need, such as ROS, cv_bridge, and apriltag detection, are all available in C++.
The project code consists of three directories:
Control_command_executer
This part defines a `LaneControl` class, used to control the robot based on its state; it directly drives the bot's wheels. The module is very lightweight and runs at 30Hz. When no control command is currently available, the class stops the bot. This ensures that the bot does not drive blindly if control commands stop arriving, or if other nodes have been shut down or are experiencing high latency.
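As an illustration, here is a minimal sketch of this watchdog loop. The class name, topic names, and timeout value are assumptions rather than the exact code, and we assume a WheelsCmdStamped-style message with vel_left/vel_right fields, as used throughout the snippets below:

#include <ros/ros.h>
#include <duckietown_msgs/WheelsCmdStamped.h> // assumed: message with vel_left/vel_right

// Hypothetical sketch: republish the latest command at 30Hz, and stop the
// wheels if commands stop arriving (e.g. an upstream node died or lags).
class CommandExecuter {
public:
  explicit CommandExecuter(ros::NodeHandle &nh)
      : pub_(nh.advertise<duckietown_msgs::WheelsCmdStamped>("wheels_cmd", 1)),
        sub_(nh.subscribe("control_cmd", 1, &CommandExecuter::cmdCallback, this)),
        last_cmd_time_(ros::Time::now()) {}

  void spin() {
    ros::Rate rate(30); // the module runs at 30Hz
    while (ros::ok()) {
      ros::spinOnce();
      // No recent command: command zero wheel velocities instead.
      if ((ros::Time::now() - last_cmd_time_).toSec() > 0.5) { // assumed timeout
        cmd_.vel_left = 0.0;
        cmd_.vel_right = 0.0;
      }
      pub_.publish(cmd_);
      rate.sleep();
    }
  }

private:
  void cmdCallback(const duckietown_msgs::WheelsCmdStamped &msg) {
    cmd_ = msg;
    last_cmd_time_ = ros::Time::now();
  }

  ros::Publisher pub_;
  ros::Subscriber sub_;
  duckietown_msgs::WheelsCmdStamped cmd_;
  ros::Time last_cmd_time_;
};

int main(int argc, char **argv) {
  ros::init(argc, argv, "control_command_executer");
  ros::NodeHandle nh;
  CommandExecuter node(nh);
  node.spin();
  return 0;
}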
Duckietown_msgs
This part defines the message types used in the project. It is included and used by the main part of the project.
Lane_control
This is the main node, which determines the control commands to execute in the different stages of the lab. It contains another `LaneControl` class, which implements two important pieces of functionality:
ImageCallback Method
In summary, this method takes in the image message, decides which state the bot is in, and then sets the control command values based on that state. The details of each state and its corresponding commands are provided in the following sections.
State Change
The control commands created here are forwarded to the `LaneControl` node in the `control_command_executer` package, which actually executes them.
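A minimal sketch of this forwarding (the method bodies and the publish_command helper are assumptions consistent with the rest of the report, not the exact code):

// Hypothetical sketch: change_state records the new state; the command
// computed for that state is then published to the executer node.
void LaneControl::change_state(int new_state) {
  CURRENT_STATE = new_state;
}

void LaneControl::publish_command(float vel_left, float vel_right) {
  duckietown_msgs::WheelsCmdStamped control_msg; // assumed message type
  control_msg.vel_left = vel_left;
  control_msg.vel_right = vel_right;
  control_pub.publish(control_msg); // consumed by control_command_executer
}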
Strategy for Stage 1: Apriltag Detection and Lane Following
Here, we discuss our strategy for both apriltag detection and lane following.
In order to complete both tasks, we grab the current image from the camera and preprocess this image by:
Converting the camera image to a `bgr` OpenCV image
Cropping it into regions of interest: the bottom of the frame for lane following, the top for apriltag detection, and a middle band for spotting the crashed bot
auto start = high_resolution_clock::now(); // std::chrono: time the callback
// Convert the ROS image message into a bgr8 OpenCV matrix.
cv::Mat img = cv_bridge::toCvShare(msg, "bgr8")->image;
// Bottom of the frame: used for lane following and duck detection.
cv::Mat cropped_image = img(cv::Range(230, 480), cv::Range(0, 640));
// Top of the frame: used for apriltag detection.
cv::Mat cropped_image_above = img(cv::Range(0, 250), cv::Range(110, 530));
// Middle band: used to look for the crashed bot.
cv::Mat crop_bot_detection = img(cv::Range(150, 250), cv::Range(150, 500));
Then, we use the apriltag library to detect the apriltags:
cv::Mat blackandwhite;
// The apriltag detector expects a single-channel grayscale image.
cv::cvtColor(cropped_image_above, blackandwhite, cv::COLOR_BGR2GRAY);
// Wrap the OpenCV buffer in the image_u8_t header the apriltag library uses.
image_u8_t img_header = {.width = blackandwhite.cols,
                         .height = blackandwhite.rows,
                         .stride = blackandwhite.cols,
                         .buf = blackandwhite.data};
zarray_t *detections = apriltag_detector_detect(LaneControl::td, &img_header);
Based on the tag id of the detected apriltag, we determine which state to transition to.
for (int i = 0; i < zarray_size(detections); i++) {
  apriltag_detection_t *det;
  zarray_get(detections, i, &det);
  if (CURRENT_STATE == PARKING_STATE) {
    // Track the horizontal position of the target stall's tag.
    if (int(det->id) == TARGET_PARK) {
      x_avg = std::abs((float(det->p[0][0]) + float(det->p[1][0])) / 2);
      std::cout << float(det->p[0][0]) << " " << float(det->p[1][0])
                << std::endl;
    }
  } else if (CURRENT_STATE == PARKING_LOT_ENTRANCE_STATE) {
    if (int(det->id) == PARKING_ENTRANCE_TAG) {
      x_avg = std::abs((float(det->p[0][0]) + float(det->p[1][0])) / 2);
      std::cout << float(det->p[0][0]) << " " << float(det->p[1][0])
                << std::endl;
      // Approximate the tag's pixel area from two opposite corners;
      // a larger area means we are closer to the entrance tag.
      float dif = std::abs(det->p[1][1] - det->p[3][1]);
      float dif2 = std::abs(det->p[1][0] - det->p[3][0]);
      float area = dif * dif2;
      // Far stalls: start rotating later (closer to the tag) than close stalls.
      if (area > 3500) {
        if (TARGET_PARK == PARK_FAR_LEFT)
          change_state(ROTATE_STATE);
        else if (TARGET_PARK == PARK_FAR_RIGHT)
          change_state(ROTATE_RIGHT_STATE);
      }
      if (area > 2200) {
        if (TARGET_PARK == PARK_CLOSE_LEFT)
          change_state(ROTATE_STATE);
        else if (TARGET_PARK == PARK_CLOSE_RIGHT)
          change_state(ROTATE_RIGHT_STATE);
      }
      std::cout << "X_avg = " << x_avg << std::endl;
    }
  } else if (CURRENT_STATE == ROTATE_STATE || CURRENT_STATE == ROTATE_RIGHT_STATE) {
    // Keep rotating until the target stall's tag comes into view.
    if (int(det->id) == TARGET_PARK) {
      change_state(PARKING_STATE);
    }
  } else {
    float dif = std::abs(det->p[1][1] - det->p[3][1]);
    float dif2 = std::abs(det->p[1][0] - det->p[3][0]);
    float area = dif * dif2;
    // Debounce: react to a tag at most once every two seconds,
    // and only once it is large (i.e. close) enough.
    if ((ros::Time::now() - LaneControl::last_detection).toSec() > 2) {
      if (area > 3000) {
        if (int(det->id) == CROSSING)
          LaneControl::change_state(STOP_STATE);
        else if (int(det->id) == STOP_BEFORE_PARKING_TAG)
          LaneControl::change_state(PARKING_LOT_ENTRANCE_STATE);
        else if (int(det->id) == LEFT) {
          LaneControl::change_state(LEFT_TURN_STATE);
        } else if (int(det->id) == STRAIGHT) {
          std::cout << "Changing state to straight\n";
          LaneControl::change_state(GO_STRAIGH_STATE);
        } else if (int(det->id) == RIGHT) {
          std::cout << "Changing state to right\n";
          // LaneControl::change_state(RIGHT_TURN_STATE);
        }
        LaneControl::last_detection = ros::Time::now();
      }
    }
  }
}
Strategy for Stage 2: Obstacle Avoidance
Bot Avoidance
In order to avoid the crashed robot in the obstacle course for this lab, we use colour masking. We convert the camera image to HSV format and apply a mask that keeps only the blue pixels; a large enough blue region indicates a crashed bot on the side of the road. The same technique, with yellow and white masks, tells us where the yellow and white lane delimiters are.
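A sketch of this masking step follows. The HSV bounds here are placeholders; the real ranges were tuned on the bot's camera:

cv::Mat hsv_bot, hsv_lane, blue_mask, yellow_mask;
cv::cvtColor(crop_bot_detection, hsv_bot, cv::COLOR_BGR2HSV);
cv::cvtColor(cropped_image, hsv_lane, cv::COLOR_BGR2HSV);
// Keep only pixels whose hue/saturation/value fall inside each band.
cv::inRange(hsv_bot, cv::Scalar(100, 80, 50), cv::Scalar(130, 255, 255), blue_mask);    // crashed bot
cv::inRange(hsv_lane, cv::Scalar(20, 100, 100), cv::Scalar(35, 255, 255), yellow_mask); // centre line
// Fraction of blue pixels in the crop (compare with bot_total below).
double blue_fraction =
    double(cv::countNonZero(blue_mask)) / (blue_mask.rows * blue_mask.cols);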
If the crashed bot is detected, we change the state of our Duckiebot to `BOT_AVOIDANCE_STATE`:
if (bot_total > 0) {
  // Normalize to the fraction of blue pixels in the crop.
  bot_total /= convert_to_HSV_up.rows * convert_to_HSV_up.cols;
  if (bot_total > 0.04 && CURRENT_STATE != BOT_AVOIDANCE_STATE) {
    change_state(BOT_AVOIDANCE_STATE);
  }
}
Then, we determine what our robot should do in this state (go left, right, or straight). We base this decision on the position of our robot relative to the yellow lane delimiter in the middle of the road; the boolean values left, right, and straight are later translated into control commands.
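One way to obtain such a normalized position is from the image moments of the yellow mask; a sketch, with variable names chosen to match the snippet below:

// Normalized horizontal position of the yellow line in [0, 1]
// (0 = left edge of the crop, 1 = right edge).
cv::Moments m = cv::moments(yellow_mask, /*binaryImage=*/true);
float yellow_centroid = 0.5f; // assume centred when no yellow is visible
if (m.m00 > 0)
  yellow_centroid = float(m.m10 / m.m00) / float(yellow_mask.cols);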
if (LaneControl::CURRENT_STATE == BOT_AVOIDANCE_STATE) {
  // While avoiding the crashed bot, steer so the yellow line sits near the
  // right edge of the crop (~0.86), i.e. pass the bot in the oncoming lane.
  if (yellow_centroid < 0.86)
    left = true;
  else if (yellow_centroid > 0.80)
    right = true;
  else
    straight = true;
} else {
  // Normal lane following: keep the yellow line near the left edge (~0.14-0.20).
  if (yellow_centroid < 0.14)
    left = true;
  else if (yellow_centroid > 0.20)
    right = true;
  else
    straight = true;
}
Here is how we change the control commands based on left, right, and straight:
if (right) {
  // Turn right: left wheel faster than right.
  control_msg.vel_right = 0.15;
  control_msg.vel_left = 0.4;
} else if (left) {
  // Turn left: right wheel faster than left.
  control_msg.vel_right = 0.4;
  control_msg.vel_left = 0.15;
} else {
  // Go straight: equal wheel speeds.
  control_msg.vel_right = 0.4;
  control_msg.vel_left = 0.4;
}
Passenger Avoidance and Crosswalks
To avoid pedestrians at crosswalks, we perform a similar process to the one described above: we detect ducks in the camera image using colour masking. If there are ducks in the image, we stop the bot and keep checking subsequent frames. Once the ducks have left the crosswalk, we no longer detect any ducks in the image, at which point the bot continues driving.
if (duck_total > 0) {
  // Normalize to the fraction of duck-coloured pixels in the crop.
  duck_total /= cropped_image.rows * cropped_image.cols;
  if (duck_total > 0.009) {
    change_state(STOP_STATE);
  }
}
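The resume half of this behaviour is symmetric: while stopped, the callback keeps computing duck_total, and once it falls back below the threshold the bot returns to lane following. A sketch, assuming the default state is named LANE_FOLLOWING_STATE:

// Hypothetical sketch of resuming once the crosswalk is clear.
if (CURRENT_STATE == STOP_STATE && duck_total < 0.009) {
  change_state(LANE_FOLLOWING_STATE); // assumed name of the default state
}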
Strategy for Stage 3: The Parking Lot
Conceptually, parking is similar to the procedures implemented in the previous stages. We detect the apriltag in the parking lot and determine which stall to park in based on this apriltag:
PARK_FAR_LEFT
PARK_FAR_RIGHT
PARK_CLOSE_LEFT
PARK_CLOSE_RIGHT
Each parking stall has an apriltag at its end, which we use both to localize the target stall and to estimate the bot's approximate position within it. The bot continues trying to park, slowly driving nearer to the apriltag, until it can no longer see the tag at the end of the stall; at this point, the bot stops (a sketch of this follows the snippet below).
} else if (CURRENT_STATE == PARKING_LOT_ENTRANCE_STATE) {
  if (int(det->id) == PARKING_ENTRANCE_TAG) {
    x_avg = std::abs((float(det->p[0][0]) + float(det->p[1][0])) / 2);
    std::cout << float(det->p[0][0]) << " " << float(det->p[1][0])
              << std::endl;
    float dif = std::abs(det->p[1][1] - det->p[3][1]);
    float dif2 = std::abs(det->p[1][0] - det->p[3][0]);
    float area = dif * dif2;
    if (area > 3500) {
      if (TARGET_PARK == PARK_FAR_LEFT)
        change_state(ROTATE_STATE);
      else if (TARGET_PARK == PARK_FAR_RIGHT)
        change_state(ROTATE_RIGHT_STATE);
    }
    if (area > 2200) {
      if (TARGET_PARK == PARK_CLOSE_LEFT)
        change_state(ROTATE_STATE);
      else if (TARGET_PARK == PARK_CLOSE_RIGHT)
        change_state(ROTATE_RIGHT_STATE);
    }
    std::cout << "X_avg = " << x_avg << std::endl;
  }
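The stall approach itself (PARKING_STATE) steers on the x_avg value computed in the callback until the tag drops out of view. A sketch of the idea with hypothetical thresholds and a hypothetical tag_visible flag (the apriltag crop is 420 pixels wide, so its centre is near column 210):

// Hypothetical sketch of the PARKING_STATE control commands.
if (CURRENT_STATE == PARKING_STATE) {
  if (!tag_visible) {            // assumed flag: tag no longer detected
    control_msg.vel_left = 0.0;  // we are inside the stall: stop
    control_msg.vel_right = 0.0;
  } else if (x_avg < 180) {      // tag left of the crop's centre: steer left
    control_msg.vel_left = 0.1;
    control_msg.vel_right = 0.15;
  } else if (x_avg > 240) {      // tag right of centre: steer right
    control_msg.vel_left = 0.15;
    control_msg.vel_right = 0.1;
  } else {
    control_msg.vel_left = 0.15; // creep straight toward the tag
    control_msg.vel_right = 0.15;
  }
}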
What We Could Do Differently
For the obstacle avoidance part, we could have followed different strategies. While the current hardcoded policy is stable for this scenario, we could have used a neural network to generalize to situations where the crashed bot is located in other parts of Duckietown. However, this would require collecting a lot of training data, which makes it very costly and not worth it when the crashed bot's position does not change.
Due to the latency of communication between ROS nodes, we implemented nearly all of the functionality in a single ROS node. It may nevertheless be better to split the implementation across nodes, for several reasons; for example, if each ROS node had a single job, code maintenance would be simpler.
Breakdown
Khurram implemented the code.
Khurram, Samuel, and Golnaz all tested the code.
Samuel generated the colour masks.
Khurram, Samuel, and Golnaz wrote the report.