@under0tech
Last active May 23, 2025 05:47
Video overlay for an FPV drone with a Computer Vision-based autopilot
import cv2
import numpy as np

# definitions
video_source = 0
video_full = 1
video_message_limit = 60
camera_width = 720
camera_height = 480
camera_fps = 1

# state
state = {
    'bee_state': 'OFF',  # OFF, READY, ...
    'rssi': 0,
    'rssi_msg': 'Strong signal',
    'frame': {},
    'video_msg': '[Manual control is ON]',
    'video_msg_countdown': 0
}

# Autopilot's overlay
def draw_rc_auto_status(frame):
    color_green = (0, 255, 0)
    color_red = (0, 0, 255)
    color = color_green if state['rssi_msg'] == 'Strong signal' else color_red
    cv2.circle(frame, (50, 50), 7, color, -1)
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(frame, "RC", (65, 55), font, 0.5, color, 2)
    if state['bee_state'] == 'OFF':
        cv2.putText(frame, 'MANUAL', (110, 55), font, 0.5, color_red, 1)
    else:
        cv2.putText(frame, 'AUTO', (110, 55), font, 0.5, color_green, 2)

def draw_dotted_line(frame, start, end, color, thickness, gap):
    x1, y1 = start
    x2, y2 = end
    length = int(np.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2))
    if length == 0:  # guard against division by zero for degenerate lines
        return
    # Draw `gap` pixels, skip `gap` pixels, along the line direction
    for i in range(0, length, gap * 2):
        start_x = int(x1 + (x2 - x1) * i / length)
        start_y = int(y1 + (y2 - y1) * i / length)
        end_x = int(x1 + (x2 - x1) * (i + gap) / length)
        end_y = int(y1 + (y2 - y1) * (i + gap) / length)
        cv2.line(frame, (start_x, start_y), (end_x, end_y), color, thickness)

def draw_cross_target(frame):
    color_white = (255, 255, 255)
    height, width, _ = frame.shape
    center_x, center_y = width // 2, height // 2
    draw_dotted_line(frame, (center_x - 50, center_y),
                     (center_x + 50, center_y), color_white, 2, 5)
    draw_dotted_line(frame, (center_x, center_y - 50),
                     (center_x, center_y + 50), color_white, 2, 5)

def draw_scaled_target(frame):
    color_white = (255, 255, 255)
    rect_size = 50
    height, width, _ = frame.shape
    center_x, center_y = width // 2, height // 2
    top_left_x = center_x - rect_size // 2
    top_left_y = center_y - rect_size // 2
    # Crop the center region and magnify it 2x into the top-right corner
    center_region = frame[top_left_y:top_left_y + rect_size,
                          top_left_x:top_left_x + rect_size]
    scaled_region = cv2.resize(center_region, (rect_size * 2, rect_size * 2),
                               interpolation=cv2.INTER_LINEAR)
    overlay_x_start = width - rect_size * 2 - 20
    overlay_y_start = 20
    frame[overlay_y_start:overlay_y_start + rect_size * 2,
          overlay_x_start:overlay_x_start + rect_size * 2] = scaled_region
    cv2.rectangle(frame, (overlay_x_start, overlay_y_start),
                  (overlay_x_start + rect_size * 2, overlay_y_start + rect_size * 2),
                  color_white, 1)

def draw_video_message(frame):
    font = cv2.FONT_HERSHEY_SIMPLEX
    color_white = (255, 255, 255)  # BGR channel values max out at 255
    if state['video_msg'] != '':
        cv2.putText(frame, state['video_msg'], (43, 80), font, 0.5, color_white, 1)
        countdown = int(state['video_msg_countdown'])
        if countdown < video_message_limit:
            state['video_msg_countdown'] = countdown + 1
        else:
            state['video_msg'] = ''
            state['video_msg_countdown'] = 0

# Main function
def main():
    cap = cv2.VideoCapture(video_source)
    if not cap.isOpened():
        print("Error: Could not access the camera.")
        return
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, camera_width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, camera_height)
    cap.set(cv2.CAP_PROP_FPS, camera_fps)
    if video_full:
        cv2.namedWindow("BEE", cv2.WND_PROP_FULLSCREEN)
        cv2.setWindowProperty("BEE", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    print("Press 'q' to exit.")
    while True:
        ret, frame = cap.read()
        if not ret:
            print("Error: Could not read frame.")
            break
        # Save current frame to state for Computer Vision tasks
        state['frame'] = frame
        draw_rc_auto_status(frame)
        draw_scaled_target(frame)
        draw_cross_target(frame)
        draw_video_message(frame)
        cv2.imshow('BEE', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
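The dash spacing in draw_dotted_line can be exercised without a camera or OpenCV. A minimal sketch of the same endpoint interpolation, NumPy-only (the function name `dash_segments` is mine):

```python
import numpy as np

def dash_segments(start, end, gap):
    """Replicate draw_dotted_line's interpolation: for every 2*gap
    pixels of line length, emit one gap-long segment (the rest is
    the skipped space between dashes)."""
    x1, y1 = start
    x2, y2 = end
    length = int(np.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2))
    segments = []
    for i in range(0, length, gap * 2):
        sx = int(x1 + (x2 - x1) * i / length)
        sy = int(y1 + (y2 - y1) * i / length)
        ex = int(x1 + (x2 - x1) * (i + gap) / length)
        ey = int(y1 + (y2 - y1) * (i + gap) / length)
        segments.append(((sx, sy), (ex, ey)))
    return segments

# A 100 px horizontal crosshair arm with gap=5 yields 10 dashes,
# each 5 px long with a 5 px space after it.
segs = dash_segments((310, 240), (410, 240), 5)
```

Each dash starts 2*gap pixels after the previous one, which is why the crosshair arms render as evenly dotted lines on the overlay.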
@under0tech
Author

under0tech commented Jan 9, 2025

This video module can serve as part of an autopilot system, such as the "BEE" autopilot with target-following capabilities designed for an FPV combat drone, intended for real-world deployment on a companion computer such as a Raspberry Pi (RPi).

(image: video_overlay)

(image: rpi_3_and_video_module_schema)

Explore the ultimate guide on this topic: "How to build the Eyes of an Autopilot for FPV Combat Drone".

@lomaka1313

You can use the direct AV output; just look up how to configure it for the Pi 4. As far as I remember, it's done via the config.txt file:

https://www.raspberrypi.com/documentation/computers/config_txt.html#composite-video-mode
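For reference, the composite-video settings under the legacy firmware might look like the fragment below. This is a sketch based on the linked page; option names and defaults have changed between OS releases, so verify against the documentation for your image.

```ini
# /boot/config.txt - enable composite (AV) output
# Composite output is disabled by default on the Pi 4
enable_tvout=1
# 0 = NTSC, 2 = PAL
sdtv_mode=0
# 1 = 4:3 aspect ratio
sdtv_aspect=1
```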

@under0tech
Author

You can use the direct AV output; just look up how to configure it for the Pi 4. As far as I remember, it's done via the config.txt file:

https://www.raspberrypi.com/documentation/computers/config_txt.html#composite-video-mode

Haha, that was actually the first thing I tried while exploring this function.
Unfortunately, it doesn’t work in this setup.

If you've had better results in your environment based on your own experiments, please consider sharing the complete schema and configuration instructions here — it would be really helpful to many.

@hiarmen

hiarmen commented May 12, 2025

Somehow I am unable to use this along with MSP override. When I get a functional visual, I lose MSP override. And vice versa: when I get a functional override, I lose the visual.
msp-helper.py and 24_video_module.py work well when run standalone [outside of systemd], but when run under systemd it's only one or the other [not simultaneously].
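A startup race is one plausible cause of this either/or behavior: if the video service starts before the AV2USB capture node exists, it can fail or grab the wrong device. A sketch of a unit that waits for the capture device before starting (the service name, paths, and device unit below are assumptions, not the gist author's setup):

```ini
# /etc/systemd/system/bee-video.service (sketch; adjust names and paths)
[Unit]
Description=BEE video overlay module
# Do not start until the V4L2 capture node exists
Requires=dev-video0.device
After=dev-video0.device

[Service]
ExecStart=/usr/bin/python3 /home/pi/24_video_module.py
Restart=on-failure
# Run as a user in the `video` group so /dev/video0 is accessible
User=pi

[Install]
WantedBy=multi-user.target
```

After editing, `systemctl daemon-reload` and restart both services, then compare `journalctl -u bee-video` against the standalone runs.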

@under0tech
Author

Somehow I am unable to use this along with MSP override. When I get a functional visual, I lose MSP override. And vice versa: when I get a functional override, I lose the visual. msp-helper.py and 24_video_module.py work well when run standalone [outside of systemd], but when run under systemd it's only one or the other [not simultaneously].

Hi @hiarmen,
It's possible the issue is related to the USB connections. Since you're using two USB ports (one for the AV2USB dongle and another for the MSP connection to the flight controller), the problem might stem from that setup. Double-check which port you are using for the MSP connection :)
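One quick way to verify which device nodes the two scripts could be opening is to enumerate the relevant /dev entries. A stdlib-only sketch (the patterns and helper names are assumptions; adjust them to your wiring):

```python
import glob

SERIAL_PREFIXES = ('/dev/ttyACM', '/dev/ttyUSB', '/dev/ttyAMA')

def classify(devices):
    """Split device nodes into likely MSP serial ports and video
    capture nodes, preserving order."""
    serial = [d for d in devices if d.startswith(SERIAL_PREFIXES)]
    video = [d for d in devices if d.startswith('/dev/video')]
    return serial, video

def scan():
    """Enumerate candidate nodes on the Pi itself."""
    found = []
    for pattern in ('/dev/ttyACM*', '/dev/ttyUSB*',
                    '/dev/ttyAMA*', '/dev/video*'):
        found.extend(glob.glob(pattern))
    return classify(sorted(found))

if __name__ == '__main__':
    serial_ports, video_nodes = scan()
    print('MSP candidates:', serial_ports)
    print('Capture candidates:', video_nodes)
```

Running this before and after plugging in the AV2USB dongle shows which node each peripheral actually claims, which helps rule out the two scripts fighting over the same port.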

@hiarmen

hiarmen commented May 13, 2025

Thanks, @under0tech. I am using only one USB port, for the AV2USB dongle; the connection to the FC goes through GPIO. Here is my setup diagram:
(image: Bee_ept_Setup)

@hiarmen

hiarmen commented May 13, 2025

For the systemd setup I use [autopilot_bee_ept + 24_video_module.py]. Run separately, both work perfectly: the override moves the channels and the video feed looks just like in your screenshot. But as soon as I start them as a system, only the override works and there is no visual. Maybe I am missing something in the video module integration into [autopilot_bee_ept]...?

@hiarmen

hiarmen commented May 23, 2025

Assembled and tested. I can confirm the video feed on a standalone HDMI display. Unfortunately, it did not work with the FC; the result was a flickering image on the goggles. Maybe an incompatible HDMI dongle...
