- Key Features
- Technologies and Tools
- Project Workflow
- Step-by-Step Explanation
- Example Output
- Applications
- Supporting Materials
- Keywords
- Real-Time Lane Detection: Detects lane lines on road frames in real-time.
- Frame Masking: Filters irrelevant information to focus on the region of interest.
- Hough Transform: Detects and highlights lane lines accurately.
- Dynamic Smoothing: Reduces noise and ensures lane detection is smooth and continuous over frames.
- Canny Edge Detection: Accurately identifies edges in images to assist in lane line detection.
- Programming Language: Python
- Libraries:
- OpenCV: For computer vision operations.
- NumPy: For numerical computations.
- MoviePy: For video processing and visualization.
- Matplotlib: For plotting and debugging.
- Load input video or image frames for frame-by-frame processing.
- Define a region of interest (ROI) to filter unnecessary parts of the frame.
- Detect edges using Gaussian blur and Canny edge detection.
- Apply the Hough Line Transform to detect lane lines.
- Smooth lane line detection across frames for continuity.
- Overlay the detected lane lines on the original frame.
- Objective: Load a video or series of road images to process frame by frame.
- Implementation:
- Convert the frame to grayscale using `cv2.cvtColor`.
- Extract relevant color ranges (e.g., yellow and white for lane lines) using HSV masks.
- Objective: Focus on the road section where lane lines are likely to be present.
- Implementation:
- Define a polygonal region of interest (ROI) based on road geometry.
- Apply masking to keep only the ROI using `cv2.bitwise_and`.
- Objective: Identify the edges in the ROI that represent potential lane boundaries.
- Implementation:
- Apply Gaussian blur to reduce noise in the frame.
- Use Canny edge detection to highlight lane line edges.
- Objective: Detect and segment lane lines from other edges.
- Implementation:
- Use the Hough Line Transform to detect straight lines.
- Separate lines into left and right lanes based on slope.
- Objective: Ensure the detected lane lines are smooth and consistent across frames.
- Implementation:
- Use a weighted average of previous and current frame detections to smooth lane lines.
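One simple way to implement this is an exponentially weighted moving average of each lane line's endpoints; the smoothing factor `alpha` below is an assumed default:

```python
def smooth_line(prev, current, alpha=0.2):
    """Blend the previous frame's lane-line endpoints (x1, y1, x2, y2) with
    the current detection. Smaller alpha favors history (steadier lines);
    larger alpha reacts faster. alpha=0.2 is an assumed default."""
    if prev is None:
        return current  # first frame: nothing to average against
    return tuple(p + alpha * (c - p) for p, c in zip(prev, current))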
- Objective: Overlay the detected lane lines on the original frame for visualization.
- Implementation:
- Draw lane lines using `cv2.line`.
- Combine the lane lines with the original frame using `cv2.addWeighted`.
The output is a video or frame sequence with lane lines clearly overlaid on the original road footage. Below is an example of lane detection in action:
- Input: Video of a road captured from a dashboard camera.
- Output: The lane lines are dynamically highlighted in real-time.
- Autonomous vehicles for lane guidance and navigation.
- Driver-assistance systems to alert drivers about lane deviations.
- Research and development in intelligent transportation systems.
- Introduction to Lane Detection with OpenCV
- Understanding the Hough Transform
- Canny Edge Detection Explained
Python, OpenCV, Computer Vision, Lane Detection, Self-Driving Cars, Hough Transform, Canny Edge Detection, Real-Time Processing
A GUI-based application to showcase real-time lane detection. Using Tkinter for the GUI and OpenCV for video processing, it demonstrates input and output video streams for lane-line detection in a user-friendly interface.
- Key Features
- Technologies and Tools
- Project Workflow
- Step-by-Step Explanation
- Example Output
- Applications
- Supporting Materials
- Real-Time Display: Simultaneously displays input and output video streams side by side in the application window.
- Dynamic Resizing: Adjusts video frame sizes to fit the GUI layout.
- User-Friendly GUI: Built using Tkinter, providing an intuitive interface with a title, logo, and Quit button.
- Programming Language: Python
- Libraries:
- Tkinter: For building the GUI.
- OpenCV: For video capture and processing.
- Pillow (PIL): For converting OpenCV frames into Tkinter-compatible images.
- Import necessary libraries and set up the global variables for video streams.
- Design the GUI using Tkinter and embed the application logo, title, and placeholders for video streams.
- Integrate OpenCV to process and display the input and processed video streams in real-time.
- Add functionality for exiting the application with a Quit button.
- Objective: Create a Tkinter-based application window to hold all components like video displays and buttons.
- Implementation:
- Use `Tk()` to create the main window and set the title and dimensions.
- Add a heading and a logo at the top using `Label()`.
- Use `pack()` to organize elements within the window.
```python
import tkinter as tk
from tkinter import Label
from PIL import Image, ImageTk

root = tk.Tk()
img = ImageTk.PhotoImage(Image.open("logo.png"))  # application logo
heading = Label(root, image=img, text="Lane-Line Detection")
heading.pack()
heading2 = Label(root, text="Lane-Line Detection", pady=20, font=('arial', 45, 'bold'))
heading2.pack()
```
- Objective: Load an input video stream and display it in the GUI.
- Implementation:
- Open the video stream using OpenCV's `VideoCapture()`.
- Read frames from the video and resize them using `cv2.resize()`.
- Convert the frames from BGR to RGB and display them in the Tkinter window using `ImageTk.PhotoImage()`.
```python
cap1 = cv2.VideoCapture("./input2.mp4")

def show_vid():
    # lmain is the Label placeholder created during GUI setup
    flag1, frame1 = cap1.read()
    if flag1:  # resize only when a frame was actually read
        frame1 = cv2.resize(frame1, (600, 500))
        pic = cv2.cvtColor(frame1, cv2.COLOR_BGR2RGB)
        img = Image.fromarray(pic)
        imgtk = ImageTk.PhotoImage(image=img)
        lmain.imgtk = imgtk  # keep a reference so the image is not garbage-collected
        lmain.configure(image=imgtk)
    lmain.after(10, show_vid)  # schedule the next frame update
```
- Objective: Show the output video stream with lane detection applied.
- Implementation:
- Open the processed video using another `VideoCapture()` object.
- Follow the same steps as the input video for resizing and display.
```python
cap2 = cv2.VideoCapture("./output2.mp4")

def show_vid2():
    flag2, frame2 = cap2.read()
    if flag2:  # resize only when a frame was actually read
        frame2 = cv2.resize(frame2, (600, 500))
        pic2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2RGB)
        img2 = Image.fromarray(pic2)
        img2tk = ImageTk.PhotoImage(image=img2)
        lmain2.img2tk = img2tk  # keep a reference to avoid garbage collection
        lmain2.configure(image=img2tk)
    lmain2.after(10, show_vid2)  # schedule the next frame update
```
- Objective: Provide a way to exit the application.
- Implementation:
- Add a `Button()` widget with the `command` set to `root.destroy`.
- Position the button using `pack()` with the `side=BOTTOM` parameter.
```python
exitbutton = Button(root, text='Quit', fg="red", command=root.destroy)
exitbutton.pack(side=BOTTOM)  # pack() returns None, so call it on a separate line
```
The application window will display the input video on the left and the processed output video with detected lane lines on the right.
- Showcasing lane-line detection projects in a GUI format.
- Educational tools for computer vision and GUI programming.
- Prototyping for autonomous driving systems.