Traffic sign detection (Python, OpenCV, TensorFlow). Approaches and possible solutions. Part 1.

17.06.2017

There are several approaches to detecting traffic signs (TS) in an image (some information is available here and here (thanks to Miki), and here):

1. Color-based detection

2. Shape-based detection

3. Pattern detection and recognition

Colors in an image can differ depending on image quality, camera sensor, backlight, etc. If you want to use only color-based detection, you need to understand how to recognize the common colors used in traffic sign design: red, blue, yellow. You also need to work with color images.
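
As a minimal sketch of the color-based idea (the file name and the exact HSV ranges here are only assumptions; the full program below uses similar values):

import cv2
import numpy as np

# minimal sketch: build a red mask in HSV and measure how "red" the picture is
img = cv2.imread('sign.jpg')                        # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# red wraps around the hue axis, so it needs two ranges
mask_lo = cv2.inRange(hsv, np.array([0, 50, 50], np.uint8), np.array([10, 255, 255], np.uint8))
mask_hi = cv2.inRange(hsv, np.array([160, 50, 50], np.uint8), np.array([180, 255, 255], np.uint8))
red_mask = cv2.bitwise_or(mask_lo, mask_hi)

red_ratio = cv2.countNonZero(red_mask) / float(red_mask.size)
print("red pixels: %.1f%%" % (red_ratio * 100))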

Shape-based detection works better in some situations; it is usually applied to grayscale or black-and-white images.

One approach is to analyze the TS rim using the Hough transform. Another approach is to use HOG (Histogram of Oriented Gradients) to detect rigid objects. There are also the BFS (breadth-first search) technique and the edge detection technique described here.
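
A minimal sketch of the rim analysis idea with the Hough transform (the file name and all parameter values here are assumptions and would need tuning for real images):

import cv2
import numpy as np

# minimal sketch: look for circular sign rims in a blurred grayscale image
img = cv2.imread('road.jpg')                        # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                      # suppress noise before the transform

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=120, param2=40, minRadius=10, maxRadius=80)
if circles is not None:
    for x, y, r in np.uint16(np.around(circles))[0]:
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)  # draw the candidate rims
cv2.imwrite('circles.jpg', img)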

A good explanation of how to use color detection techniques is here.

Information on how to implement these techniques is here.

A good example is here (one of the best explanations) and here.

If your image set is very small, it is better to prepare some additional images with Keras. You can find information on how to build a new training dataset of images from scratch with Keras.
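
A minimal sketch of such augmentation with Keras' ImageDataGenerator (the source file, the output folder and the augmentation ranges are assumptions):

from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img

# minimal sketch: generate extra training images from one traffic sign picture
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             shear_range=0.1,
                             zoom_range=0.2,
                             fill_mode='nearest')

img = img_to_array(load_img('sign_32x32.jpg'))      # hypothetical source image
img = img.reshape((1,) + img.shape)                 # flow() expects a batch dimension

# write 20 augmented copies into the (already existing) 'augmented' folder
count = 0
for _ in datagen.flow(img, batch_size=1, save_to_dir='augmented',
                      save_prefix='ts', save_format='jpeg'):
    count += 1
    if count >= 20:
        break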

OK, let's start with a step-by-step explanation.

0. Capture frames from a camera video stream into images with OpenCV or any other library. In the end you need to obtain a set of images for recognition (a small sketch of this step is shown below).

For example, an image from the RTSD-R1 database:
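
A minimal sketch of this capture step with OpenCV (the camera index, the frame skip and the output names are assumptions; an RTSP URL string could be passed instead of the index):

import cv2

# minimal sketch of step 0: save every 30th frame from the camera stream
cap = cv2.VideoCapture(0)                 # 0 = first local camera
frame_n, saved = 0, 0
while saved < 10:                         # stop after 10 saved frames
    ret, frame = cap.read()
    if not ret:
        break
    if frame_n % 30 == 0:                 # roughly one frame per second at 30 fps
        cv2.imwrite('frame_%03d.jpg' % saved, frame)
        saved += 1
    frame_n += 1
cap.release()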

1. How to find a traffic sign in the image. We will use a mix of color- and shape-based approaches to detect regions of interest (ROI), plus some optimization techniques to skip areas where traffic signs cannot appear.

Let's look at the program and understand its structure:

"""
find traffic signs at the pictures
"""
# coding: utf-8

from __future__ import division # for puthon 2.7 to make result of division to be float
import time                     # for measure time of procedure execution
import numpy as np
import math
import cv2
import imutils
import skimage.transform

def FindImages(pic):
    """
    find traffic sign
    """
    # array of ROI images found during the search (they are resized to 32x32x3 later, in the main program)
    images = []

    """ first step - preparing picture"""
    # read the picture
    image = cv2.imread(pic)

    # define dimensions of image and center
    # picture coordinates start from the top left corner
    height, width = image.shape[:2]
    #print(str(height)+" "+str(width))
    center_y = int(height/2)
    center_x = int(width/2)

    # define an array of distances from the image center - each distance is tied to a minimum contour size:
    # the farther from the center, the bigger a contour has to be (see the picture
    # test_1.jpg - the 3 red squares show these areas)
    dist_ = [center_x/3, center_x/2, center_x/1.5]

    # define the main zone of interest of the picture (left, right, top and bottom borders)
    # this zone is the approximate location of traffic signs
    # (green zone in test_1.jpg)
    left_x = center_x - int(center_x*.7)
    right_x = width
    top_y = 0
    bottom_y = center_y + int(center_y*.3)
    # crop the traffic sign location zone, to search only inside it
    crop_image = image[top_y:bottom_y, left_x:right_x]
    #cv2.imshow('img0',crop_image)

    # make a Canny image - the first image, used for shape recognition
    # see test_1_crop_canny.jpg
    canny = cv2.Canny(crop_image, 50, 240)
    blur_canny = cv2.blur(canny,(2,2))
    _,thresh_canny = cv2.threshold(blur_canny, 127, 255, cv2.THRESH_BINARY)

    # make a color HSV image - the second image, used for color mask recognition
    # Convert BGR to HSV
    hsv = cv2.cvtColor(crop_image, cv2.COLOR_BGR2HSV)

    # define the list of color boundaries (lower and upper HSV ranges)
    # the mask for red consists of 2 parts (lower mask and upper mask)
    # lower red mask (hue 0-10)
    lower_red = np.array([0,50,50],np.uint8)
    upper_red = np.array([10,255,255],np.uint8)
    mask_red_lo = cv2.inRange(hsv, lower_red, upper_red)
    # upper red mask (hue 160-180)
    lower_red = np.array([160,50,50], np.uint8)
    upper_red = np.array([180,255,255], np.uint8)
    mask_red_hi = cv2.inRange(hsv, lower_red, upper_red)
    # blue color mask
    lower_blue=np.array([100,50,50],np.uint8)
    upper_blue=np.array([140,200,200],np.uint8)
    mask_blue = cv2.inRange(hsv, lower_blue, upper_blue)
    # yellow color mask
    lower_yellow=np.array([15,110,110],np.uint8)
    upper_yellow=np.array([25,255,255],np.uint8)
    mask_yellow = cv2.inRange(hsv, lower_yellow, upper_yellow)

    # join all masks
    # it might be better to join the yellow and red masks first - that could help detect
    # autumn trees and remove some amount of garbage, but this is a TODO for later
    mask = mask_red_lo+mask_red_hi+mask_yellow+mask_blue

    # find the colors within the specified boundaries and apply
    # the mask
    hsv_out = cv2.bitwise_and(hsv, hsv, mask = mask)

    # increase brightness - TODO later
    #h, s, v = cv2.split(hsv_out)
    #v += 50
    #bright_hsv_out = cv2.merge((h, s, v)) 

    # blurring joins separate points and fragments into lines and improves quality (kernel 1-3 x 1-3)
    blur_hsv_out = cv2.blur(hsv_out,(1,1)) # change the kernel size from 1 to 3 to see how it works

    # prepare the HSV image for contours - convert to gray and threshold
    gray = cv2.cvtColor(blur_hsv_out, cv2.COLOR_BGR2GRAY)
    # increase the intensity of the found colors by thresholding to 0/255
    # see the file test_1_hsv_binary to understand what thresh looks like
    _,thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)

    # no need to mix the two images - it would cause problems with contour recognition
    #dst = cv2.addWeighted(canny,0.3,thresh,0.7,0)
    #cv2.imshow('img1',thresh_canny)
    #cv2.imshow('img2',thresh)
    #cv2.waitKey(0)

    """step two - searching for contours in prepared images"""
    #calculating of finded candidates
    multiangles_n=0

    # contours of the first image (thresh_canny)
    # the cv2.RETR_TREE parameter returns all contours, internal and external
    image1,contours1,_= cv2.findContours(thresh_canny,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
    #print("Contours total at first image: "+str(len(contours1)))

    # take only the biggest ~15% of all contours (len/6),
    # skipping small contours from tree branches etc.
    contours1 = sorted(contours1, key = cv2.contourArea, reverse = True)[:int(len(contours1)/6)]

    for cnt in contours1:
        # find the perimeter of the contour - if it is too small, skip it
        # (convexity is checked later, on the approximated polygon)
        perimeter = cv2.arcLength(cnt,True)
        if perimeter < 25: # 25: lower - more objects, higher - fewer
            continue

        # calculate the minimum-area bounding rectangle of the contour
        (x,y),(w,h),angle = cv2.minAreaRect(cnt)
        # calculate the width/height ratio to see whether the shape could be a traffic sign
        koeff_p = 0
        if w>=h and h != 0:
            koeff_p = w/h
        elif w != 0:
            koeff_p = h/w
        if koeff_p > 2: # if the rectangle is very thin, skip this contour
            continue

        # compute the center of the contour
        M = cv2.moments(cnt)
        cX = 0
        cY = 0
        if M["m00"] != 0:
            cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        # transform cropped image coordinates to real image coordinates
        cX +=left_x
        cY +=top_y

        dist_c_p = math.sqrt(math.pow((center_x-cX),2) + math.pow((center_y-cY),2))
        # skip small contours close to the left and right sides of the picture
        # remember the red squares from the test_1.jpg file? :)
        if dist_c_p > dist_[0] and dist_c_p <= dist_[1] and perimeter < 30:
            continue
        if dist_c_p > dist_[1] and dist_c_p <= dist_[2] and perimeter < 50:
            continue
        if dist_c_p > dist_[2] and perimeter < 70:
            continue
        # 0.15 - try different coefficients for better results
        approx_c = cv2.approxPolyDP(cnt,0.15*cv2.arcLength(cnt,True),True) # 0.15: lower - more objects, higher - fewer
        if len(approx_c)>=3 and cv2.isContourConvex(approx_c): # if the approximated contour is convex and has at least three vertices...
            # calculate the bounding rectangle of the contour to crop the ROI of a potential traffic sign
            x,y,w_b_rect,h_b_rect = cv2.boundingRect(cnt)
            #cv2.rectangle(image,(cX-int(w_b_rect/2)-10,cY-int(h_b_rect/2)-10),(cX+int(w_b_rect/2)+10,cY+int(h_b_rect/2)+10),(255,0,0),1)
            # put this ROI into the images array for the next recognition step
            images.append(image[cY-int(h_b_rect/2)-3:cY+int(h_b_rect/2)+3, cX-int(w_b_rect/2)-3:cX+int(w_b_rect/2)+3])
            # also save it to a file - this will be skipped later, TODO
            cv2.imwrite("%d_recogn.jpg" % multiangles_n,image[cY-int(h_b_rect/2)-3:cY+int(h_b_rect/2)+3, cX-int(w_b_rect/2)-3:cX+int(w_b_rect/2)+3])
            # increase the count of found candidates
            multiangles_n+=1

    # contours in the second image (thresh)
    # here we use only RETR_EXTERNAL contours, to avoid processing, for example, windows of yellow and red houses,
    # holes between plants etc.
    image2,contours2,_= cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    #print("Contours total at second image: "+str(len(contours2)))

    # take roughly the biggest 10% of all contours
    contours2 = sorted(contours2, key = cv2.contourArea, reverse = True)[:int(len(contours2)/10)]

    for cnt in contours2:
        # calculate the perimeter
        perimeter = cv2.arcLength(cnt,True)
        # if the perimeter is too big or too small, skip it
        # (convexity is again checked later, on the approximated polygon)
        if perimeter>200 or perimeter<20: # 20/200: adjust to get more or fewer objects
            continue

        # calculate the minimum-area bounding rectangle of the contour
        (x,y),(w,h),angle = cv2.minAreaRect(cnt)
        # calculate the width/height ratio to see whether the shape could be a traffic sign
        koeff_p = 0
        if w>=h and h != 0:
            koeff_p = w/h
        elif w != 0:
            koeff_p = h/w
        if koeff_p > 2: # if the rectangle is very thin, skip this contour
            continue

        # compute the center of the contour
        M = cv2.moments(cnt)
        cX = 0
        cY = 0
        if M["m00"] != 0:
            cX = int(M["m10"] / M["m00"])
            cY = int(M["m01"] / M["m00"])

        # transform cropped image coordinates to full image coordinates
        cX += left_x
        cY += top_y

        dist_c_p = math.sqrt(math.pow((center_x-cX),2) + math.pow((center_y-cY),2))
        # skip small contours close to the left and right sides of the picture
        if dist_c_p > dist_[0] and dist_c_p <= dist_[1] and perimeter < 30:
            continue
        if dist_c_p > dist_[1] and dist_c_p <= dist_[2] and perimeter < 50:
            continue
        if dist_c_p > dist_[2] and perimeter < 70:
            continue

        approx_c = cv2.approxPolyDP(cnt,0.03*cv2.arcLength(cnt,True),True) # 0.03: lower - more objects, higher - fewer
        if len(approx_c)>=3 and cv2.isContourConvex(approx_c):
            x,y,w_b_rect,h_b_rect = cv2.boundingRect(cnt)
            #cv2.rectangle(image,(cX-int(w_b_rect/2)-10,cY-int(h_b_rect/2)-10),(cX+int(w_b_rect/2)+10,cY+int(h_b_rect/2)+10),(0,255,0),1)
            images.append(image[cY-int(h_b_rect/2)-3:cY+int(h_b_rect/2)+3, cX-int(w_b_rect/2)-3:cX+int(w_b_rect/2)+3])
            cv2.imwrite("%recogn.jpg" % multiangles_n,image[cY-int(h_b_rect/2)-3:cY+int(h_b_rect/2)+3, cX-int(w_b_rect/2)-3:cX+int(w_b_rect/2)+3])
            multiangles_n+=1

    #print(str(multiangles_n) + ' showed multiangles')

    #cv2.imshow('img',image)
    #cv2.waitKey(0)
    #cv2.destroyAllWindows()
    return images

# main program

np.seterr(divide='ignore', invalid='ignore')
start_time = time.time()
image_set = FindImages('test.jpg')

# Resize images
images32 = [skimage.transform.resize(image, (32, 32)) for image in image_set]
print("--- %s seconds ---" % (time.time() - start_time))
for img in images32:
    cv2.imshow('img',img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

"""
start_time = time.time()
FindImages('test2.jpg')
images32 = [skimage.transform.resize(image, (32, 32)) for image in image_set]
print("--- %s seconds ---" % (time.time() - start_time))

start_time = time.time()
FindImages('test3.jpg')
images32 = [skimage.transform.resize(image, (32, 32)) for image in image_set]
print("--- %s seconds ---" % (time.time() - start_time))

start_time = time.time()
FindImages('test4.jpg')
images32 = [skimage.transform.resize(image, (32, 32)) for image in image_set]
print("--- %s seconds ---" % (time.time() - start_time))

start_time = time.time()
FindImages('test5.jpg')
images32 = [skimage.transform.resize(image, (32, 32)) for image in image_set]
print("--- %s seconds ---" % (time.time() - start_time))

start_time = time.time()
FindImages('test6.jpg')
images32 = [skimage.transform.resize(image, (32, 32)) for image in image_set]
print("--- %s seconds ---" % (time.time() - start_time))"""

 

Pictures to help understand the program (the comments are in the program text above):

1. test_1.jpg file.

Defining the areas of interest. The bigger the red area, the bigger a contour has to be inside it. Small contours that the program might find as potential traffic signs are not added to the final ROI set if they lie in areas far from the center. The reason: the closer a traffic sign is to the sides of the image, the bigger it appears. Only contours from the green zone are taken as ROI, because no traffic signs can appear outside this green area.

2. test_1_canny_crop.jpg

3. test_1_hsv_binary.jpg

3.1 test_1_finded_contours.jpg. This picture shows what all the contours look like (the white circles mark the centers of the contours). This picture is not produced by the program above.

4. test_1_ROI_shapes.jpg

The blue rectangles show the ROI left after all shape filters, where potential traffic signs can be. Not all real traffic signs have been found.

5. test_1_ROI_sapes_colors.jpg

The green rectangles show the ROI after all color filters. Practically all traffic signs have been found, except the small pedestrian sign far from the camera.

6. All cropped ROI images are stored in an array and can also be saved as JPG files for the next recognition step. As you can see, some of the images are not traffic signs.

 

OK, let's move on to the recognition step in Part 2.
