Mean Shift Tracking
Mean shift is a non-parametric feature-space analysis technique, a so-called mode-seeking algorithm. It is a procedure for locating the maxima (modes) of a density function given discrete data sampled from that function. In essence, it performs non-parametric density gradient estimation and moves each point uphill along that estimate, which makes it useful for detecting the modes of the density.
The following video shows the process of finding where the maximum is.
The video below shows some of the applications of the mean shift tracking algorithm.
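To make the mode-seeking idea concrete, here is a minimal one-dimensional sketch of the mean shift iteration with a flat (uniform) kernel; the sample data, bandwidth, and starting points are made up purely for illustration.

```python
import numpy as np

# Toy 1-D samples drawn from two clusters (illustrative data only)
np.random.seed(0)
data = np.concatenate([np.random.normal(2.0, 0.3, 50),
                       np.random.normal(6.0, 0.5, 50)])

def mean_shift_1d(x, data, bandwidth=1.0, max_iter=100, tol=1e-4):
    """Shift the point x toward the nearest density mode of `data`."""
    for _ in range(max_iter):
        # Points inside the window of radius `bandwidth` around x
        neighbors = data[np.abs(data - x) <= bandwidth]
        if neighbors.size == 0:
            break
        new_x = neighbors.mean()      # mean of the window = shifted location
        if abs(new_x - x) < tol:      # converged: the shift is negligible
            break
        x = new_x
    return x

# Starting near either cluster converges to that cluster's mode
print(mean_shift_1d(1.0, data))   # -> roughly 2.0
print(mean_shift_1d(7.0, data))   # -> roughly 6.0
```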
| Pros | Cons |
|---|---|
| Application-independent tool | The window size (bandwidth) selection is not trivial |
| Suitable for real data analysis | An inappropriate window size can cause modes to be merged or can generate additional "shallow" modes; in that case an adaptive window size is needed |
| Does not assume any prior shape (such as elliptical) on the data clusters | |
| Can handle arbitrary feature spaces | |
| Only one parameter to choose | |
| The window size has a physical meaning, unlike in K-Means | |

Table source: Weizmann Institute of Science
```python
import numpy as np
import cv2

cap = cv2.VideoCapture('videos/slow_traffic_small.mp4')

# take first frame of the video
ret, frame = cap.read()

# setup initial location of window
# r,h,c,w - region of image
# simply hardcoded the values
r, h, c, w = 200, 20, 300, 20
track_window = (c, r, w, h)

# set up the ROI for tracking
roi = frame[r:r+h, c:c+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Setup the termination criteria, either 10 iterations or move by at least 1 pt
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()

    if ret == True:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

        # apply meanshift to get the new location
        ret, track_window = cv2.meanShift(dst, track_window, term_crit)

        # Draw it on image
        x, y, w, h = track_window
        img2 = cv2.rectangle(frame, (x, y), (x+w, y+h), 255, 2)
        cv2.imshow('img2', img2)

        k = cv2.waitKey(60) & 0xff
        if k == 27:        # ESC stops the loop
            break
        elif k != 255:     # any other key saves the current frame
            cv2.imwrite(chr(k) + ".jpg", img2)
    else:
        break

cv2.destroyAllWindows()
cap.release()
```
To capture a video, we need to create a VideoCapture object. Its argument can be the name of a video file. Also, while displaying each frame, use an appropriate delay for cv2.waitKey(). If the delay is too short, the video plays very fast; if it is too long, the video plays slowly, which is also how we can display a video in slow motion. 25 milliseconds is usually fine in normal cases.
cap.read() returns a bool (True/False) together with the frame. If the frame is read correctly, the bool is True, so you can detect the end of the video by checking this return value.
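As a minimal sketch of this pattern (using the same input file as the tracking script), the loop below reads frames until ret turns False and uses the cv2.waitKey() delay to control the playback speed:

```python
import cv2

cap = cv2.VideoCapture('videos/slow_traffic_small.mp4')

while True:
    ret, frame = cap.read()
    if not ret:                        # False once no more frames can be read
        break
    cv2.imshow('frame', frame)
    # ~25 ms per frame is close to normal speed; a larger delay plays it in slow motion
    if cv2.waitKey(25) & 0xFF == 27:   # press ESC to quit early
        break

cap.release()
cv2.destroyAllWindows()
```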
We convert the BGR image to HSV so that we can use it to extract a colored object. In HSV it is easier to represent a color than in the RGB color space.
cv2.inRange() can be used to threshold the HSV image to get a certain color.
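Here is a short sketch of this step on the first frame of the same video, using the same HSV bounds that the tracking script hardcodes (those bounds are a practical choice for this clip, not a universal one):

```python
import numpy as np
import cv2

cap = cv2.VideoCapture('videos/slow_traffic_small.mp4')
ret, frame = cap.read()
cap.release()

# BGR -> HSV: hue sits in a single channel, so a color range is easy to express
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep only pixels whose saturation and value are high enough for the hue to be reliable
mask = cv2.inRange(hsv, np.array((0., 60., 32.)), np.array((180., 255., 255.)))

masked = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imshow('mask', mask)
cv2.imshow('masked', masked)
cv2.waitKey(0)
cv2.destroyAllWindows()
```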
The function calcHist() calculates the histogram of one or more arrays. The elements of a tuple used to increment a histogram bin are taken from the corresponding input arrays at the same location. (A short annotated example follows the parameter list below.)
cv2.calcHist(images, channels, mask, histSize, ranges[, hist[, accumulate]])
The parameters are:
- images - Source arrays. They all should have the same depth, CV_8U or CV_32F, and the same size. Each of them can have an arbitrary number of channels.
- nimages - Number of source images (implicit in the Python binding, taken from the length of images).
- channels - List of the dims channels used to compute the histogram. The first array channels are numerated from 0 to images[0].channels()-1, the second array channels are counted from images[0].channels() to images[0].channels() + images[1].channels()-1, and so on.
- mask - Optional mask. If the matrix is not empty, it must be an 8-bit array of the same size as images[i]. The non-zero mask elements mark the array elements counted in the histogram.
- hist - Output histogram, which is a dense or sparse dims-dimensional array.
- dims - Histogram dimensionality that must be positive and not greater than CV_MAX_DIMS (equal to 32 in the current OpenCV version). In the Python binding this is implicit, taken from the length of histSize.
- histSize - Array of histogram sizes in each dimension.
- ranges - Array of the dims arrays of the histogram bin boundaries in each dimension.
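To connect these parameters to the tracking script above, here is a sketch of the same calcHist() call with each argument annotated (the ROI coordinates are the hardcoded values used in the script):

```python
import numpy as np
import cv2

cap = cv2.VideoCapture('videos/slow_traffic_small.mp4')
ret, frame = cap.read()
cap.release()

# Same hardcoded ROI as in the tracking script
r, h, c, w = 200, 20, 300, 20
hsv_roi = cv2.cvtColor(frame[r:r+h, c:c+w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))

roi_hist = cv2.calcHist(
    [hsv_roi],    # images:   list with one source array
    [0],          # channels: channel 0 of hsv_roi, i.e. hue
    mask,         # mask:     count only sufficiently saturated/bright pixels
    [180],        # histSize: 180 bins for the single hue dimension
    [0, 180])     # ranges:   hue runs from 0 to 179 in OpenCV

# Scale bin values to [0, 255] so the back projection behaves like an 8-bit probability map
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
print(roi_hist.shape)   # (180, 1): one bin count per hue value
```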
The function calcBackProject() calculates the back projection of the histogram. That is, similarly to calcHist(), at each location (x, y) the function collects the values from the selected channels in the input images and finds the corresponding histogram bin. But instead of incrementing it, the function reads the bin value, scales it by scale, and stores it in backProject(x, y). In terms of statistics, the function computes the probability of each element value with respect to the empirical probability distribution represented by the histogram.
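Here is a small self-contained sketch of that behavior on a synthetic HSV image (the image and the patch are invented purely to show what the back projection looks like):

```python
import numpy as np
import cv2

# Synthetic HSV test image: left half has hue 30, right half has hue 120
hsv = np.full((100, 200, 3), (30, 200, 200), dtype=np.uint8)
hsv[:, 100:] = (120, 200, 200)

# Histogram of a small patch taken from the left half (hue channel only)
patch = hsv[10:40, 10:40]
hist = cv2.calcHist([patch], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back projection: every pixel is replaced by the bin value of its hue,
# so pixels sharing the patch's hue score 255 and all others score 0
dst = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
print(dst[50, 50], dst[50, 150])   # 255 on the left half, 0 on the right
```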
Here is the primary function of this chapter (a small sketch of a single call follows its parameter list):
cv2.meanShift(probImage, window, criteria)
The parameters are:
- probImage - Back projection of the object histogram.
- window - Initial search window.
- criteria - Stop criteria for the iterative search algorithm.
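Below is a minimal sketch of a single cv2.meanShift() call on a synthetic back projection, just to show the inputs and return values in isolation (the blob position, window, and sizes are arbitrary):

```python
import numpy as np
import cv2

# Synthetic back projection: a bright blob at (x=260, y=140) on a dark background
dst = np.zeros((360, 640), dtype=np.uint8)
cv2.circle(dst, (260, 140), 30, 255, -1)

# Start the search window away from, but overlapping, the blob
track_window = (200, 80, 60, 60)   # x, y, w, h

# Stop after 10 iterations or when the window moves by less than 1 pixel
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

# meanShift returns the number of iterations performed and the converged window
niter, track_window = cv2.meanShift(dst, track_window, term_crit)
print(niter, track_window)   # the window ends up centered on the blob
```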
Here is the input video: slow_traffic_small.mp4
Here is the output from the code: