When the camera records a sequence of images, some of the images will show the vehicle approaching, and one image will show the vehicle large and centered. I want to pick the image where the vehicle is best seen, so that 1) I can use it to represent the sequence on an overview page, and 2) I can feed a few good images to potential license plate recognition software.
In the sequence to the left, the images around the middle of the sequence give us what we want. So why not just pick the middle image?
Picking the middle image of the sequence will produce the right image in 70% of the cases.
Problem 1 with picking a photo at a fixed position in the sequence is that this position might work well for vehicles travelling in one direction, but will be completely wrong in the other direction. (Neighborhoods with one-way streets are in luck.)
Problem 2: Changing the motion detection sensitivity will invalidate any setting based on position in the sequence.
Problem 3: The middle image of a sequence that contains 2 evenly spaced vehicles will show the empty road between the two vehicles.
The question: is it possible to programmatically a) select the right photo every time, and b) determine when there is more than 1 vehicle in the sequence and split the sequence into two or more sequences, one for each vehicle?
To do this we cannot rely on motion detection alone; we have to determine where the moving object is in the photo, and then pick the photo where the object is the largest without being partially out of the frame.
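Part (b) of the question can actually be sketched independently of how the vehicle is detected. Assuming we can give each frame an "activity" score of some kind (for example, how different it is from an empty-road frame), a sequence containing two vehicles shows two humps of activity separated by near-zero frames, and we can split at the quiet gap. The function name and threshold below are my own illustration, not part of this project:

```python
# Split a sequence of per-frame activity scores into one run per vehicle.
# The quiet_threshold value is a placeholder; it would need tuning per camera.

def split_sequence(scores, quiet_threshold=10):
    """Return (start, end) index pairs, one per run of consecutive active frames."""
    runs, start = [], None
    for i, score in enumerate(scores):
        if score > quiet_threshold:
            if start is None:
                start = i            # vehicle enters the frame
        elif start is not None:
            runs.append((start, i))  # vehicle has left; close the run
            start = None
    if start is not None:            # sequence ended while a vehicle was visible
        runs.append((start, len(scores)))
    return runs

# Two vehicles with empty road between them -> two runs:
print(split_sequence([2, 50, 85, 535, 60, 3, 4, 70, 400, 90, 5]))
# [(1, 5), (7, 10)]
```

The same idea works regardless of which per-frame score we end up with, so the rest of the post can focus on part (a).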
The first internet search took me to this fine page, where I found this algorithm:

```python
import math
import operator
from functools import reduce  # in Python 3, reduce lives in functools
from PIL import Image

h1 = Image.open("image1").histogram()
h2 = Image.open("image2").histogram()
rms = math.sqrt(
    reduce(operator.add, map(lambda a, b: (a - b) ** 2, h1, h2)) / len(h1)
)
```
The algorithm is quite popular and is quoted many times on the internet whenever comparing images comes up.
What it does is compare the image histograms and return a single number showing how different the images are, with 0 meaning the images are identical.
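To make the behavior concrete, here is the same histogram-RMS comparison wrapped in a function and run on two synthetic frames (the helper name and the test images are my own, not from the page above):

```python
# Histogram-RMS comparison: 0.0 for identical histograms, larger means more different.
import math
import operator
from functools import reduce
from PIL import Image

def histogram_rms_difference(img_a, img_b):
    h1, h2 = img_a.histogram(), img_b.histogram()
    return math.sqrt(
        reduce(operator.add, map(lambda a, b: (a - b) ** 2, h1, h2)) / len(h1)
    )

# Two synthetic "frames": an empty gray road, and the same road with a white block.
road = Image.new("RGB", (64, 64), (128, 128, 128))
with_car = road.copy()
with_car.paste((255, 255, 255), (20, 20, 44, 44))

print(histogram_rms_difference(road, road))      # 0.0 -- identical images
print(histogram_rms_difference(road, with_car))  # clearly larger than 0
```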
Hey, piece of cake! I just have to set a threshold value, and when the difference is biggest, the car is biggest in the photo…
Let's see how that worked in real life:
- The difference between the 1st (empty road) and 2nd image is low (<10) – so far so good.
- As the car approaches, the difference from the 1st photo increases (50, 65, 85) – hey, I am onto something.
- When the car is right in the middle of the picture, the difference between the empty road and the photo is 535 – this might actually work.
- As the car leaves the picture stage right, the difference drops to 462 – still on track.
- Now we have empty road again, and the difference is 437 – hey, wait, this is wrong. How can two shots of empty road be as different as if there had been a minivan in one of them?
If you look carefully you can see a slight difference in shading between the two images. The sun moved… (it has a tendency to do that).
Loading the images into Photoshop and examining the histograms, I can see that the histograms are almost identical, just shifted between the images.
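This failure is easy to reproduce synthetically. Shifting every pixel by just a few gray levels moves the whole histogram sideways, so the histogram-RMS comparison from the algorithm above reports a huge difference even though the scene is identical (the helper name and images here are my own illustration):

```python
# Why a global lighting shift defeats histogram comparison: the histograms
# are the same shape but land in different bins, so bin-by-bin RMS explodes.
import math
import operator
from functools import reduce
from PIL import Image

def histogram_rms_difference(img_a, img_b):
    h1, h2 = img_a.histogram(), img_b.histogram()
    return math.sqrt(
        reduce(operator.add, map(lambda a, b: (a - b) ** 2, h1, h2)) / len(h1)
    )

road = Image.new("L", (64, 64), 128)        # empty road
road_later = Image.new("L", (64, 64), 131)  # same road, sun moved slightly
print(histogram_rms_difference(road, road_later))  # large, despite identical scenery
```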
After hours of playing with the above algorithm, I can conclude that it is really good at detecting changes made to a photo, such as adding text or manipulating a small block of the image, but it is useless at detecting differences between two photos of the same scenery, because the scenery is constantly changing.
The next algorithm search took me to the difference function in the Python Imaging Library (PIL). Again, the entire picture is different ;-( – but reducing the number of colors and the resolution let me generate this image of a minivan, with all the changes contained within the box.
From there I built the current algorithm, which is pretty good for a first attempt:

```python
# Compare the current image with the master (empty road) image
# and draw a box around the change.
from PIL import ImageChops, ImageDraw, ImageOps

diff_image = ImageOps.posterize(
    ImageOps.grayscale(ImageChops.difference(master_image, cropped_img)), 1
)
rect = diff_image.getbbox()
if rect is not None:
    ImageDraw.Draw(cropped_img).rectangle(rect, outline="yellow", fill=None)
I highlight the detected area of change on the original thumbnail, to make it easy to judge how effective the detection is.
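Building on that change box, one possible selection rule (my own sketch, not part of the current code) is to score each frame by the area of its change box and skip frames where the box touches the image border, i.e. where the vehicle is partially out of frame:

```python
# Pick the frame whose change box is largest without touching the frame edge.
# change_box() reuses the posterized-difference trick; best_frame_index() is
# my own hypothetical selection rule.
from PIL import Image, ImageChops, ImageOps

def change_box(master, frame):
    """Bounding box of the (posterized) difference against the empty-road master."""
    diff = ImageOps.posterize(
        ImageOps.grayscale(ImageChops.difference(master, frame)), 1
    )
    return diff.getbbox()  # None when the frame matches the master

def best_frame_index(master, frames):
    """Index of the best frame, or None if no frame qualifies."""
    best, best_area = None, 0
    w, h = master.size
    for i, frame in enumerate(frames):
        box = change_box(master, frame)
        if box is None:
            continue  # empty road, nothing changed
        left, top, right, bottom = box
        if left == 0 or top == 0 or right == w or bottom == h:
            continue  # change touches the border: vehicle partially out of frame
        area = (right - left) * (bottom - top)
        if area > best_area:
            best, best_area = i, area
    return best
```

Note that the 1-bit posterize step suppresses small brightness differences (anything below half intensity collapses to zero), which is exactly what makes this approach more robust against the moving sun than the histogram comparison was.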
Hmm, I detect the roof, then the front bumper, then larger and larger parts of the vehicle – there is definitely room for improvement here.
This is as far as I have currently come on detecting the vehicle inside the image, but check back for my next attempt at cracking this algorithm.