Moving object detection is an important research topic in the computer vision and video processing areas. Detection of moving objects in video streams is the first relevant step of information extraction in many computer vision applications. This paper puts forward an improved background subtraction algorithm for moving object detection under a fixed camera condition.

Combining adaptive background subtraction with symmetrical differencing then yields a complete foreground image, and a chromaticity difference is used to eliminate the shadow of the moving target, effectively distinguishing moving shadows from moving targets. The results show that the algorithm can quickly establish the background model and rapidly detect complete moving targets.

Moving object detection is an important part of digital image processing and the basis of many subsequent sophisticated processing tasks such as target recognition and tracking, target classification, and behavior understanding and analysis. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving objects provides a focus of attention for recognition, classification and activity analysis. The technology has wide application prospects such as smart monitoring, autonomous navigation, human-computer interaction, virtual reality and so on.

This paper studies the method of obtaining data about moving objects from video images by background extraction. Object detection requires two steps: background extraction and object extraction. Moving object detection needs a static background image; since each frame of the video contains moving objects, background extraction is necessary. Subtracting the background image from each frame yields the moving object image.
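As a rough sketch of this subtraction step (written only for illustration; the use of Python with OpenCV, the function name, and the fixed threshold are our assumptions, not part of the original work):

    import cv2

    def extract_object(frame_bgr, background_bgr, thresh=30):
        """Subtract a static background from the current frame and
        binarize the result to get a rough moving-object mask."""
        frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        background = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(frame, background)  # per-pixel |frame - background|
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return mask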

This subtraction is the object extraction step; moving object detection can then be achieved. This paper first introduces two moving object detection algorithms for fixed scenes, the frame difference method and the moving edge method, and analyzes their advantages and disadvantages; it then presents a new algorithm based on them, and finally gives the experimental results and analysis.

Background extraction of moving objects

Background extraction means that the background, the static scene, is extracted from the video image. Because the camera is fixed, each pixel of the image has a corresponding background value which is basically fixed over a period of time. Well-known issues in background extraction include:

1) Light changes: the background model should adapt to gradual illumination changes.
2) Moving background: the background model should include changing background that is not of interest for visual surveillance, such as moving trees.
3) Cast shadows: the background model should include the shadow cast by moving objects, which apparently moves itself, in order to detect the moving object shape more accurately.
4) Bootstrapping: the background model should be properly set up even in the absence of a complete and static training set at the beginning of the sequence.
5) Camouflage: moving objects should be detected even if their chromatic features are similar to those of the background model.

Calculation of consecutive frames subtraction

This method uses the difference between the current frame and its previous frame to extract a motion region. In this paper, we adopt an improvement of it, namely symmetrical differencing, which takes image differences over three consecutive frames. This method removes the effects of the background uncovered by motion and accurately obtains the contour of moving targets.
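A minimal sketch of symmetrical differencing under these assumptions (three grayscale frames, a fixed threshold, and the helper name are choices made for this illustration):

    import cv2

    def symmetrical_difference(prev_gray, curr_gray, next_gray, thresh=25):
        """Three-frame (symmetrical) differencing: keep only pixels that
        differ both from the previous frame and from the next frame."""
        d1 = cv2.absdiff(curr_gray, prev_gray)
        d2 = cv2.absdiff(next_gray, curr_gray)
        _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
        _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
        return cv2.bitwise_and(b1, b2)  # suppresses the uncovered-background ghost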

In the conventional background subtraction method, a fixed reference background model for the intended surveillance area is constructed in advance. The conventional background subtraction method extracts moving targets based on the difference between the current image and the reference background model. It works well for applications in controlled environments, in which a constant illumination scenario can be achieved artificially. However, for other visual tracking applications such as traffic monitoring and security/surveillance, the illumination conditions change over time so that a fixed reference background model is not realistic and may eventually lead to a detection failure.
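One common way to keep the reference model current is a running-average update; the following sketch only illustrates that general idea (the learning rate alpha and the function names are our choices), not the specific background model used in this paper:

    import cv2
    import numpy as np

    def update_background(background_f32, frame_gray, alpha=0.02):
        """Running-average background maintenance: slowly blend the current
        frame into a float background so gradual illumination changes are
        absorbed without absorbing short-lived moving objects."""
        cv2.accumulateWeighted(frame_gray.astype(np.float32), background_f32, alpha)
        return cv2.convertScaleAbs(background_f32)  # uint8 view for subtraction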

Consequently, the construction and maintenance of a reliable and accurate reference background model is crucial in background subtraction based motion detection approaches.

Figure 1. Algorithm for background subtraction

Typical moving object detection algorithms

Frame difference method

To detect moving objects in surveillance video captured by an immobile camera, the simplest method is the frame difference method: it has great detection speed, can easily be implemented in hardware, and has been used widely. When detecting moving objects with the frame difference method, the unchanged part of the difference image is eliminated while the changed part remains. This change is caused by movement or by noise, so a binarization of the difference image is required to distinguish the moving objects from noise. Connected component labeling is also needed to acquire the smallest rectangle containing the moving objects.

The noise is assumed to be Gaussian white noise when calculating the threshold for this binarization. According to statistics, hardly any pixel deviates from the mean by more than three times the standard deviation, so the threshold is taken as T = μ + 3σ, where μ is the mean of the difference image and σ is its standard deviation. The flow chart of the detection process of the frame difference method is shown in Fig. 2.

Fig. 2. Frame differencing method
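A small sketch of this thresholding rule, with variable and function names chosen only for the illustration:

    import cv2
    import numpy as np

    def binarize_difference(diff_gray):
        """Binarize a difference image with the 3-sigma rule T = mu + 3*sigma."""
        t = float(np.mean(diff_gray)) + 3.0 * float(np.std(diff_gray))
        _, binary = cv2.threshold(diff_gray, t, 255, cv2.THRESH_BINARY)
        return binary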

Moving edge method

A difference image can be regarded as a time gradient, while an edge image is a space gradient. A moving edge can therefore be defined by the logical AND of the difference image and the edge image. The advantage of the frame difference method is its small amount of computation; its disadvantage is that it is sensitive to noise. If the objects do not move but the brightness of the background changes, the result of the frame difference method may not be accurate enough. Since edges are unrelated to brightness, the moving edge method can overcome this disadvantage of the frame difference method. The flow chart of the detection process of the moving edge method is shown in Fig. 3.

Fig. 3. Moving edge method
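A sketch of the moving edge idea described above; the Canny thresholds and the helper name are assumptions of this illustration:

    import cv2

    def moving_edges(prev_gray, curr_gray, diff_thresh=25):
        """Moving edge = logical AND of the binarized frame difference
        (time gradient) and the Canny edge image (space gradient)."""
        diff = cv2.absdiff(curr_gray, prev_gray)
        _, diff_bin = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        edges = cv2.Canny(curr_gray, 50, 150)
        return cv2.bitwise_and(diff_bin, edges)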

Improved moving object detection algorithm based on frame difference and edge detection

The moving edge method can effectively suppress the noise caused by light, but it still misjudges some other kinds of noise. This paper therefore proposes an improved algorithm based on frame difference and edge detection; analysis shows that the method has better noise suppression and higher detection accuracy.

1. Algorithm introduction

The flow chart of the detection process of the method based on frame difference and edge detection presented in this paper is shown in Fig. 4.

Fig. 4. Improved algorithm

The steps of the new algorithm presented in this paper are as follows.

(1) Obtain edge images Ek-1 and Ek from two consecutive frames Fk-1 and Fk using the Canny edge detector.
(2) Obtain the edge difference image Dk as the difference between Ek and Ek-1.
(3) Divide the edge difference image Dk into small blocks, count the number of non-zero pixels in each block, and record it as Sk.
(4) If Sk is larger than the threshold, mark the block as a moving area; otherwise it is a static area. Letting 1 denote a moving area and 0 a static area, we obtain a matrix M.
(5) Apply connected component labeling to M and remove the connected components that are too small.
(6) Obtain the smallest rectangles containing the moving objects.

The algorithm improves both object segmentation and object locating.
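Read together, steps (1) to (6) translate into roughly the following sketch; the block size, the thresholds, and the minimum component area are assumptions of this illustration, not values given in the paper:

    import cv2
    import numpy as np

    def detect_moving_objects(prev_gray, curr_gray, block=8, block_thresh=10, min_area=4):
        # (1)-(2) edge images of two consecutive frames and their difference
        e_prev = cv2.Canny(prev_gray, 50, 150)
        e_curr = cv2.Canny(curr_gray, 50, 150)
        d = cv2.absdiff(e_curr, e_prev)

        # (3)-(4) count non-zero pixels per block; mark blocks above the threshold
        h, w = d.shape
        rows, cols = h // block, w // block
        m = np.zeros((rows, cols), np.uint8)
        for i in range(rows):
            for j in range(cols):
                s = cv2.countNonZero(d[i*block:(i+1)*block, j*block:(j+1)*block])
                if s > block_thresh:
                    m[i, j] = 1  # moving block

        # (5) connected component labeling on M; drop components that are too small
        n, labels, stats, _ = cv2.connectedComponentsWithStats(m, connectivity=8)
        boxes = []
        for k in range(1, n):  # label 0 is the static background
            x, y, bw, bh, area = stats[k]
            if area >= min_area:
                # (6) smallest block-aligned rectangle containing the moving object
                boxes.append((x*block, y*block, bw*block, bh*block))
        return boxes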

2. Object segmentation

Object segmentation divides the image into a moving area and a static area. The algorithm presented in this paper first obtains the edge images and then differences them to get the edge difference image. In the final image, the pixel value of the background area equals 0 and the pixel value of the edges of moving objects equals 1.

Now we compare our algorithm with the moving edge method.

(1) In the moving edge method, assume two consecutive frames are Fk-1 and Fk, the background is B, the moving objects are Mk-1 and Mk, and the independent white noise in each frame is Nk-1 and Nk, so that Fk = B + Mk + Nk and Fk-1 = B + Mk-1 + Nk-1. The difference between the two frames is then Dk = Fk - Fk-1 = (Mk - Mk-1) + (Nk - Nk-1). Applying Canny edge detection to frame Fk gives the edge image Ek, which contains the edge components EMk and ENk caused by Mk and Nk respectively. The signal-to-noise ratio is defined from SEM, the number of edge pixels caused by moving objects, and SEN, the number of edge pixels caused by noise, which gives the SNR of the moving edge method.

(2) In our method, we first obtain the edge images with the edge detector and then difference them. In a practical system the difference between two edge images is the absolute value of the pixel-wise difference, and the edges of the two images do not coincide when the objects are moving, so the edge difference image in fact contains the sum of the edges of the two frames.

Because the noise in the two frames is independent while the moving-object edges of the two frames are correlated, the corresponding ratio for our algorithm can be derived in the same way. The comparison shows that noise accounts for a smaller share of the detected edges in our algorithm than in the moving edge method, so our method works more effectively.

3. Detection of moving cast shadows

To prevent moving shadows from being misclassified as moving objects or parts of moving objects, this paper presents an explicit method for detecting moving cast shadows on a dominating scene background. These shadows are generated by objects between a light source and the background. Moving cast shadows cause a frame difference between two succeeding images of a monocular video sequence. For shadow detection, these frame differences are detected and classified into regions covered and regions uncovered by a moving shadow. The detection and classification assume a plane background and a non-negligible size and intensity of the light sources. A cast shadow is detected by temporal integration of the covered background regions while subtracting the uncovered background regions.
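As an aside, the chromaticity difference mentioned in the introduction for shadow elimination can be illustrated with a generic brightness/chromaticity test such as the one below; all thresholds and names are our own choices, and this is not necessarily the exact classifier used in this work:

    import numpy as np

    def shadow_mask(frame_bgr, background_bgr, dark_lo=0.4, dark_hi=0.95, chroma_tol=0.02):
        """Mark as shadow the pixels that are darker than the background
        (dark_lo..dark_hi of its brightness) but keep almost the same
        normalized chromaticity as the background."""
        f = frame_bgr.astype(np.float32) + 1e-6
        b = background_bgr.astype(np.float32) + 1e-6
        ratio = f.sum(axis=2) / b.sum(axis=2)                   # brightness ratio
        f_chroma = f[..., :2] / f.sum(axis=2, keepdims=True)    # normalized chromaticity
        b_chroma = b[..., :2] / b.sum(axis=2, keepdims=True)
        chroma_diff = np.abs(f_chroma - b_chroma).max(axis=2)
        return (ratio > dark_lo) & (ratio < dark_hi) & (chroma_diff < chroma_tol)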

The shadow detection method is integrated into an algorithm for 2-D shape estimation of moving objects. The extended segmentation algorithm first compensates apparent camera motion; then a spatially adaptive relaxation scheme estimates a change detection mask for two consecutive images. An object mask is derived from the change detection mask by eliminating changes due to background uncovered by moving objects and changes due to background covered or uncovered by moving cast shadows.

Experimental results and analysis

In this paper, an improved moving object detection algorithm based on frame difference and edge detection is brought forward. The operating environment is Windows XP and the programming environment is Matlab 8.0. The size of the sequence images is 640×480. Partial simulation results are as follows.

From the results we can see that the improved moving object detection algorithm based on frame difference and edge detection has a much higher recognition rate and higher detection speed than several classical algorithms. The algorithm may still produce occasional false detections under more complicated backgrounds, so there is still room for improvement.

V. Conclusion

This paper presents an improved moving object detection algorithm based on frame difference and edge detection.

This method not only retains the low computational cost of the frame difference method and the insensitivity to illumination of the edge detection method, but also improves noise suppression. Meanwhile, it divides the image into small blocks before doing connected component labeling, which significantly speeds up the detection. Experimental results show that the algorithm has a high recognition rate and high speed, and it is a good candidate for practical systems.

Acknowledgment

I would like to thank my guide Prof. (Dr.) A. P. Dhande for the constant encouragement and assistance he provided me at every stage of the preparation of this paper. I am very grateful to Mr. Mahesh Khadtare for his valuable suggestions and help during the implementation of this paper.
