Hello,
The AVRcam does a great job at detecting distinct colors such as white on green. In this case, I bet you will be fine in the normal RGB mode of tracking. This is what I used at the Chibots line-following competition last month, and it worked exactly as I had intended. I only wish I had spent a little more time on the algorithm to ignore any seemingly "white" part of the image that wasn't connected to the line I was tracking (in this case, there were some reflections from an overhead skylight that screwed me up...the AVRcam reported them back as trackable objects, and I should have done a better job in my main controller application to ignore them).
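For anyone who hits the same reflection problem, one simple post-filter on the controller side is to keep only the tracked object whose horizontal center is closest to where the line was last seen. A minimal sketch, assuming the controller has already parsed the tracking packets into bounding boxes (the struct layout and function names below are mine, not part of the AVRcam protocol):

```c
#include <assert.h>
#include <stdlib.h>

/* Bounding box as parsed from the AVRcam tracking packets
 * (field names here are illustrative, not the wire format). */
typedef struct {
    int x1, y1;   /* upper-left corner  */
    int x2, y2;   /* lower-right corner */
} BBox;

/* Return the index of the object whose horizontal center is closest
 * to the last known line position; -1 if no objects were reported. */
int pickLineObject(const BBox *objs, int numObjs, int lastLineX)
{
    int best = -1;
    int bestDist = 1 << 30;
    for (int i = 0; i < numObjs; i++) {
        int cx = (objs[i].x1 + objs[i].x2) / 2;
        int d = abs(cx - lastLineX);
        if (d < bestDist) {
            bestDist = d;
            best = i;
        }
    }
    return best;
}
```

A stricter version could also reject any object whose distance from the last position exceeds a threshold, so a skylight reflection never gets picked even when the real line momentarily disappears.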
The question I want to ask is: will tracking give more accurate and stable results if it is done in the YCrCb color space? We are worried about the lighting conditions on the competition field, and we have heard that tracking in YCrCb is more resistant to illumination changes. Is that the case?
Yes, YCrCb is more resistant to changes in lighting conditions. I have toyed around with this a bit because, as you said, it is as easy as setting a register on the OV6620 to change its output to this color model. It seems to work OK, but since it's hard for me to naturally envision luma/chroma data, I have been sticking with the RGB model for now. This is definitely an area where some experimentation would be useful, though.
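To build some intuition for why the chroma channels help, here is the generic BT.601-style RGB-to-YCbCr math (the exact scaling the OV6620 uses internally may differ; this is just the textbook conversion). The key property: an additive brightness shift, like uniform glare across the frame, moves Y but leaves Cr and Cb essentially untouched:

```c
#include <math.h>

/* Generic BT.601-style RGB -> YCbCr conversion with chroma centered
 * at 128. The OV6620's internal coefficients may differ slightly;
 * this is only meant to illustrate the color model. */
void rgbToYCbCr(double r, double g, double b,
                double *y, double *cb, double *cr)
{
    *y  = 0.299 * r + 0.587 * g + 0.114 * b;  /* luma: all brightness */
    *cb = 128.0 + 0.564 * (b - *y);           /* blue-difference chroma */
    *cr = 128.0 + 0.713 * (r - *y);           /* red-difference chroma */
}
```

So a tracker that puts its tight thresholds on Cr/Cb and leaves a wide window on Y should ride out brightness swings better than one thresholding raw RGB.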
(Setting aside that AVRcamVIEW will give a strange frame dump.)
Again, correct. AVRcamVIEW does display a "funny" version of the image, but you can still make out what the objects are.
One other thing to note about using the AVRcam for line tracking: the stock software on the mega8 will track up to 8 objects, but when tracking a line, the entire line appears as one contiguous object (because it is). This results in a single bounding box being returned over the user interface. I made a one-line change to the firmware that set the maximum height of an object to around 17 pixels. Any time an object more than 17 pixels "tall" was encountered, a new tracked object was added instead of just extending the current one. This allowed the system to break the tracked line into 8 chunks and report back the bounding-box info for each chunk. So when the line turned to the left or right, the AVRcam reported back a nice progression of tracked chunks that mapped to the contour of the line. Let me know and I can send you the single-line change to make this work (or maybe I should just post it to the Downloads section, if more people would be interested in it).
Good luck, and keep us posted...