Wednesday, October 6, 2021

Quick Blue Iris with DeepStack debug

Note this write-up refers to a Blue Iris beta release, so there may be some differences from the version you are using.

Turn on DeepStack save details.

To do real debugging you need to turn on "Save DeepStack analysis details" for the camera. Open the camera properties, go to the Trigger tab and click on Artificial Intelligence.
AI options dialog

This will create a ".dat" file in the Alerts folder for each alert with info about the frames DeepStack processed.
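If you end up with a lot of alerts to go through, a short script can pair each alert image with its ".dat" analysis file by filename. This is a minimal sketch, not something Blue Iris ships: the Alerts path and the naming convention (same base name, different extension) are assumptions about a typical install, so verify them against your own Alerts folder first.

```python
from pathlib import Path

def pair_alerts(alerts_dir):
    """Match each .dat analysis file to an alert image with the same stem.

    Assumes Blue Iris writes the .dat next to the alert .jpg with the
    same base name -- check this against your own Alerts folder.
    """
    alerts = Path(alerts_dir)
    pairs = []
    for dat in sorted(alerts.glob("*.dat")):
        jpg = dat.with_suffix(".jpg")
        # Record the image name if it exists, else None so you can spot
        # .dat files whose alert picture has been deleted or moved.
        pairs.append((dat.name, jpg.name if jpg.exists() else None))
    return pairs

# Example (hypothetical path): pair_alerts(r"C:\BlueIris\Alerts")
```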
Alerts folder listing

Find the alert pic.

If you are here you probably have this, but if you are just not seeing an alert clip for something you know should be there, start here. Open the cancelled alerts folder and find the alert picture you think should have triggered an alert.

Note that if there is no alert picture at all, you have a motion detection issue. The DeepStack analysis described below sends frames while ignoring the camera's motion and object-to-detect settings.

If you have an alert that you think DeepStack should have triggered a notification for, here are some simple things to check:

Does the motion alert have a return 100 or -1 for the memo?

If so, DeepStack has crashed or hung. Either way it needs to be restarted.

Still getting a return 100 or -1 for the memo after restart?

I ran into this the other day on one of my servers. After a restart DeepStack would work for a few minutes and then stop. I looked in the logs (docker logs for Docker installs) and found the return codes of 200 changed to a 400 and then 500s. I tried several things: I backed out the last change, restarted, and watched to see if I could sort out an event causing the crash, but it just kept working. So it is hard to say what to do here other than keep restarting DeepStack until it stays running.
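One quick way to separate a Blue Iris problem from a DeepStack problem is to hit the DeepStack detection endpoint directly and watch the status codes yourself. A minimal sketch follows; the URL and port are assumptions (DeepStack installs vary, and Docker port mappings differ), so adjust them for your setup. The small `classify_status` helper is the part worth keeping around.

```python
import json
import urllib.error
import urllib.request

# Adjust host/port for your install or Docker port mapping (assumption).
DEEPSTACK_URL = "http://localhost:80/v1/vision/detection"

def classify_status(code):
    """Map an HTTP status from DeepStack to a rough diagnosis."""
    if code == 200:
        return "ok"
    if code == 400:
        return "bad request - often a corrupt or empty image was sent"
    if code >= 500:
        return "server error - DeepStack itself is failing; restart it"
    return "unexpected status %d" % code

def check_deepstack(image_path, url=DEEPSTACK_URL):
    """POST a test image to DeepStack's detection endpoint and report."""
    boundary = "----deepstackcheck"
    with open(image_path, "rb") as f:
        img = f.read()
    # Hand-rolled multipart body with the "image" form field DeepStack expects.
    body = (
        ("--%s\r\n" % boundary).encode()
        + b'Content-Disposition: form-data; name="image"; filename="test.jpg"\r\n'
        + b"Content-Type: image/jpeg\r\n\r\n"
        + img
        + ("\r\n--%s--\r\n" % boundary).encode()
    )
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "multipart/form-data; boundary=%s" % boundary},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(classify_status(resp.status))
            print(json.loads(resp.read()).get("predictions", []))
    except urllib.error.HTTPError as e:
        print(classify_status(e.code))

if __name__ == "__main__":
    check_deepstack("test.jpg")  # any jpg you have handy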

Does the motion alert have "nothing found" for the memo?

Double click on the alert picture you want to debug, then turn on analyze by right clicking in the playing clip and selecting it from the menu.

Analyze options select 

I find it helpful to turn on overlays as well.
Overlay options selection

Next open the DeepStack console via the status icon.

You will want to restart playing the clip at this point and play it backwards. Note the alert picture appears to be the last frame of the clip; just double clicking on that alert plays forward from that position.

Stop when you get to the place where the thing was not detected that you think should have been.

IDed frame

Now you can compare the saved (.dat file) info against the IDed frame. In the above instance it IDed a "cat" in a few places, but checking the two frames sent to DeepStack we see the alert clip does not necessarily match.
First DeepStack frame

Second DeepStack frame

Notice the above is the alert picture and that they are in reverse order in the listing.

If I switch to Analyze with motion detector and play forward from a little before the first sent frame, I can then see what motion triggered the alert. It appears to be about half a second after the first sent frame. This seems to be related to the make time: the first frame sent to DeepStack is from the start of the make time, and the rectangle below is from the end of the make time.

So at least one issue here is that the IDed frame from analyze came between the frames sent. So in some cases (but not this one) playing with the make and break times might help. 

In this case the camera is set to send a picture to DeepStack every 750ms for up to 30 images, but BI appears to have only sent 2 frames, 2.061 seconds apart. Looking at the timestamps on the images, though, shows about 4 seconds actually passed between them. This would seem to indicate something is wrong in Blue Iris.

Then there is also the issue with DeepStack not seeing anything in the second sent frame. That would seem to indicate a model issue. Short of creating your own model there is little to be done about that.

It IDed something else in the picture but not the thing I wanted.

Again, running analyze on the clip may help. For example, in the image below from DeepStack analyze we see this.

And from motion analyze we see this.

You will note there are 6 raccoons in this shot. The lone one is mainly IDed from its tail. Two motion blocks land on a second raccoon from BI's motion detection, and the rest are unseen. DeepStack seems to connect the lone raccoon to its tail to ID it as a cat, but takes the 4 in a group, along with the dog house, as one object and IDs it as a dog. Note too that the confidence was higher in the second clip but the results returned were from the first clip. I could see how this might be interpreted as a bug, though whether to return the min, max, average, confirmed, first or last could be a matter of taste or best fit for a given target. This also might give some insight into tweaks to motion detection settings that could help you get better notifications.

Thursday, August 26, 2021

Simple copy paste reply for "what cam / system should I get?" posts.

People seem to not read the links, so I've made this reply to copy and paste in.

The problem with asking for info like this is that setups can be VERY subjective as to what is good enough, and you mostly get "I like X" or "I hate X" replies even if you have specs for what you need.

First thing you need to think about is what it is you expect to see. For cameras there are DORI (Detection, Observation, Recognition, Identification) numbers that tell you at how many feet you can expect each level of detail. So pick a level (Detection, Observation, Recognition or Identification) that you can live with first, or you are just wasting your time and money. If you really have no idea, then stop here and get something cheap like a Blink Mini to get a feel for what you can and cannot see.
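The DORI levels come from the EN 62676-4 standard, which defines them as pixel densities on target, commonly cited as roughly 25 px/m for Detection, 62.5 for Observation, 125 for Recognition and 250 for Identification. Given a camera's horizontal resolution and field of view, a little trigonometry estimates the maximum distance for each level. A quick sketch (metric, with feet printed alongside; the 1080p/90° example camera is just an assumption):

```python
import math

# DORI pixel densities (px/m), per EN 62676-4 (commonly cited values)
DORI = {"Detection": 25, "Observation": 62.5, "Recognition": 125, "Identification": 250}

def max_distance_m(h_res_px, hfov_deg, px_per_m):
    """Farthest distance at which the camera still delivers px_per_m.

    The scene width at distance d is 2*d*tan(hfov/2), so pixel density
    is h_res_px / (2*d*tan(hfov/2)); solve that for d.
    """
    return h_res_px / (px_per_m * 2 * math.tan(math.radians(hfov_deg) / 2))

# Example: a 1080p camera (1920 px wide) with a 90 degree lens
for level, ppm in DORI.items():
    d = max_distance_m(1920, 90, ppm)
    print("%-15s %5.1f m (%.0f ft)" % (level, d, d * 3.28))
```

Run with your own cameras' resolution and lens angle; note how quickly Identification distance collapses on wide lenses, which is why varifocals get recommended for longer ranges.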

Before looking at brands, sort out the number of cams you need and the specs each needs to see that level of detail at the distance each camera is expected to monitor. Here are instructions for a free online tool that lets you virtually add cams to your site and play with specs to see coverage and expected results.
Then you can make a plan for what you will need and not end up tossing stuff for upgrades. Remember you do not need to do it all (cameras, lights and sensors) at once, so this can also help prioritize the order to add things.

Avoid WiFi wherever you can. Since you need to get power out there anyway, you should look at Ethernet over Power adapters where running Ethernet cable is not practical.

That said, the $100-200 per cam range seems to be the sweet spot for bang for the buck. Though as I've said, some will be fine with cheaper and others will not consider any cam under $1K, so sorting out what works for you is key. My "go-tos", if you have a bit of light and your target is under 50 feet max, are ColorVus. For further out I like Dahua varifocals (zooms). But I have a dozen brands, not to mention models, in use.

As far as an NVR, you will want to make a list of features you want and then see which of the ones that have them looks easiest for you to work with. Personally I like Blue Iris, mainly because it is so easy to interface with things like home automation and AI systems. Plus it is relatively simple to upgrade a PC to well beyond the limits of a standalone NVR. But for example, if you wanted camera and radar linkage you might be better off with a Dahua setup; of course we are talking serious money there. Either way, leave room to grow, as odds are you will want to add to your system.

Even if you have the bandwidth for recording to the cloud, I would only plan on it as a backup. The ultimate setup would be recording 24/7 to an SD card in the cam and a local NVR, then backing up alerts to the cloud. As a rule, being able to record 24/7 means no battery powered cams.
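Recording 24/7 is mostly a disk-space question, and the math is simple enough to sketch. The 4 Mbps example bitrate and six-camera count below are just assumptions for illustration; plug in your own cameras' main-stream bitrates:

```python
def days_of_storage(disk_tb, cams, bitrate_mbps):
    """Days of continuous 24/7 recording a disk can hold.

    bitrate_mbps is per camera in megabits/second; 1 TB is taken as
    10**12 bytes, the way drive makers count it.
    """
    bytes_per_day = cams * bitrate_mbps / 8 * 1e6 * 86400
    return disk_tb * 1e12 / bytes_per_day

# Hypothetical example: a 4 TB disk with six cameras at 4 Mbps each
print(round(days_of_storage(4, 6, 4), 1))  # -> 15.4 days
```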

Lastly, think about supplemental lighting like accent lighting and/or IR floods, as in-camera lights tend to draw bugs and spiders. You will want to place these extra lights off axis from the cam to avoid glare issues too.