Wednesday, September 28, 2022

DeepStack vs Sense AI and improving recognition.

First, what they have in common:

  • Both use YOLO models to match detected objects against.
  • The Sense AI API is a superset of the DeepStack API, so anything written for DeepStack should work with Sense AI too (see the sketch after this list).
  • They use the same coding languages.
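
Because the APIs line up, one client can talk to either server. Here is a minimal sketch in Python using the requests library; the host, port, and image name are my own placeholders for illustration (DeepStack's Docker image is commonly mapped to port 80, while a Sense AI install listens on its own port), so adjust them to match your setup.

```python
import requests

# Assumed endpoint for illustration: point this at whatever host/port
# your DeepStack or Sense AI install actually listens on.
BASE_URL = "http://localhost:80"

def detect(image_path, min_confidence=0.4):
    """POST an image to the /v1/vision/detection route both servers answer."""
    with open(image_path, "rb") as f:
        response = requests.post(
            f"{BASE_URL}/v1/vision/detection",
            files={"image": f},
            data={"min_confidence": str(min_confidence)},
        )
    response.raise_for_status()
    return response.json().get("predictions", [])

# "front_door.jpg" is a placeholder image name.
for p in detect("front_door.jpg"):
    print(p["label"], round(p["confidence"], 2),
          (p["x_min"], p["y_min"], p["x_max"], p["y_max"]))
```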

The main differences between DeepStack and Sense AI are:

As explained in my DeepStack posts, much of your success depends on:

  • The mode the engine is run at. Medium is the default; High will yield better results but at the cost of higher load, and the reverse is true of Low.
  • The mode the model was created for. The model ought to be built for the same or a higher mode than the engine will be run at to see improved performance.
  • The trigger image might not be the best one for recognition. You will want to tweak the number and frequency of images sent to the AI when triggered to get better confidence in what is detected (see the survey sketch after this list).
  • The same goes for minimum confidence percentages. Too low and you will get lots of wrong detections; too high and you will miss things. It is best to start low and see which objects you care about get detected as what, and at what confidence.
  • How closely the objects used in training match the ones in the images used for detection. This is the biggie, and it is why so many people look to create their own custom models: both for object types not in the default model, and to more closely match expected targets in lighting and coloring. You can find tools for simplifying making your own custom models in my deepstack repo; a sketch of calling a custom model follows the survey example below.
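
To put numbers behind those middle two bullets, here is a rough sketch of the start-low approach: run a batch of saved trigger frames through detection at a deliberately low threshold and report the confidence range seen per label. The folder name and threshold are my placeholders; the detection route is the same one used above.

```python
import glob
from collections import defaultdict

import requests

BASE_URL = "http://localhost:80"  # assumed; match your DeepStack/Sense AI install

def survey(image_glob, min_confidence=0.2):
    """Run saved trigger frames through detection at a low threshold and
    summarize the confidence range per label, to help pick a real cutoff."""
    seen = defaultdict(list)
    for path in sorted(glob.glob(image_glob)):
        with open(path, "rb") as f:
            r = requests.post(
                f"{BASE_URL}/v1/vision/detection",
                files={"image": f},
                data={"min_confidence": str(min_confidence)},
            )
        r.raise_for_status()
        for p in r.json().get("predictions", []):
            seen[p["label"]].append(p["confidence"])
    for label, confs in sorted(seen.items()):
        print(f"{label}: hits={len(confs)} "
              f"min={min(confs):.2f} max={max(confs):.2f}")

# "trigger_frames" is a placeholder folder of frames saved when the camera fired.
survey("trigger_frames/*.jpg")
```

Labels that only ever show up near the low threshold are probably noise; real targets tend to cluster well above it, and that cluster tells you where to set your minimum confidence.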

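Once you have a custom model loaded, DeepStack serves it by name under /v1/vision/custom/<model-name>; given the superset claim above, a Sense AI install should answer the same route, though treat that as an assumption until you test it. The model and image names below are placeholders.

```python
import requests

BASE_URL = "http://localhost:80"   # assumed; match your install
MODEL_NAME = "my-backyard"         # placeholder custom model name

def detect_custom(image_path, min_confidence=0.4):
    """Call a named custom model instead of the stock detection model."""
    with open(image_path, "rb") as f:
        r = requests.post(
            f"{BASE_URL}/v1/vision/custom/{MODEL_NAME}",
            files={"image": f},
            data={"min_confidence": str(min_confidence)},
        )
    r.raise_for_status()
    return r.json().get("predictions", [])

for p in detect_custom("driveway.jpg"):
    print(p["label"], round(p["confidence"], 2))
```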
