How to upgrade - or downgrade - a Lampix base image?

I’m trying to get the example-fruits code operational, and I’ve determined that the image I’m using on the device doesn’t seem to run the code. I’ve run the code in the simulator, so I know my code “should work”, but the Lampix device simply hangs as soon as it’s run.

I used a basic cheat and tried to print something to a div tag; it doesn’t render to the div on the device but does in the simulator :slight_smile:

example-fruits is only meant to showcase how to build an app and will only run on the simulator. I’ll add a note in the repo and the documentation. More below.

The simulator is more permissive about what can run on it: you can instruct it to use any watcher name and/or neural_network_name, and it will simply prompt you to select the watcher name / neural network combo, which you can trigger by clicking somewhere in the watcher’s registered area.

Lampix, however, needs to load the corresponding watcher and/or the specified neural network, and it will not work if it cannot find either. Right now, “fruits” is not a neural network we provide.
When we do provide it, it will work with oranges, lemons, and limes at first, and will be documented as such. example-fruits will, at that point, take into account the varying sizes of the physical objects placed under the device’s camera, which it doesn’t right now.
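To illustrate the difference described above, here is a minimal sketch of the device-side check, assuming a simple registry of available networks. The function name and registry are illustrative, not the real Lampix API; per this thread, only the fingers network ships today.

```javascript
// Networks actually available on the device (per this thread, only "fingers").
const availableNetworks = ['fingers'];

// Hypothetical check: the simulator skips this and prompts you instead,
// while the device fails (hangs) when the requested network is missing.
function canRunWatcher(neuralNetworkName) {
  return availableNetworks.includes(neuralNetworkName);
}

console.log(canRunWatcher('fingers')); // true
console.log(canRunWatcher('fruits'));  // false: example-fruits only runs in the simulator
```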

Which neural networks are currently supplied, other than fingers? Also, how will we create new ones? In order to write a detection program and register objects for a ‘billion object database’, we will clearly need to start creating these networks and their hooks to objects.


Currently we only provide the neural network that is used to detect fingers.
We will be providing tutorials for creating the “watchers” that you will be able to use with your custom neural networks and eventually create applications that can detect other types of objects.

Soon we will be launching the PIX Ecosystem and we will provide the “Trainer App” that you will be able to use for submitting pictures to the “billion object database”.


Any ETA on when we will be able to create watchers and networks? I’m working on something to detect when a small object is placed down, and fingers won’t do it… I’m looking at the depth classifier from rainy sounds, since I can place down an object (my goal), but mine is a smaller one (a D&D miniature).

Hi @Aciolino,

Please keep an eye on the upcoming releases. We will be posting the release notes prior to pushing the updates.
Unfortunately the custom watcher support is not scheduled for our next release, but it will be included in one of our following releases.

Thank you for your patience!

Ok, so not yet on custom watchers… Can we calibrate, over time, the sensitivity to the size of an item on the table surface? Large items were detected easily on rainy sounds, but small ones are missed.

Unfortunately, we don’t currently offer a way to calibrate that sensitivity.

This has to do with the information provided by the camera, which is not consistent enough to allow for any response other than “something MIGHT be there”, whereas the response we’d like to provide is “something is there, and this is its shape”. For the time being, only objects taller than a certain threshold are reported.
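A small sketch of the height-threshold behavior just described: objects below the cutoff are simply not reported. The threshold value and field names are assumptions for illustration, not Lampix internals.

```javascript
// Hypothetical cutoff; the real threshold is internal to the device.
const HEIGHT_THRESHOLD_MM = 30;

// Only objects taller than the threshold are reported; shorter ones
// (e.g. a D&D miniature) fall below the depth data's noise floor.
function reportedObjects(depthDetections) {
  return depthDetections.filter(obj => obj.heightMm > HEIGHT_THRESHOLD_MM);
}

reportedObjects([
  { id: 'mug', heightMm: 95 },
  { id: 'miniature', heightMm: 28 },
]);
// only the mug is reported
```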

If this changes, it will either be in the form of a JavaScript parameter to allow sensitivity to be configured however the developer decides it should be, or it will simply detect everything between the camera and the surface.
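Purely speculative, but the first option above (a JavaScript parameter for developer-chosen sensitivity) could look something like this. Neither the parameter name nor the watcher name exists in the current SDK; both are placeholders.

```javascript
// Hypothetical watcher configuration with a developer-chosen sensitivity.
const watcherConfig = {
  name: 'DepthClassifier',      // placeholder watcher name
  params: {
    minObjectHeightMm: 10,      // placeholder sensitivity parameter
  },
};

// A device honoring such a parameter might filter detections like this:
function applySensitivity(detections, config) {
  return detections.filter(d => d.heightMm >= config.params.minObjectHeightMm);
}
```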