Demonstration (in our ad hoc home museum) of the prototype, in which you search for the owner together with Van Gogh’s dog.

As a case study we chose Van Gogh’s dog, which is looking for its owner.

You go out with the dog and show him all kinds of paintings. The dog recognizes Vincent van Gogh’s self-portraits. He is of course very happy to see his owner, and in front of the portrait he tells you something about his life with Vincent.

Fun fact: when Van Gogh lived in Zundert, he actually had a dog living with him!

Interaction design

Adding image recognition to our Tellmies required a new interaction design. After all: when should the dog go “look” and say something? What if he sees something he doesn’t recognize? How do we make sure the user knows when the dog is looking? 

We chose to recognize three different types of visual input. First, of course, the dog recognizes self-portraits. The dog responds enthusiastically to this with a nice story about its life with Vincent. 

In addition, the dog also recognizes other paintings: not self-portraits by Vincent, but a landscape or another portrait. The dog then lets you know that this is not his owner. The third type is everything that is not a painting at all, such as floors, doors and people; the dog simply does not react to those.
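As a rough sketch of how these three types of input could be handled (all labels and responses here are our own illustrative assumptions, not the actual prototype code):

```python
# Hypothetical labels a classifier might emit, mapped to the dog's responses.
RESPONSES = {
    "self_portrait": "Woof! That's Vincent! Let me tell you about our life together...",
    "other_painting": "A beautiful painting, but that is not my owner.",
}

def respond(label):
    """Map a recognized label to the dog's spoken response.

    Anything that is not a painting (floors, doors, people, ...) falls
    through to None: the dog stays quiet.
    """
    return RESPONSES.get(label)
```

The key design point is the fall-through: only the two painting categories trigger speech, so the dog does not bark at every table it passes.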

“Can I have a good look?”

As a “trigger” to start processing the visual input, we chose to hold the dog still for a few seconds. This is easy to detect with a motion detector and also feels logical to the user: after all, the dog wants to be able to “take a good look” at the painting.
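As a minimal sketch of this trigger (in Python; the callback name, threshold and hold time are all our own assumptions, not values from the prototype):

```python
import time

STILL_THRESHOLD = 0.05  # assumed maximum motion magnitude that counts as "still"
HOLD_SECONDS = 2.0      # assumed time the dog must be held still before it "looks"

def wait_until_still(read_motion, hold_seconds=HOLD_SECONDS):
    """Block until `read_motion()` stays below STILL_THRESHOLD for `hold_seconds`.

    `read_motion` is a hypothetical callback that returns the current
    motion magnitude from the motion detector inside the dog.
    """
    still_since = None
    while True:
        if read_motion() < STILL_THRESHOLD:
            now = time.monotonic()
            if still_since is None:
                still_since = now
            elif now - still_since >= hold_seconds:
                return  # held still long enough: time to take a good look
        else:
            still_since = None  # any movement resets the timer
        time.sleep(0.05)
```

Resetting the timer on every movement is what makes the gesture feel deliberate: a quick pause while walking past a painting is not enough, only consciously holding the dog still is.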

Which ‘discarded’ dog did we choose for the prototype?
The chip as it sits in the dog, in a protective cardboard box and with a slot for the connection to the camera in its nose (prototype).

After a number of tests, we chose a lightning-fast chip that has only just come on the market and can – among many other exciting functions – classify visual input almost ten times per second.

The pre-learned objects are encoded in a neural network, a simplified representation of a human brain, which is fed with images supplied by the camera in the dog’s nose.

The objects are learned by photographing and labeling, in advance, the places you walk around with the dog. This allows us to distinguish between objects the dog has to say something about (paintings) and objects the dog does not react to, such as floors, ceilings, tables, doors and people. Fortunately, this goes at a rapid pace, so we only spend a few hours on it, depending on the diversity of spaces involved in the tour.
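As a sketch of how such a labeled photo collection could be organised before training (the folder-per-label layout and all names are our own assumptions, not the prototype’s actual pipeline):

```python
from pathlib import Path

def build_label_index(dataset_root):
    """Collect training photos grouped by label from a folder-per-label layout.

    Assumes a hypothetical layout like:
        dataset/self_portrait/*.jpg
        dataset/other_painting/*.jpg
        dataset/ignore/*.jpg        # floors, ceilings, tables, doors, people
    """
    index = {}
    for label_dir in sorted(Path(dataset_root).iterdir()):
        if label_dir.is_dir():
            index[label_dir.name] = sorted(str(p) for p in label_dir.glob("*.jpg"))
    return index
```

A layout like this makes the “ignore” class explicit: the network is trained not only on what to talk about, but also on what to stay quiet about.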

The images are delivered in colour, so that not only the outline but also the colour is “seen” as a distinguishing factor. We have found that this significantly reduces the error margin.
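To illustrate why colour helps (a toy sketch of our own, not the chip’s actual feature extraction): a coarse colour histogram keeps apart pixels that a grayscale view would merge, because a pure red and a pure green pixel can have the same brightness.

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: counts pixels per quantized (r, g, b) cell.

    `pixels` is a list of (r, g, b) tuples in 0-255. A grayscale version
    would collapse all three channels into one brightness value, losing
    exactly the colour information that helps tell paintings apart.
    """
    step = 256 // bins
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    return hist
```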

The result: a super smart device, invisibly hidden in the dog’s head.


In this new Tellmie, we wanted to hide the technology. We bought the current dog at the thrift store (and washed it!). 

The camera fits right in his nose and the hardware is in his head and body.  

Great for a demonstration! 

Do you also have an idea for an audio tour with image recognition? Get in touch with us!
