It’s an interesting development, but some questions persist about the use of this nascent technology. The bot requires configuration to identify a facial template, which is then mapped to an inferred emotion.
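
For readers unfamiliar with that pipeline, here is a minimal sketch of what “template mapped to an emotion” could look like in practice. The landmarks, thresholds and labels are invented purely for illustration; they are not taken from the bot in question, which presumably uses a trained model rather than hand-written rules.

```python
from dataclasses import dataclass

# Hypothetical, simplified facial template: a few normalised landmark
# measurements standing in for the full point set a real system tracks.
@dataclass
class FacialTemplate:
    mouth_corner_lift: float   # positive when mouth corners are raised
    brow_inner_raise: float    # positive when inner brows are raised
    eye_openness: float        # 1.0 = fully open

def infer_emotion(template: FacialTemplate) -> str:
    """Map a facial template to a coarse emotion label.

    A placeholder rule set standing in for whatever trained model the
    bot actually uses; thresholds are invented for illustration only.
    """
    if template.mouth_corner_lift > 0.3 and template.eye_openness > 0.5:
        return "happy"
    if template.brow_inner_raise > 0.4:
        return "surprised" if template.eye_openness > 0.7 else "worried"
    if template.mouth_corner_lift < -0.2:
        return "displeased"
    return "neutral"

# A deliberately held poker face scores as neutral even if the speaker's
# intent is anything but -- exactly the concern raised in the questions below.
print(infer_emotion(FacialTemplate(0.0, 0.0, 0.6)))  # -> neutral
```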

Was the bot configured quantitatively against Raila’s expressions, so that the results of this test would qualitatively validate the expressions the bot was reading?

How was the bot’s configuration for facial expressions produced?

If it’s reading a limited set of points, is it possible that biometric data could yield the contrary result? Likewise, if the subject maintains a fixed expression, say a poker face, is it possible the bot is reading the data correctly but the result does not align with the host’s true intentions?

Can a bot be unbiased, given the data sets or facial recognition models it has been fed?

Do you plan to add symptomatic readings of any kind to validate the results?

How cautious should marketeers be in extrapolating from facial recognition data, given the lack of biochemical data?

Cheers

David
