It’s an interesting development, but some questions persist about this nascent technology. The bot requires a configuration to identify a facial template, from which an emotion is then inferred.
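As I understand that pipeline, it is roughly: detect facial landmarks, reduce them to a feature vector, then map that vector to an emotion label. A minimal pure-Python sketch of the last step follows; the feature names, thresholds, and labels are all hypothetical illustrations, not the bot's actual configuration, and a real system would learn them from labelled data:

```python
# Hypothetical sketch: map a few normalized facial-landmark measurements
# (0..1) to a coarse emotion label. Thresholds and labels are illustrative
# only -- a production system trains these on labelled examples.

def classify_emotion(features):
    """Return a coarse emotion label from landmark-derived features."""
    mouth_curve = features.get("mouth_curve", 0.5)   # > 0.5: corners turned up
    brow_raise = features.get("brow_raise", 0.5)     # > 0.5: brows raised
    eye_open = features.get("eye_openness", 0.5)     # > 0.5: eyes wide

    if mouth_curve > 0.7:
        return "happy"
    if brow_raise > 0.7 and eye_open > 0.7:
        return "surprised"
    if mouth_curve < 0.3 and brow_raise < 0.3:
        return "angry"
    return "neutral"  # a deliberately held poker face lands here

print(classify_emotion({"mouth_curve": 0.8}))                     # happy
print(classify_emotion({"mouth_curve": 0.5, "brow_raise": 0.5}))  # neutral
```

Note what the sketch makes visible: features held artificially steady classify as "neutral" no matter what the subject actually feels, which is exactly the gap between read data and true internal state the questions below are probing.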
Was the bot calibrated quantitatively to Raila’s expressions, so that the results of this test would qualitatively validate the expressions the bot was reading?
How was the bot’s configuration for facial expressions produced?
If it’s reading a limited set of points, is it possible the biometric data could suggest the contrary? And if a subject maintains a fixed expression, say a poker face, could the bot be reading data that simply doesn’t align with the host’s true intentions?
Can a bot be unbiased, given the data sets or facial-recognition models it’s been fed?
Do you plan to add symptomatic readings of any kind to validate the results?
How cautious should marketers be in extrapolating from facial-recognition data, given the lack of biochemical data?