Architectural Diagram Experiment #103

Experiment #103 explores how Cognitive Services can be used to build an Internet of Things access control system managed entirely by Artificial Intelligence, giving it the ability to speak human languages and recognize human faces.


First, we created a Windows Universal Application to run the project on desktop and mobile. This app lets us register the faces of our whitelisted users (01), the people who will have access permissions.

The app connects to Azure (02) and its Cognitive Services (03) to use the Text to Speech (04) service and the Face (05) service we have previously deployed.

With the Face cognitive service, we can store all users by adding their photos to a Person Group List (06) in the cloud.
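As an illustration of that registration step, here is a minimal sketch against the Face API v1.0 REST endpoints: create the Person Group, add a whitelisted person with a face photo, and train the group. The endpoint, key, group id, person name, and file name are placeholders rather than values from the original project, and the actual app is a C# Universal Windows app, not this Python script.

```python
import requests

# Placeholder values -- substitute your own Face resource endpoint and key.
ENDPOINT = "https://YOUR-FACE-RESOURCE.cognitiveservices.azure.com"
KEY = "YOUR-FACE-API-KEY"
GROUP_ID = "whitelisted-users"

headers_json = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
headers_bin = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/octet-stream"}

# 1. Create the Person Group that will hold the whitelisted users.
requests.put(f"{ENDPOINT}/face/v1.0/persongroups/{GROUP_ID}",
             headers=headers_json, json={"name": "Whitelisted users"})

# 2. Create a person inside the group and keep its personId.
person = requests.post(f"{ENDPOINT}/face/v1.0/persongroups/{GROUP_ID}/persons",
                       headers=headers_json, json={"name": "Alice"}).json()

# 3. Attach one or more face photos to that person.
with open("alice.jpg", "rb") as photo:
    requests.post(
        f"{ENDPOINT}/face/v1.0/persongroups/{GROUP_ID}/persons/{person['personId']}/persistedFaces",
        headers=headers_bin, data=photo.read())

# 4. Train the group so it can later be used for identification.
requests.post(f"{ENDPOINT}/face/v1.0/persongroups/{GROUP_ID}/train", headers=headers_json)
```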

There are reverse arrows in the diagram because the process also works in the other direction: we can consult the Person Group list to find out whether a visitor is registered on it.

The Step By Step process for creating the app can be found in this previous post.


Once the Person Group list is created, we can run a modified version of the app on our IoT device under Windows IoT Core.

On this side, a visitor (07) pushes the doorbell button and the device takes a photo of him. This image is sent to Azure (08) and managed by Cognitive Services (09).

The app sends a request to the Face API service (11), which checks whether the visitor is a whitelisted user (12).
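A rough sketch of that check, again using the Face API v1.0 REST endpoints: detect the face in the doorbell photo to obtain a faceId, then ask the identify operation whether it matches anyone in the Person Group. Endpoint, key, file name, and confidence threshold are assumptions for illustration; in the project this logic lives in the UWP app on the device.

```python
import requests

ENDPOINT = "https://YOUR-FACE-RESOURCE.cognitiveservices.azure.com"
KEY = "YOUR-FACE-API-KEY"
GROUP_ID = "whitelisted-users"

headers_bin = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/octet-stream"}
headers_json = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}

# Detect the face in the photo taken when the doorbell button was pushed.
with open("visitor.jpg", "rb") as photo:
    faces = requests.post(f"{ENDPOINT}/face/v1.0/detect",
                          headers=headers_bin, data=photo.read()).json()

if faces:
    # Ask the service whether this face matches a person in the whitelisted group.
    result = requests.post(f"{ENDPOINT}/face/v1.0/identify", headers=headers_json,
                           json={"personGroupId": GROUP_ID,
                                 "faceIds": [faces[0]["faceId"]],
                                 "confidenceThreshold": 0.6}).json()
    is_whitelisted = bool(result[0]["candidates"])
    print("Welcome!" if is_whitelisted else "Sorry, you are not on the list.")
else:
    print("No face detected in the photo.")
```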

The IoT device, through the Text to Speech service, tells the visitor (10) whether he is welcome or not. If so, the door will open thanks to a servo motor.
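For the spoken reply, something along these lines would work against the Speech service REST endpoint. The region, key, and voice name are assumptions, and in the original project the greeting is synthesized and played (and the servo driven) from the UWP app on the device rather than from a script like this.

```python
import requests

REGION = "westeurope"          # placeholder region of the Speech resource
KEY = "YOUR-SPEECH-API-KEY"    # placeholder key

def speak(text: str) -> bytes:
    """Ask the Text to Speech service for a WAV rendering of the greeting."""
    ssml = (
        "<speak version='1.0' xml:lang='en-US'>"
        "<voice xml:lang='en-US' name='en-US-JennyNeural'>" + text + "</voice>"
        "</speak>"
    )
    response = requests.post(
        f"https://{REGION}.tts.speech.microsoft.com/cognitiveservices/v1",
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/ssml+xml",
            "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
        },
        data=ssml.encode("utf-8"),
    )
    response.raise_for_status()
    return response.content  # WAV audio to play on the device speaker

audio = speak("Welcome! The door is opening.")
with open("greeting.wav", "wb") as f:
    f.write(audio)
```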

The Step By Step process that shows you how to set up the hardware can be found in this post.

The Step By Step post with the IoT device and 3D printing information can be found in this link.
