Experiment #205 Applied Artificial Intelligence – Assembling the system ⚙️
Step by Step 2
Within the wide spectrum of services Azure offers us, we will use the Facial Recognition and Computer Vision services.
The Facial Recognition service detects the faces of the people who pass through the space we are studying. In addition to identifying them and differentiating them from each other, it also recognizes facial expressions and provides us with the emotion the person is feeling at that specific moment.
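To get a feel for how the service is called, here is a minimal sketch of building a request to the Face API's detect endpoint, asking it to return the emotion attributes mentioned above. The region, key, and attribute list are placeholder assumptions; the endpoint shape follows the classic Face API v1.0 REST interface.

```python
# Build (but don't send) a Face API "detect" request that asks for
# emotion attributes. Region and subscription key are placeholders.
from urllib.parse import urlencode


def build_detect_url(region: str, return_attributes=("emotion",)) -> str:
    """Build the Face API detect URL asking for facial attributes."""
    base = f"https://{region}.api.cognitive.microsoft.com/face/v1.0/detect"
    query = urlencode({
        "returnFaceId": "true",
        "returnFaceAttributes": ",".join(return_attributes),
    })
    return f"{base}?{query}"


def build_headers(subscription_key: str) -> dict:
    """Headers for a binary (image/octet-stream) photo upload."""
    return {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/octet-stream",
    }
```

Posting a camera frame as the request body to that URL would return, per detected face, a face identifier and the scored emotions (happiness, sadness, surprise, etc.).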
The Computer Vision service identifies the brands of the bags that our customers carry with them.
In addition, the Computer Vision service locates the people in our area, giving us their x and y coordinates within the establishment, in order to identify patterns, hot spots, and much more.
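Those x and y coordinates only become useful once we map them onto the store layout. A toy sketch of turning detections into per-section traffic counts, where the section names and rectangle boundaries are made-up examples (a real layout would come from the floor-plan study):

```python
# Map (x, y) detections to named store sections and count traffic.
# Section names and boundaries are illustrative assumptions.
from collections import Counter

# Each section is a named axis-aligned rectangle: (x0, y0, x1, y1).
SECTIONS = {
    "entrance": (0, 0, 10, 4),
    "bags":     (0, 4, 5, 10),
    "checkout": (5, 4, 10, 10),
}


def section_of(x: float, y: float) -> str:
    """Return the name of the section containing the point, or 'unknown'."""
    for name, (x0, y0, x1, y1) in SECTIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return "unknown"


def hot_spots(points):
    """Count how many detections fall inside each section."""
    return Counter(section_of(x, y) for x, y in points)
```

For example, `hot_spots([(1, 1), (2, 6), (7, 8), (7, 8)])` would report two detections at the checkout and one each at the entrance and the bags section.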
Assembling the system
Now that we know the final goal and the parts that compose it, let's assemble the whole system so that it can function as a whole and provide us with the information we need to analyze our business and gain a competitive advantage:
- As discussed in Step by Step 1, the first step is to divide each store into the sections we are interested in studying. In addition, we would conduct a study to determine how many cameras are needed and where they should be located to cover the entire space.
- Once this is done, the next step is a system in charge of collecting the photos from each of the cameras, along with information such as the time, the camera that took them, and any other metadata that interests us. All this information will be stored in a database (for example, a SQL Server instance) available in each store.
- At the end of each day, a process will take all the information generated that day and send it to our services in Azure, so we can identify faces, emotions, brands, locations in the store, and so on.
- Finally, all the information generated by the services will be stored in a centralized database (an Azure SQL Database instance, i.e. PaaS) in order to perform the relevant analyses.
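The per-store collection step above can be sketched as follows: each camera capture is logged with its metadata in a local database, and an end-of-day job selects everything recorded that day for upload to Azure. Here `sqlite3` stands in for the in-store SQL Server instance, and the table and column names are illustrative assumptions.

```python
# Per-store capture log plus end-of-day batch selection.
# sqlite3 stands in for the local SQL Server instance.
import sqlite3


def init_db(conn: sqlite3.Connection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS captures (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            camera_id TEXT NOT NULL,
            taken_at TEXT NOT NULL,      -- ISO 8601 timestamp
            image_path TEXT NOT NULL,    -- photo stored on local disk
            section TEXT                 -- store section in view, if known
        )
    """)


def log_capture(conn, camera_id, taken_at, image_path, section=None):
    """Record one camera capture together with its metadata."""
    conn.execute(
        "INSERT INTO captures (camera_id, taken_at, image_path, section) "
        "VALUES (?, ?, ?, ?)",
        (camera_id, taken_at, image_path, section),
    )


def batch_for_day(conn, day: str):
    """All captures taken on the given day (YYYY-MM-DD), ready to send."""
    rows = conn.execute(
        "SELECT camera_id, taken_at, image_path, section "
        "FROM captures WHERE taken_at LIKE ? ORDER BY taken_at",
        (day + "%",),
    )
    return rows.fetchall()
```

The end-of-day process would serialize the result of `batch_for_day` and send it to the Azure services for face, emotion, and brand analysis.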
With the entry into force of the new GDPR, and to ensure that we don't send images over the network of customers who haven't given their explicit consent, we may slightly modify the previous assembly:
- The first step is the same: we divide each store into sections and conduct a study to determine how many cameras are needed and where they should be located.
- We need exactly the same system to collect the photos from each of the cameras, along with the time, the camera that took them, and any other metadata that interests us. All this information will be stored in the local database available in each store. This point remains identical in both approaches.
- At the end of each day, a process will take all the information generated that day and process it on the store's own system: we bring the models of the three Azure services mentioned earlier to the images, running them as an IoT system on the Edge. With this technique, each face is stored as a unique, irreversible, anonymous identifier (for example, a hash).
- Once the images have been processed, the resulting information will be stored in the same database that holds the images and raw data.
- At this point we could delete all the raw information, keeping only anonymous data. We could even take a third approach and process the images in real time, so that no raw images need to be stored at all, eliminating any possibility of privacy-related problems.
- Finally, once all the processed information is stored in the database, we will run one last process before the analysis: a Business Intelligence ETL (Extract, Transform, Load) that gathers the processed data from every store into a centralized database in Azure (an Azure SQL Database).
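The anonymization step described above can be sketched very simply: instead of keeping the face image, we keep only a one-way identifier. In this sketch the identifier is a salted SHA-256 hash of the face's feature vector; the per-store salt and the feature-vector source are assumptions for illustration.

```python
# Derive a unique, irreversible, anonymous identifier for a face.
# The salt and feature-vector source are illustrative assumptions.
import hashlib


def anonymous_face_id(face_features: bytes, store_salt: bytes) -> str:
    """One-way, anonymous face identifier.

    The same face (same feature vector) always maps to the same id
    within a store, so repeat visits can still be counted, but the
    original face cannot be recovered from the hash.
    """
    return hashlib.sha256(store_salt + face_features).hexdigest()
```

Because the hash is deterministic per store, the analytics keep working (counting visits, tracking movement between sections) while no reversible facial data ever leaves the premises.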
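The final ETL step can be sketched as three small functions: extract the day's processed rows from each store, transform them into a common shape by tagging the store of origin, and load them into one centralized collection. The record fields and store names are illustrative assumptions, and a plain list stands in for the centralized Azure database.

```python
# Toy Extract-Transform-Load pipeline consolidating per-store data.
# Field names and the in-memory "central" store are assumptions.

def extract(store_rows):
    """Extract: the rows already processed in the store (dicts here)."""
    return list(store_rows)


def transform(rows, store_id):
    """Transform: tag each row with its store so data can be compared."""
    return [{**row, "store_id": store_id} for row in rows]


def load(central, rows):
    """Load: append into the centralized dataset (stand-in for Azure SQL)."""
    central.extend(rows)


def run_etl(stores):
    """stores: mapping of store_id -> iterable of processed rows."""
    central = []
    for store_id, rows in stores.items():
        load(central, transform(extract(rows), store_id))
    return central
```

With all stores merged into one dataset, the cross-store analyses (comparing emotions, brands, and hot spots between locations) become straightforward queries.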
Read more of Experiment #205 here.