Experiment #206: Brand Presence
Introduction
Companies have always struggled to build a presence in their sector. Each year, they invest huge amounts of money to appear in strategic locations and to drive a greater volume of sales, customers, and visibility. But do they study and trace:
- Where has my brand appeared?
- For how long?
- How many people have seen my brand?
- In which media?
- What impact has it had?
There is no doubt about the value that a presence in these strategic places provides. However, if we enrich it with a posteriori studies capable of providing information for decision-making, our return on investment (ROI) becomes measurable and far easier to justify.
This problem creates real headaches when large volumes of data from different sources have to be collected and then analyzed. But thanks to artificial intelligence and parallel data processing, it becomes tractable.
Architecture
A high-level architecture capable of satisfying the needs described above would be the following:
Roughly speaking, we would have a series of origins from which we analyze when the brand is present and how many viewers it is reaching. Among these origins, we can find:
- Twitch platform
- YouTube
- Sports such as football, tennis, MotoGP, basketball…
- TV channels
- Radio
- Social networks like Instagram, Facebook…
These are just a few examples of the wide range of possibilities; we can add as many origins as we want. Let's look at it in a little more detail:
We would start from a set of Python services in charge of connecting to the different APIs and sources and capturing their data. Besides connecting, they would trim each record so that only the data of interest is kept.
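As an illustration, here is a minimal sketch of one such ingestion service, using the Twitch Helix API as the origin. The credential variable names and the exact fields kept are assumptions for the example, not part of the original design:

```python
"""Minimal sketch of an ingestion service for one source (Twitch Helix API).

TWITCH_CLIENT_ID / TWITCH_TOKEN are placeholder environment variable names.
"""
import os
import requests

HELIX_STREAMS = "https://api.twitch.tv/helix/streams"

def fetch_live_streams(game_id: str, first: int = 20) -> list[dict]:
    """Pull live streams for a game and keep only the fields of interest."""
    resp = requests.get(
        HELIX_STREAMS,
        params={"game_id": game_id, "first": first},
        headers={
            "Client-Id": os.environ["TWITCH_CLIENT_ID"],
            "Authorization": f"Bearer {os.environ['TWITCH_TOKEN']}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Trim each record down to what the brand-detection step needs.
    return [
        {
            "source": "twitch",
            "channel": s["user_name"],
            "viewers": s["viewer_count"],
            # Frame to analyze; Helix returns a templated thumbnail URL.
            "frame_url": s["thumbnail_url"].format(width=1280, height=720),
        }
        for s in resp.json()["data"]
    ]
```

Each source would get its own small service like this one, all emitting records with the same trimmed shape so the downstream steps stay source-agnostic.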
This data would then be sent both to Azure Cognitive Services and to custom AI models, which are responsible for identifying the brand in each of the data sources.
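For the Cognitive Services side, a sketch of the detection step could use the brand-detection feature of Azure Computer Vision. The endpoint/key variable names and the returned record shape are illustrative assumptions:

```python
"""Sketch: brand detection on a captured frame with Azure Computer Vision.

Assumes the azure-cognitiveservices-vision-computervision package;
AZURE_CV_ENDPOINT and AZURE_CV_KEY are placeholder names.
"""
import os
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    os.environ["AZURE_CV_ENDPOINT"],
    CognitiveServicesCredentials(os.environ["AZURE_CV_KEY"]),
)

def detect_brands(frame_url: str) -> list[dict]:
    """Return the brands the service recognizes in one frame."""
    analysis = client.analyze_image(
        frame_url, visual_features=[VisualFeatureTypes.brands]
    )
    return [
        {
            "brand": b.name,
            "confidence": b.confidence,
            # Bounding box lets us estimate on-screen size and position later.
            "box": (b.rectangle.x, b.rectangle.y, b.rectangle.w, b.rectangle.h),
        }
        for b in analysis.brands
    ]
```

Custom models would slot in behind the same function signature for sources, such as audio, that the managed service does not cover.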
Finally, all this information travels through Azure Data Factory, where the Business Intelligence Extract, Transform and Load (ETL) process takes place: the information is transformed, analyzed, and joined with the other data in our central data unit, the Data Warehouse.
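To make the load step concrete, here is a sketch that persists detections into a hypothetical fact_brand_appearance table; the table and column names are illustrative, and in the architecture above this logic would run inside an Azure Data Factory pipeline rather than as a standalone script:

```python
"""Sketch of the load step: persist per-frame detections in the warehouse.

DWH_CONNECTION_STRING and the fact_brand_appearance schema are assumptions.
"""
import os
import pyodbc

INSERT_SQL = """
INSERT INTO fact_brand_appearance
    (source, channel, brand, confidence, viewers, captured_at)
VALUES (?, ?, ?, ?, ?, ?);
"""

def load_detections(rows: list[tuple]) -> None:
    """Bulk-insert (source, channel, brand, confidence, viewers, captured_at) rows."""
    with pyodbc.connect(os.environ["DWH_CONNECTION_STRING"]) as conn:
        cursor = conn.cursor()
        cursor.fast_executemany = True  # batch the round trips to the warehouse
        cursor.executemany(INSERT_SQL, rows)
        conn.commit()
```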
This last layer is what feeds the subsequent analyses performed with Power BI, which we make available to the corresponding department for decision-making.
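As an example of the kind of aggregate behind such a report, this illustrative query (against the hypothetical schema sketched above) estimates impressions per brand and source over the last 30 days:

```python
"""Illustrative T-SQL behind a Power BI exposure report.

Table and column names match the hypothetical fact_brand_appearance schema.
"""
EXPOSURE_QUERY = """
SELECT
    brand,
    source,
    COUNT(*)        AS frames_with_brand,
    SUM(viewers)    AS estimated_impressions,
    AVG(confidence) AS avg_confidence
FROM fact_brand_appearance
WHERE captured_at >= DATEADD(day, -30, GETDATE())
GROUP BY brand, source
ORDER BY estimated_impressions DESC;
"""
```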
Read more of Experiment #206 here.