Applications

Examples

Tracking store activity

Sample: Using the Google Cloud Vision API Machine Learning service

MAGELLAN BLOCKS makes it easy to use Google’s powerful image analysis tool, the Google Cloud Vision API. Used with a store surveillance system, this could lead to improvements in store operations, marketing, and more.

Demo video

The following video demonstrates how to use BLOCKS to automatically analyze images with the Cloud Vision API and store the results into BigQuery.

Monitoring store activity with the Google Cloud Vision API and MAGELLAN BLOCKS

Explanation

The Flow in the video connects a Vision API BLOCK and a Stream Insert BLOCK to analyze images and store the results into BigQuery.

  1. Vision API BLOCK: Analyzes images that have been stored into Google Cloud Storage (GCS).
  2. Stream Insert BLOCK: Stores data into a BigQuery table.

So how does just connecting these two BLOCKS make this possible? We’ll explain what each BLOCK does in more detail below.

Vision API BLOCK

First, the Vision API BLOCK calls the Google Cloud Vision API to analyze images that have been stored in Google Cloud Storage (GCS). The Cloud Vision API returns its analysis results as JSON-format data, and the Vision API BLOCK stores this data in a variable. You can then access these results by referencing this variable within subsequent BLOCKS in the Flow.

For example, we used the Vision API BLOCK to analyze the image below:

[Image: kumamoto_2 — "Kikuchi Gorge", Kumamoto, Japan, METI, Creative Commons 4.0 International]

This returned the following JSON data (to keep things simple, we selected only “object recognition” for the type of image analysis). By default, the Vision API BLOCK stores analysis results into a variable named _ (underscore) [1].

{
  "labelAnnotations": [
    {
      "mid": "/m/05fblh",
      "description": "habitat",
      "score": 0.96244305
    },
    {
      "mid": "/m/05h0n",
      "description": "nature",
      "score": 0.9445862
    },
    {
      "mid": "/m/0j2kx",
      "description": "waterfall",
      "score": 0.9099796
    },
    {
      "mid": "/m/03ktm1",
      "description": "body of water",
      "score": 0.89911616
    },
    {
      "mid": "/m/03ybsm",
      "description": "creek",
      "score": 0.89159125
    },
    {
      "mid": "/m/01fnns",
      "description": "vegetation",
      "score": 0.8860265
    },
    {
      "mid": "/m/0j6m2",
      "description": "stream",
      "score": 0.8816693
    },
    {
      "mid": "/m/038hg",
      "description": "green",
      "score": 0.88010365
    },
    {
      "mid": "/m/023bbt",
      "description": "wilderness",
      "score": 0.87627053
    },
    {
      "mid": "/m/0838f",
      "description": "water",
      "score": 0.86826664
    },
    {
      "mid": "/m/03kj4q",
      "description": "watercourse",
      "score": 0.8573946
    },
    {
      "mid": "/m/02py09",
      "description": "natural environment",
      "score": 0.8488707
    },
    {
      "mid": "/m/06cnp",
      "description": "river",
      "score": 0.84827423
    },
    {
      "mid": "/m/07j7r",
      "description": "tree",
      "score": 0.844367
    },
    {
      "mid": "/m/02mhj",
      "description": "ecosystem",
      "score": 0.8341271
    },
    {
      "mid": "/m/02zr8",
      "description": "forest",
      "score": 0.8069322
    },
    {
      "mid": "/m/090j23",
      "description": "water feature",
      "score": 0.7492971
    },
    {
      "mid": "/m/0hnc1",
      "description": "woodland",
      "score": 0.7100596
    },
    {
      "mid": "/m/0d8cn",
      "description": "rainforest",
      "score": 0.69635445
    },
    {
      "mid": "/m/09t49",
      "description": "leaf",
      "score": 0.6817611
    },
    {
      "mid": "/m/06gl1",
      "description": "rapid",
      "score": 0.6793933
    },
    {
      "mid": "/m/01cbzq",
      "description": "rock",
      "score": 0.6587309
    },
    {
      "mid": "/m/025s35_",
      "description": "ravine",
      "score": 0.6046754
    },
    {
      "mid": "/m/042g2h",
      "description": "old growth forest",
      "score": 0.5984761
    },
    {
      "mid": "/m/07yxk",
      "description": "valley",
      "score": 0.5657426
    },
    {
      "mid": "/m/01y3fy",
      "description": "jungle",
      "score": 0.55235815
    }
  ],
  "gcs_url": "gs://my-storage-1703/vision-api/kumamoto_2.jpg",
  "timestamp": 1494307972.0
}

Refer to Output specifications: Vision API for more details about image analysis results. For concrete examples of results for each analysis type (object recognition, color analysis, etc.), refer to Examples of using the Vision API BLOCK.
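Outside of BLOCKS, JSON results like the one above are also easy to post-process directly. The Python sketch below is an illustration only (it uses an abbreviated copy of the example output, not a live API call) and keeps only the labels at or above a confidence threshold:

```python
# Abbreviated copy of the Vision API BLOCK output shown above.
result = {
    "labelAnnotations": [
        {"mid": "/m/05fblh", "description": "habitat", "score": 0.96244305},
        {"mid": "/m/05h0n", "description": "nature", "score": 0.9445862},
        {"mid": "/m/0j2kx", "description": "waterfall", "score": 0.9099796},
        {"mid": "/m/038hg", "description": "green", "score": 0.88010365},
        {"mid": "/m/07yxk", "description": "valley", "score": 0.5657426},
    ],
    "gcs_url": "gs://my-storage-1703/vision-api/kumamoto_2.jpg",
    "timestamp": 1494307972.0,
}

def top_labels(result, min_score=0.9):
    """Return (description, score) pairs at or above min_score, best first."""
    labels = [(a["description"], a["score"])
              for a in result["labelAnnotations"] if a["score"] >= min_score]
    return sorted(labels, key=lambda pair: pair[1], reverse=True)

print(top_labels(result))
# → [('habitat', 0.96244305), ('nature', 0.9445862), ('waterfall', 0.9099796)]
```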

Stream Insert BLOCK

The Stream Insert BLOCK uses BigQuery’s streaming insert function to add records to a BigQuery table. In our example, it adds the data from the variable _ created by the Vision API BLOCK [2].

The Vision API BLOCK outputs its image analysis results into the variable _, and the Stream Insert BLOCK takes this same variable as its input. So, by connecting the Stream Insert BLOCK directly after the Vision API BLOCK, the Vision API BLOCK’s output becomes the Stream Insert BLOCK’s input.

You will need to prepare your BigQuery table prior to executing this Flow. You can download this file: google-cloud-vision-api-bigquery-schema.json to use for your schema. For more details about the schema, see the Cloud Vision API reference page.
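For reference, streaming insert sends each analysis result as a single JSON row whose nesting mirrors the table schema. The sketch below prepares such a row locally (abbreviated from the example output above); the commented lines show roughly how the google-cloud-bigquery Python client could stream it outside of BLOCKS, with the `vision.annotations` dataset and table names as assumptions:

```python
import json

# One analysis result, abbreviated from the Vision API BLOCK output above.
row = {
    "labelAnnotations": [
        {"mid": "/m/0j2kx", "description": "waterfall", "score": 0.9099796},
        {"mid": "/m/03ktm1", "description": "body of water", "score": 0.89911616},
    ],
    "gcs_url": "gs://my-storage-1703/vision-api/kumamoto_2.jpg",
    "timestamp": 1494307972.0,
}

# With the google-cloud-bigquery client, streaming this row would look
# roughly like the following (dataset/table names are hypothetical):
#
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   errors = client.insert_rows_json("vision.annotations", [row])
#   assert not errors  # an empty list means the row was accepted
#
# Here we only show the serialized form of the row that gets streamed:
print(json.dumps(row, sort_keys=True))
```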

Query examples

Running queries on the results from the Vision API BLOCK in BigQuery can also lead to all sorts of useful applications.

For example, we could use the Vision API BLOCK to detect faces, then count how many faces were detected each hour to estimate the number of people who entered a store (only people whose faces were detected would be counted).

SELECT
  STRFTIME_UTC_USEC(INTEGER(timestamp * 1000000), '%Y-%m-%d %H:00') as hour,
  COUNT(faceAnnotations.detectionConfidence) as face_count
FROM [vision.annotations]
GROUP BY hour
ORDER BY hour

Refer to Output specifications: Vision API > Facial recognition for details about data formatting for facial recognition results.
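The same hourly bucketing can also be done client-side after pulling per-image rows out of BigQuery. In this Python sketch, the (timestamp, face_count) rows are hypothetical query results, not data from the example above:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical per-image results: (unix timestamp, faces detected in that image).
rows = [
    (1494307972.0, 2),
    (1494308100.0, 1),
    (1494311700.0, 3),
]

def faces_per_hour(rows):
    """Sum face counts into UTC hour buckets keyed like '2017-05-09 05:00'."""
    buckets = Counter()
    for ts, faces in rows:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:00")
        buckets[hour] += faces
    return dict(buckets)

print(faces_per_hour(rows))  # → {'2017-05-09 05:00': 3, '2017-05-09 06:00': 3}
```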

See below for example queries for each of the other image analysis types:

Landmark recognition:

SELECT
  gcs_url,
  timestamp,
  landmarkAnnotations.description as landmark_description
FROM [vision.annotations]
ORDER BY timestamp desc

Refer to Output specifications: Vision API > Landmark recognition for details about data formatting for landmark recognition results.

Logo recognition:

SELECT
  gcs_url,
  timestamp,
  GROUP_CONCAT_UNQUOTED( logoAnnotations.description, ' / ' ) as logo_description
FROM [vision.annotations]
GROUP BY
  gcs_url,
  timestamp
ORDER BY timestamp desc

Refer to Output specifications: Vision API > Logo recognition for more details about data formatting for logo recognition results.

Object recognition:

SELECT
  gcs_url,
  timestamp,
  GROUP_CONCAT_UNQUOTED( labelAnnotations.description, ' / ' ) as label_description
FROM [vision.annotations]
GROUP BY
  gcs_url,
  timestamp
ORDER BY timestamp desc

Refer to Output specifications: Vision API > Object recognition for more details about data formatting for object recognition results.
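The GROUP_CONCAT_UNQUOTED( labelAnnotations.description, ' / ' ) aggregation returns one row per image, with all of that image’s labels joined into a single string. For the example image above (labels abbreviated), the joining step is equivalent to:

```python
# Labels for one image, abbreviated from the example output above.
labels = ["habitat", "nature", "waterfall"]

# GROUP_CONCAT_UNQUOTED(labelAnnotations.description, ' / ') joins all of an
# image's labels into one string, equivalent to:
label_description = " / ".join(labels)
print(label_description)  # → habitat / nature / waterfall
```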

OCR (text) recognition:

SELECT
  gcs_url,
  timestamp,
  textAnnotations.description as text_description
FROM [vision.annotations]
ORDER BY timestamp desc

Refer to Output specifications: Vision API > OCR for more details about data formatting for OCR results.


[1] The variable that contains these results can be freely configured in the “Variable” property.

[2] The variable containing the data to be added can be freely configured in the “Source data variable” property.