Sensifai Logo Recognition¶

The ‘Logo’ model analyzes images and returns probability scores indicating the likelihood that the media contains the logos of over 400 recognized brands. This model is well suited for anyone building an app that relies on detecting brand logos in images. Sensifai offers one of the most accurate deep-learning training platforms for building logo recognition systems and incorporating them into your application. This product gives you access to Sensifai's advanced logo and brand recognition model.
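
Each processed image yields a JSON result containing per-logo confidence scores. As an illustrative sketch (the values below are made up; the structure matches what the parsing code later in this notebook expects), a result looks roughly like this:

{
  "results": [
    {
      "labels": [
        {"score": 0.98, "tag": "amazon"},
        {"score": 0.12, "tag": "nike"}
      ]
    }
  ]
}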

Detailed info:

The current version of our logo recognition model can recognize about 400 popular logos and brands with an accuracy of more than 90%. Some of the categories available in the beta version are listed below.

  • Car brands such as Toyota, Ford, Porsche, Hyundai, Ferrari, BMW, Lexus, Audi, etc.
  • Food and restaurants: Nestle, Nescafe, Coca-Cola, Pepsi, Sprite, McDonald's, KFC, Wendy's, Subway, etc.
  • Social networks and messengers: Facebook, WeChat, etc.
  • Technology related brands: Huawei, Android, Vaio, Toshiba, Fujitsu, Apple, HP, Google, Samsung, Firefox, Microsoft, etc.
  • Sports: Puma, Nike, Under Armour, etc.
  • Fashion and beauty: H&M, Lacoste, Zara, Prada, Head & Shoulders, Dove, Nivea, etc.
  • Banks: Lloyds Bank, OCBC Bank, Bank of Montreal, Bank of China, Barclays, etc.
  • Transportation and airlines: DHL, FedEx, La Poste, Boeing, British Airways, Emirates, Airbus, etc.
  • Retail stores: Amazon.com, Tesco, Costco, Walgreens, Walmart, Kroger, etc.

It's recommended to use images that satisfy the following conditions:

  • The model supports most common image formats (jpg, jpeg, ...), and we do not restrict the image types. However, if the algorithm cannot detect a valid format, or if the file is corrupted, it skips the file.
  • We do not impose strict resolution requirements, but very low-resolution images (below 224 x 224) may hurt accuracy, and very high-resolution images take longer to transfer and preprocess (see the optional sanity check after this list).
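
If you want to screen files locally before uploading, the following optional sketch (not part of the Sensifai product) assumes Pillow is installed and that your test images sit in a local batch_input_dir, the same folder referenced in the upload step further down; it flags corrupted files and images below the suggested minimum resolution.

In [ ]:
import os
from PIL import Image

min_side = 224  # below this size, accuracy may degrade
batch_input_dir = "./batch_input"  # hypothetical local folder holding your test images

for name in os.listdir(batch_input_dir):
    path = os.path.join(batch_input_dir, name)
    try:
        with Image.open(path) as img:
            img.verify()  # raises an exception if the file is corrupted
        with Image.open(path) as img:  # reopen, since verify() leaves the image unusable
            width, height = img.size
        if min(width, height) < min_side:
            print("{}: {}x{} is below {}px and may reduce accuracy".format(name, width, height, min_side))
    except Exception as err:
        print("{}: skipped ({})".format(name, err))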

Creating the model

This step registers the model created during training with SageMaker so it can be used later for recognition. Put the model package ARN you want to use in the model_package_arn variable. This can be either a Marketplace model package you subscribed to or one of the model packages you created in your own account.

In [15]:
import time

import boto3
import sagemaker as sage
from sagemaker import get_execution_role

sess = sage.Session()
role = get_execution_role()

# low-level SageMaker client used to create the model and run transform jobs
sagemaker = boto3.client(service_name='sagemaker')
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
model_package_arn = "<model_arn_here>"
model_name = "Sensifai-logo" + timestamp

model_creation = {
    "ModelName": model_name,
    "PrimaryContainer": {
        "ModelPackageName": model_package_arn
    },
    "ExecutionRoleArn": role
}

## For Marketplace products, the network isolation flag must be set to true
model_creation['EnableNetworkIsolation'] = True

model = sagemaker.create_model(**model_creation)
sagemaker.describe_model(ModelName=model_name)
Out[15]:
{'ModelName': 'Sensifai-logo-2019-03-22-13-22-20',
 'PrimaryContainer': {'ModelPackageName': 'arn:aws:sagemaker:eu-west-1:985815980388:model-package/product-arn'},
 'ExecutionRoleArn': 'arn:aws:iam::551321136532:role/service-role/AmazonSageMaker-ExecutionRole-20180913T094034',
 'CreationTime': datetime.datetime(2019, 3, 22, 13, 22, 20, 698000, tzinfo=tzlocal()),
 'ModelArn': 'arn:aws:sagemaker:eu-west-1:551321136532:model/model-arn',
 'EnableNetworkIsolation': True,
 'ResponseMetadata': {'RequestId': '8d5b89a9-900a-47bb-98d2-08fcc8ff8e61',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'x-amzn-requestid': '8d5b89a9-900a-47bb-98d2-08fcc8ff8e61',
   'content-type': 'application/x-amz-json-1.1',
   'content-length': '466',
   'date': 'Fri, 22 Mar 2019 13:22:20 GMT'},
  'RetryAttempts': 0}}

Inference with the model (Batch transform)

Now you can feed the images to the model and save the results in the output folder.

In [14]:
%%time
bucket = "your bucket here"
prefix = "prefix on s3 where the test files are stored"
s3_batch_input = "s3://{}/{}/".format(bucket, prefix)
# we have already transferred the data to S3; if you need to upload the files, uncomment the line below
# s3_batch_input = sess.upload_data(batch_input_dir, bucket, "{}/test".format(prefix))
print("uploaded batch data files to {}".format(s3_batch_input))

timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
batch_job_name = "sensifai-logo-bt" + timestamp
batch_output = 's3://{}/logo-output'.format(bucket)

request = \
{
  "TransformJobName": batch_job_name,
  "MaxConcurrentTransforms": 0,
  "MaxPayloadInMB": 0,
  "ModelName": model_name,
  "TransformInput": {
    "DataSource": {
      "S3DataSource": {
        "S3DataType": "S3Prefix",
        "S3Uri": s3_batch_input
      }
    },
    "ContentType": "image/*",
    "CompressionType": "None",
    "SplitType": "None"
  },
  "TransformOutput": {
    "S3OutputPath": batch_output,
    "Accept": "application/json",
    "AssembleWith": "Line"
  },
  "TransformResources": {
    "InstanceType": "ml.p2.xlarge",
    "InstanceCount": 1
  }
}

sagemaker.create_transform_job(**request)

print("Created Transform job with name: ", batch_job_name)

while True:
    job_info = sagemaker.describe_transform_job(TransformJobName=batch_job_name)
    status = job_info['TransformJobStatus']
    if status == 'Completed':
        print("Transform job ended with status: " + status)
        break
    if status == 'Failed':
        message = job_info['FailureReason']
        print('Transform failed with the following error: {}'.format(message))
        raise Exception('Transform job failed') 
    time.sleep(30)
uploaded batch data files to s3://your se path/
Created Transform job with name:  sensifai-logo-bt-2019-03-22-12-41-07
Transform job ended with status: Completed
CPU times: user 183 ms, sys: 8.85 ms, total: 192 ms
Wall time: 7min 1s

Download the results¶

In [28]:
import os
import json

output_path="./output"

if not os.path.exists(output_path):
    os.makedirs(output_path)
    
!aws s3 cp $batch_output $output_path --recursive
threshold = 0.4
# walk the downloaded results and print any logo whose score exceeds the threshold
for (dirpath, dirnames, filenames) in os.walk(output_path):
    for fileName in filenames:
        with open(os.path.join(dirpath, fileName), "r") as fp:
            print("_______________________________________________")
            print("fileName:" + fileName)
            data = json.load(fp)
            for logo in data["results"][0]['labels']:
                if logo['score'] > threshold:
                    print(logo)
download: s3://tmp.sensifai.com/logo-output/amazon1.jpg.out to output/amazon1.jpg.out
download: s3://tmp.sensifai.com/logo-output/nike0.jpg.out to output/nike0.jpg.out
download: s3://tmp.sensifai.com/logo-output/amazon101.jpg.out to output/amazon101.jpg.out
download: s3://tmp.sensifai.com/logo-output/toyota101.jpg.out to output/toyota101.jpg.out
_______________________________________________
fileName:toyota101.jpg.out
{'score': 0.9999991655349731, 'tag': 'toyota'}
_______________________________________________
fileName:amazon101.jpg.out
{'score': 0.9999406337738037, 'tag': 'amazon'}
_______________________________________________
fileName:nike0.jpg.out
{'score': 0.999990701675415, 'tag': 'nike'}
_______________________________________________
fileName:amazon1.jpg.out
{'score': 0.9862310290336609, 'tag': 'amazon'}

Cleaning up (Optional)

In [ ]:
# optionally uncomment and run the code to clean everything up
# sagemaker.delete_model(ModelName=model_name)