
Vision: capturing the source image uri from each AnnotateImageResponse of batch_annotate_images

The images are uploaded to a Google Storage bucket. The code given below runs fine; it is in Python 3. I am trying to capture the image_uri for each corresponding AnnotateImageResponse so that I can save the response in a database along with the corresponding image uri. How do I get the input image_uri for each response? In each response, the source image uri is not available. I may be wrong, but I was thinking that when making the request in the generate_request function I might need to send the image_uri as image_context, but I haven't found any good documentation. Please help.

Image files uploaded to the Google Storage bucket: '39149.7ae80cfb87bb228201e251f3e234ffde.jpg', 'anatomy_e.jpg'

import six
from google.cloud import vision


def generate_request(input_image_uri):
    # Accept either bytes or str for the GCS image uri.
    if isinstance(input_image_uri, six.binary_type):
        input_image_uri = input_image_uri.decode('utf-8')
    source = {'image_uri': input_image_uri}
    image = {'source': source}
    features = [
        {"type_": vision.Feature.Type.LABEL_DETECTION},
        {"type_": vision.Feature.Type.FACE_DETECTION},
        {"type_": vision.Feature.Type.TEXT_DETECTION},
    ]
    request = {"image": image, "features": features}

    return request


def sample_async_batch_annotate_images(input_uri):
    client = vision.ImageAnnotatorClient()
    requests = [
        generate_request(input_uri.format(filename))
        for filename in [
            '39149.7ae80cfb87bb228201e251f3e234ffde.jpg',
            'anatomy_e.jpg',
        ]
    ]
    # The response below is a BatchAnnotateImagesResponse instance.
    response = client.batch_annotate_images(requests=requests)
    for each in response.responses:
        # Each item is an AnnotateImageResponse instance.
        print("each_response", each)

sample_async_batch_annotate_images('gs://imagevisiontest/{}')

Solution

I'm a bit confused by your sample_async_batch_annotate_images function, since it is named async but you are not using the Vision API's asynchronous method. The response from batch_annotate_images does not return a context containing the source image_uri. Also, image_context is only used to fine-tune your image detection, for example to provide language hints, tell the API to search for products, and so on.
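If you do stay with the synchronous batch_annotate_images, a common workaround is to keep your own list of source uris and pair them with the responses by position. A minimal sketch (assuming the responses come back in the same order as the requests you submitted, and using a hypothetical helper name):

from google.cloud import vision

def annotate_and_keep_uris(image_uris):
    """Pair each AnnotateImageResponse with the uri that produced it."""
    client = vision.ImageAnnotatorClient()
    features = [{"type_": vision.Feature.Type.LABEL_DETECTION}]
    requests = [
        {"image": {"source": {"image_uri": uri}}, "features": features}
        for uri in image_uris
    ]
    response = client.batch_annotate_images(requests=requests)
    # Responses are assumed to come back in request order, so zip them
    # with the input uris to recover the mapping.
    return {uri: resp for uri, resp in zip(image_uris, response.responses)}

That said, the asynchronous method below gives you the source uri directly in the output, without relying on ordering.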

I suggest using the API's asynchronous batch annotate method, since it includes a context, which appears at the end of each response together with the source image_uri. The response is also saved to the output_uri specified in the code.

I processed this image and renamed it image_text.jpeg to test both batch_annotate_images and async_batch_annotate_images:

Code snippet for batch_annotate_images:

from google.cloud import vision_v1

def sample_batch_annotate_images(
    input_image_uri="gs://your_bucket_here/image_text.jpeg",
):
    client = vision_v1.ImageAnnotatorClient()

    source = {"image_uri": input_image_uri}
    image = {"source": source}
    features = [
        {"type_": vision_v1.Feature.Type.LABEL_DETECTION},
        {"type_": vision_v1.Feature.Type.IMAGE_PROPERTIES},
    ]

    requests = [{"image": image, "features": features}]

    response = client.batch_annotate_images(requests=requests)
    print(response)

sample_batch_annotate_images()

Response from batch_annotate_images:

responses {
  label_annotations {
    mid: "/m/02wzbmj"
    description: "Standing"
    score: 0.9516521096229553
    topicality: 0.9516521096229553
  }
  label_annotations {
    mid: "/m/01mwkf"
    description: "Monochrome"
    score: 0.9407921433448792
    topicality: 0.9407921433448792
  }
  label_annotations {
    mid: "/m/01lynh"
    description: "Stairs"
    score: 0.9399806261062622
    topicality: 0.9399806261062622
  }
  label_annotations {
    mid: "/m/03scnj"
    description: "Line"
    score: 0.9328843951225281
    topicality: 0.9328843951225281
  }
  label_annotations {
    mid: "/m/012yh1"
    description: "Style"
    score: 0.9320641756057739
    topicality: 0.9320641756057739
  }
  label_annotations {
    mid: "/m/03d49p1"
    description: "Monochrome photography"
    score: 0.911144495010376
    topicality: 0.911144495010376
  }
  label_annotations {
    mid: "/m/01g6gs"
    description: "Black-and-white"
    score: 0.9031684994697571
    topicality: 0.9031684994697571
  }
  label_annotations {
    mid: "/m/019sc"
    description: "Black"
    score: 0.8788009881973267
    topicality: 0.8788009881973267
  }
  label_annotations {
    mid: "/m/030zfn"
    description: "Parallel"
    score: 0.8722482919692993
    topicality: 0.8722482919692993
  }
  label_annotations {
    mid: "/m/05wkw"
    description: "Photography"
    score: 0.8370979428291321
    topicality: 0.8370979428291321
  }
  image_properties_annotation {
    dominant_colors {
      colors {
        color {
          red: 195.0
          green: 195.0
          blue: 195.0
        }
        score: 0.4464040696620941
        pixel_fraction: 0.10618651658296585
      }
      colors {
        color {
          red: 117.0
          green: 117.0
          blue: 117.0
        }
        score: 0.16896472871303558
        pixel_fraction: 0.1623961180448532
      }
      colors {
        color {
          red: 13.0
          green: 13.0
          blue: 13.0
        }
        score: 0.12974770367145538
        pixel_fraction: 0.24307478964328766
      }
      colors {
        color {
          red: 162.0
          green: 162.0
          blue: 162.0
        }
        score: 0.11677403748035431
        pixel_fraction: 0.09510618448257446
      }
      colors {
        color {
          red: 89.0
          green: 89.0
          blue: 89.0
        }
        score: 0.08708541840314865
        pixel_fraction: 0.17659279704093933
      }
      colors {
        color {
          red: 225.0
          green: 225.0
          blue: 225.0
        }
        score: 0.05102387070655823
        pixel_fraction: 0.012119113467633724
      }
      colors {
        color {
          red: 64.0
          green: 64.0
          blue: 64.0
        }
        score: 1.7074732738819876e-07
        pixel_fraction: 0.2045244723558426
      }
    }
  }
  crop_hints_annotation {
    crop_hints {
      bounding_poly {
        vertices {
          x: 123
        }
        vertices {
          x: 226
        }
        vertices {
          x: 226
          y: 182
        }
        vertices {
          x: 123
          y: 182
        }
      }
      confidence: 0.4375000298023224
      importance_fraction: 0.794996440410614
    }
  }
}

Code snippet for async_batch_annotate_images (code taken from the Vision API docs):

from google.cloud import vision_v1

def sample_async_batch_annotate_images(
    input_image_uri="gs://your_bucket_here/image_text.jpeg",
    output_uri="gs://your_bucket_here/",
):
    """Perform async batch image annotation."""
    client = vision_v1.ImageAnnotatorClient()

    source = {"image_uri": input_image_uri}
    image = {"source": source}
    features = [
        {"type_": vision_v1.Feature.Type.LABEL_DETECTION},
    ]

    # Each requests element corresponds to a single image. To annotate more
    # images, create a request element for each image and add it to
    # the array of requests.
    requests = [{"image": image, "features": features}]
    gcs_destination = {"uri": output_uri}

    # The max number of responses to output in each JSON file
    batch_size = 2
    output_config = {"gcs_destination": gcs_destination, "batch_size": batch_size}

    operation = client.async_batch_annotate_images(requests=requests, output_config=output_config)

    print("Waiting for operation to complete...")
    response = operation.result(90)

    # The output is written to GCS with the provided output_uri as prefix
    gcs_output_uri = response.output_config.gcs_destination.uri
    print("Output written to GCS with prefix: {}".format(gcs_output_uri))

sample_async_batch_annotate_images()

Response from async_batch_annotate_images (the JSON written to the output_uri):

{
  "responses":[
    {
      "labelAnnotations":[
        {
          "mid":"/m/02wzbmj","description":"Standing","score":0.9516521,"topicality":0.9516521
        },{
          "mid":"/m/01mwkf","description":"Monochrome","score":0.94079214,"topicality":0.94079214
        },{
          "mid":"/m/01lynh","description":"Stairs","score":0.9399806,"topicality":0.9399806
        },{
          "mid":"/m/03scnj","description":"Line","score":0.9328844,"topicality":0.9328844
        },{
          "mid":"/m/012yh1","description":"Style","score":0.9320642,"topicality":0.9320642
        },{
          "mid":"/m/03d49p1","description":"Monochrome photography","score":0.9111445,"topicality":0.9111445
        },{
          "mid":"/m/01g6gs","description":"Black-and-white","score":0.9031685,"topicality":0.9031685
        },{
          "mid":"/m/019sc","description":"Black","score":0.878801,"topicality":0.878801
        },{
          "mid":"/m/030zfn","description":"Parallel","score":0.8722483,"topicality":0.8722483
        },{
          "mid":"/m/05wkw","description":"Photography","score":0.83709794,"topicality":0.83709794
        }
      ],"imagePropertiesAnnotation":{
        "dominantColors":{
          "colors":[
            {
              "color":{
                "red":195,"green":195,"blue":195
              },"score":0.44640407,"pixelFraction":0.10618652
            },{
              "color":{
                "red":117,"green":117,"blue":117
              },"score":0.16896473,"pixelFraction":0.16239612
            },{
              "color":{
                "red":13,"green":13,"blue":13
              },"score":0.1297477,"pixelFraction":0.24307479
            },{
              "color":{
                "red":162,"green":162,"blue":162
              },"score":0.11677404,"pixelFraction":0.095106184
            },{
              "color":{
                "red":89,"green":89,"blue":89
              },"score":0.08708542,"pixelFraction":0.1765928
            },{
              "color":{
                "red":225,"green":225,"blue":225
              },"score":0.05102387,"pixelFraction":0.0121191135
            },{
              "color":{
                "red":64,"green":64,"blue":64
              },"score":1.7074733e-07,"pixelFraction":0.20452447
            }
          ]
        }
      },"cropHintsAnnotation":{
        "cropHints":[
          {
            "boundingPoly":{
              "vertices":[
                {
                  "x":123
                },{
                  "x":226
                },{
                  "x":226,"y":182
                },{
                  "x":123,"y":182
                }
              ]
            },"confidence":0.43750003,"importanceFraction":0.79499644
          }
        ]
      },"context":{
        "uri":"gs://your_bucket_here/image_text.jpeg"
      }
    }
  ]
}
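
The context.uri field at the end of each response is what lets you key the annotations by their source image. As a minimal sketch of reading the output back (assuming the hypothetical bucket name above, that the output files start with the usual "output" prefix, and a hypothetical helper name), you could build a uri-to-annotations mapping like this:

import json
from google.cloud import storage

def map_annotations_by_uri(bucket_name="your_bucket_here", prefix="output"):
    """Read the async batch output JSON files and key each response by its source uri."""
    client = storage.Client()
    results = {}
    for blob in client.list_blobs(bucket_name, prefix=prefix):
        data = json.loads(blob.download_as_bytes())
        for resp in data.get("responses", []):
            source_uri = resp["context"]["uri"]
            results[source_uri] = resp.get("labelAnnotations", [])
    return results

Each entry in the resulting dict can then be stored in your database together with its source image uri.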
