
criteria for frameId #34

@jaceyyj

Description


I completed the deployment of Spatial Analysis by referencing the following docs.
https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest-NonASE.json
For the video URL and polygon settings, I referred to the following link.
https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json

An output stream of JSON messages was generated and sent to my Azure Blob storage. I was trying to visualize the bounding-box coordinates from the output on the recorded video without using the .debug operation. The output, especially the 'detections' part, can be visualized as a video only if frameId is sampled at regular time intervals.
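Roughly, the overlay step I am attempting looks like this (a sketch only; the bbox layout of normalized left/top/width/height and the fixed frame rate are my simplifications, not the exact Spatial Analysis schema):

```python
# Sketch: convert a normalized bounding box from a detection event into
# pixel coordinates on a recorded video frame. The (left, top, width,
# height) layout in [0, 1] is an assumption, not the exact schema.

def bbox_to_pixels(bbox, frame_width, frame_height):
    """Scale a normalized (left, top, width, height) box to pixel coords."""
    left, top, width, height = bbox
    x = int(left * frame_width)
    y = int(top * frame_height)
    w = int(width * frame_width)
    h = int(height * frame_height)
    return x, y, w, h

def frame_id_to_time(frame_id, fps=15.0):
    """Map a frameId to a video timestamp, assuming a constant frame rate.
    This only works if frameId advances at regular intervals, which is
    exactly what does not seem to hold in my output."""
    return frame_id / fps

# Example: a detection at frameId 120 on a 1920x1080 recording
x, y, w, h = bbox_to_pixels((0.25, 0.10, 0.20, 0.50), 1920, 1080)
t = frame_id_to_time(120, fps=15.0)
```

The pixel rectangle could then be drawn on the matching frame (e.g. with OpenCV), but that only lines up if frameId maps cleanly onto video time.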

Here I ran into some problems.

  1. Looking at the events in JSON, the frameId used for each event was different.
    ('personDistanceEvent' : 1 to 820,
    'personCountEvent' : 1 to 782,
    'personZoneDwellTimeEvent' : 1375 to 9066,
    'personZoneEnterExitEvent' : 1389 to 4285,
    'personLineEvent' : 1756, 2443,
    'cameraCalibrationEvent' : 7784, 8143)

  2. Even though I set 'trigger' to 'interval', the output was not strictly periodic: neither the timestamps nor the frameId values were spaced uniformly.
    [screenshot: timestamps of personCountEvent]
    [screenshot: frameId values of personCountEvent; sometimes the interval is 1, sometimes 2, sometimes more]
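To quantify the irregularity, I checked the gaps between consecutive frameId values of one event type roughly like this (a sketch; event_frame_ids stands in for the frameId fields pulled from the JSON messages in blob storage):

```python
# Sketch: histogram of gaps between consecutive frameId values.
# 'event_frame_ids' is a stand-in for the frameId fields extracted
# from the JSON messages, not a real API.
from collections import Counter

def frame_id_gaps(event_frame_ids):
    """Return a histogram of gaps between consecutive frameId values."""
    ordered = sorted(event_frame_ids)
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    return Counter(gaps)

# With a truly periodic trigger, a single gap value should dominate;
# in my output the gaps vary (1, 2, sometimes more).
gaps = frame_id_gaps([1, 2, 4, 5, 7, 10])
```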

Here are my questions.

  1. I would like to know whether there are any criteria for how frames (frameId) are selected for each event.
  2. Is there another way to display the detection results on the video without using the .debug operation?
