Read video metadata in Objective-C

I'm trying to check whether a video file the user has selected is interlaced or progressive, and then do some processing based on that.

I have tried checking whether the CMSampleBuffer I extract is flagged as top field first or bottom field first, but for every input this comes back nil.

    NSMutableDictionary *pixBuffAttributes = [[NSMutableDictionary alloc] init];
    [pixBuffAttributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_422YpCbCr8]
                          forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];

    myAsset = [[AVURLAsset alloc] initWithURL:urlpath options:pixBuffAttributes];
    myAssetReader = [[AVAssetReader alloc] initWithAsset:myAsset error:nil];
    myAssetOutput = [[AVAssetReaderTrackOutput alloc]
        initWithTrack:[[myAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
       outputSettings:pixBuffAttributes];
    [myAssetReader addOutput:myAssetOutput];
    [myAssetReader startReading];

    CMSampleBufferRef ref = [myAssetOutput copyNextSampleBuffer];
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(ref);

    // Look for a field-detail attachment on the decoded pixel buffer.
    if (CVBufferGetAttachment(imageBuffer, kCVImageBufferFieldDetailKey, NULL) == NULL)
    {
        // always the case
    }
    else
    {
        // never happens
    }

The above always comes back nil regardless of whether the input file is interlaced, so I may be going about this in entirely the wrong manner. Any help would be much appreciated!

Thanks to jeschot on the Apple Developer Forums for answering this: https://forums.developer.apple.com/thread/39029. In case anyone else is trying to do something similar, here is the right way to pull most of this metadata out of a video:

    AVAssetTrack *videoTrack = [[myAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    CMFormatDescriptionRef videoDesc =
        (__bridge CMFormatDescriptionRef)videoTrack.formatDescriptions[0];

    // Display size of the track (but see the caveats about naturalSize below).
    CGSize inputsize = videoTrack.naturalSize;

    properties->m_frame_rows = inputsize.height;
    properties->m_pixel_cols = inputsize.width;

    // One field per frame means progressive, two means interlaced.
    CFNumberRef fieldCount = (CFNumberRef)CMFormatDescriptionGetExtension(
        videoDesc, kCMFormatDescriptionExtension_FieldCount);

    if ([(__bridge NSNumber *)fieldCount integerValue] == 1)
    {
        properties->m_interlaced = false;
        properties->m_fld2_upper = false;
    }
    else
    {
        properties->m_interlaced = true;

        // The field order (top or bottom field first) lives in the FieldDetail extension.
        CFPropertyListRef interlace = CMFormatDescriptionGetExtension(
            videoDesc, kCMFormatDescriptionExtension_FieldDetail);

        if (interlace == kCMFormatDescriptionFieldDetail_SpatialFirstLineEarly ||
            interlace == kCMFormatDescriptionFieldDetail_TemporalTopFirst)
        {
            properties->m_fld2_upper = false;
        }
        else if (interlace == kCMFormatDescriptionFieldDetail_SpatialFirstLineLate ||
                 interlace == kCMFormatDescriptionFieldDetail_TemporalBottomFirst)
        {
            properties->m_fld2_upper = true;
        }
    }

    // minFrameDuration is the duration of one frame as a rational:
    // value ticks out of timescale ticks per second.
    CMTime minDuration = videoTrack.minFrameDuration;

    int64_t frameDurationTicks = minDuration.value;
    int32_t ticksPerSecond = minDuration.timescale;

    properties->m_ticks_duration = (unsigned int)frameDurationTicks;
    if (properties->m_interlaced)
    {
        // Count in fields for interlaced material, so double the tick rate.
        properties->m_ticks_per_second = ticksPerSecond * 2;
    }
    else
    {
        properties->m_ticks_per_second = ticksPerSecond;
    }
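
As a concrete example of the frame-rate handling: for typical 29.97 fps interlaced NTSC material, minFrameDuration is 1001/30000, so m_ticks_duration ends up as 1001 and m_ticks_per_second as 60000, i.e. roughly 59.94 fields per second.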

A quick note for anyone confused by this: naturalSize is not always the full resolution of the image if the container carries metadata such as a clean aperture smaller than the full resolution. Currently working through that, but it's a different question!
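
If you want to see where that discrepancy comes from, here is a minimal sketch (not part of the forum answer; it reuses myAsset from above, and videoDesc is just a local placeholder name) that reads the coded dimensions, clean aperture, and pixel aspect ratio extensions off the same format description:

    CMFormatDescriptionRef videoDesc =
        (__bridge CMFormatDescriptionRef)[myAsset tracksWithMediaType:AVMediaTypeVideo][0].formatDescriptions[0];

    // Coded dimensions stored in the sample description (what the decoder will emit).
    CMVideoDimensions coded = CMVideoFormatDescriptionGetDimensions(videoDesc);

    // Clean aperture, if the container carries one; NULL otherwise.
    CFDictionaryRef cleanAperture = (CFDictionaryRef)CMFormatDescriptionGetExtension(
        videoDesc, kCMFormatDescriptionExtension_CleanAperture);
    if (cleanAperture != NULL)
    {
        NSNumber *cleanWidth  = ((__bridge NSDictionary *)cleanAperture)
            [(__bridge NSString *)kCMFormatDescriptionKey_CleanApertureWidth];
        NSNumber *cleanHeight = ((__bridge NSDictionary *)cleanAperture)
            [(__bridge NSString *)kCMFormatDescriptionKey_CleanApertureHeight];
        NSLog(@"coded %dx%d, clean aperture %@x%@",
              coded.width, coded.height, cleanWidth, cleanHeight);
    }

    // Pixel aspect ratio, if present; NULL when square pixels are implied.
    CFDictionaryRef pixelAspect = (CFDictionaryRef)CMFormatDescriptionGetExtension(
        videoDesc, kCMFormatDescriptionExtension_PixelAspectRatio);
    if (pixelAspect != NULL)
    {
        NSLog(@"pixel aspect ratio extension: %@", (__bridge NSDictionary *)pixelAspect);
    }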

Update:

I later found out that naturalSize is the display resolution. To find the encoded resolution, you need to decode the first frame and check the resolution of the buffer object you get back. The two can differ when the pixel aspect ratio != 1 or (as mentioned above) a clean aperture is present.
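
A minimal sketch of that check, assuming the asset reader and myAssetOutput are set up as in the question and startReading has already been called:

    // Decode the first frame and read the dimensions of the resulting pixel buffer.
    CMSampleBufferRef first = [myAssetOutput copyNextSampleBuffer];
    if (first != NULL)
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(first);

        // The decoded buffer's dimensions are the encoded resolution, which can
        // differ from naturalSize when the pixel aspect ratio != 1 or a clean
        // aperture is present.
        size_t encodedWidth  = CVPixelBufferGetWidth(pixelBuffer);
        size_t encodedHeight = CVPixelBufferGetHeight(pixelBuffer);
        NSLog(@"encoded resolution: %zu x %zu", encodedWidth, encodedHeight);

        CFRelease(first);
    }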