Encoding raw YUV420P to H.264 with AVCodec on iOS

I am trying to encode individual YUV420P images gathered from a CMSampleBuffer into an AVPacket, so that I can send H.264 video over the network with RTMP.

The code sample posted below appears to work, in that avcodec_encode_video2 returns 0 (success), but got_output is also 0 (and the AVPacket is empty).

Does anyone have experience encoding video on iOS devices and know what I might be doing wrong?

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // sampleBuffer now contains an individual frame of raw video
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        // Access the pixel data
        int width = (int)CVPixelBufferGetWidth(pixelBuffer);
        int height = (int)CVPixelBufferGetHeight(pixelBuffer);
        int bytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
        unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

        // Convert the raw pixel data to H.264
        AVCodec *codec = 0;
        AVCodecContext *context = 0;
        AVFrame *frame = 0;
        AVPacket packet;

        //avcodec_init();
        avcodec_register_all();

        codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (codec == 0) {
            NSLog(@"Codec not found!!");
            return;
        }

        context = avcodec_alloc_context3(codec);
        if (!context) {
            NSLog(@"Context no bueno.");
            return;
        }

        // Bit rate
        context->bit_rate = 400000; // HARD CODE
        context->bit_rate_tolerance = 10;

        // Resolution
        context->width = width;
        context->height = height;

        // Frames per second
        context->time_base = (AVRational){1, 25};
        context->gop_size = 1;
        //context->max_b_frames = 1;
        context->pix_fmt = PIX_FMT_YUV420P;

        // Open the codec
        if (avcodec_open2(context, codec, 0) < 0) {
            NSLog(@"Unable to open codec");
            return;
        }

        // Create the frame
        frame = avcodec_alloc_frame();
        if (!frame) {
            NSLog(@"Unable to alloc frame");
            return;
        }

        frame->format = context->pix_fmt;
        frame->width = context->width;
        frame->height = context->height;

        avpicture_fill((AVPicture *)frame, rawPixelBase, context->pix_fmt,
                       frame->width, frame->height);

        int got_output = 0;
        av_init_packet(&packet);
        avcodec_encode_video2(context, &packet, frame, &got_output);

        // Unlock the pixel data
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

        // Send the data over the network
        [self uploadData:[NSData dataWithBytes:packet.data length:packet.size]
                  toRTMP:self.rtmp_OutVideoStream];
    }

Note: this code leaks memory, because I am not freeing the dynamically allocated objects.
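The missing cleanup would look roughly like the sketch below, using the old FFmpeg API that the code above is written against (freeing the packet after it has been sent, then closing the codec):

    // Free the encoded packet once it has been uploaded
    av_free_packet(&packet);

    // Tear down the encoder state allocated above
    avcodec_close(context);
    av_free(context);
    av_free(frame);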

UPDATE

I updated my code to use @pogorskiy's method: I only attempt to upload the frame if got_output returns 1, and I clear the buffer once I am done encoding video frames, roughly as sketched below.
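A sketch of the updated flow, reusing the context, frame, and packet variables from the code above; the trailing NULL-frame loop drains the delayed packets that the encoder buffers internally:

    int got_output = 0;

    av_init_packet(&packet);
    avcodec_encode_video2(context, &packet, frame, &got_output);

    if (got_output == 1) {
        // Only upload when the encoder actually produced a packet
        [self uploadData:[NSData dataWithBytes:packet.data length:packet.size]
                  toRTMP:self.rtmp_OutVideoStream];
        av_free_packet(&packet);
    }

    // Once all frames have been submitted, flush the encoder by
    // passing NULL until it has nothing left to emit
    do {
        av_init_packet(&packet);
        avcodec_encode_video2(context, &packet, NULL, &got_output);
        if (got_output == 1) {
            [self uploadData:[NSData dataWithBytes:packet.data length:packet.size]
                      toRTMP:self.rtmp_OutVideoStream];
            av_free_packet(&packet);
        }
    } while (got_output == 1);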