I am trying to use the overlay filter with multiple input sources, for an Android app. Basically, I want to overlay multiple video sources on top of a static image.

I have looked at the sample that comes with ffmpeg and implemented my code based on that, but things don't seem to be working as expected. In the ffmpeg filtering sample there seems to be a single video input; I have to handle multiple video inputs and I am not sure that my solution is the correct one. I have tried to find other examples, but it looks like this is the only one.
Here is my code:

```c
AVFilterContext **inputContexts;
AVFilterContext *outputContext;
AVFilterGraph *graph;

int initFilters(AVFrame *bgFrame, int inputCount, AVCodecContext **codecContexts, char *filters)
{
    int i;
    int returnCode;
    char args[512];
    char name[9];
    AVFilterInOut **graphInputs = NULL;
    AVFilterInOut *graphOutput = NULL;

    AVFilter *bufferSrc  = avfilter_get_by_name("buffer");
    AVFilter *bufferSink = avfilter_get_by_name("buffersink");

    graph = avfilter_graph_alloc();
    if(graph == NULL)
        return -1;

    //allocate the input/output lists
    graphInputs = av_calloc(inputCount + 1, sizeof(AVFilterInOut *));
    for(i = 0; i <= inputCount; i++)
        graphInputs[i] = avfilter_inout_alloc();
    inputContexts = av_calloc(inputCount + 1, sizeof(AVFilterContext *));

    //the first input is the background frame
    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=1/25:pixel_aspect=1/1",
             bgFrame->width, bgFrame->height, bgFrame->format);
    returnCode = avfilter_graph_create_filter(&inputContexts[0], bufferSrc, "background", args, NULL, graph);
    if(returnCode < 0)
        return returnCode;
    graphInputs[0]->filter_ctx = inputContexts[0];
    graphInputs[0]->name = av_strdup("background");

    //one buffer source per input video
    for(i = 1; i <= inputCount; i++)
    {
        AVCodecContext *codecCtx = codecContexts[i - 1];
        snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                 codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                 codecCtx->time_base.num, codecCtx->time_base.den,
                 codecCtx->sample_aspect_ratio.num, codecCtx->sample_aspect_ratio.den);
        snprintf(name, sizeof(name), "video_%d", i);
        returnCode = avfilter_graph_create_filter(&inputContexts[i], bufferSrc, name, args, NULL, graph);
        if(returnCode < 0)
            return returnCode;
        graphInputs[i - 1]->next = graphInputs[i];
        graphInputs[i]->filter_ctx = inputContexts[i];
        graphInputs[i]->name = av_strdup(name);
    }

    //the single buffer sink
    graphOutput = avfilter_inout_alloc();
    returnCode = avfilter_graph_create_filter(&outputContext, bufferSink, "out", NULL, NULL, graph);
    if(returnCode < 0)
        return returnCode;
    graphOutput->filter_ctx = outputContext;
    graphOutput->name = av_strdup("out");

    returnCode = avfilter_graph_parse_ptr(graph, filters, graphInputs, &graphOutput, NULL);
    if(returnCode < 0)
        return returnCode;

    returnCode = avfilter_graph_config(graph, NULL);
    return returnCode;
}
```
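For comparison, ffmpeg's own filtering sample pairs the link labels of the description string with the names of the `AVFilterInOut` structures. This is a sketch of that single-input convention (not my actual code); `bufferSrcContext` and `bufferSinkContext` stand for contexts already created with `avfilter_graph_create_filter`:

```c
#include <libavfilter/avfilter.h>
#include <libavutil/mem.h>

/* Sketch of the AVFilterInOut pairing used by ffmpeg's filtering example:
   "outputs" describes the open output of the buffer source (label "in"),
   "inputs" describes the open input of the buffersink (label "out"). */
static int parseSingleInputGraph(AVFilterGraph *graph,
                                 AVFilterContext *bufferSrcContext,
                                 AVFilterContext *bufferSinkContext)
{
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    int returnCode;

    outputs->name       = av_strdup("in");   /* must match "[in]" below */
    outputs->filter_ctx = bufferSrcContext;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    inputs->name       = av_strdup("out");   /* must match "[out]" below */
    inputs->filter_ctx = bufferSinkContext;
    inputs->pad_idx    = 0;
    inputs->next       = NULL;

    returnCode = avfilter_graph_parse_ptr(graph, "[in] scale=512x512 [out]",
                                          &inputs, &outputs, NULL);

    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return returnCode;
}
```

With a single input this works because unlabeled pads default to "in" and "out"; with multiple inputs that default no longer helps, which is what the question below is about.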
The filters argument of the function is passed on to avfilter_graph_parse_ptr and it can look like this: `scale=512x512 scale=256x256 overlay=0:0`

The call breaks after the call to avfilter_graph_config with the warning: Output pad "default" with type video of the filter instance "background" of buffer not connected to any destination, and the error Invalid argument. What is it that I am not doing correctly?

EDIT: There are two issues that I have discovered:

1. The description of avfilter_graph_parse_ptr looks a bit vague. The outputs parameter represents a list of the current outputs of the graph, in my case that being the graphInputs variable, because these are the outputs from the buffer filters. The inputs parameter represents a list of the current inputs of the graph, in this case the graphOutput variable, because it represents the input to the buffersink filter.

2. I did some testing with a scale filter and a single input. It seems that the name of the AVFilterInOut structure required by avfilter_graph_parse_ptr needs to be "in". I have tried different versions: in_1, in_link_1. None of them work and I have not been able to find any documentation related to this.

How do I implement a filter graph with multiple inputs?
I have found a simple solution to the problem. This involves replacing avfilter_graph_parse_ptr with avfilter_graph_parse2 and adding the buffer and buffersink filters to the filters parameter of avfilter_graph_parse2.

So, in the simple case where you have one background image and one input video, the value of the filters parameter should look like this:

```
buffer=video_size=1024x768:pix_fmt=2:time_base=1/25:pixel_aspect=3937/3937 buffer=video_size=1920x1080:pix_fmt=0:time_base=1/180000:pixel_aspect=0/1 overlay=0:0 buffersink
```

The avfilter_graph_parse2 call will make all the graph connections and initialize all the filters. The filter contexts for the input buffers and for the output buffer can be retrieved from the graph itself at the end; these are used to add/get frames from the filter graph.
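Building that filters string is plain string formatting. A minimal sketch, using the example sizes, pixel formats and time bases from the string above (in real code these would come from the background frame and the decoder contexts; `buildFiltersString` is a hypothetical helper name):

```c
#include <stdio.h>

/* Assemble the avfilter_graph_parse2 filters string for one background
   image and one input video. All numeric values here are the example
   values from the text above, not constants of the API. */
static int buildFiltersString(char *dst, size_t dstSize)
{
    return snprintf(dst, dstSize,
                    "buffer=video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d "
                    "buffer=video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d "
                    "overlay=%d:%d buffersink",
                    1024, 768, 2, 1, 25, 3937, 3937,      /* background buffer  */
                    1920, 1080, 0, 1, 180000, 0, 1,       /* video buffer       */
                    0, 0);                                /* overlay position   */
}
```

The returned value is the usual snprintf length, so it can double as a truncation check against `dstSize`.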
A simplified version of the code looks like this:

```c
AVFilterContext **inputContexts;

int initFilters(AVFrame *bgFrame, int inputCount, AVCodecContext **codecContexts)
{
    int returnCode;
    AVFilterInOut *gis = NULL;
    AVFilterInOut *gos = NULL;

    //build the filters string (with the buffer and buffersink filters
    //included) and let avfilter_graph_parse2 create and connect everything
    returnCode = avfilter_graph_parse2(graph, filters, &gis, &gos);
    if(returnCode != 0)
    {
        cs_printAVError("Cannot parse graph.", returnCode);
        return returnCode;
    }

    returnCode = avfilter_graph_config(graph, NULL);
    if(returnCode != 0)
    {
        cs_printAVError("Cannot configure graph.", returnCode);
        return returnCode;
    }

    //get the filter contexts from the graph here

    return 0;
}
```

For my case I had a transformation like this: `pad=1008:734:144:0:black overlay=0:576`. Basically, it increases the overall size of the first video and then overlays the second one on top of it. If you try ffmpeg from the command line, it will work:

```
ffmpeg -i first.mp4 -i second.mp4 -filter_complex "pad=1008:734:144:0:black overlay=0:576" -map "" -map 0:a output.mp4
```
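The "get the filter contexts from the graph here" step can be sketched as follows. This assumes the auto-generated instance names that avfilter_graph_parse2 gives to parsed filters (of the form `Parsed_<filter>_<index>`); if they differ in your build, inspect `graph->filters` to find the right names. `runGraphOnce` is a hypothetical helper, not part of the answer's code:

```c
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>

/* Look up the buffer/buffersink instances created by avfilter_graph_parse2,
   push one frame into each source and pull one filtered frame out. */
static int runGraphOnce(AVFilterGraph *graph, AVFrame *bgFrame,
                        AVFrame *videoFrame, AVFrame *outFrame)
{
    AVFilterContext *bgSrc    = avfilter_graph_get_filter(graph, "Parsed_buffer_0");
    AVFilterContext *videoSrc = avfilter_graph_get_filter(graph, "Parsed_buffer_1");
    AVFilterContext *sink     = avfilter_graph_get_filter(graph, "Parsed_buffersink_3");
    int ret;

    if(bgSrc == NULL || videoSrc == NULL || sink == NULL)
        return AVERROR_FILTER_NOT_FOUND;

    if((ret = av_buffersrc_add_frame(bgSrc, bgFrame)) < 0)
        return ret;
    if((ret = av_buffersrc_add_frame(videoSrc, videoFrame)) < 0)
        return ret;

    return av_buffersink_get_frame(sink, outFrame);
}
```

av_buffersink_get_frame returns AVERROR(EAGAIN) when the graph needs more input before it can produce a frame, so in a real loop this call would be retried after feeding more frames.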