I'm trying to reproduce "Fractals without a computer", but using a computer instead of three projectors. I figured this should be simple with gstreamer: just duplicate the stream from the camera with tee, then put the three identical pictures together with videomixer.

Here I'm using videotestsrc pattern=1 as the stream I want to duplicate, and videotestsrc pattern="black" as the background for the whole screen.
#!/bin/bash
gst-launch -v \
    videotestsrc pattern=1 ! video/x-raw-yuv,width=200,height=200 \
    ! tee name=t \
    videomixer name=mix \
        sink_0::xpos=0 sink_0::ypos=0 \
        sink_1::xpos=100 sink_1::ypos=0 \
        sink_2::xpos=200 sink_2::ypos=200 \
        sink_3::xpos=0 sink_3::ypos=200 \
    ! ffmpegcolorspace ! xvimagesink \
    videotestsrc pattern="black" ! video/x-raw-yuv,width=400,height=400 \
    ! mix.sink_0 \
    t. ! queue ! mix.sink_1 \
    t. ! queue ! mix.sink_2 \
    t. ! queue ! mix.sink_3
The problem is that I only get two copies: one corresponding to sink_1 and the other to sink_2. If I swap the last two lines, then I only get sink_1 and sink_3.

So the question is: how do I display all three copies?
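For anyone reproducing this on a current stack, the same topology written against the GStreamer 1.0 Python bindings looks roughly like the sketch below. It assumes compositor (the 1.x replacement for videomixer) accepts the same pad properties, and uses the 1.x caps names; it is an illustration, not the original script.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# One test source duplicated by tee, three copies plus a black background
# composited at fixed offsets, mirroring the shell script above.
pipeline = Gst.parse_launch(
    'videotestsrc pattern=1 ! video/x-raw,width=200,height=200 ! tee name=t '
    'compositor name=mix '
    'sink_0::xpos=0 sink_0::ypos=0 '
    'sink_1::xpos=100 sink_1::ypos=0 '
    'sink_2::xpos=200 sink_2::ypos=200 '
    'sink_3::xpos=0 sink_3::ypos=200 '
    '! videoconvert ! autovideosink '
    'videotestsrc pattern=black ! video/x-raw,width=400,height=400 ! mix.sink_0 '
    't. ! queue ! mix.sink_1 '
    't. ! queue ! mix.sink_2 '
    't. ! queue ! mix.sink_3')

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)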
I have the following Python 2.7/PyGObject 3.0/PyGST 0.10 module:
from gi.repository import Gtk, Gdk, GdkPixbuf
import pango
import pygst
pygst.require('0.10')
import gst
import Trailcrest
import os, sys
import cairo
from math import pi

class Video:
    def __init__(self):

        def on_message(bus, message):
            if message.type == gst.MESSAGE_EOS:
                # End of Stream
                player.seek(1.0, gst.FORMAT_TIME, gst.SEEK_FLAG_FLUSH, gst.SEEK_TYPE_SET, 5000000000, gst.SEEK_TYPE_NONE, 6000000000)
            elif message.type == gst.MESSAGE_ERROR:
                player.set_state(gst.STATE_NULL)
                (err, debug) = message.parse_error()
                print "Error: %s" % err, debug

        def on_sync_message(bus, message):
            if message.structure is None:
                return False
            if message.structure.get_name() == "prepare-xwindow-id":
                Gdk.threads_enter()
                print …

I'm trying to implement a custom queue for GStreamer buffers. The problem is that when I try to dequeue, I seem to be losing the head of the queue. Whenever I try to dequeue twice, I get a segmentation fault. I've also noticed that the head is always equal to head->next. At this point I'm not sure whether the problem is in the enqueue or the dequeue. Please help me. Thanks.
typedef struct _GstBUFFERQUEUE GstBufferQueue;

struct _GstBUFFERQUEUE {
    GstBuffer *buf;
    guint buf_size;
    struct _GstBUFFERQUEUE *next;
};
void enqueue_gstbuffer(GstBufferQueue **head, GstBufferQueue **tail, guint *queue_size, GstBuffer *buf)
{
    if (*queue_size == 0)
    {
        *head = malloc(sizeof(GstBufferQueue));
        (*head)->buf = gst_buffer_try_new_and_alloc (GST_BUFFER_SIZE(buf));
        (*head)->buf = gst_buffer_copy(buf);
        *tail = *head;
    }
    else
    {
        if ((*tail)->next = malloc(sizeof(GstBufferQueue))) {
            (*tail)->next->buf = gst_buffer_try_new_and_alloc (GST_BUFFER_SIZE(buf));
            (*tail)->next->buf = gst_buffer_copy(buf);
            (*tail) = (*tail)->next;
        }
        else {
            GST_WARNING("Error allocating memory for new buffer in queue");
        }
    }
    (*tail)->next = NULL; …
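The dequeue itself is not shown, so to make the head/head->next symptom concrete, here is what a dequeue matching the struct above would conventionally look like. This is my sketch, not the poster's code:

/* Sketch of a conventional dequeue for the queue above (not the poster's
 * code). Pops the head node and hands ownership of its GstBuffer to the
 * caller. */
GstBuffer *dequeue_gstbuffer(GstBufferQueue **head, GstBufferQueue **tail, guint *queue_size)
{
    GstBufferQueue *node;
    GstBuffer *buf;

    if (*queue_size == 0 || *head == NULL)
        return NULL;

    node = *head;
    buf = node->buf;
    *head = node->next;   /* advance the head before freeing the node */
    if (*head == NULL)
        *tail = NULL;
    free(node);
    (*queue_size)--;
    return buf;
}

The ordering matters: the head pointer must be advanced before the node is freed, since reading node->next after free(node) is a use-after-free that can produce exactly this kind of crash.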
I have the following pipeline, which works fine:

gst-launch-1.0 -v filesrc location=/home/Videos/sample_h264.mov ! decodebin ! videoconvert ! autovideosink

I want to write a C program that does the same thing, so I converted the previous pipeline into the following code:
pipeline = gst_pipeline_new ("video_pipeline");
if (!pipeline) {
    g_print("Failed to create the pipeline\n");
    return -1;
}

bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
watch_id = gst_bus_add_watch (bus, bus_call, loop);
gst_object_unref (bus);

source = gst_element_factory_make ("filesrc", "file-source");
decoder = gst_element_factory_make ("decodebin", "standard-decoder");
converter = gst_element_factory_make ("videoconvert", "converter");
sink = gst_element_factory_make ("autovideosink", "video-sink");

if (!source || !decoder || !converter || !sink) {
    g_print("Failed to create one or more pipeline elements\n");
    return -1;
}

g_object_set(G_OBJECT(source), "location", file_name, NULL);
gst_bin_add_many …
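The listing breaks off at gst_bin_add_many. One thing worth knowing about where it has to go next: decodebin creates its source pad only once the stream type is known, so it cannot be linked to videoconvert statically. A sketch of the conventional continuation follows (my reconstruction, not the poster's code):

/* decodebin's src pad appears at runtime, so link it from a callback. */
static void on_pad_added (GstElement *decoder, GstPad *pad, gpointer data)
{
    GstPad *sinkpad = gst_element_get_static_pad (GST_ELEMENT (data), "sink");
    if (!gst_pad_is_linked (sinkpad))
        gst_pad_link (pad, sinkpad);
    gst_object_unref (sinkpad);
}

/* ...and back in the function above: */
gst_bin_add_many (GST_BIN (pipeline), source, decoder, converter, sink, NULL);
gst_element_link (source, decoder);      /* static pads: filesrc -> decodebin  */
gst_element_link (converter, sink);      /* static pads: videoconvert -> sink  */
g_signal_connect (decoder, "pad-added", G_CALLBACK (on_pad_added), converter);
gst_element_set_state (pipeline, GST_STATE_PLAYING);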
I'm new to Qt, and I'm trying to run a basic gstreamermm example with Qt. When I include gstreamermm.h in Qt's main.cpp, I get a compilation error, and I can't make sense of what the error is saying. I'm using Qt Creator for this example.

#include <QApplication>
#include <gstreamermm.h>
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();

    return a.exec();
}
I get the following compilation error:
g++ -c -pipe -g -pthread -Wall -W -D_REENTRANT -fPIE -DQT_QML_DEBUG -DQT_DECLARATIVE_DEBUG -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I../../../../Qt5.1.0/5.1.0/gcc_64/mkspecs/linux-g++ -I../PlayerBasic -I/usr/include/giomm-2.4 -I/usr/lib/x86_64-linux-gnu/giomm-2.4/include -I/usr/include/gstreamer-0.10 -I/usr/include/glibmm-2.4 -I/usr/lib/x86_64-linux-gnu/glibmm-2.4/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/sigc++-2.0 -I/usr/lib/x86_64-linux-gnu/sigc++-2.0/include -I/usr/include/libxml2 -I/usr/include/gstreamermm-0.10 -I/usr/lib/gstreamermm-0.10/include -I/usr/include/libxml++-2.6 -I/usr/lib/libxml++-2.6/include -I../../../../Qt5.1.0/5.1.0/gcc_64/include -I../../../../Qt5.1.0/5.1.0/gcc_64/include/QtWidgets -I../../../../Qt5.1.0/5.1.0/gcc_64/include/QtGui -I../../../../Qt5.1.0/5.1.0/gcc_64/include/QtCore -I. -I. -I. -o main.o ../PlayerBasic/main.cpp
In file included from /usr/include/glibmm-2.4/glibmm.h:92:0,
from /usr/include/gstreamermm-0.10/gstreamermm/bin.h:7,
from /usr/include/gstreamermm-0.10/gstreamermm.h:65,
from ../PlayerBasic/main.cpp:3:
/usr/include/glibmm-2.4/glibmm/balancedtree.h:225:40: error: macro "Q_FOREACH" requires 2 arguments, but …
I've been working on the gstreamer applemedia encoder plugin, improving its VideoToolbox-based video encoding. Running a gstreamer pipeline such as:

$ gst-launch-1.0 filesrc location=source.avi ! decodebin ! vtenc_h264 ! h264parse ! qtmux name=mux ! filesink location=sink.mp4
I was expecting to see very low CPU usage while VTCompressionSession encodes h264 video on Mac OS. However, on the systems I have tested, a mid-2009 MacBook Pro with a GeForce 9600M and a mid-2011 Mac mini with a Radeon HD 6630M, encoding still consumes 80% to 130% CPU, which suggests it is not hardware accelerated.

On which hardware configurations, or given which compression parameters (for example, which kVTCompressionPropertyKey_ProfileLevel), does VTCompressionSession actually use hardware-accelerated encoding?
macos video-encoding gstreamer hardware-acceleration core-video
I have a Logitech webcam, and when I list its available formats it shows (among others) the following:
Index       : 0
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUV 4:2:2 (YUYV)
        Size: Discrete 640x480
                Interval: Discrete 0.033s (30.000 fps)
                Interval: Discrete 0.040s (25.000 fps)
                Interval: Discrete 0.050s (20.000 fps)
                Interval: Discrete 0.067s (15.000 fps)
                Interval: Discrete 0.100s (10.000 fps)
                Interval: Discrete 0.200s (5.000 fps)
So now I want to capture 300 frames at 640x480 @ 30 fps, JPEG-compress them, and mux them into an AVI. Capturing 300 frames @ 30 fps should yield a 10-second movie and should take 10 seconds to record, but in my case it takes about 40 seconds to get the 300 frames, even though it does produce the expected 10-second video.

Here is my pipeline:
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=300 ! \
"video/x-raw,width=640,framerate=30/1" ! jpegenc ! avimux ! \
filesink location=output.avi
I checked with fpsdisplaysink, and a lot of frames are being dropped:
last-message = rendered: 48, dropped: 250, fps: …
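For comparison, here is the same capture written against the 1.x Python bindings with every caps field pinned down, including the height that the caps string above leaves unspecified. This is a sketch for illustration; whether the camera can actually sustain the mode is exactly what is in question:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# YUY2 is the GStreamer name for the V4L2 'YUYV' format listed above.
pipeline = Gst.parse_launch(
    'v4l2src device=/dev/video0 num-buffers=300 '
    '! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 '
    '! jpegenc ! avimux ! filesink location=output.avi')

pipeline.set_state(Gst.State.PLAYING)
# Block until the 300 buffers have been captured and muxed.
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)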
I'm using gstreamer to connect to streaming video, a raw H.264 elementary stream over plain UDP multicast. I've found that when I have only eth0 up, it connects just fine:

gst-launch udpsrc uri=udp://239.255.43.43:4444 ! h264parse ! ffdec_h264 ! xvimagesink sync=false
However, when I bring both wlan0 and eth0 up, I run into problems. I use wlan0 as my primary internet connection, while eth0 is on my local LAN with the streaming video server. My default route is on wlan0:
host$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 wlan0
When I try to connect with this configuration, gstreamer just sits in a select() call, waiting for the connection.
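For what it's worth, udpsrc exposes a multicast-iface property that pins the multicast membership to a named interface instead of leaving the choice to the routing table. A minimal sketch of that idea with the 1.x Python bindings (assuming avdec_h264 as the 1.x stand-in for ffdec_h264):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Join the multicast group on eth0 explicitly rather than letting the
# kernel derive the interface from the default route.
pipeline = Gst.parse_launch(
    'udpsrc uri=udp://239.255.43.43:4444 multicast-iface=eth0 '
    '! h264parse ! avdec_h264 ! xvimagesink sync=false')
pipeline.set_state(Gst.State.PLAYING)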
I'm having trouble putting together a simple server/client socket Python program. Basically, my server (an RPi3) has to stream video (using Gstreamer) to the client (Fedora 24). The problem is that on my Fedora machine I can import the Gstreamer libraries like this:
import gi
gi.require_version('Gst', '1.0')
gi.require_version('Gtk', '3.0')
from gi.repository import Gst, GObject, Gtk
But on my Raspbian I can't do the same, because:
Traceback (most recent call last):
  File "peerMain.py", line 12, in <module>
    gi.require_version('Gst', '1.0')
  File "/usr/lib/python2.7/dist-packages/gi/__init__.py", line 100, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gst not available
I've tried a lot of things, such as import gst or pygst. I also tried installing some packages:
sudo apt-get install libgstreamer1.0-dev libgstreamer1.0-0-dbg libgstreamer1.0-0 gstreamer1.0-tools gstreamer-tools gstreamer1.0-doc gstreamer1.0-x
But the result is:
gstreamer1.0-tools is already the newest version.
gstreamer1.0-x is already the newest version.
libgstreamer1.0-0 is already the newest version.
libgstreamer1.0-0 set to …
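The apt output shows the runtime libraries are present; what gi.require_version() actually looks for is the GObject-introspection typelib, which Debian-family systems package separately from the libraries. A quick check along these lines (my sketch; the package names are the standard Debian ones, assumed to apply to Raspbian):

import gi

try:
    gi.require_version('Gst', '1.0')
except ValueError:
    # The Gst-1.0 typelib is missing rather than GStreamer itself.
    # Assumed Debian/Raspbian package names:
    #   sudo apt-get install python-gi gir1.2-gstreamer-1.0 gir1.2-gst-plugins-base-1.0
    raise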
I'm trying to push OpenCV images into a gstreamer RTSP server in Python. I'm having some trouble writing the mediafactory: I'm new to gst-rtsp-server and there is very little documentation for it, so I don't know for certain whether I'm using the right approach. I'm using one thread to start the MainLoop, and I'm using the main thread to create a buffer to push into the appsrc element of the mediafactory pipeline. Am I using the right approach to achieve my goal? Can anyone help me? My code is below:

from threading import Thread
from time import clock

import cv2
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GObject

class SensorFactory(GstRtspServer.RTSPMediaFactory):
    def __init__(self, **properties):
        super(SensorFactory, self).__init__(**properties)
        self.launch_string = 'appsrc ! video/x-raw,width=320,height=240,framerate=30/1 ' \
                             '! videoconvert ! x264enc speed-preset=ultrafast tune=zerolatency ' \
                             '! rtph264pay config-interval=1 name=pay0 pt=96'
        self.pipeline = Gst.parse_launch(self.launch_string)
        self.appsrc = self.pipeline.get_child_by_index(4)

    def do_create_element(self, url):
        return self.pipeline

class GstServer(GstRtspServer.RTSPServer):
    def __init__(self, **properties):
        super(GstServer, self).__init__(**properties)
        self.factory = SensorFactory()
        self.factory.set_shared(True)
        self.get_mount_points().add_factory("/test", self.factory)
        self.attach(None)

GObject.threads_init()
Gst.init(None)

server …
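The listing is cut off at the server creation, but since the question is how to push frames, here is the conventional need-data pattern for feeding an appsrc from OpenCV. This is my sketch, not the poster's code: the format=BGR caps requirement, the cap capture handle, and the buffer timestamping are illustrative assumptions on top of the factory's launch string.

# Feed the factory's appsrc on demand. This assumes the appsrc caps include
# format=BGR so raw cv2 frames (320x240, BGR) can be pushed unconverted.
def on_need_data(src, length, cap):
    ret, frame = cap.read()
    if not ret:
        src.emit('end-of-stream')
        return
    buf = Gst.Buffer.new_wrapped(frame.tobytes())
    buf.duration = Gst.SECOND // 30   # one frame at the advertised 30 fps
    src.emit('push-buffer', buf)

cap = cv2.VideoCapture(0)                  # hypothetical capture device
appsrc = server.factory.appsrc             # the appsrc grabbed in SensorFactory
appsrc.set_property('format', Gst.Format.TIME)
appsrc.connect('need-data', on_need_data, cap)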