
git.blender.org/blender.git
author	Benoit Bolsee <benoit.bolsee@online.be>	2008-11-01 01:35:52 +0300
committer	Benoit Bolsee <benoit.bolsee@online.be>	2008-11-01 01:35:52 +0300
commit	a8c4eef3265358d3d70c6c448fe4d1c4273defee (patch)
tree	fe640f0f6e35c65886c65c349c91a258c153a0f8 /source/gameengine/VideoTexture/VideoFFmpeg.h
parent	77b4c66cc3de461fdd0074e46a3a77de1fd83447 (diff)
VideoTexture module.
The only build system that works for sure is the MSVC project files. I've done my best to update the other build systems, but I count on the community to check and fix them.

This is Zdeno Miklas's video texture plugin ported to trunk. The original plugin API is maintained (it can be found at http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html) EXCEPT for the following changes:

* The module name is changed to VideoTexture (instead of blendVideoTex).
* A new (and only) video source is now available: VideoFFmpeg().
  You must pass 1 to 4 arguments when you create it (you can use named
  arguments):

    VideoFFmpeg(file)                               : play a video file
    VideoFFmpeg(file, capture, rate, width, height) : start a live video capture

  file:
    In the first form, file is a video file name, relative to the startup
    directory. It can also be a URL: FFmpeg will happily stream a video from
    a network source.
    In the second form, file is empty or is a hint for the format of the
    video capture. On Windows, file is ignored and should be empty or not
    specified. On Linux, FFmpeg supports two types of device: Video4Linux
    and DV1394. The user specifies the type of device with the file
    parameter:

      [<device_type>][:<standard>]

      <device_type> : 'v4l' for Video4Linux, 'dv1394' for DV1394;
                      defaults to 'v4l'
      <standard>    : 'pal', 'secam' or 'ntsc'; defaults to 'ntsc'

    The driver name is constructed automatically from the device type:

      v4l    : /dev/video<capture>
      dv1394 : /dev/dv1394/<capture>

    If you have a different driver name, you can specify it explicitly
    instead of the device type. Examples of valid file parameters:

      /dev/v4l/video0:pal
      /dev/ieee1394/1:ntsc
      dv1394:ntsc
      v4l:pal
      :secam

  capture:
    The index number of the capture source, starting from 0. The first
    capture device is always 0. The VideoTexture module knows that you want
    to start a live video capture when you set this parameter to a number
    >= 0. Setting this parameter < 0 indicates video file playback. The
    default value is -1.

  rate:
    The capture frame rate, 25 frames/sec by default.

  width, height:
    Width and height of the video capture in pixels; the default value is 0.
    On Windows you must specify these values and they must match a
    capability of the capture device. For example, if you have a webcam that
    can capture at 160x120, 320x240 or 640x480, you must specify one of
    these pairs of values or opening the video source will fail. On Linux,
    default values are provided by the Video4Linux driver if you don't
    specify width and height.

Simple example
**************

1. Texture definition script:

    import VideoTexture

    contr = GameLogic.getCurrentController()
    obj = contr.getOwner()
    if not hasattr(GameLogic, 'video'):
        matID = VideoTexture.materialID(obj, 'MAVideoMat')
        GameLogic.video = VideoTexture.Texture(obj, matID)
        GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
        # Streaming is also possible:
        #GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
        GameLogic.vidSrc.repeat = -1
        # If the video dimensions are not a power of 2, scaling must be done
        # before sending the texture to the GPU. This is done by default with
        # gluScaleImage(), but you can also use a faster, less precise
        # scaling by setting scale to True. The best approach is to convert
        # the video offline so that its dimensions are right.
        GameLogic.vidSrc.scale = True
        # FFmpeg always delivers the video image upside down, so flipping
        # is enabled automatically:
        #GameLogic.vidSrc.flip = True

    if contr.getSensors()[0].isPositive():
        GameLogic.video.source = GameLogic.vidSrc
        GameLogic.vidSrc.play()

2. Texture refresh script:

    obj = GameLogic.getCurrentController().getOwner()
    if hasattr(GameLogic, 'video'):
        GameLogic.video.refresh(True)

You can download this demo here:
http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg
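The device-string convention described above can be sketched as a small helper. This is illustrative only: `capture_driver` is a hypothetical name, not part of the VideoTexture API, which does this mapping internally.

```python
def capture_driver(file='', capture=0):
    """Map a VideoFFmpeg capture hint '[<device_type>][:<standard>]'
    to a (driver path, TV standard) pair, per the rules above."""
    device, _, standard = file.partition(':')
    standard = standard or 'ntsc'          # default standard
    if device in ('', 'v4l'):              # default device type is v4l
        driver = '/dev/video%d' % capture
    elif device == 'dv1394':
        driver = '/dev/dv1394/%d' % capture
    else:
        driver = device                    # explicit driver name, used as-is
    return driver, standard
```

For example, `capture_driver('dv1394:ntsc', 0)` yields `('/dev/dv1394/0', 'ntsc')`, and `capture_driver(':secam')` yields `('/dev/video0', 'secam')`.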
Diffstat (limited to 'source/gameengine/VideoTexture/VideoFFmpeg.h')
-rw-r--r--  source/gameengine/VideoTexture/VideoFFmpeg.h  159
1 file changed, 159 insertions, 0 deletions
diff --git a/source/gameengine/VideoTexture/VideoFFmpeg.h b/source/gameengine/VideoTexture/VideoFFmpeg.h
new file mode 100644
index 00000000000..7980e06686c
--- /dev/null
+++ b/source/gameengine/VideoTexture/VideoFFmpeg.h
@@ -0,0 +1,159 @@
+/* $Id$
+-----------------------------------------------------------------------------
+This source file is part of VideoTexture library
+
+Copyright (c) 2007 The Zdeno Ash Miklas
+
+This program is free software; you can redistribute it and/or modify it under
+the terms of the GNU Lesser General Public License as published by the Free Software
+Foundation; either version 2 of the License, or (at your option) any later
+version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+Place - Suite 330, Boston, MA 02111-1307, USA, or go to
+http://www.gnu.org/copyleft/lesser.txt.
+-----------------------------------------------------------------------------
+*/
+#if !defined VIDEOFFMPEG_H
+#define VIDEOFFMPEG_H
+
+#ifdef WITH_FFMPEG
+extern "C" {
+#include <ffmpeg/avformat.h>
+#include <ffmpeg/avcodec.h>
+#include <ffmpeg/rational.h>
+#include <ffmpeg/swscale.h>
+}
+
+#if LIBAVFORMAT_VERSION_INT < (49 << 16)
+#define FFMPEG_OLD_FRAME_RATE 1
+#else
+#define FFMPEG_CODEC_IS_POINTER 1
+#endif
+
+#ifdef FFMPEG_CODEC_IS_POINTER
+static inline AVCodecContext* get_codec_from_stream(AVStream* stream)
+{
+ return stream->codec;
+}
+#else
+static inline AVCodecContext* get_codec_from_stream(AVStream* stream)
+{
+ return &stream->codec;
+}
+#endif
+
+#include "VideoBase.h"
+
+
+// type VideoFFmpeg declaration
+class VideoFFmpeg : public VideoBase
+{
+public:
+ /// constructor
+ VideoFFmpeg (HRESULT * hRslt);
+ /// destructor
+ virtual ~VideoFFmpeg ();
+
+ /// set initial parameters
+ void initParams (short width, short height, float rate);
+ /// open video file
+ virtual void openFile (char * file);
+ /// open video capture device
+ virtual void openCam (char * driver, short camIdx);
+
+ /// release video source
+ virtual bool release (void);
+
+ /// play video
+ virtual bool play (void);
+ /// stop/pause video
+ virtual bool stop (void);
+
+ /// set play range
+ virtual void setRange (double start, double stop);
+ /// set frame rate
+ virtual void setFrameRate (float rate);
+ // some specific getters and setters
+ int getPreseek(void) { return m_preseek; }
+ void setPreseek(int preseek) { if (preseek >= 0) m_preseek = preseek; }
+ bool getDeinterlace(void) { return m_deinterlace; }
+ void setDeinterlace(bool deinterlace) { m_deinterlace = deinterlace; }
+
+protected:
+
+ // format and codec information
+ AVCodec *m_codec;
+ AVFormatContext *m_formatCtx;
+ AVCodecContext *m_codecCtx;
+ // raw frame extracted from video file
+ AVFrame *m_frame;
+ // deinterlaced frame if codec requires it
+ AVFrame *m_frameDeinterlaced;
+ // decoded RGB24 frame if codec requires it
+ AVFrame *m_frameBGR;
+ // conversion from raw to RGB is done with sws_scale
+ struct SwsContext *m_imgConvertCtx;
+ // should the decoded video be deinterlaced?
+ bool m_deinterlace;
+ // number of frames to preseek
+ int m_preseek;
+ // index of the stream holding the video in the format context
+ int m_videoStream;
+
+ // the actual frame rate
+ double m_baseFrameRate;
+
+ /// last displayed frame
+ long m_lastFrame;
+
+ /// current position in the file, expressed as a frame number
+ long m_curPosition;
+
+ /// time of video play start
+ double m_startTime;
+
+ /// width of capture in pixels
+ short m_captWidth;
+
+ /// height of capture in pixels
+ short m_captHeight;
+
+ /// frame rate of capture in frames per second
+ float m_captRate;
+
+ /// image calculation
+ virtual void calcImage (unsigned int texId);
+
+ /// load frame from video
+ void loadFrame (void);
+
+ /// set actual position
+ void setPositions (void);
+
+ /// get actual framerate
+ double actFrameRate (void) { return m_frameRate * m_baseFrameRate; }
+
+ /// common function to video file and capture
+ int openStream(const char *filename, AVInputFormat *inputFormat, AVFormatParameters *formatParams);
+
+ /// check if a frame is available and load it in pFrame, return true if a frame could be retrieved
+ bool grabFrame(long frame);
+
+ /// return the frame in RGB24 format, the image data is found in AVFrame.data[0]
+ AVFrame* getFrame(void) { return m_frameBGR; }
+};
+
+inline VideoFFmpeg * getFFmpeg (PyImage * self)
+{
+ return static_cast<VideoFFmpeg*>(self->m_image);
+}
+
+#endif //WITH_FFMPEG
+
+#endif