
git.blender.org/blender.git
author    Benoit Bolsee <benoit.bolsee@online.be>  2008-11-01 01:35:52 +0300
committer Benoit Bolsee <benoit.bolsee@online.be>  2008-11-01 01:35:52 +0300
commit    a8c4eef3265358d3d70c6c448fe4d1c4273defee (patch)
tree      fe640f0f6e35c65886c65c349c91a258c153a0f8 /source/gameengine/VideoTexture/FilterSource.h
parent    77b4c66cc3de461fdd0074e46a3a77de1fd83447 (diff)
VideoTexture module.

The only build system that is known to work is the MSVC project files. I've done my best to update the other build systems, but I count on the community to check and fix them.

This is Zdeno Miklas' video texture plugin ported to trunk. The original plugin API is maintained (it can be found at http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html) EXCEPT for the following:

- The module name is changed to VideoTexture (instead of blendVideoTex).
- A new (and only) video source is now available: VideoFFmpeg().

You must pass 1 to 4 arguments when you create it (you can use named arguments):

  VideoFFmpeg(file)                               : play a video file
  VideoFFmpeg(file, capture, rate, width, height) : start a live video capture

file:
  In the first form, file is a video file name, relative to the startup
  directory. It can also be a URL; FFmpeg will happily stream a video from
  a network source.
  In the second form, file is empty or is a hint for the format of the
  video capture. On Windows, file is ignored and should be empty or not
  specified. On Linux, FFmpeg supports two types of device: Video4Linux
  and DV1394. The user specifies the type of device with the file
  parameter:

    [<device_type>][:<standard>]
    <device_type> : 'v4l' for Video4Linux, 'dv1394' for DV1394; defaults to 'v4l'
    <standard>    : 'pal', 'secam' or 'ntsc'; defaults to 'ntsc'

  The driver name is constructed automatically from the device type:

    v4l    : /dev/video<capture>
    dv1394 : /dev/dv1394/<capture>

  If you have a different driver name, you can specify it explicitly
  instead of the device type. Examples of valid file parameters:

    /dev/v4l/video0:pal
    /dev/ieee1394/1:ntsc
    dv1394:ntsc
    v4l:pal
    :secam

capture:
  Index of the capture source, starting from 0; the first capture device
  is always 0. The VideoTexture module knows that you want to start a live
  video capture when you set this parameter to a number >= 0; a value < 0
  indicates video file playback. The default value is -1.

rate:
  The capture frame rate, by default 25 frames/sec.

width, height:
  Width and height of the video capture in pixels; the default value is 0.
  On Windows you must specify these values and they must match a mode the
  capture device supports. For example, if you have a webcam that can
  capture at 160x120, 320x240 or 640x480, you must specify one of these
  pairs or opening the video source will fail. On Linux, default values
  are provided by the Video4Linux driver if you don't specify width and
  height.

Simple example
**************

1. Texture definition script:

   import VideoTexture

   contr = GameLogic.getCurrentController()
   obj = contr.getOwner()
   if not hasattr(GameLogic, 'video'):
       matID = VideoTexture.materialID(obj, 'MAVideoMat')
       GameLogic.video = VideoTexture.Texture(obj, matID)
       GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
       # Streaming is also possible:
       #GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
       GameLogic.vidSrc.repeat = -1
       # If the video dimensions are not a power of 2, scaling must be done
       # before sending the texture to the GPU. This is done by default with
       # gluScaleImage(), but you can also use a faster, though less precise,
       # scaling by setting scale to True. The best approach is to convert
       # the video offline and set the dimensions right.
       GameLogic.vidSrc.scale = True
       # FFmpeg always delivers the video image upside down, so flipping
       # is enabled automatically:
       #GameLogic.vidSrc.flip = True

   if contr.getSensors()[0].isPositive():
       GameLogic.video.source = GameLogic.vidSrc
       GameLogic.vidSrc.play()

2. Texture refresh script:

   obj = GameLogic.getCurrentController().getOwner()
   if hasattr(GameLogic, 'video'):
       GameLogic.video.refresh(True)

You can download this demo here:

  http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
  http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg
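The [<device_type>][:<standard>] rules above can be sketched as a small standalone helper. This is purely illustrative: resolve_capture_source is our name, the real parsing happens inside VideoFFmpeg itself, and only the mappings stated in this message (v4l -> /dev/video<capture>, dv1394 -> /dev/dv1394/<capture>, explicit paths passed through, defaults 'v4l' and 'ntsc') are modeled:

```python
def resolve_capture_source(file_param, capture):
    """Illustrative sketch of the file-parameter rules described above.

    file_param: '[<device_type>][:<standard>]' or an explicit driver path.
    capture: index of the capture device, starting from 0.
    Returns (driver_name, standard).
    """
    # Split off the optional ':<standard>' suffix.
    if ':' in file_param:
        device, standard = file_param.split(':', 1)
    else:
        device, standard = file_param, ''
    standard = standard or 'ntsc'   # default standard
    device = device or 'v4l'        # default device type
    # Construct the driver name from the device type.
    if device == 'v4l':
        driver = '/dev/video%d' % capture
    elif device == 'dv1394':
        driver = '/dev/dv1394/%d' % capture
    else:
        driver = device             # explicit driver name, used as-is
    return driver, standard
```

For instance, resolve_capture_source(':secam', 0) yields ('/dev/video0', 'secam'), matching the ':secam' example in the list above.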
Diffstat (limited to 'source/gameengine/VideoTexture/FilterSource.h')
-rw-r--r--  source/gameengine/VideoTexture/FilterSource.h  233
1 file changed, 233 insertions(+), 0 deletions(-)
diff --git a/source/gameengine/VideoTexture/FilterSource.h b/source/gameengine/VideoTexture/FilterSource.h
new file mode 100644
index 00000000000..b18186210dc
--- /dev/null
+++ b/source/gameengine/VideoTexture/FilterSource.h
@@ -0,0 +1,233 @@
+/* $Id$
+-----------------------------------------------------------------------------
+This source file is part of blendTex library
+
+Copyright (c) 2007 The Zdeno Ash Miklas
+
+This program is free software; you can redistribute it and/or modify it under
+the terms of the GNU Lesser General Public License as published by the Free Software
+Foundation; either version 2 of the License, or (at your option) any later
+version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+Place - Suite 330, Boston, MA 02111-1307, USA, or go to
+http://www.gnu.org/copyleft/lesser.txt.
+-----------------------------------------------------------------------------
+*/
+
+#if !defined FILTERSOURCE_H
+#define FILTERSOURCE_H
+
+#include "Common.h"
+
+#include "FilterBase.h"
+
+
+/// class for RGB24 conversion
+class FilterRGB24 : public FilterBase
+{
+public:
+ /// constructor
+ FilterRGB24 (void) {}
+ /// destructor
+ virtual ~FilterRGB24 (void) {}
+
+ /// get source pixel size
+ virtual unsigned int getPixelSize (void) { return 3; }
+
+protected:
+ /// filter pixel, source byte buffer
+ virtual unsigned int filter (unsigned char * src, short x, short y,
+ short * size, unsigned int pixSize, unsigned int val)
+ { return 0xFF000000 | src[0] << 16 | src[1] << 8 | src[2]; }
+};
+
+
+/// class for BGR24 conversion
+class FilterBGR24 : public FilterBase
+{
+public:
+ /// constructor
+ FilterBGR24 (void) {}
+ /// destructor
+ virtual ~FilterBGR24 (void) {}
+
+ /// get source pixel size
+ virtual unsigned int getPixelSize (void) { return 3; }
+
+protected:
+ /// filter pixel, source byte buffer
+ virtual unsigned int filter (unsigned char * src, short x, short y,
+ short * size, unsigned int pixSize, unsigned int val)
+ { return 0xFF000000 | src[2] << 16 | src[1] << 8 | src[0]; }
+};
+
+
+/// class for YV12 conversion
+class FilterYV12 : public FilterBase
+{
+public:
+ /// constructor
+ FilterYV12 (void) {}
+ /// destructor
+ virtual ~FilterYV12 (void) {}
+
+ /// get source pixel size
+ virtual unsigned int getPixelSize (void) { return 1; }
+
+ /// set pointers to color buffers
+ void setBuffs (unsigned char * buff, short * size)
+ {
+ unsigned int buffSize = size[0] * size[1];
+ m_buffV = buff + buffSize;
+ m_buffU = m_buffV + (buffSize >> 2);
+ m_pitchUV = size[0] >> 1;
+ }
+
+protected:
+ /// begin of V buffer
+ unsigned char * m_buffV;
+ /// begin of U buffer
+ unsigned char * m_buffU;
+ /// pitch for V & U buffers
+ short m_pitchUV;
+
+ /// interpolation function
+ int interpol (int a, int b, int c, int d)
+ { return (9 * (b + c) - a - d + 8) >> 4; }
+
+ /// common horizontal interpolation
+ int interpolH (unsigned char * src)
+ { return interpol(*(src-1), *src, *(src+1), *(src+2)); }
+
+ /// common vertical interpolation
+ int interpolV (unsigned char * src)
+ { return interpol(*(src-m_pitchUV), *src, *(src+m_pitchUV), *(src+2*m_pitchUV)); }
+
+ /// common joined vertical and horizontal interpolation
+ int interpolVH (unsigned char * src)
+ {
+ return interpol(interpolV(src-1), interpolV(src), interpolV(src+1),
+ interpolV(src+2));
+ }
+
+ /// is pixel on edge
+ bool isEdge (short x, short y, short * size)
+ { return x <= 1 || x >= size[0] - 4 || y <= 1 || y >= size[1] - 4; }
+
+ /// get the first parameter on the low edge
+ unsigned char * interParA (unsigned char * src, short x, short size, short shift)
+ { return x > 1 ? src - shift : src; }
+ /// get the third parameter on the high edge
+ unsigned char * interParC (unsigned char * src, short x, short size, short shift)
+ { return x < size - 2 ? src + shift : src; }
+ /// get the fourth parameter on the high edge
+ unsigned char * interParD (unsigned char * src, short x, short size, short shift)
+ { return x < size - 4 ? src + 2 * shift : x < size - 2 ? src + shift : src; }
+
+ /// horizontal interpolation on edges
+ int interpolEH (unsigned char * src, short x, short size)
+ {
+ return interpol(*interParA(src, x, size, 1), *src,
+ *interParC(src, x, size, 1), *interParD(src, x, size, 1));
+ }
+
+ /// vertical interpolation on edges
+ int interpolEV (unsigned char * src, short y, short size)
+ {
+ return interpol(*interParA(src, y, size, m_pitchUV), *src,
+ *interParC(src, y, size, m_pitchUV), *interParD(src, y, size, m_pitchUV));
+ }
+
+ /// joined vertical and horizontal interpolation on edges
+ int interpolEVH (unsigned char * src, short x, short y, short * size)
+ {
+ return interpol(interpolEV(interParA(src, x, size[0], 1), y, size[1]),
+ interpolEV(src, y, size[1]), interpolEV(interParC(src, x, size[0], 1), y, size[1]),
+ interpolEV(interParD(src, x, size[0], 1), y, size[1]));
+ }
+
+
+ /// filter pixel, source byte buffer
+ virtual unsigned int filter (unsigned char * src, short x, short y,
+ short * size, unsigned int pixSize, unsigned int val)
+ {
+ // V & U offset
+ long offset = (x >> 1) + m_pitchUV * (y >> 1);
+ // get modified YUV -> CDE: C = Y - 16; D = U - 128; E = V - 128
+ int c = *src - 16;
+ int d = m_buffU[offset] - 128;
+ int e = m_buffV[offset] - 128;
+ // if horizontal interpolation is needed
+ if ((x & 1) == 1)
+ // if vertical interpolation is needed too
+ if ((y & 1) == 1)
+ // if this pixel is on the edge
+ if (isEdge(x, y, size))
+ {
+ // get U & V from edge
+ d = interpolEVH(m_buffU + offset, x, y, size) - 128;
+ e = interpolEVH(m_buffV + offset, x, y, size) - 128;
+ }
+ // otherwise get U & V from inner range
+ else
+ {
+ d = interpolVH(m_buffU + offset) - 128;
+ e = interpolVH(m_buffV + offset) - 128;
+ }
+ // otherwise use horizontal interpolation only
+ else
+ // if this pixel is on the edge
+ if (isEdge(x, y, size))
+ {
+ // get U & V from edge
+ d = interpolEH(m_buffU + offset, x, size[0]) - 128;
+ e = interpolEH(m_buffV + offset, x, size[0]) - 128;
+ }
+ // otherwise get U & V from inner range
+ else
+ {
+ d = interpolH(m_buffU + offset) - 128;
+ e = interpolH(m_buffV + offset) - 128;
+ }
+ // otherwise if only vertical interpolation is needed
+ else if ((y & 1) == 1)
+ // if this pixel is on the edge
+ if (isEdge(x, y, size))
+ {
+ // get U & V from edge
+ d = interpolEV(m_buffU + offset, y, size[1]) - 128;
+ e = interpolEV(m_buffV + offset, y, size[1]) - 128;
+ }
+ // otherwise get U & V from inner range
+ else
+ {
+ d = interpolV(m_buffU + offset) - 128;
+ e = interpolV(m_buffV + offset) - 128;
+ }
+ // convert to RGB
+ // R = clip(( 298 * C + 409 * E + 128) >> 8)
+ // G = clip(( 298 * C - 100 * D - 208 * E + 128) >> 8)
+ // B = clip(( 298 * C + 516 * D + 128) >> 8)
+ int red = (298 * c + 409 * e + 128) >> 8;
+ if (red >= 0x100) red = 0xFF;
+ else if (red < 0) red = 0;
+ int green = 298 * c - 100 * d - 208 * e + 128;
+ if (green > 0x10000) green = 0xFF00;
+ else if (green < 0) green = 0;
+ int blue = (298 * c + 516 * d + 128) << 8;
+ if (blue > 0x1000000) blue = 0xFF0000;
+ else if (blue < 0) blue = 0;
+ // pack result; red, green and blue already sit in their target byte positions
+ return 0xFF000000 | (blue & 0xFF0000)
+ | (green & 0xFF00) | (red & 0xFF);
+ }
+};
+
+
+#endif
\ No newline at end of file
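For reference, the fixed-point conversion at the heart of FilterYV12::filter can be checked in isolation. This is a minimal Python sketch, not part of the patch; yuv_to_rgb and clip are our names, but the constants and the clamping follow the formulas quoted in the code's comments (R = clip((298*C + 409*E + 128) >> 8), etc., with C = Y - 16, D = U - 128, E = V - 128):

```python
def clip(v):
    """Clamp a channel value to the 8-bit range, as clip() in the comments."""
    return 0 if v < 0 else 255 if v > 255 else v

def yuv_to_rgb(y, u, v):
    """Fixed-point YUV -> RGB using the same constants as FilterYV12::filter."""
    c = y - 16    # modified luma
    d = u - 128   # modified U
    e = v - 128   # modified V
    r = clip((298 * c + 409 * e + 128) >> 8)
    g = clip((298 * c - 100 * d - 208 * e + 128) >> 8)
    b = clip((298 * c + 516 * d + 128) >> 8)
    return r, g, b
```

Note that the C++ code avoids some of the shifts by leaving blue and green in higher byte positions and masking; the arithmetic per channel is the same as above.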