author    | Campbell Barton <ideasman42@gmail.com> | 2012-06-14 03:31:47 +0400
committer | Campbell Barton <ideasman42@gmail.com> | 2012-06-14 03:31:47 +0400
commit    | bde7e6c96b9e180b293ee6e49ab813a30fac0635 (patch)
tree      | cf9f09aa0b3bb0528553546674269f0e5e96bd93 /source/blender/compositor/COM_compositor.h
parent    | 906b9e0584b93094b1c45514fbf6fd8c62e6d015 (diff)
style cleanup: node headers
Diffstat (limited to 'source/blender/compositor/COM_compositor.h')
-rw-r--r-- | source/blender/compositor/COM_compositor.h | 486
1 file changed, 243 insertions, 243 deletions
diff --git a/source/blender/compositor/COM_compositor.h b/source/blender/compositor/COM_compositor.h
index b33a48464e1..4789fed2efd 100644
--- a/source/blender/compositor/COM_compositor.h
+++ b/source/blender/compositor/COM_compositor.h
@@ -21,256 +21,256 @@
 */

The hunk re-indents the Doxygen comment block below; apart from whitespace its content is unchanged, so the block is shown once:

#ifdef __cplusplus
extern "C" {
#endif

#include "DNA_node_types.h"

/**
 * @defgroup Model The data model of the compositor
 * @defgroup Memory The memory management stuff
 * @defgroup Execution The execution logic
 * @defgroup Conversion Conversion logic
 * @defgroup Node All nodes of the compositor
 * @defgroup Operation All operations of the compositor
 *
 * @mainpage Introduction of the Blender Compositor
 *
 * @section bcomp Blender compositor
 * This project redesigns the internals of Blender's compositor. The project was executed in 2011 by At Mind,
 * a technology company located in Amsterdam, The Netherlands.
 * The project was crowdfunded. This code has been released under GPL2 to be used in Blender.
 *
 * @section goals The goals of the project
 * The new compositor has two goals:
 * - Make a faster compositor (speed of calculation)
 * - Make the compositor work faster for you (workflow)
 *
 * @section speed Faster compositor
 * The speedup comes from making better use of the hardware Blender runs on. The previous compositor
 * used a single-threaded model to calculate a node; the only exception to this was the Defocus node.
 * A second thread was only used when two full nodes could be calculated in parallel.
 * Current workstations have 8-16 threads available, and most of the time these are idle.
 *
 * The new compositor aims to use as many threads as possible. Even OpenCL-capable GPU hardware can be
 * used for calculation.
 *
 * @section workflow Work faster
 * The previous compositor only showed the final image, so the user could wait a long time before seeing
 * the result of their work. The new compositor focuses on getting information back to the user:
 * it prioritises its work to give earlier user feedback.
 *
 * @page memory Memory model
 * The main issue is the type of memory model to use. Blender is used by consumers and professionals,
 * on machines ranging from low-end to very high-end.
 * The system should work on both high-end and low-end machines.
 *
 * @page executing Executing
 * @section prepare Prepare execution
 *
 * During the preparation of the execution, every ReadBufferOperation receives an offset.
 * This offset is used during execution as an optimization trick.
 * Next, all operations are initialized for execution (@see NodeOperation.initExecution),
 * then all ExecutionGroups are initialized for execution (@see ExecutionGroup.initExecution).
 * All of this is controlled from @see ExecutionSystem.execute.
 *
 * @section priority Render priority
 * Render priority is the priority of an output node. A user needs different render priorities for
 * output nodes during rendering than during editing:
 * for example, the active ViewerNode has top priority during editing, but during rendering the CompositeNode has.
 * Every NodeOperation has a render-priority setting, but it only has effect for output NodeOperations.
 * In ExecutionSystem.execute all priorities are walked; for every priority the ExecutionGroups are
 * checked for a matching priority, and on a match the ExecutionGroup is executed (this happens serially).
 *
 * @see ExecutionSystem.execute control of the render priority
 * @see NodeOperation.getRenderPriority receive the render priority
 * @see ExecutionGroup.execute the main loop to execute a whole ExecutionGroup
 *
 * @section order Chunk order
 *
 * When an ExecutionGroup is executed, the order of the chunks is determined first.
 * The settings are stored in the ViewerNode inside the ExecutionGroup.
 * ExecutionGroups that have no ViewerNode use a default one.
 * There are several possible chunk orders:
 * - [@ref OrderOfChunks.COM_TO_CENTER_OUT]: Start calculating from a configurable point and order by nearest chunk
 * - [@ref OrderOfChunks.COM_TO_RANDOM]: Randomize all chunks
 * - [@ref OrderOfChunks.COM_TO_TOP_DOWN]: Start calculation from the bottom to the top of the image
 * - [@ref OrderOfChunks.COM_TO_RULE_OF_THIRDS]: Experimental order based on 9 hotspots in the image
 *
 * When the chunk order is determined, the first few chunks are checked for whether they can be scheduled.
 * Chunks can have three states:
 * - [@ref ChunkExecutionState.COM_ES_NOT_SCHEDULED]: Chunk is not yet scheduled, or its dependencies are not met
 * - [@ref ChunkExecutionState.COM_ES_SCHEDULED]: All dependencies are met, the chunk is scheduled, but not finished
 * - [@ref ChunkExecutionState.COM_ES_EXECUTED]: Chunk is finished
 *
 * @see ExecutionGroup.execute
 * @see ViewerBaseOperation.getChunkOrder
 * @see OrderOfChunks
 *
 * @section interest Area of interest
 * An ExecutionGroup can have dependencies on other ExecutionGroups. Data passed from one ExecutionGroup
 * to another is stored in 'chunks'.
 * If not all input chunks are available, the chunk execution will not be scheduled.
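The scheduling rule just stated (a chunk is only scheduled once all of its input chunks are available) can be sketched in C++. This is a minimal illustration, not Blender's actual code; the enum simply mirrors the ChunkExecutionState values listed above, and `can_schedule` is an invented helper name:

```cpp
#include <vector>

// Mirrors the three ChunkExecutionState values described above.
enum class ChunkExecutionState { NotScheduled, Scheduled, Executed };

// A chunk may only be scheduled once every chunk it depends on has been
// executed, i.e. all of its input chunks are available.
bool can_schedule(const std::vector<ChunkExecutionState> &input_chunks)
{
  for (ChunkExecutionState state : input_chunks) {
    if (state != ChunkExecutionState::Executed) {
      return false;  // dependencies not met: chunk stays NOT_SCHEDULED
    }
  }
  return true;
}
```

A chunk whose dependencies are only COM_ES_SCHEDULED (in flight, not finished) is therefore still not schedulable; this is what forces the area-of-interest negotiation described next.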
 * <pre>
 * +-------------------------------------+       +--------------------------------------+
 * | ExecutionGroup A                    |       | ExecutionGroup B                     |
 * | +----------------+ +-------------+ |       | +------------+ +-----------------+   |
 * | | NodeOperation a| | WriteBuffer | |       | | ReadBuffer | | ViewerOperation |   |
 * | | |              *==* Operation  | |       | | Operation  *===*               |   |
 * | | |              | |             | |       | |            | |                 |   |
 * | +----------------+ +-------------+ |       | +------------+ +-----------------+   |
 * |                                |    |       |    |                                 |
 * +--------------------------------|----+       +----|---------------------------------+
 *                                  |                 |
 *                                  |                 |
 *                            +---------------------------+
 *                            | MemoryProxy               |
 *                            | +----------+ +---------+  |
 *                            | | Chunk a  | | Chunk b |  |
 *                            | |          | |         |  |
 *                            | +----------+ +---------+  |
 *                            |                           |
 *                            +---------------------------+
 * </pre>
 *
 * In the above example ExecutionGroup B has an output operation (ViewerOperation) and is being executed.
 * The first chunk is evaluated [@ref ExecutionGroup.scheduleChunkWhenPossible],
 * but not all input chunks are available. The relevant ExecutionGroup (the one that can calculate the
 * missing chunks; ExecutionGroup A) is asked to calculate the area ExecutionGroup B is missing
 * [@ref ExecutionGroup.scheduleAreaWhenPossible].
 * ExecutionGroup B checks what chunks the area spans and tries to schedule these chunks.
 * If all input data is available, these chunks are scheduled [@ref ExecutionGroup.scheduleChunk].
 *
 * <pre>
 * +-------------------------+         +----------------+   +----------------+
 * | ExecutionSystem.execute |         | ExecutionGroup |   | ExecutionGroup |
 * +-------------------------+         | (B)            |   | (A)            |
 *             O                       +----------------+   +----------------+
 *             O                               |                    |
 *             O     ExecutionGroup.execute    |                    |
 *             O------------------------------>O                    |
 *             .                               O                    |
 *             .                               O-------\            |
 *             .                               .       | ExecutionGroup.scheduleChunkWhenPossible
 *             .                               .  O----/ (*)        |
 *             .                               .  O                 |
 *             .                               .  O                 |
 *             .                               .  O  ExecutionGroup.scheduleAreaWhenPossible
 *             .                               .  O---------------->O
 *             .                               .  .                 O----------\ ExecutionGroup.scheduleChunkWhenPossible
 *             .                               .  .                 .          | (*)
 *             .                               .  .                 .  O-------/
 *             .                               .  .                 .  O
 *             .                               .  .                 .  O
 *             .                               .  .                 .  O-------\ ExecutionGroup.scheduleChunk
 *             .                               .  .                 .  .       |
 *             .                               .  .                 .  .  O----/
 *             .                               .  .                 .  O<=O
 *             .                               .  .                 O<=O
 *             .                               .  .                 O
 *             .                               .  O<================O
 *             .                               .  O                 |
 *             .                               O<=O                 |
 *             .                               O                    |
 *             .                               O                    |
 * </pre>
 *
 * This happens until all chunks of ExecutionGroup B have finished executing or the user aborts the process.
 *
 * A NodeOperation like the ScaleOperation can influence the area of interest by reimplementing the
 * [@ref NodeOperation.determineAreaOfInterest] method:
 *
 * <pre>
 * +--------------------------+            +---------------------------------+
 * | ExecutionGroup A         |            | ExecutionGroup B                |
 * |                          |            |                                 |
 * +--------------------------+            +---------------------------------+
 * Needed chunks from ExecutionGroup A     |  Chunk of ExecutionGroup B (to be evaluated)
 *  +-------+ +-------+                    |               +--------+
 *  |Chunk 1| |Chunk 2|    +----------------+              |Chunk 1 |
 *  |       | |       |    | ScaleOperation |              |        |
 *  +-------+ +-------+    +----------------+              +--------+
 *
 *  +-------+ +-------+
 *  |Chunk 3| |Chunk 4|
 *  |       | |       |
 *  +-------+ +-------+
 * </pre>
 *
 * @see ExecutionGroup.execute executes a complete ExecutionGroup; halts until finished or aborted by the user
 * @see ExecutionGroup.scheduleChunkWhenPossible tries to schedule a single chunk,
 *      checks if all input data is available; can trigger dependent chunks to be calculated
 * @see ExecutionGroup.scheduleAreaWhenPossible tries to schedule an area; this can be multiple chunks
 *      (called from [@ref ExecutionGroup.scheduleChunkWhenPossible])
 * @see ExecutionGroup.scheduleChunk schedules a chunk on the WorkScheduler
 * @see NodeOperation.determineDependingAreaOfInterest influences the area of interest of a chunk
 * @see WriteBufferOperation NodeOperation to write to a MemoryProxy/MemoryBuffer
 * @see ReadBufferOperation NodeOperation to read from a MemoryProxy/MemoryBuffer
 * @see MemoryProxy proxy for information about a memory image (an image consists of multiple chunks)
 * @see MemoryBuffer allocated memory for a single chunk
 *
 * @section workscheduler WorkScheduler
 * The WorkScheduler is implemented as a static class. Its responsibility is to balance
 * WorkPackages over the available and free devices.
 * The WorkScheduler can work in two modes; switching between them requires recompiling Blender.
 *
 * @subsection multithread Multi threaded
 * By default the WorkScheduler places all work as WorkPackages in a queue.
 * For every CPU core a worker thread is created. These worker threads ask the WorkScheduler
 * for work for a specific Device;
 * the WorkScheduler finds work for the device, and the device is asked to execute the WorkPackage.
 * @subsection singlethread Single threaded
 * For debugging, multi-threading can be disabled by changing COM_CURRENT_THREADING_MODEL
 * to COM_TM_NOTHREAD. The WorkScheduler is then compiled without threading support and runs
 * everything on the CPU.
 *
 * @section devices Devices
 * A Device, within the compositor context, is a hardware component that can be used to calculate chunks.
 * Such a chunk is encapsulated in a WorkPackage.
 * The WorkScheduler controls the devices and selects the device where a WorkPackage will be calculated.
 *
 * @subsection WS_Devices Workscheduler
 * The WorkScheduler controls all Devices.
 * When initializing the compositor, the WorkScheduler selects
 * all devices that will be used during compositing.
 * There are two types of Devices: CPUDevice and OpenCLDevice.
 * When an ExecutionGroup schedules a chunk, the schedule method of the WorkScheduler is called.
 * The WorkScheduler determines whether the chunk can run on an OpenCLDevice
 * (and whether an OpenCLDevice is available). If so, the chunk is added to the work-list for
 * OpenCLDevices; otherwise it is added to the work-list of CPUDevices.
 *
 * A thread reads the work-list and sends a WorkPackage to its device.
 *
 * @see WorkScheduler.schedule method that is called to schedule a chunk
 * @see Device.execute method called to execute a chunk
 *
 * @subsection CPUDevice CPUDevice
 * When a CPUDevice gets a WorkPackage, the device fetches the input buffer needed to calculate
 * the chunk. Allocation has already been done by the ExecutionGroup.
 * The output buffer of the chunk is created, and the OutputOperation of the ExecutionGroup is
 * called to execute the area of the output buffer.
 *
 * @see ExecutionGroup
 * @see NodeOperation.executeRegion executes a single chunk of a NodeOperation
 * @see CPUDevice.execute
 *
 * @subsection GPUDevice OpenCLDevice
 *
 * To be completed!
 * @see NodeOperation.executeOpenCLRegion
 * @see OpenCLDevice.execute
 *
 * @section executePixel executing a pixel
 * Finally the last step, the node functionality :)
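As an illustration of what "node functionality" means at this level, here is a sketch of a per-pixel operation in the spirit of the compositor's executePixel step. InvertOperation and its signature are invented for the example; they are not the actual NodeOperation API:

```cpp
// Hypothetical per-pixel node functionality: an invert operation that
// fills one RGBA output pixel from one input pixel. In the real
// compositor the input would come from an upstream socket reader and the
// device would call this for every pixel of a chunk.
struct InvertOperation {
  void executePixel(float output[4], const float input[4]) const
  {
    output[0] = 1.0f - input[0];  // R
    output[1] = 1.0f - input[1];  // G
    output[2] = 1.0f - input[2];  // B
    output[3] = input[3];         // alpha passes through unchanged
  }
};
```

A CPUDevice executing a chunk would loop such a function over the chunk's area, writing into the chunk's output MemoryBuffer.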
 *
 * @page newnode Creating new nodes
 */

/**
 * @brief The main method that is used to execute the compositor tree.