General Discussions

GPUCHAIN – klumsy



Posts: 1061
From: Port Angeles, WA, USA
Registered: 10-25-2001
Just to keep some code talk going on here, here is some info/brainstorming on a project I am working on, related to the video jockey software whose development team I am part of:

This is a paste from a post I made somewhere else:

I'm here to brainstorm, and maybe to inspire some other developers to join a lofty cause. The goal:


The main idea: a chainable plugin framework, much like Freeframe (or visualjockey, or DirectShow), in which every stage is done within the GPU, with multiple inputs and an output (maybe multiple outputs). Every stage (plugin) is fully self-contained apart from its inputs, its outputs, and the general framework.


The name I'm not sure about. GPUCHAIN? SHADERFRAME? Does anybody dare to come up with a catchy and accurate name for it?


GPUFRAME will be abstracted from any hardware implementation. It isn't really a shader framework (though shaders are part of what can be done); rather, it's for anything that takes in inputs that are textures inside the graphics card and outputs (renders to) textures inside the graphics card. So GPUFRAME will cover both D3D and GL. Plugins will have self-describing metadata saying what their requirements are (GL or D3D), what level of hardware they require (pixel shader 1.3, 2.0, 3.0?), and whether they use any platform-specific calls (like WinAPI calls or such).
Though the framework supports both GL and D3D, only one will be usable at a time, because YOU CAN'T PASS a GL TEXTURE on the CARD to D3D or vice versa WITHOUT BRINGING IT BACK TO MAIN MEMORY, which is a costly operation (one some people may choose to do anyway, especially once PCI Express is out). I personally will be starting on D3D and the D3D helper library, but I will design with OpenGL in mind and hope that GL developers jump on board as well.
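To make the metadata idea concrete, here is a minimal sketch of what a plugin's self-describing requirements block and the host's compatibility check could look like. All names here (`PluginInfo`, `hostCanLoad`, the fields) are hypothetical illustrations, not a published GPUCHAIN interface:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: each plugin declares which graphics API it was built
// against, the minimum pixel-shader level it needs, and whether it makes
// platform-specific (e.g. Win32-only) calls.
enum class GfxApi { Direct3D, OpenGL };

struct PluginInfo {
    std::string name;
    GfxApi      api;               // D3D or GL: only one API per active chain
    float       minPixelShader;    // e.g. 1.3f, 2.0f, 3.0f
    bool        usesPlatformCalls; // true if the plugin calls e.g. WinAPI
};

// The host loads a plugin only if its API matches the rest of the chain
// (textures can't cross the GL/D3D boundary without a round trip through
// main memory) and the device's shader level is high enough.
bool hostCanLoad(const PluginInfo& p, GfxApi activeApi, float deviceShaderLevel) {
    return p.api == activeApi && p.minPixelShader <= deviceShaderLevel;
}
```

The same check is where a host would also refuse platform-specific plugins on the wrong OS.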

Would this be for shaders? Yes and no. One of the biggest uses will be pixel shaders, where one or more input textures are transformed by pixel shader code and rendered to an output texture (which is then used by the next plugin in the chain). However, certain effects are good as pure pixel shaders, while others are better served by geometry etc. So this will be for SELF-CONTAINED GPU EFFECTS which take in one or more inputs (TEXTURES) and output ONE (possibly more?) TEXTURES. Internally a plugin can use whatever GPU methods it likes: pixel shaders, geometry, vertex shaders, or combinations. For example, a distortion effect is best produced not with pixel shaders (alone) but with a mesh/grid and moved vertices (maybe via a vertex shader); it is still self-contained, though: a 3D scene that is internal to the plugin/effect.
What version of shaders do we support? I think it will just be the latest, meaning it won't work on anything but the latest generation of hardware, mostly because previous generations of pixel shaders weren't really powerful enough for anything beyond simple 3D shading effects, and not really for our "software 2D in GPU" effects.
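The chaining idea above can be sketched in a few lines: every stage reads texture handles that already live on the card and renders into another on-card texture, which the next stage consumes. Handles are plain ints here for illustration; a real host would hold D3D or GL texture objects, and `Stage`/`runChain` are invented names, not part of any real framework:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Illustrative sketch: a texture handle standing in for an on-card texture.
using TexHandle = int;

struct Stage {
    std::vector<TexHandle> inputs; // extra inputs besides the chained one
    // render() reads its input textures and returns the handle of the
    // texture it rendered into; nothing round-trips to main memory.
    std::function<TexHandle(const std::vector<TexHandle>&)> render;
};

// Walk the chain: each stage's output texture feeds the next stage.
TexHandle runChain(const std::vector<Stage>& chain, TexHandle source) {
    TexHandle current = source;
    for (const Stage& s : chain) {
        std::vector<TexHandle> ins = s.inputs;
        ins.insert(ins.begin(), current); // previous stage feeds the next
        current = s.render(ins);
    }
    return current; // final texture, ready for rendering to screen
}
```

The point of the sketch is only the data flow: the host never sees pixel data, just handles passing from stage to stage.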

Software inputs
Can you combine with software? For INPUTS you can: maybe put a Freeframe plugin as the input to a GPUCHAIN plugin. However, all outputs remain on the card, to be used by the next PLUGIN or for final rendering to screen.

Technical Design
Here is some quick brainstorming of features.

1) Library - there will be a library DLL with a lot of helper/framework functionality that the plugins can use
2-1) each plugin can specify what its inputs are (how many)
2-2) each plugin can DYNAMICALLY specify whether or not an input is needed at a particular frame (thus not having to render up-chain if not needed)
2-3) each plugin can specify the resolution of a particular input (either letting the system decide, specifying an exact X,Y resolution, specifying an aspect ratio, or specifying an X and Y percentage of the output resolution)
3-1) like Freeframe there will be boolean and floating-point input variables, but also integer variables, and maybe even strings? (for text effects)
4-1) there will be an output variable section, where a plugin can register output variables (in a group that it names) and set them as it wishes; the host engine will keep track of these and expose them to its app, consuming them as it wishes or allowing the user to map them to other plugins' input variables
5-1) often plugins require configuration to be used in a flexible and powerful way. I don't know the best way to do this, though: configuration screens would limit the application a lot and tie it to a specific UI model, and I don't want that. So probably there won't be any configuration screens, but maybe some XML or similar configuration files, or some sort of configuration variables communicated between plugin/engine and host
6) constructor/destructor to take care of basics
7) initialise method - called when the host initialises the plugin; this is where a lot of the setup happens
8) render method - called when the host wants the plugin to do its business
9) what about different texture formats? 16, 32 bit etc.?
10) what kind of shader techniques - HLSL or shader ASM etc.? Well, since that is internal to the plugin, the author can choose, but the helper library will have a lot of helper functions to get you on your way; HLSL will probably be the best
11) DirectShow output to inputs for our textures
12) specific 'transition' plugins..
13) built-in copyright tools/library that commercial plugins can use?
14) GPU power-of-two texture resolution constraints
15) ASYNC/SYNC issues
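Points 6-8 and 14 above can be sketched as a tiny plugin lifecycle interface plus the render-target sizing helper a host would need on power-of-two-only hardware. Everything here (`GpuPlugin`, `nextPowerOfTwo`, the method names) is an illustrative assumption, not a defined GPUCHAIN API:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical plugin lifecycle: construction handles the basics (point 6),
// initialise() does the heavy setup (point 7), render() does the per-frame
// work (point 8), and isInputNeeded() lets the host skip rendering an
// input up-chain on frames where it isn't used (point 2-2).
class GpuPlugin {
public:
    virtual ~GpuPlugin() = default;
    virtual bool initialise(int outputWidth, int outputHeight) = 0;
    virtual void render(int frame) = 0;
    virtual bool isInputNeeded(int inputIndex, int frame) const { return true; }
};

// Point 14: older GPUs only accept power-of-two texture dimensions, so a
// host allocating render targets would round requested sizes up.
std::uint32_t nextPowerOfTwo(std::uint32_t n) {
    std::uint32_t p = 1;
    while (p < n) p <<= 1;
    return p;
}
```

So a 640x480 output would land in a 1024x512 texture on such hardware, with the extra area wasted but the chain still working.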

This is just brainstorm #2.

A lot of this design is going to change as we progress with proof-of-concept apps, discover new opportunities/limitations, etc.

There are still a lot more issues, but that is a start. Other ideas and thoughts are welcome. Who is keen on being a part of this?

In many ways this is probably similar to Apple's [URL= ]CoreVideo[/URL], but hopefully even better.

Visionary Media
the creative submitted to the divine.
Husband of my amazing wife Aleshia