renderXF is a realtime, high-performance software renderer written in C#. Thanks to parallelization, direct memory pointers, bitwise operations, P/Invoke, stack allocation, and many more tricks, renderXF can handle lots of triangles (350k @ ~6 ms vertex processing). Its newer, faster, and more powerful C++ variant, XFDraw, can be found here.
renderXF has also been designed to be as simple and user-friendly as possible. Because of this, most of the code is composed of very simple commands, and buffer, shader, and framebuffer initialization is also straightforward. A Wiki is also available to help with a quick start.
This software renderer is primarily focused on CAD, thanks to its buffer-caching capability. The current demo uses STL files. Unfortunately, it is currently pixel fill-rate limited.
- Fully programmable fragment shader
- Partially programmable vertex shader
- Built in camera transforms
- Hardcoded performance features
- Screenspace shaders
- Direct blit (No bitmaps required)
- GDI+ Interoperability (blit bitmaps onto the drawbuffer)
- Draw and Depth Buffer caching
- Simple Shader Code
- Correct perspective interpolation
- Very simple code (Easily portable into C++)
- Drawing and depth buffer pointers available
First, a shader has to be declared with its type and attribute settings.
Shader myShader = new Shader(VertexShader, FragmentShader, GLRenderMode.Line, GLExtraAttributeData.None);
For performance reasons, renderXF has its own built-in camera position and rotation transformation system. It can be disabled via the SetOverrideCameraTransform() method; however, the XYZ-to-XY transform system itself is not programmable.
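As a minimal sketch, switching to a manual transform could look like this. The no-argument call is an assumption based on the method name shown above; check the Wiki for the exact signature.

```csharp
//Assumption: SetOverrideCameraTransform() takes no arguments.
//After this call, the user-supplied vertex shader is responsible for positioning.
GL.SetOverrideCameraTransform();
```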
unsafe void VertexShader(float* OUT, float* IN, int FaceIndex)
{
    //Scale the input vertex and offset it by the light position
    OUT[0] = IN[0] * 50f + lightPosition.x;
    OUT[1] = IN[1] * 50f + lightPosition.y;
    OUT[2] = IN[2] * 50f + lightPosition.z;
}
renderXF does not actually require a vertex shader unless the manual camera transform is selected. This is in contrast with Gouraud mode, which requires only a vertex shader.
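A declaration for the Gouraud case might look like the sketch below. Note that the GLRenderMode.Gouraud member name and the omitted fragment shader are assumptions, since only GLRenderMode.Line appears in this README; verify both against the Wiki.

```csharp
//Assumed enum member and null fragment shader - verify against the Wiki
Shader gouraudShader = new Shader(VertexShader, null, GLRenderMode.Gouraud, GLExtraAttributeData.None);
```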
The fragment shader gives the user a byte pointer to the BGR pixel it will be setting, and an attribute pointer that first yields the interpolated vertex data, followed by the extra attributes.
unsafe void FragmentShader(byte* BGR, float* Attributes, int FaceIndex)
{
BGR[0] = (byte)(127.5f + 127.5f * Attributes[0]); //Blue Color
BGR[1] = (byte)(127.5f + 127.5f * Attributes[1]); //Green Color
BGR[2] = (byte)(127.5f + 127.5f * Attributes[2]); //Red Color
}
Example of screen-space shader code:
unsafe void VignettePass(byte* BGR, int posX, int posY)
{
    //Map pixel coordinates into the [-1, 1] range
    float X = (2f * posX / renderWidth) - 1f;
    float Y = (2f * posY / renderHeight) - 1f;
    //Combine the horizontal and vertical falloff into one factor
    X = 1f - 0.5f * X * X;
    Y = X * (1f - 0.5f * Y * Y);
    //Darken the pixel toward the screen edges
    BGR[0] = (byte)(BGR[0] * Y);
    BGR[1] = (byte)(BGR[1] * Y);
    BGR[2] = (byte)(BGR[2] * Y);
}
In use:
Shader VignetteShader = new Shader(VignettePass);
//Then during each frame, select and execute the shader
GL.SelectShader(VignetteShader);
GL.Pass();
The best-case scenario for this renderer is CAD applications, where frames are not rendered continuously. The software renderer is fast, but continuous rendering will drive CPU temperatures high. For better performance and less CPU fan noise, the faster C++ version (XFDraw) should probably be used.
Both the drawing and depth buffers can be saved. This can hugely improve performance, since only what is moving needs to be re-rendered.
//Initialize renderX and the Cached Buffer
renderX GL = new renderX(1920, 1080, this.Handle);
GLCachedBuffer cachedBuffer = new GLCachedBuffer(GL);
//Tell renderX to create a copy of the drawing and depth buffer upon draw
GL.CreateCopyOnDraw(cachedBuffer);
//Draw the object
GL.Draw();
//During the next frame, just recopy the old buffers:
//Copy with depth test:
GL.CopyFromCache(cachedBuffer, CopyMethod.SplitLoopDepthTest);
//Copy without depth test:
GL.CopyFromCache(cachedBuffer, CopyMethod.SplitLoop);
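Putting the pieces together, a per-frame routine might look like the sketch below. GL.Blit() is an assumed name for the direct-blit call, and the geometryChanged flag is illustrative; everything else uses the calls shown above.

```csharp
//Hypothetical per-frame routine - GL.Blit() and geometryChanged are assumptions
void RenderFrame(bool geometryChanged)
{
    if (geometryChanged)
    {
        //Geometry moved: re-draw and refresh the cached buffers
        GL.SelectShader(myShader);
        GL.CreateCopyOnDraw(cachedBuffer);
        GL.Draw();
    }
    else
    {
        //Nothing moved: restore the cached draw and depth buffers instead
        GL.CopyFromCache(cachedBuffer, CopyMethod.SplitLoopDepthTest);
    }

    //Apply the screen-space vignette, then present the frame
    GL.SelectShader(VignetteShader);
    GL.Pass();
    GL.Blit();
}
```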