Just as when fledgling 2-D graphics went from vector displays (Battlezone style) to raster displays in the 1980s, the limiting factor is how simply and quickly you can "rasterize" the vector representation.
So how simple or complex would it be to take the "vector" (or graphics primitives) of OpenSCAD style code and rasterize that into a 3-D voxel volume?
Maybe "slice" it somehow using the graphics card's 2-d rendering hardware and reading the pixels out of the graphics memory for each slice?
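To make the slice-by-slice idea concrete, here is a minimal CPU sketch in Python. All the names are illustrative, not real OpenSCAD or GPU APIs: each primitive is an inside/outside test, CSG combines the tests, and the voxel volume is built one z cross-section at a time, which is exactly where a GPU render-and-read-back of each slice would slot in.

```python
# Hypothetical sketch of slice-by-slice voxelization of OpenSCAD-style CSG.
# Function names (sphere, difference, voxelize) are made up for illustration.

def sphere(r):
    # Implicit primitive: True for points inside a sphere of radius r.
    return lambda x, y, z: x * x + y * y + z * z <= r * r

def difference(a, b):
    # CSG difference, in the spirit of OpenSCAD's difference() { a; b; }
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

def voxelize(solid, n, extent):
    """Sample `solid` on an n x n x n grid spanning [-extent, extent]^3,
    one z-slice at a time (the 'render a slice, read it back' idea)."""
    step = 2.0 * extent / n
    coords = [-extent + (i + 0.5) * step for i in range(n)]
    volume = []
    for z in coords:  # one "frame buffer" per build-plane height
        slice_ = [[solid(x, y, z) for x in coords] for y in coords]
        volume.append(slice_)
    return volume

# A hollow shell: 10 mm sphere minus a 6 mm sphere, at coarse resolution.
shape = difference(sphere(10.0), sphere(6.0))
vox = voxelize(shape, 32, 12.0)
filled = sum(v for sl in vox for row in sl for v in row)
```

At a real 0.1 mm resolution the inner loops would be the hot spot, which is why the rest of this thread is about pushing them onto SIMD or the GPU.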
It would definitely have to be 64-bit native if you are working with 3-D "frame buffers" that hold giga-voxels for even the most basic MakerBot-style build volume: (100 mm / 0.1 mm)^3 is a billion voxels.
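For scale, that giga-voxel arithmetic is easy to check, assuming a 100 mm cube sampled at 0.1 mm:

```python
# Back-of-the-envelope check of the giga-voxel claim.
side_mm = 100.0
voxel_mm = 0.1

voxels_per_side = int(side_mm / voxel_mm)  # 1000 voxels along each axis
total_voxels = voxels_per_side ** 3        # 10^9 voxels in the build volume

bytes_as_bitmask = total_voxels // 8       # 1 bit per voxel: 125,000,000 bytes
bytes_as_bytes = total_voxels              # 1 byte per voxel: 1 GB
```

Even the 125 MB bit-mask is beyond what a 32-bit process can comfortably juggle alongside working buffers, and one byte per voxel pushes a full gigabyte, hence the 64-bit requirement.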
If someone took the time to write the lowest levels in native assembly using the Intel/AMD SIMD (single instruction, multiple data) instructions, maybe it could be done well on the CPU. I love the idea of using the video card's hundreds of little processors, but I think that type of computing has a long way to go to get to where it is universal.
On Mon, Jul 25, 2011 at 3:59 AM, Brent Crosby <[hidden email]> wrote:
> I love the idea of using the video card's hundreds of little processors, but
> I think that type of computing has a long way to go to get to where it is
> universal.