FPGA for 3D rendering / modeling

I'm a seasoned C#/.NET developer (not that it really matters, since FPGA looks like another level of complexity entirely). My C# isn't perfect; I still occasionally have to look things up (not very often, though I do struggle with some syntactic/advanced concepts). Still, my boss works with FPGAs and has encouraged me to get involved, which frankly surprises me, since I'm a fairly junior developer and this is a complex technology.

So my question is: what's the best way to learn FPGAs? Collecting books, etc.?

I'm considering scalable 3D modeling and rendering (ideally in a Windows application where the user expects an instant response). CUDA is popular, but according to my boss it isn't as fast.

Is FPGA the way to go for such a project?

Thanks

+2




4 answers


To be honest, I think your boss is wrong. NVIDIA and AMD sell real silicon hardware designed for accelerated 3D rendering. Unless your particular problem is one that doesn't fit the existing shader/CUDA paradigms, a configurable hardware device can't compete. This is for the same reason that even the best FPGA-based processors (Xilinx MicroBlaze, Altera Nios) are toys compared to even low-end embedded ARM cores. (Often useful toys, mind you, but not competitive, except in designs with otherwise unused FPGA gate space.)



But I can definitely recommend learning FPGA and HDL programming. This is one area where “collecting books” doesn't really help you. What you need to do is get a cheap development board (there are many on the market in the $100–200 US range), download a suitable toolchain, and start writing and testing code.
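
If it helps to see what "writing and testing code" looks like in an HDL, here is a minimal Verilog sketch (the module, names, and widths are purely illustrative, not something from this answer): a trivial combinational adder plus a simulation-only testbench that you can run in a free simulator such as Icarus Verilog before you ever touch a board.

    `timescale 1ns/1ns

    // adder4.v - a trivial combinational module, just to show the shape of HDL code
    module adder4 (
        input  wire [3:0] a,
        input  wire [3:0] b,
        output wire [4:0] sum
    );
        assign sum = a + b;   // becomes LUT/carry logic, not a function call
    endmodule

    // adder4_tb.v - simulation-only testbench; this part never goes onto the FPGA
    module adder4_tb;
        reg  [3:0] a, b;
        wire [4:0] sum;

        adder4 dut (.a(a), .b(b), .sum(sum));

        initial begin
            a = 4'd3;  b = 4'd4;  #10;
            $display("3 + 4 = %d", sum);
            a = 4'd15; b = 4'd1;  #10;
            $display("15 + 1 = %d", sum);
            $finish;
        end
    endmodule

With Icarus Verilog that is roughly "iverilog -o adder_tb adder4.v adder4_tb.v && vvp adder_tb"; the board vendor's toolchain (ISE/Vivado, Quartus) takes care of the synthesis side.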

+7




Why not learn how to use the hardware acceleration that comes with modern PCs today? I'd bet that using OpenGL or DirectX (whatever it is called these days) with hardware acceleration would work better.

My guess is that if your application is going to run on some special embedded device, building your own hardware might make sense, but for PC applications it is probably too expensive and offers little or no advantage over software solutions that have already had an insane amount of performance tuning put into them.



My opinion: take advantage of all the work that has already gone into 3D game technology.

0




As Andy Ross says, I don't believe an FPGA is the way you want to go for this type of problem; you would also need to wire it up to the PC in some way.

I would start by getting a dev kit and playing around with it. Make an LED blink; I have always found this to be the hardest part when starting with a new embedded device o_O. Then receive some form of message over whatever interface (RS232/TCP) is available on the dev board. Then implement some math function that takes parameters and passes the results back over that link.
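
For the LED step, a minimal Verilog sketch might look something like this (the 50 MHz clock, counter width, and pin polarity are assumptions; check your board's documentation for the real values):

    // blink.v - divide the board clock down until the LED toggles about twice per second
    module blink (
        input  wire clk,   // assumed 50 MHz oscillator on the dev board
        output reg  led    // assumed active-high LED pin
    );
        reg [24:0] counter = 25'd0;

        initial led = 1'b0;

        always @(posedge clk) begin
            if (counter == 25'd24_999_999) begin  // ~0.5 s at 50 MHz
                counter <= 25'd0;
                led     <= ~led;                  // toggle -> roughly 1 Hz blink
            end else begin
                counter <= counter + 25'd1;
            end
        end
    endmodule

The pin assignments themselves live in the vendor toolchain (a UCF/XDC file for Xilinx, a QSF file for Altera/Quartus), not in the Verilog.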

0




Well, scalable 3D rendering on an FPGA. How would you approach it? FPGAs are great for scaling a classic SIMD architecture to your needs (or constraints); with enough parallelism you can reach acceptable performance even at 100 MHz. The main limitation, in my opinion, is memory bandwidth and speed. Remember, you also need a display controller that can make use of the data you spit out. You would essentially be building all of the hardware yourself, which is a daunting task. Are you sure you can design a SIMD processor capable of 3D rendering? What would your hardware design look like?
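
To make the "scaling a SIMD architecture" point concrete, here is a rough Verilog sketch (purely illustrative; the module, widths, and lane count are my own assumptions, and it is nowhere near a renderer): N identical multiply-accumulate lanes replicated with a generate loop. The fabric lets you grow LANES until you run out of logic, but keeping all the lanes fed with data is exactly the memory-bandwidth problem mentioned above.

    // simd_mac.v - illustrative only: LANES identical multiply-accumulate units in parallel
    module simd_mac #(
        parameter LANES = 8,    // grow this until fabric or memory bandwidth runs out
        parameter WIDTH = 16
    ) (
        input  wire                     clk,
        input  wire                     en,
        input  wire [LANES*WIDTH-1:0]   a,     // LANES operands packed side by side
        input  wire [LANES*WIDTH-1:0]   b,
        output wire [LANES*2*WIDTH-1:0] acc    // one running accumulator per lane
    );
        genvar i;
        generate
            for (i = 0; i < LANES; i = i + 1) begin : lane
                reg [2*WIDTH-1:0] acc_r = 0;
                always @(posedge clk)
                    if (en)
                        acc_r <= acc_r + a[i*WIDTH +: WIDTH] * b[i*WIDTH +: WIDTH];
                assign acc[i*2*WIDTH +: 2*WIDTH] = acc_r;
            end
        endgenerate
    endmodule

Every lane does its multiply-accumulate on every clock edge, so the throughput scales linearly with LANES; the hard part left out of this sketch is streaming operands in and results out fast enough.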

As many others have pointed out ITT, Nvidia's CUDA is a great alternative, and the new Fermi architecture looks promising. But if you're looking for something inexpensive, small, and low-power, I wouldn't recommend CUDA. I'm sure it's great for the task, but if your task has wheels and a battery, it gets complicated.

I think a task better suited to FPGAs than graphics is biological computation, a problem space that needs even more parallelism than graphics.

0








