Showing results for tags 'gige'.
Found 3 results
Basically I was wondering if it is possible to connect an Allied Vision GigE camera to my ZedBoard. I am using the Mako G030B model, and I was hoping to process the camera data by running it through some algorithms on the FPGA. I was intending to run the Ethernet cable directly from the camera to the board, but if there is a better input path, please let me know! If you need any additional information, please ask!
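Before picking an input path, it is worth checking whether the camera's full-rate stream even fits in a gigabit link. A minimal sketch of that arithmetic follows; the resolution and frame-rate figures are assumptions based on Allied Vision's published numbers for the Mako G-030 family (644x484 at roughly 309 fps, Mono8), so verify them against the datasheet for your exact model.

```python
# Back-of-the-envelope check: does the Mako G-030's full-rate stream
# fit within raw Gigabit Ethernet bandwidth?
# NOTE: resolution and frame rate below are assumed from Allied Vision's
# published Mako G-030 specs; confirm against your camera's datasheet.

width, height = 644, 484       # assumed sensor resolution (pixels)
fps = 309                      # assumed max frame rate at full resolution
bytes_per_pixel = 1            # Mono8 pixel format

payload_bps = width * height * bytes_per_pixel * fps * 8
gige_bps = 1_000_000_000       # raw GigE line rate (ignores protocol overhead)

print(f"payload: {payload_bps / 1e6:.0f} Mbit/s "
      f"({payload_bps / gige_bps:.0%} of raw GigE)")
```

Under these assumptions the payload alone is around 770 Mbit/s, so the link works but leaves little headroom once GVSP packet overhead is added; on the ZedBoard side you would still need a GigE Vision capable receiver (in fabric or via the PS Ethernet) to unpack the stream.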
Hi everyone, this is my very first question on your forum. I'm new to FPGAs, and this week I struggled to evaluate how difficult the following project will be.

We have a motion capture system tracking a hand and driving a very complicated levitation device. The project should work with as little delay as possible: at the moment the latency is about 17 ms, and the target is to reduce it to around 7 ms. Most of the latency comes from the GigE-connected cameras, sampling at 200 Hz, but also from the operating system. Because of the complex computation involved, the difficult part is precomputed as a very big lookup table (8 GB) held in memory. To reduce latency, we want to work bare metal and, later on, eliminate the lookup table and use highly parallelized code to drive 128 devices at a 50 kHz rate.

What I have planned so far: using the existing cameras would require a low-latency system handling the image processing (stereo camera registration and key-point tracking). I know these tasks can be implemented efficiently on an FPGA. To connect four OptiTrack Prime 13 cameras, the NetFPGA-1G-CML Kintex-7 FPGA development board looks very promising. Can somebody estimate how difficult it will be to extract images from a GigE camera using Vivado?

The second part is frustrating: I do not know how to add DDR3 RAM from a laptop to this setup. Is it possible to attach an adapter to the FMC connector and use the MIG to configure the interface? I searched for this but only found boards with SO-DIMM sockets or pre-soldered RAM chips; the former are far too expensive, and the latter do not have the required capacity. So far I have only used SPI and I2C on a microcontroller, so interfacing an Ethernet PHY or RAM, programmatically and especially physically, is still a mystery to me.

The third problem is optional: the target device is interfaced via USB 2 and drivers only exist for Windows. It is not easily possible to communicate directly with an FPGA in this scenario, is it?
In the end, I want to use high-level programming, like Vivado or Simulink. The project is financially limited, but my professor told me that around 2000 euros would be adequate. I am thankful for all constructive advice, comments, literature, and questions. Please tell me your opinion on whether an FPGA is the right choice, whether the project is manageable, and/or whether there is a better solution. Best regards from Germany, Matthias Popp
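One thing worth noting about the 7 ms target: with cameras sampling at 200 Hz, the worst-case wait for the next frame is already 5 ms, which leaves only about 2 ms for everything else and explains why the OS and transfer path dominate. A minimal latency-budget sketch follows; the frame period is fixed by the 200 Hz rate, but every other per-stage number is an illustrative placeholder, not a measurement.

```python
# Rough latency budget for the 7 ms target with the existing 200 Hz cameras.
# Only the frame period is derived from the post (200 Hz sampling); the
# other stage figures are illustrative placeholders, not measured values.

frame_period_ms = 1000 / 200  # 5.0 ms between frames at 200 Hz

budget = {
    "worst-case wait for next frame": frame_period_ms,  # fixed by 200 Hz
    "GigE transfer + FPGA capture":   0.5,              # placeholder
    "stereo registration + tracking": 1.0,              # placeholder
    "lookup / output stage":          0.5,              # placeholder
}

total = sum(budget.values())
for stage, ms in budget.items():
    print(f"{stage:32s} {ms:4.1f} ms")
print(f"{'total':32s} {total:4.1f} ms  (target: 7 ms)")
```

The takeaway is that the processing pipeline after capture must fit in roughly 2 ms end to end, so either the cameras need a higher sample rate or the remaining stages must be extremely tight, which is a strong argument for the bare-metal FPGA approach described above.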
Hi there, I am a newbie; I hope I am asking the right questions. I have been trying to figure out what kind of hardware/software is needed for my project. The goal is to capture images with a high-speed image sensor at a frame rate of 500-1000 fps at a lower resolution such as 160x120 or 480x360 pixels. The frames will be read and saved by an FPGA SoC board. As I understand it, I have a choice of interfaces:

Option 1: Board-level image sensor with FFC cable (LVDS or MIPI CSI-2 interface)
Option 2: Boxed camera with USB 3.0 cable interface

I am leaning towards option 2: the Basler acA640-750um USB 3.0 camera with the ON Semiconductor PYTHON 300 CMOS sensor. This camera has a frame rate of 751 fps at VGA resolution. So if I have an FPGA board with USB 3.0, would this option be okay for plug and go? I hope someone will lead me and help me. Regards, Sirac
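For option 2, a quick throughput check shows whether the camera's full rate fits through USB 3.0. The sketch below uses the post's own figure of 751 fps at VGA; the ~400 MB/s usable-throughput figure for USB 3.0 is an assumed rule of thumb (5 Gbit/s raw minus encoding and protocol overhead), not a guarantee for any particular host controller.

```python
# Sanity check: can a USB 3.0 link carry the acA640-750um's full-rate stream?
# 751 fps at VGA comes from the camera's published spec; the usable USB 3.0
# throughput below is an assumed rule of thumb, not a measured figure.

width, height = 640, 480      # VGA resolution
fps = 751                     # rated max frame rate at VGA
bytes_per_pixel = 1           # Mono8 pixel format

stream_mb_s = width * height * bytes_per_pixel * fps / 1e6
usb3_usable_mb_s = 400        # ~400 MB/s practical (5 Gbit/s raw link)

print(f"stream: {stream_mb_s:.0f} MB/s, "
      f"USB 3.0 usable: ~{usb3_usable_mb_s} MB/s")
```

Under these assumptions the stream is around 231 MB/s, comfortably within USB 3.0 limits; the harder question is on the receiving side, since "plug and go" requires a USB3 Vision host stack, and few FPGA SoC boards provide that out of the box, which is why board-level LVDS/MIPI sensors (option 1) are often the more FPGA-friendly route.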