Your explanations are spot on. We are taking in camera data across 16 SerDes pairs operating at 1.08 GHz. The camera transmits its clocking based on a 90 MHz clock produced by the FPGA. The camera handles the workload, and the SerDes lanes are the collection system for the data. The data will be intensity-checked when it arrives and placed into a buffer for analysis. Each frame will produce a subset of values that will be transmitted out, so only part of the data will be kept and stored in memory; a "ping pong" memory will be used to work the data out of one frame while the next frame is collected. We will store the frame output data and stream it across an Ethernet communication link. The data itself is 10 bits per pixel, and at most we will be storing 60,000 words, or 120,000 bytes, per frame before sampling. Sampled output for an entire frame is only 2,400 bytes across Ethernet.
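Just to make the ping-pong scheme and the numbers concrete, here is a rough software model (not the actual FPGA design, and the `collect`/`analyze` functions are illustrative placeholders): one buffer fills with incoming pixel words while the previously filled buffer is reduced to the small per-frame sample that goes out over Ethernet. The decimation step shown is purely an assumption standing in for the real analysis; the real design selects the subset by intensity.

```python
FRAME_WORDS = 60_000       # 10-bit pixels stored one per 16-bit word
BYTES_PER_WORD = 2         # 60_000 words -> 120_000 bytes per raw frame
SAMPLE_BYTES = 2_400       # reduced per-frame output streamed over Ethernet
SAMPLE_WORDS = SAMPLE_BYTES // BYTES_PER_WORD  # 1_200 words kept per frame

def collect(buf, frame_idx):
    """Stand-in for the SerDes capture: fill a buffer with 10-bit values."""
    for i in range(FRAME_WORDS):
        buf[i] = (frame_idx + i) & 0x3FF  # mask to 10 bits

def analyze(buf):
    """Stand-in for the real analysis: decimate 60_000 words down to
    1_200 words (2_400 bytes). The actual design keeps a subset chosen
    by intensity, not a fixed stride."""
    stride = FRAME_WORDS // SAMPLE_WORDS  # every 50th word
    return buf[::stride]

# Two frame buffers; writes and reads alternate between them each frame.
buffers = [[0] * FRAME_WORDS, [0] * FRAME_WORDS]
write_idx = 0
for frame in range(4):
    collect(buffers[write_idx], frame)          # capture the new frame
    sample = analyze(buffers[write_idx ^ 1])    # process the other buffer
    write_idx ^= 1                              # swap roles for next frame
```

The point of the swap is that capture and analysis overlap: the analysis of frame N only has to finish within the capture time of frame N+1, which sets the real-time budget for the processing side.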
I appreciate your assistance.