RedMercury
Hi all,
One of my projects is an HDMI output core driven by an AXI stream (rather than acting as an AXI memory master). Since the video output runs off the pixel clock, it sits in a different clock domain from the rest of the system. I understand that to cross the domain correctly I'd need an asynchronous FIFO, and it seems that most async FIFO designs use Gray codes to synchronize the read and write pointers, restricting the potential metastability error to a single bit.
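To make sure I understand the property I'm relying on, here is a minimal sketch of the standard binary-to-Gray conversion (function names are my own, not from any particular FIFO core), checking that successive counts differ in exactly one Gray-code bit:

```python
def bin_to_gray(b: int) -> int:
    """Convert a binary count to its Gray-code equivalent (b XOR b>>1)."""
    return b ^ (b >> 1)

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which two values differ."""
    return bin(a ^ b).count("1")

# Consecutive counts differ in exactly one bit of their Gray codes,
# so a synchronizer can corrupt at most the one bit in transition.
for n in range(15):
    assert hamming(bin_to_gray(n), bin_to_gray(n + 1)) == 1
```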
Extrapolating from this, surely the number of potential bit errors increases when the clock frequencies differ by more than 2:1 (or some other multiple)?
For example, a high-frequency sender could write four items into the queue, legitimately changing multiple bits of the Gray-coded write pointer. The receiver could then sample a desynchronized write pointer, missing some MSB transitions while some LSBs update, yielding a badly incorrect write pointer.
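Here is the scenario I have in mind, sketched numerically (values chosen by me for illustration): if the writer advances four counts between two reader samples, the two Gray values the reader sees can differ in more than one bit position.

```python
def bin_to_gray(b: int) -> int:
    """Convert a binary count to its Gray-code equivalent."""
    return b ^ (b >> 1)

# Writer advances 4 counts between two consecutive reader samples.
g0 = bin_to_gray(3)            # pointer value at the first sample
g1 = bin_to_gray(7)            # pointer value four writes later
diff = bin(g0 ^ g1).count("1") # number of bits that changed between samples
# diff is 2 here, i.e. more than one bit differs between the two samples
```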
Or maybe, since the transition frequency of a bit halves with each position (dropping off exponentially), errors in the MSBs are much less likely to occur?
I'm still quite new to FPGAs, so forgive me if I've gotten the wrong end of the stick.
Thanks!