Everything posted by elodg

  1. Hello Andrew, We don't have any such examples. We do test SD and USB in our manufacturing tests, but those tests do not go as far as knowing anything about files. For both SD and USB mass storage you will need file system support; FatFs is a good open-source example. The lower-level drivers are the ones provided by Xilinx, and we use them ourselves. For example, reading the first data block on an SD card is as simple as:

    #include <string.h>

    #include "xparameters.h"
    #include "xsdps.h"

    /// kwBlockSizeBytes is the standard SD block size.
    /// VERBOSE and FAIL_AND_RETURN are project-specific logging and error macros.
    #define kwBlockSizeBytes 512

    static XSdPs sSdPs;
    static XSdPs_Config *psSdConfig;
    static u8 arrbyReadBuff[kwBlockSizeBytes];

    /// Initialize the read buffer
    memset(arrbyReadBuff, 0, kwBlockSizeBytes);

    /// Initialize the SDIO controller
    psSdConfig = XSdPs_LookupConfig(XPAR_PS7_SD_0_DEVICE_ID);
    if (psSdConfig == NULL ||
        XSdPs_CfgInitialize(&sSdPs, psSdConfig, psSdConfig->BaseAddress) != XST_SUCCESS) {
        VERBOSE("%s (line %d) error", __func__, __LINE__);
        FAIL_AND_RETURN;
    }

    /// Initialize the SD card
    if (XSdPs_CardInitialize(&sSdPs) != XST_SUCCESS) {
        VERBOSE("%s (line %d) error", __func__, __LINE__);
        FAIL_AND_RETURN;
    }

    /// Change the bus width to 4-bit
    if (XSdPs_Change_BusWidth(&sSdPs) != XST_SUCCESS) {
        VERBOSE("%s (line %d) error", __func__, __LINE__);
        FAIL_AND_RETURN;
    }

    /// Issue a read of the first block (boot block)
    XSdPs_ReadPolled(&sSdPs, 0, 1, arrbyReadBuff);
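    Once FatFs is hooked up to the SD driver through its diskio glue layer, reading a file takes only a few calls. The snippet below is a minimal sketch assuming a stock FatFs integration; the drive path "0:/" and the file name "test.txt" are placeholders.

    #include "ff.h" /// FatFs public API

    FATFS sFatFs;
    FIL sFil;
    BYTE arrbyBuff[512];
    UINT cbRead;

    /// Mount the default drive (option 1 = mount immediately)
    if (f_mount(&sFatFs, "0:/", 1) != FR_OK) { /* handle mount error */ }

    /// Open a file for reading, read up to one buffer worth, then close
    if (f_open(&sFil, "0:/test.txt", FA_READ) == FR_OK) {
        f_read(&sFil, arrbyBuff, sizeof(arrbyBuff), &cbRead);
        f_close(&sFil);
    }

    /// Unregister the work area when done
    f_mount(NULL, "0:/", 0);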
  2. It is possible that HPD is not implemented as a software interrupt, in which case input video might not re-initialize on the fly. Do you program an EDID into the ADV7611? How do you know you are receiving video from the input? Is it the resolution and color format you are expecting? Are you configuring the output (ADV7511) with the same settings? You need to provide more information if you want a more helpful answer. Generally, I suggest using a logic analyzer to scope signals of interest. You should be able to see where the video stream gets blocked. Reading the status registers of both the encoder and the decoder also helps; see the sketch below.
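    On the Zynq PS, a register read over I2C might look like the following sketch. The 7-bit device address and register offset are placeholders (check the ADV7611/ADV7511 register maps for the real values), but the XIicPs calls are the standard Xilinx standalone driver API.

    #include "xparameters.h"
    #include "xiicps.h"

    #define HDMI_RX_IIC_ADDR  0x4C /// placeholder 7-bit I2C address
    #define STATUS_REG_OFFSET 0x6A /// placeholder register offset

    XIicPs sIic;
    XIicPs_Config *psIicConfig;
    u8 byReg = STATUS_REG_OFFSET;
    u8 byVal;

    /// Initialize the PS I2C controller at 100 kHz
    psIicConfig = XIicPs_LookupConfig(XPAR_XIICPS_0_DEVICE_ID);
    XIicPs_CfgInitialize(&sIic, psIicConfig, psIicConfig->BaseAddress);
    XIicPs_SetSClk(&sIic, 100000);

    /// Set the register pointer, then read one byte back
    XIicPs_MasterSendPolled(&sIic, &byReg, 1, HDMI_RX_IIC_ADDR);
    while (XIicPs_BusIsBusy(&sIic));
    XIicPs_MasterRecvPolled(&sIic, &byVal, 1, HDMI_RX_IIC_ADDR);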
  3. You could try disabling the memory controller in hardware and running the test program from OCM to make sure no DDR access is happening. Compare that to the memory controller enabled, but with the code expected to run from cache.
  4. I see. So your measurements refer to the board as a whole. My guess is that you are seeing higher power-consumption peaks with cache hits because the data is available and the processor can do the calculations without stalling. With cache misses, the processor stalls until the cache line is fetched from DDR. Reading from DDR should take a bit more power than reading from cache, but consumption should be evened out, with fewer peaks. Total power integrated over time should still be higher, however. As for sleep, I can only hypothesize that it is not working properly. Sleep power should be lower than idle power.
  5. DDR3 Termination

    On the Zybo, a different routing topology was used (tree vs. fly-by), which allowed for shorter traces and the use of near-series termination instead of far-parallel.
  6. How and where are you measuring power consumption?
  7. All Digilent FPGA boards have fixed power sequencing. Take a look at Xilinx dev kits, as most (if not all) have a programmable power controller on them, which can be used to change a lot of parameters related to power.
  8. Pretty impressive, hamster. It sounds like you have dug deep into the DP specs. As you have already realized, getting the asynchronous clock domains right is quite the challenge. You could theoretically require synchronous Stream and Link clocks, generating both yourself and pulling pixels at your own rate, but this would force the use of some sort of frame buffer (like axi_vdma). Asynchronicity would be a nice-to-have, because it doesn't put (as many) restrictions on the upstream IP. A couple of thoughts:

    - If you want to keep your core implementation architecture-agnostic, define simple interfaces where architecture-dependent components can be inserted. I am thinking of the GT transceiver, FIFO, and clock generator.
    - Apply the same concept to video timing detection too. This way we can plug it into an existing VTC pipeline.
    - If you are already thinking about extending both the lane count and the width of the pixel port, make it either very generic or forward-looking. A lane count of up to four and an interface up to quad-pixel wide would allow for high-bandwidth resolutions.
    - How is your policy maker implemented? An FSM is fine for testing, but a software-based training sequence running on an embedded processor offers greater flexibility to the user; see the sketch after this list.

    If you keep your IP very generic, we can add Digilent board- and Xilinx-specific infrastructure ourselves and publish it in our vivado-library repo next to our other video-oriented IP. I will give your code a go one of these days.
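    To illustrate the software policy maker idea, here is a rough sketch of a link-training state machine as it might run on a soft or hard processor. The dpcd_read/dpcd_write/wait_us helpers are hypothetical stand-ins for your AUX-channel access and timer, and the DPCD addresses and status masks should be double-checked against the DisplayPort spec.

    #include "xil_types.h" /// u8, u32 on Xilinx platforms

    /// Hypothetical AUX-channel and timer helpers provided by the design
    extern u8 dpcd_read(u32 dwAddr);
    extern void dpcd_write(u32 dwAddr, u8 byVal);
    extern void wait_us(u32 dwUs);

    typedef enum { TRAIN_CR, TRAIN_EQ, TRAIN_DONE, TRAIN_FAIL } TrainState;

    int DpTrainLink(void)
    {
        TrainState eState = TRAIN_CR;
        int cTries = 0;
        u8 byStatus;

        while (eState != TRAIN_DONE && eState != TRAIN_FAIL) {
            switch (eState) {
            case TRAIN_CR:
                /// Training pattern 1 with scrambling disabled (DPCD 0x102)
                dpcd_write(0x102, 0x21);
                wait_us(100);
                byStatus = dpcd_read(0x202); /// LANE0_1_STATUS
                if (byStatus & 0x01)         /// LANE0_CR_DONE
                    eState = TRAIN_EQ;
                else if (++cTries > 5)
                    eState = TRAIN_FAIL; /// a real PM would adjust drive levels and retry
                break;
            case TRAIN_EQ:
                /// Training pattern 2 for channel equalization
                dpcd_write(0x102, 0x22);
                wait_us(400);
                byStatus = dpcd_read(0x202);
                if ((byStatus & 0x07) == 0x07) { /// CR_DONE | EQ_DONE | SYMBOL_LOCKED
                    dpcd_write(0x102, 0x00);     /// training off
                    eState = TRAIN_DONE;
                } else if (++cTries > 10)
                    eState = TRAIN_FAIL;
                break;
            default:
                break;
            }
        }
        return (eState == TRAIN_DONE) ? 0 : -1;
    }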
  9. For 1080p, a bit period is covered by 8-9 taps (78 ps per tap). We were getting at least 5 for the open eye in our tests. Our phase alignment algorithm seeks to the center of the eye, delimited by the tap values at which we no longer receive the expected number of CTL tokens at the frequency listed in the DVI spec. Could it be that checking for bit errors < 1 results in a suboptimal lock if the data set is not large enough?
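    For reference, the 8-9 tap figure falls out of the clock rates: 1080p60 uses a 148.5 MHz pixel clock and TMDS serializes 10 bits per pixel, so the bit rate is 1.485 Gbps and the bit period is roughly 1 / 1.485 GHz ≈ 673 ps. At 78 ps per IDELAY tap, that is 673 / 78 ≈ 8.6 taps per bit period.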
  10. OK, so it's not a DVI-D/HDMI mismatch. It might be a bit error in this case. If you are using your own design, make sure you do phase alignment correctly. Do you have a way to measure how successful the phase alignment was and what your margins are? Our demo project uses the dvi2rgb IP from our vivado-library GitHub repo. With our IP you could try looking at the pEyeSize signal for each channel with the Vivado Logic Analyzer. This should tell you how wide (in number of IDELAY taps) the open eye was detected to be during the phase alignment step of the lock process.
  11. Hello Hamster, If there are blocks of CTL tokens where there shouldn't be, or non-tokens where there should be, you are most probably looking at an HDMI stream rather than DVI. HDMI builds upon the DVI spec, but introduces out-of-band transmission (audio, for example) in the blanking periods. Therefore, HDMI Sources send fewer CTL tokens and mix them with other data; the sketch below shows the four control tokens a pure DVI stream is limited to. This obviously only works if the Sink too is HDMI-compatible and not just a DVI Sink.

    The pass-through mode in the Nexys Video demo is not passive. The Sink port actually decodes the video stream in the FPGA, which is then re-encoded and transmitted on the HDMI Source port. The Nexys Video has to emulate a Sink (like a monitor) for the connected Source to begin transmission. This includes the handshake mechanism of hot-plug detection and EDID read. All Sinks have to provide identification and capability information in the EDID memory over the DDC bus. The demo project only supports DVI and should correctly advertise this in the EDID. The connected Source has to read this and adjust its resolution and other parameters so that it is compatible with the Sink. Make sure the Sencore Pro Multimedia Generator does this and that no manual override is set on it.

    This is not a limitation of the board itself, but of the FPGA implementation. You should be able to implement HDMI in the FPGA without problems. Let us know if this helps.
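    For reference, a compliant DVI stream is limited during blanking to the four 10-bit control tokens below, one per (C1, C0) combination; seeing other characters there is a strong hint of HDMI data islands. A minimal checker might look like this sketch (token values quoted from memory, so verify against the DVI 1.0 character tables):

    #include "xil_types.h" /// u16 on Xilinx platforms

    /// The four 10-bit TMDS control tokens, for (C1, C0) = 00, 01, 10, 11
    static const u16 karrCtlTokens[4] = {
        0x354, /// 0b1101010100
        0x0AB, /// 0b0010101011
        0x154, /// 0b0101010100
        0x2AB  /// 0b1010101011
    };

    /// Returns 1 if the 10-bit word is a DVI control token, 0 otherwise
    int IsCtlToken(u16 wWord10)
    {
        int i;
        for (i = 0; i < 4; i++) {
            if ((wWord10 & 0x3FF) == karrCtlTokens[i])
                return 1;
        }
        return 0;
    }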
  12. Gigabit Ethernet

    Did you look at the Ethernet demo project for the Genesys to see how the clocks are connected and constrained there? http://digilentinc.com/Data/Products/GENESYS/Genesys_Lwipdemo.zip The TEMAC Wrapper Getting Started Guide might also be useful: http://www.xilinx.com/support/documentation/ip_documentation/v5_emac/v1_8/v5_emac_gsg340.pdf Appendices B and C discuss the different clocking configurations for certain interface/speed combinations and how to constrain them. Are you trying to achieve 1000 Mbps speeds over the GMII interface?
  13. Gigabit Ethernet

    Could you point me to the exact example you are talking about? Also, posting specific questions rather than "manage the constraints" will increase the chance that someone picks up your question. Is there an error you are getting, or perhaps a step you are stuck at?