hamster

Everything posted by hamster

  1. You will find a good explanation here: https://reference.digilentinc.com/nexys_vga/refmanual There is a list of timing modes at http://hamsterworks.co.nz/mediawiki/index.php/VGA_timings - hope these help. If you look around the internet you will find lots of simple VGA projects - including this one: http://www.hamsterworks.co.nz/mediawiki/index.php/Papilio_Plus/Hello_World
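    To give a feel for what those simple projects boil down to, here is a bare-bones sync generator sketch for the classic 640x480@60Hz mode (illustrative only - it assumes you already have a ~25.175MHz pixel clock, and the entity/signal names are made up):

       library IEEE;
       use IEEE.STD_LOGIC_1164.ALL;
       use IEEE.NUMERIC_STD.ALL;

       entity vga_sync is
         port ( clk25  : in  std_logic;   -- ~25.175 MHz pixel clock
                hsync  : out std_logic;
                vsync  : out std_logic;
                active : out std_logic ); -- high during the visible 640x480 area
       end vga_sync;

       architecture rtl of vga_sync is
         signal h : unsigned(9 downto 0) := (others => '0');  -- horizontal count 0..799
         signal v : unsigned(9 downto 0) := (others => '0');  -- vertical count   0..524
       begin
         process(clk25)
         begin
           if rising_edge(clk25) then
             -- 640 visible + 16 front porch + 96 sync + 48 back porch = 800 clocks per line
             if h = 799 then
               h <= (others => '0');
               -- 480 visible + 10 front porch + 2 sync + 33 back porch = 525 lines per frame
               if v = 524 then
                 v <= (others => '0');
               else
                 v <= v + 1;
               end if;
             else
               h <= h + 1;
             end if;
           end if;
         end process;

         -- both sync pulses are active low in this mode
         hsync  <= '0' when (h >= 656 and h < 752) else '1';
         vsync  <= '0' when (v >= 490 and v < 492) else '1';
         active <= '1' when (h < 640 and v < 480)  else '0';
       end rtl;

    The RGB outputs are then just whatever you want to display, gated by 'active' (they must be zero during the blanking intervals).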
  2. If the video is coming from memory, using the faster pixel clocks isn't a problem - just adding a black border (80 pixels left and right, 60 pixels top and bottom) would be simple. If they are standards compliant, all HDMI devices (both sources and sinks) "should" support 640x480 (a 25.175 MHz pixel clock). Odds are that the PLL/DCM used by RGB2DVI to generate the clocks for the SERDES blocks isn't configured to lock onto really slow pixel clock rates below 50MHz. I've never used RGB2DVI, but if the source is included, then checking out the clocking and doubling the Multiply/Divide values to keep the VCO frequency in the correct range should be all that is needed. If this does work, then just put your Y value onto all three channels (R, G & B), and you should have an 8-bit greyscale image, without any mucking around with HDMI.
  3. The math will work (as you would expect): gray = (r * 76 + g * 150 + b * 29 + 128) >> 8. I can't see any reason it couldn't be integrated into the HDMI-in to VGA-out project. I have never looked at that project, but I suspect it will need to be implemented in whatever form fits in nicely (be that an IP block, an HLS module, or a VHDL or Verilog module). A rough sketch of the conversion is below.
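    In case it helps, here is a minimal VHDL sketch of that fixed-point conversion (my names - pixel_clk, r, g, b and gray are assumed to be an existing pixel clock, 8-bit std_logic_vector colour inputs and an 8-bit output; this is not taken from the project in question):

       -- gray = (r*76 + g*150 + b*29 + 128) >> 8
       process(pixel_clk)
         variable sum : unsigned(16 downto 0);
       begin
         if rising_edge(pixel_clk) then
           sum  := resize(unsigned(r) * 76, 17)
                 + resize(unsigned(g) * 150, 17)
                 + resize(unsigned(b) * 29, 17)
                 + 128;
           gray <= std_logic_vector(sum(15 downto 8));  -- the ">> 8"
         end if;
       end process;

    The three weights sum to 255, so the result always fits back into 8 bits.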
  4. Hi, The "HDMI" on fpga4fun is actually DVI-D, which is electrically compatible with HDMI but doesn't support any of the useful features like sound, colour spaces or 422 pixel formats. If you have the option it will be quicker to use an IP block; however, if you want to implement it at a low level, there are two extra things on top of DVI-D: 1. A guard band added to the start of the video data - this consists of a special sequence of bytes. This is easy. 2. The HDMI "AVI infoframe" data packet - this describes the video format as it goes over the wire, and it is in this packet that the HDMI source tells the HDMI sink whether the stream is RGB 444, YCC 444 or YCC 422. Building the infoframe packet is quite hard - it uses a separate coding scheme layered over the low-level TMDS signalling and has ECC, and if this is wrong the HDMI sink won't be able to decode the video stream correctly. For an absolutely minimal HDMI output design (with support for only 3-bit colour) have a look at http://hamsterworks.co.nz/mediawiki/index.php/Minimal_HDMI - but that design is not a good way to approach your problem. You will want a proper TMDS encoding layer that supports the full range of data values to generate 24-bit images (or, for YCC 422, 12 bits for each component).
  5. As a side note, the normal format for DVI-D is RGB 444 (24 bits per pixel) - you get 8 bits of each component every pixel clock. If the FPGA board advertises that it supports HDMI and that it supports YCC 422, you will get 12 bits of Y0 and 12 bits of Cb on the first pixel clock (channel 1 carries the lower four bits of Y0 and Cb), then 12 bits of Y1 and 12 bits of Cr on the second pixel clock (channel 1 carries the lower four bits of Y1 and Cr). If you want more greyscale depth than 12 bits you will need to use a deep colour mode. In the HDMI 1.4 spec the deep colour modes supported are 24-, 30-, 36- and 48-bits per pixel, and only RGB 444 and YCbCr 444 are supported. Out of all the options, only 48-bits per pixel YCbCr gives you a 16-bit Y value, but it also doubles your data rate, making it not really usable with most low-end FPGA dev boards - with generic I/O SERDES rated to about 1.2Gb/s (and each channel needing 20 TMDS bits per pixel at 48 bits per pixel), you can only use up to about a 60MHz pixel clock - not even enough for 720p images. So the upshot is that the best bet might just be to stick with RGB 444 on the interface using the existing solutions, and then do the transform into YCC within the FPGA design...
  6. hamster

    FPGA audio - ADC and DAC

    If clocking is your problem: with an MMCME2_BASE fed from a 100MHz clock you can get to 100,000,000 Hz * 7.25 / 59.0 = 12,288,135 Hz. That is out by about 11 parts per million - the 100MHz oscillator on the Arty A7 (ASEM1-100.000MHZ-LC-T) is +/-50 ppm anyway, so maybe nobody will really notice. A rough instantiation sketch is below.
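    Very roughly, and untested (the generic and port names come from the MMCME2_BASE primitive - check them against the 7-series clocking user guide before trusting this), the instantiation would look something like:

       library UNISIM;
       use UNISIM.vcomponents.all;
       ...
       mmcm_12m288 : MMCME2_BASE
         generic map (
           CLKIN1_PERIOD    => 10.0,   -- 100 MHz in
           CLKFBOUT_MULT_F  => 7.25,   -- VCO = 100 MHz * 7.25 = 725 MHz
           DIVCLK_DIVIDE    => 1,
           CLKOUT0_DIVIDE_F => 59.0 )  -- 725 MHz / 59.0 = 12.288135... MHz
         port map (
           CLKIN1   => clk100,
           CLKFBIN  => clkfb,          -- simple internal feedback loop
           CLKFBOUT => clkfb,
           CLKOUT0  => clk12m288,      -- put this through a BUFG before using it
           LOCKED   => open,
           PWRDWN   => '0',
           RST      => '0' );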
  7. hamster

    FPGA audio - ADC and DAC

    Although it is VHDL, and mostly around configuration of the CODEC, this project takes line in and sends it out line out on a STGL5000 CODEC that I got off of Tindie that is for the Raspberry Pi. http://hamsterworks.co.nz/mediawiki/index.php/STGL5000 It uses the clock provided by the codec board.
  8. There are a couple of ideas used to divide digital logic designs into different classes.

     Combinatorial vs Sequential: Combinatorial logic is pretty much any logic whose outputs respond 'instantly' to changes in the inputs, and where the output is a pure function of the present inputs only (i.e. it has no internal state). Sequential logic is logic whose output depends not only on the present value of its inputs but also on the sequence of past inputs (i.e. it has internal state). Most FPGA designs (if not all!) fall into this category.

     Synchronous vs Asynchronous: Synchronous logic updates on the edge of a clock, which provides the timing information. Most FPGA designs fall into this category. Asynchronous logic updates its outputs as the inputs change - the changes of the input signals provide the timing information. These designs use things like latches and subtle timing behaviour to function properly, and are not usually compatible with FPGAs.

     How this relates to VHDL concurrent statements vs processes: concurrent statements generally describe asynchronous logic, and processes usually describe synchronous logic - but this is not always true! Complex async logic can be best expressed in processes, as you can use "if" and "case" statements - you can tell these apart by their long sensitivity lists, rather than being sensitive to just a single clock signal. And with statements like "Q <= D when rising_edge(clk);" you can make a concurrent statement implement synchronous logic, and squirrel it away where others are not expecting to see it! The two equivalent flip-flop descriptions below show what I mean.
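    For example, these two descriptions of the same D flip-flop synthesise to the same thing - one as a concurrent statement, one as a process (depending on the tool, the concurrent form may need VHDL-2008 mode enabled):

       -- concurrent statement form
       q <= d when rising_edge(clk);

       -- process form
       process(clk)
       begin
         if rising_edge(clk) then
           q <= d;
         end if;
       end process;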
  9. You might be using "FSM" when you mean "combinatorial logic". :-)
  10. This is the warning that matters: "signal 'up_test' is read in the process but is not in the sensitivity list". What has happened is that the tools detected that you read the value of "up_test" in the process, assumed that you had made an error, and helpfully added it to the process's sensitivity list. This is because, for any async logic described in a process, all signals that are read in the process should be included in the sensitivity list. Were "up_test" included in the sensitivity list you would get exactly the observed behaviour (although now any simulations would hang/break... sigh). Whether this is the right thing for the tools to do is an open question - sometimes it is, sometimes it isn't. Throwing an error rather than a warning would break existing code bases... issuing warnings lets these sorts of issues slip through. (A tiny example of the mismatch is below.) As for "you haven't defined some way to filter out the changes of BTN" - ignore me, I was just waffling on to allude that you need to use "rising_edge()" somewhere to control the timing of when "up_test" gets updated.
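    To make the mismatch concrete, here is an illustrative fragment (not the original poster's code - 'a', 'b' and 'y' are just placeholder signals):

       -- 'b' is read inside the process but is missing from the sensitivity list
       bad_and_gate : process(a)
       begin
         y <= a and b;
       end process;

    In simulation, y is only re-evaluated when 'a' changes, but synthesis quietly behaves as if the list were (a, b) and builds the AND gate anyway - so hardware and simulation no longer agree, hence the warning.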
  11. As a side issue, I recently did something that had similar data requirements - capturing 2-bit samples at 16.386MHz (32Mb/s) for an extended period of time (many minutes). I used an older NEXYS2 board because it has the Digilent EPP interface, which can stream at well over this speed.

     - A simple HDL design captured the data and wrote it into a FIFO, along with a sequence counter bit so I could detect dropped data (a rough sketch of the capture side is below).
     - Some more HDL read the FIFO and sent the data up the EPP interface to the host PC.

     On the PC it then:

     - Allocated a large buffer (a couple of GB IIRC).
     - Cleared the buffer to force it into memory, so it wouldn't stall.
     - Streamed in the data from the EPP interface.
     - Once the data was in the buffer on the PC, it was written out to the (slow) SATA disk.

     I had to buffer it in memory, because a small delay on the PC end would cause the FIFO on the FPGA board to overrun. I could most likely have used a second thread to write the data out in real time if I had needed to. It ended up being a very small, simple design when compared to what you are contemplating. Have a look at https://github.com/hamsternz/Full_Stack_GPS_Receiver/tree/master/misc
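    The capture side was really not much more than this sort of thing (a from-memory sketch, not the actual code - 'sample' is the 2-bit input, 'seq_count' is a free-running counter whose MSB provides the sequence bit, and the fifo_* signals go to a generated FIFO):

       capture : process(sample_clk)
       begin
         if rising_edge(sample_clk) then
           -- write {sequence bit, 2-bit sample} every sample clock; the PC can
           -- spot dropped words when the slow toggle of the sequence bit breaks
           fifo_din   <= seq_count(seq_count'high) & sample;
           fifo_wr_en <= '1';
           seq_count  <= seq_count + 1;
         end if;
       end process;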
  12. I have a differing view from @Zygot's. You are right - the counter shouldn't be running continuously. If you follow the language spec to the letter, your counter should change any time the state of BTN changes, and that is what you should get in simulation. However, what you haven't defined is some way to filter out the changes of BTN, such that the counter only increments on some of the possible state changes (e.g. a "rising_edge()" clause). When the synthesis tools try to convert your design into hardware, they look for either clocks or async resets to act as triggers for changes in state. The design has neither of these - the tools can't see that BTN should act as a dual-edged clock, so along with some warnings (I hope it gives warnings; heck, the tools give warnings for everything!) it just gives you an adder, with nothing controlling the timing of when up_test is updated. The synthesis guys will argue that this is what you asked for and therefore what you want, and they will also argue that the sensitivity list is just a hint for simulation... What you really want is something like:

       btn_clk_proc : process(btn)
       begin
         if rising_edge(btn) then
           test <= test + 1;
         end if;
       end process;

     ... but then that leads on to discussions of switch bounce, routing delays and metastability.
  13. It seems easy on the surface of it, but:

     - The different channels have different guard band values.
     - You need to work out what blanking and sync signals you need to assert during the video guard bands and data packet guard bands.
     - The HDMI pixel data might be in "studio level" range (16-235) rather than full range (0-255).
     - The video data might be in YCrCb 444 rather than RGB 444 format (the RGB 444 means that for every four pixels there are 4 R values, 4 G values and 4 B values).
     - The video data might be in YCrCb 422, where you have 12-bit values and you get four Y (brightness) values but only two Cr and two Cb values per four pixels (which is more like TV video formats).

     I made an attempt at decoding this, and it seems to work OK. You should be able to find any hints you need in my project: https://github.com/hamsternz/Artix-7-HDMI-processing/blob/master/src/hdmi_input.vhd but the source of all knowledge is the HDMI specification - find it by searching for "hdmi specification 1.2a filetype:pdf" or "hdmi specification 1.4 filetype:pdf" - e.g. http://read.pudn.com/downloads72/doc/261979/HDMI_Specification_1.2a.pdf
  14. It is most likely that the source defaults to HDMI mode if it cannot get the EDID data from the FPGA when it detects a cable hot-plug...
  15. Those extra blocks look like HDMI data islands. They shouldn't be there on a true DVI signal. If that is what they are, the active lines will also have a two-byte guard band at the start (so 1920 pixels will arrive as 1922 bytes, with a constant two-byte preamble). I am not sure that DVI2RGB is designed to receive and filter these - guessing not, from this trace.
  16. Hi, Once you get the feel for how FPGAs work, have a good look at the Vivado Synthesis Guide ( https://www.xilinx.com/support/documentation/sw_manuals/xilinx2017_1/ug901-vivado-synthesis.pdf ). It has lots of design patterns for how to describe the sorts of structures you might want to use - without reaching for the library of primitives. It covers things like memories, DSP blocks, shift registers, FIFOs and so on - without using the IP generator (see the inferred RAM example below). It can be a great technique to use, as:

     - it does not tie your logic designs to a vendor's IP libraries & licensing;
     - it allows others to use your designs on different FPGA tools;
     - it is more flexible than using IP blocks - you don't need to regenerate IP blocks to make small changes;
     - it is easier to integrate into source control systems like Git, compared with IP blocks.

     You might need to flick between other user guides (such as https://www.xilinx.com/support/documentation/user_guides/ug473_7Series_Memory_Resources.pdf for memories), but I am sure you will find it interesting.
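    As a taste of what those templates look like, here is roughly the UG901-style pattern for inferring a simple block RAM from plain VHDL (the widths, depth and signal names are just illustrative - check the guide for the exact template for your tool version):

       type ram_type is array (0 to 1023) of std_logic_vector(15 downto 0);
       signal ram : ram_type;
       ...
       process(clk)
       begin
         if rising_edge(clk) then
           if we = '1' then
             ram(to_integer(unsigned(waddr))) <= din;
           end if;
           dout <= ram(to_integer(unsigned(raddr)));  -- registered read => block RAM
         end if;
       end process;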
  17. This is from memory, and I am on a phone, so don't be surprised if it is wrong... The PLL and the MMCM blocks have different max/min clock frequencies. I am pretty sure you will need to use the MMCM. Can you check the datasheet and add an update to this post? Searching for "Artix 7 switching characteristics" should find the correct PDF file quickly.
  18. Hi, A bit of background... The transmitter uses the EDID ROM in the receiver (e.g. monitor or FPGA board) to see what formats the device can receive. If it identifies as DVI-D, then the source can only send picture data. If it identifies as HDMI, then it can send pictures and audio. Here is what I think is going on: if you plug in the audio extractor by itself (or with no design running in the FPGA), the audio extractor says it supports HDMI, so audio is sent. When the FPGA design is loaded, the splitter passes through the capabilities of the FPGA design, which only supports DVI, so you get picture but no audio. The solution would be to have a full HDMI receiver in the FPGA (and then you might not require the audio splitter anyway).
  19. Last night I measured the speed of RF waves in a generic 10m TV coax cable using the AD2, a socket and two resistors.

     Why? I'm trying to build a cheap collinear antenna for receiving 1090MHz plane broadcasts, and to do this I need to know the "velocity factor" of the cable.

     The setup: Connect the AD2 waveform output and the first scope channel (the reference channel) to one end of a 330 Ohm resistor. Connect the other end of the 330 Ohm resistor, the second scope channel, and one end of a 100 Ohm resistor to the centre pin of the socket. Connect the other end of the 100 Ohm resistor, plus the AD2's ground connection, to the shell/ground connection of the socket.

     Running the test: Without the cable plugged into the connector, run the Network Analyzer from 1 MHz to 10 MHz - it should be a pretty much flat line. Then connect the cable and test again. There will be a 'dip' somewhere around 5 or 6 MHz.

     What is going on: The 330 Ohm + 100 Ohm resistors act as a signal divider with an AC impedance of about 75 Ohm, matching that of the coax cable. Because the cable has an open end, it acts as an 'open stub': any signal injected into the cable reaches the end of the cable and is reflected. The source and reflected signals interfere with each other, and where the reflection destructively interferes with the source signal, the "dip" is seen. The bottom of this dip is where the cable is a quarter of the wavelength of the RF signal - so if the driving signal is at 90 degrees, the reflection is at 270 degrees, making the measured signal much weaker.

     Results: For a 10m (30 ft?) cable the dip was at 5.634MHz. That makes a full wavelength 40m long, which gives a speed of propagation of 5.634MHz * 40m = 225,360,000 m/second - about 75% of the speed of light in a vacuum.
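    In general terms: at the dip the cable is an open quarter-wave stub, so the propagation speed is v = 4 * cable length * f_dip (here 4 * 10m * 5.634MHz = 225,360,000 m/s), and the velocity factor is v / c, about 0.75.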
  20. The "sensitivity list" is the list of signals that will cause the body of the process to be re-evaluated when their values change. It can also be thought of as the list of all asynchronous inputs plus any clock signals used in the process. For hardware, the synthesis tool will usually generate the correct hardware anyway, regardless of any errors or omissions. For simulation, it needs to contain all the required signals, as otherwise the body of the process won't be evaluated when it should be. For clocked processes, it should usually be just the clock signal and any async reset signal (which shouldn't really be used in FPGAs anyway!). For unclocked processes (asynchronous logic), it should be every signal that is read within the process.

     A template for a clocked process is this:

       process_name: process(clk, async_reset)
       begin
         if async_reset = '1' then
           x <= whatever_the_reset_state_is;
         elsif rising_edge(clk) then
           -- all your usual stuff
           if a = '1' then
             x <= b AND c;
           end if;
         end if;
       end process;

     In this case, if a = '1', b = '1' and then c changes, the output x doesn't change - it only changes when clk rises or reset is asserted, which is why a, b and c are not needed on the sensitivity list.

     A template for an unclocked ("async") process is this:

       process_name: process(a, b, c)
       begin
         if a = '1' then
           x <= b AND c;
         end if;
       end process;

     If a = '1' and b = '1', the process needs to be evaluated every time 'c' changes, so 'c' has to be on the sensitivity list. Likewise the output might change if a or b changes value, depending on the values of the other inputs, so they have to be there too. Having additional signals on the sensitivity list that aren't needed doesn't break anything, but it can waste time in simulation as the process triggers and nothing changes.

     You can do other things like "wait until rising_edge(clk);", which avoids the "if rising_edge() then" and the nesting, but that is not "the done thing", and is considered being a bit of a smarty-pants.
  21. I know I am completely off topic, and being no help at all, but... IMO an FSM is the wrong tool for the job, and that's why it seems odd. This is a pretty standard "build an FSM" assignment, but it involves creating a state machine where none is really needed. All you need to know is the state of the input during the previous clock cycle:

       if rising_edge(clk) then
         if my_signal_last = '0' and my_signal = '1' then
           my_signal_rising_edge <= '1';
         else
           my_signal_rising_edge <= '0';
         end if;
         my_signal_last <= my_signal;
       end if;

     or

       -- Concurrently detect the rising edge of my_signal.
       -- It will be asserted when my_signal transitions from
       -- zero to one.
       --
       -- You had better make sure that my_signal is synchronised
       -- to the clock for this to work correctly!
       --
       my_signal_rising_edge <= my_signal and not my_signal_last;

       process(clk)
       begin
         if rising_edge(clk) then
           my_signal_last <= my_signal;
         end if;
       end process;

     And as for "dividing the clock a few times", I shudder a little. Far better to keep everything in one clock domain:

       signal clock_enable_shift_register : std_logic_vector(7 downto 0) := "10000000";
       ...
       generate_clock_enable_process: process(clk)
       begin
         if rising_edge(clk) then
           clock_enable                <= clock_enable_shift_register(0);
           clock_enable_shift_register <= clock_enable_shift_register(0)
                                        & clock_enable_shift_register(clock_enable_shift_register'high downto 1);
         end if;
       end process;

       do_stuff_slowly_process: process(clk)
       begin
         if rising_edge(clk) then
           if clock_enable = '1' then
             -- ... do stuff once in a while
           end if;
         end if;
       end process;
  22. Most of the FPGA pins can be used for multiple things - inputs, outputs, global clocks, configuration, XADC, DDR memory... As long as the pin can do what you want it to do, you are OK to ignore the other features (as long as you don't actually need to use them!). This is one of the differences between Xilinx and Altera FPGAs - Altera FPGAs usually have input-only pins, and high-speed/low-speed banks, making pin planning much more important for Altera.
  23. Hey there! The standard I/O pins on 7-series FPGAs are only rated to 1250 Mb/s or 950 Mb/s depending on speed grade, so pixel clocks > 125MHz are very much out of spec. Also, the ability to tune the capture phase (using the IDELAYE2 primitive) does not have enough resolution to align clock edges at 148.5MHz pixel clocks, so it is somewhat hit and miss. However, if you use the high-speed transceivers for HDMI, anything is possible :-)
  24. This might be of interest to you: https://www.xilinx.com/video/hardware/using-the-non-project-batch-flow.html (regarding non-GUI builds). XDC files are a mystery, but here goes. Both lines are comments - they begin with hashes. The second line is the true 'human readable' comment - the "Sch=" part is the net name in the schematic. The IO_L8N... part is the pin name on the FPGA package: it is in I/O bank 35, it is the negative connection for LVDS pair 8 ("L8N"), and it looks to also be the negative input for XADC pair 14 ("AD14N"). Given the context, it is pin M3 on the package being used. So if you removed the first '#' on the first line, this is what would happen: it applies the following settings - the PACKAGE_PIN attribute is set to M3 and the IOSTANDARD attribute is set to LVCMOS33 - to the list of external connections found by "get_ports" looking for things that match "pio[01]".
  25. It is a Micro-B socket based on this picture - https://en.wikipedia.org/wiki/USB#/media/File:USB_konektory.png