Everything posted by D@n

  1. @Rockytop, I know @jpeyron helps customers with this sort of design methodology. (I don't normally use it. Mentioning him like that will help bring this to his attention ...) Dan
  2. Does the "valid" path have spaces in it? As I recall, Vivado had a problem with spaces in board paths. Personally, AXI GPIO is not where I'd recommend you begin. I'd recommend you begin here, but that's another story entirely. The problem with starting at AXI GPIO is ... there's so much background you don't yet have when it comes time to debug things. To provide some perspective, you might be curious to know that ... I've never used a "board file". Dan
  3. @Rockytop, Let me echo @xc6lx45's comment: if you can't pinpoint the bug, then your design methodology is wrong. Indeed, not knowing where the bug lies within your design is a common FPGA problem, and there are several tools to keep it from happening.

     The most obvious tool is "desk-checking". When you desk-check your code, you read it over again to yourself to see what's wrong with it. If you can find the bug that way, cool. The problem with desk-checking, though, is that you can't always find the problem when doing so.

     The next tool is "blinky"; this is the one @xc6lx45 has recommended. It's a great beginner tool, and it will often help you find a lot of those early bugs. It's so powerful that the first thing I do when I bring up a new board is to get an LED to blink. You can then use that LED, and whatever pattern it blinks, to find bugs in your design. The problem with "blinky" is that you only get a single bit of information, and that single bit needs to be slowed down to the point where you can recognize it. It doesn't scale well.

     The easiest way to debug a design is by using the simulator. Sure, it's slow. Sure, it takes a bit to set up. But a trace from a simulator will give you access to every single wire within your design over the course of a simulation run. If you've come from the software world, you may find that (in many ways) it's better than a debugger. That said, traces can get rather long, and finding bugs and tracing them back to their source with a trace visualization tool can be a real brain twister. The biggest downside to the simulator is that it doesn't necessarily simulate any hardware your design might be connected to: your simulation will only ever be as good as your model of the hardware you are working with. Still, any time I can get a bug to appear in the simulator, I consider it a success. A painful success, but a success nonetheless.

     Only a bit harder than the simulator are internal logic analyzers (ILAs). These are very powerful debugging tools that you should also learn to use early on. An internal logic analyzer is a soft core that you add to your design--i.e., you pay for it in terms of fabric on your chip that can no longer be used for your user design. Because your design has to move over to make room for the ILA, you can't record everything. When using an ILA, it's important to pick the right signals to capture, the right trigger to start the capture, and the right size of the capture. Also, because the ILA is added to your design, if you don't pick the right signals or the right trigger, you'll have to rebuild the design to try again.

     These are the traditional tools of the trade in the FPGA world. Using them, you should be able to home in quickly on whatever bug you might have, and see why you're having it.

     There's also a newer tool available to the FPGA design engineer: formal verification. There's an open formal verification tool called SymbiYosys that can formally verify Verilog designs. (There's also a version that handles both SystemVerilog and VHDL; it's just not free.) Formal is similar to simulation in that it's something you can run on your desktop, and it will give you access to every register within your design. It's different in that, unlike simulation, it doesn't examine a single trace through your design. Rather, it examines every possible set of inputs to your design in a breadth-first search for a bug. As a result, it finds more bugs than simulation does. Indeed, simulation can be shown to "mask" bugs, whereas formal tends to find them anyway.
     The problem with formal tools is the flip side of their great strength: because they work via a breadth-first search, they have (roughly) exponential complexity. This limits the size of the design you can apply them to, and the number of time steps you can apply the tool for. Even with that limitation, however, formal is known for finding all kinds of bugs that simulation misses. You can read more about formal verification on my blog, zipcpu.com. You can also try out my tutorial--it teaches simulation and formal verification from square one (this is a wire) all the way through serial ports. Hope this helps! Dan
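     To make that concrete, here's a minimal sketch of what a formally verified core can look like (my own illustration, not from this thread--the module, names, and MAX value are all made up). Driven by a small .sby script in "prove" mode, SymbiYosys will either prove the assert() holds in every reachable state, or hand you a trace showing exactly how to violate it:

        // A bounded counter: once started, o_count runs from 1 up to
        // MAX and then returns to zero.  (Assumes MAX >= 1.)
        module busyctr #(
            parameter [7:0] MAX = 8'd100
        ) (
            input  wire       i_clk, i_reset, i_start,
            output reg  [7:0] o_count,
            output wire       o_busy
        );
            initial o_count = 0;
            always @(posedge i_clk)
            if (i_reset)
                o_count <= 0;
            else if (i_start && !o_busy)
                o_count <= 1;
            else if (o_busy)
                o_count <= (o_count >= MAX) ? 0 : o_count + 1;

            assign o_busy = (o_count != 0);

        `ifdef FORMAL
            // The property: the counter may never exceed MAX.  The
            // formal tool searches every reachable state for a violation.
            always @(*)
                assert(o_count <= MAX);
        `endif
        endmodule

     You'd then run something like "sby -f busyctr.sby", where the .sby file lists the engine and reads the file with its formal properties enabled.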
  4. @saif91, I like your picture. You've picked a wonderful project for many reasons. From your comments above, though, it sounds like you are stuck in FPGA Hell. Your biggest problem isn't that you don't know where your bug is, though; your biggest problem is in your design process. Desk-checking like this is really the wrong way to debug a design. Why so? Because with desk-checking alone, you aren't guaranteed to ever find your bug. This was one of the reasons why I wrote my beginner's tutorial--to introduce a beginner to a better design process from the very first lesson.

     That said, I found the first bug in your design. It's here:

        PROCESS (clk)
        BEGIN
            IF (clk'EVENT AND clk='1') THEN
                pixel_clk <= NOT pixel_clk;
            END IF;
            -- ...

     This is what's known as a "logic clock". You should never create logic clocks in your design: 1) the tools don't know how to handle them, 2) such clocks rarely get promoted to the global clocking network where they belong, 3) that leads to subtle and uncontrolled timing violations within the design that the tools may (or may not) detect, and 4) it tends to hide/obscure clock-domain-crossing issues. (Have you seen my list of "rules for beginning designers"?)

     You really need to get this to work in a simulation first. Debugging is just *so* much easier in simulation. (It's even easier with formal methods ....) Dan
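     If it helps, here's the usual alternative, sketched in Verilog since that's what my tutorial uses (the same trick works in VHDL; pixel_stb is a name I made up): rather than dividing the clock in logic, generate a clock *enable* and keep everything in the one master clock domain.

        // pixel_stb is high every other cycle of clk, replacing the
        // divided pixel_clk -- everything stays in the clk domain.
        reg pixel_stb = 1'b0;
        always @(posedge clk)
            pixel_stb <= !pixel_stb;

        always @(posedge clk)
        if (pixel_stb)
        begin
            // pixel-rate logic goes here, still clocked by clk
        end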
  5. @chaitusvk, You might find the GPS integration to be a bit of a challenge. It's doable, but it's still a challenge. I was able to synchronize my Arty A7 board to within about 250ns or so of true GPS time, as reported by the Pmod GPS. Suppose we try this:

     • Generate a 32-bit counter (could be more bits, could be less), and add a fixed increment to it on every clock so that it rolls over 1000 times a second. (Don't count from 0 to any number other than 2^N-1--it'll make the next step harder.)
     • Also generate a second signal that is one every time the counter rolls over. (It might be easiest to delay the counter by one, and then use { o_stb, counter } <= counter + increment; together with o_counter <= counter -- sketched below.)
     • Use the top N bits of the counter to generate a sine wave, via either a CORDIC or a table lookup.
     • If using a CORDIC, use the o_stb value from earlier to enable a logic circuit that selects between two magnitude values, using them as input to the CORDIC.
     • If you are just using a table lookup, then multiply the output of the lookup by the same magnitude value.

     That should get you pretty close, no? Now, if I wanted GPS synchronization, I'd start with a core I'd written for that purpose. This core keeps track of the time since the top of the GPS second in a 32-bit counter. If you multiply that counter by 1000, rather than adding an increment to it as we did above, then you should have a 1kHz tone synchronized to GPS. Dan
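     Here's a rough Verilog sketch of the first two steps above (the names are mine, and the increment value assumes a hypothetical 100MHz clock):

        reg  [31:0] counter, o_counter;
        reg         o_stb;

        // 2^32 * 1000 / 100e6 ~= 42950: the counter rolls over about
        // 1000 times per second, and the carry bit lands in o_stb.
        wire [31:0] increment = 32'd42950;

        initial { o_stb, counter } = 0;
        always @(posedge i_clk)
            { o_stb, counter } <= counter + increment;

        always @(posedge i_clk)
            o_counter <= counter;   // delayed copy, aligned with o_stb

        // Table-lookup sine wave from the top 8 bits of the phase;
        // the table contents get loaded elsewhere (e.g. via $readmemh).
        reg [7:0] sinetable [0:255];
        reg [7:0] sine;
        always @(posedge i_clk)
            sine <= sinetable[o_counter[31:24]];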
  6. @TomF, Ok, so ... the flash works, the board works, but the SDK doesn't? This sounds like one of the SDK's files is still stuck thinking you are working with the old design, and that it hasn't gotten updated at all. Hence, I'd return to @jpeyron's advice and recommend that you rebuild the project, to clean out any undocumented values contained in any Vivado/SDK generated files. Dan
  7. @chcollin, If it helps at all, when I implemented HDMI on my own, I built an I2C core to handle the EDID. Yeah, I still used a ZipCPU, but mostly to copy the downstream EDID info to the source port, so that the EDID info I produced would match that of the downstream monitor. To be a bit clearer, my setup was: RPi -> Nexys Video board -> Monitor, and I wanted the RPi to be able to read the monitor's EDID values. Both EDID components are posted online, and you can find them as part of my project here. Dan
  8. @chcollin, I'm curious as to why a MicroBlaze was used for the EDID I2C connection. That seems like a rather heavy-hitting solution for a really simple interface. Are they doing more than just implementing the interface? Adjusting HDMI timing (pixel clock, screen size, etc.) for example? Or does the design use the same settings regardless? Just curious, Dan
  9. @TomF, One other quick question: If you load your design in SPI x1 mode rather than SPI x4 (QSPI) mode, does anything change? The Spansion flash chip has an internal bit within it that needs to be set before it will enable QSPI mode, so this might help. Others on Xilinx's forum(s) have struggled with missing this as well. Dan
  10. @TomF, Ok, if I've got this right: the MicroBlaze CPU is trying to start, it makes a request of some peripheral to get its first instruction, and it is unable to do so? That sounds (again) like a flash issue to me. Remember, the flash device changed, and a lot of configuration needed to change with it. The older Micron flash required a more complicated setup to get into. There's a different number of dummy cycles following the address and before the data. The new flash doesn't start up from power-up the same way, nor does it reset the same way the older flash did. Have you built anything into your design that you can use to query the board and see what's going on? It's typically the first thing I build into any of my designs ... If you want to try some of my own methods here, we could take this off-line and work through it. Dan
  11. @TomF, A couple more quick things to try: Have you double-checked the jumper settings? Have you deleted your project directory and then rebuilt it for the new board? Otherwise, we might need to go back to the beginning and verify that you can load something (anything) onto the board: does blinky work? Dan
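     For reference, "blinky" is about as small as a design gets--something like this Verilog sketch (the pin names are placeholders to match against your XDC file, and the counter width assumes a 100MHz clock):

        module blinky(
            input  wire i_clk,   // 100MHz system clock
            output wire o_led
        );
            // 2^27 / 100e6 ~= 1.3s full period: a visible blink
            reg [26:0] counter = 0;
            always @(posedge i_clk)
                counter <= counter + 1;
            assign o_led = counter[26];
        endmodule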
  12. @chaitusvk, I notice that the on and off periods are ... not on an obvious integer spacing. Is there some greatest common sublength that all of the bits are described with? Is the carrier a multiple of that same sublength? Your goal is to create this signal, right? I think my approach would follow @xc6lx45's: create some form of NCO for the carrier, and then use multiples of it (if possible) for handling the bit periods--see the sketch below. You might find this article, or even this one, valuable back-reading on the topic. Dan
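     In case it helps, here's one possible shape for that in Verilog. This is a sketch only: the two increments assume a hypothetical 100MHz clock, the 32-slot pattern is made up, and a real protocol's timing would need to replace all three parameters.

        module ook_tx #(
            parameter [31:0] CARRIER_INC = 32'd1717987,  // ~40kHz carrier NCO step
            parameter [31:0] BIT_INC     = 32'd4295,     // bit-slot NCO step
            parameter [31:0] PATTERN     = 32'hB38F0000  // made-up bit pattern
        ) (
            input  wire i_clk,
            output wire o_tx
        );
            reg [31:0] carrier_phase = 0, bit_phase = 0;

            always @(posedge i_clk)
                carrier_phase <= carrier_phase + CARRIER_INC; // carrier NCO
            always @(posedge i_clk)
                bit_phase <= bit_phase + BIT_INC;             // bit-period NCO

            wire [31:0] pattern   = PATTERN;
            wire [4:0]  bit_index = bit_phase[31:27];         // 32 equal bit slots
            wire        bit_value = pattern[31 - bit_index];  // MSB first

            // On-off keying: the bit pattern gates a square-wave carrier
            assign o_tx = bit_value & carrier_phase[31];
        endmodule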
  13. The carrier phase alignment question is pretty important. What carrier frequency is this thing supposed to be operating at? And ... is it in the range of what an FPGA can create? Dan
  14. @TomF, The big difference I know of between Arty A7 revs (@jpeyron might correct me here) is the flash device. (There are also different Artix-7 chips now too, so check which one you have.) Does your design do anything with the flash? Have you checked its configuration? Dan
  15. @riste.karashabanov, Will you be reading an image and then outputting it, or making your adjustments to an incoming HDMI feed? Dan
  16. @wfjmueller, The 200MHz clock is required to drive the IDELAYCTRL element. Every design containing an IDELAYE2 element must also instantiate at least one IDELAYCTRL element. Dan
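     For reference, the instantiation itself is tiny. This uses the Xilinx 7-series primitive's actual port names; the surrounding signal names are my own placeholders:

        wire delay_ready;

        IDELAYCTRL idelayctrl_i (
            .REFCLK (i_clk_200mhz), // the 200MHz reference clock
            .RST    (i_reset),      // active-high reset
            .RDY    (delay_ready)   // high once the delay taps are calibrated
        );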
  17. @tuhin, Sounds like you are stuck in FPGA Hell. The easy way out would be to simulate your algorithm and make sure that it works in simulation. Dan
  18. @saif91, In general, it's bad practice (and frowned upon) to ask a question about a product on its competitor's forum. Hex file generation is pretty easy. You can see a discussion of how to do it in lesson 8 of my tutorial, the lesson on memory. You may struggle to build a big enough memory on-chip to hold such an image, though. If you are writing Nios code, you might find it easier to embed a C array containing the image within your C code. Otherwise, you might wish to consider writing the image to your flash yourself, and then copying it to whatever video memory you might have available to you: SDRAM, DDR3 SDRAM, SRAM, etc. Dan
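     The on-chip version, for scale, is just a hex-initialized memory--a Verilog sketch, where the file name, pixel width, and dimensions are all placeholders (and note that 320x240 at 16 bits is already about 1.2Mb of block RAM, which is the "big enough memory" problem above):

        module imgmem(
            input  wire        i_clk,
            input  wire [16:0] i_addr,
            output reg  [15:0] o_pixel
        );
            reg [15:0] frame [0:(320*240)-1];

            // image.hex holds one hexadecimal pixel value per line
            initial $readmemh("image.hex", frame);

            always @(posedge i_clk)
                o_pixel <= frame[i_addr];
        endmodule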
  19. @tuhin, Much as I hate to pour cold water on a good party, are you sure this is the right approach to debugging your design? I've found simulation much easier to work with for this purpose. 1) All the internal registers and values are known, 2) It's a controlled environment, and 3) the equipment is easier to use and set up. Dan
  20. FAT32 with Zybo Z7

    @sgandhi, Welcome to data processing. The sad reality is that text files aren't good for this kind of thing. It's not an FPGA-particular thing, but rather a basic reality: 1) text files tend to take up too much space, and 2) they require processing to get the data into a format usable by an algorithm.

    One way to solve this problem, which I've done in the past with great success, is to rearrange the file so that it's a binary file containing a large homogeneous area of elements all of the same type. In my case, I wanted a file that could be easily ingested (or produced) by MATLAB. I chose a binary format that had a header, followed by an NxM-dimensional matrix of all single-precision floats. (You can choose whatever base type you want, but single-precision floats were useful for my application.) The header started with three fields: 1) a short marker to declare the file type--I used four capital letters for this; 2) the number of columns in the table; and 3) the offset to the start of the table. This allowed me to place information about the data in further header fields, while still allowing the processor to skip directly from the beginning of the header to the data in question. Further, because the data was all of the same type, I could just about copy it directly into memory without doing any transformations, and then operate on it there. It did help that the data was produced on a system with the same endianness as the system that read it ... Dan
  21. FAT32 with Zybo Z7

    @sgandhi, Have you tried attaching some form of ILA or scope to the SD CMD wire and seeing what's taking place there? Every part of the interaction should be visible by just snooping that wire and its associated clock wire. Dan
  22. FPGA

    @gummadi Teja, An FPGA is a very different structure from either a microprocessor or a microcontroller. An FPGA is like programmable hardware. A microprocessor is a CPU. An FPGA can become a CPU (albeit not a very fast one), but a CPU can never become an FPGA. I found this video valuable when trying to describe what an FPGA is. Dan
  23. GPS Pmod

    @cepwin, There's a real easy way to debug whether or not you are getting a fix. Remove the FPGA design, and replace it with a pass-through from the GPS UART transmit pin to the FT2232 UART receive pin. You can then use your favorite terminal program (mine is minicom; some like TeraTerm) to examine the NMEA stream produced by the GPS device. It's typically 9600 Baud, 8 data bits, no parity, and one stop bit. It's also pseudo-human-readable--line upon line of CSVs--so you should then be able to tell whether or not you are getting lock. I see no reason why you wouldn't be getting lock from your upstairs bedroom. Also, for your security, you probably don't want to paste the NMEA stream coming out of the device here for discussion--since it may well reveal the coordinates of your bedroom. Dan
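    That pass-through really is the whole design--something like this Verilog sketch (the port names are placeholders; match them to the pins in your XDC file):

        module gpspass(
            input  wire i_gps_txd,    // TXD pin of the GPS Pmod
            output wire o_ft2232_rxd  // receive pin of the FT2232 bridge
        );
            assign o_ft2232_rxd = i_gps_txd;
        endmodule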
  24. Custom IP

    @PoojaN, You're not the first person who has asked this. If you just want to blink an LED, then I'd recommend a different approach that avoids all the pain with AXI in the first place. (You don't need AXI ...) If you want to start interacting with AXI cores, then you'll need to learn AXI. Sadly, this isn't as simple as it sounds.

    Xilinx picked the AXI bus to connect all of their components with. This may have something to do with their ARM integration, since (if I understand correctly) AXI is an ARM creation. AXI is not a simple bus to work with. Unlike Wishbone, it has five channels associated with it, each of which can stall: the read address, write address, write data, read data, and write response channels. One bus failure, and your device will lock up. In my experience, using an ARM+FPGA chip, lockups could only be fixed by cycling the power, leaving you ever wondering what had caused the problem. Part of the problem is that the AXI standard has no way of recovering following a dropped response other than a total system reset. As I've implemented Wishbone, you can just adjust one wire (the cycle line--but that's another story) and start over. You can even use a timeout to clear the bus if a peripheral has not responded within an expected period of time. Not so with AXI.

    AXI is so difficult to work with that not even Xilinx could get it right. (See the links above.) When I first discovered these bugs, I wondered why no one had found them before. For example, two writes in a row would lose a response and lock up the bus if ever there was the slightest amount of backpressure on the return channel. (Something Wishbone doesn't have to deal with, since there's no way to stall a Wishbone acknowledgement.) It would seem as though very few individuals ever simulated their cores with backpressure (i.e., with either BREADY or RREADY low), and so they never noticed these bugs. Similarly, some configurations of the interconnect might trigger the bugs while others wouldn't. Imagine adjusting the glue that holds your design together, only to find your design starts failing. What would you blame? The interconnect, right? When in fact it was the demonstration core logic that everyone was copying that was at fault.

    I've now fielded several questions in the last several months alone on Xilinx's forums from users who've struggled with these bugs. If you do searches, you'll discover that folks have been struggling with these sorts of problems ever since Xilinx started using AXI. In one recent post, a software engineer wrote that his FPGA engineer had left, leaving them with a "working" design. He then adjusted the software within the design, and the whole design now froze any time he tried to write to their special IP core twice in succession. I'm hoping Xilinx will fix these bugs soon. I haven't checked their latest release since reporting them, but I do expect them to be fixed in the near future.

    It's not just Xilinx either. I'm currently verifying the (ASIC) soft core of a major (unnamed) vendor. Much to my surprise, despite a team of highly paid professional engineers working to produce this amazingly complex core, and despite the fact that they created a simplified subset of the AXI interface standard to work with ... they still didn't get the AXI interface right.

    Realizing how difficult this was, I tried to simplify the task by creating a couple of cores: one showing how to build a bug-free AXI-lite slave (link above), another showing how to build a bug-free AXI slave (link above again). I also shared an AXI bridge implementation that, if you place your core downstream of it, guarantees you'll meet the AXI protocol--even if it slows you down a touch. I also shared the code for verifying that an AXI-lite component works--you are free to try it out yourself to know whether your core still works after changing it. If you like using Wishbone, I've posted an AXI-lite to Wishbone bridge, or even a Wishbone to AXI bridge in case you want to access your DRAM memory. I also think you'll find that all of these cores, save perhaps the bus-fault isolator core, have better performance than Xilinx's logic ever had. Whether or not you use these options (or give up on AXI as I've tried to do) ... well, that's up to you.

    Forget what the sales brochures tell you: we aren't playing with Legos here. There's more required to hook things together than just plugging them into each other--especially if you want something that works reliably when you are done. Just want something simple? Learn Verilog or VHDL. At least then you'll be the one responsible for your own bugs. Dan
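    To make the backpressure point concrete, here's the shape of a write path that survives BREADY being held low--a Verilog sketch in the spirit of the cores above, not a complete or verified slave (BRESP is tied to OKAY, and the actual register write is omitted):

        module axilwr(
            input  wire        S_AXI_ACLK, S_AXI_ARESETN,
            input  wire        S_AXI_AWVALID,
            output wire        S_AXI_AWREADY,
            input  wire [3:0]  S_AXI_AWADDR,
            input  wire        S_AXI_WVALID,
            output wire        S_AXI_WREADY,
            input  wire [31:0] S_AXI_WDATA,
            output reg         S_AXI_BVALID,
            input  wire        S_AXI_BREADY,
            output wire [1:0]  S_AXI_BRESP
        );
            // Accept an address and data together, but *only* if any
            // prior response has been accepted, or is being accepted
            // this cycle.  Skipping this check is how a second write
            // under backpressure loses its response and hangs the bus.
            wire write_ok = S_AXI_AWVALID && S_AXI_WVALID
                            && (!S_AXI_BVALID || S_AXI_BREADY);

            assign S_AXI_AWREADY = write_ok;
            assign S_AXI_WREADY  = write_ok;

            initial S_AXI_BVALID = 1'b0;
            always @(posedge S_AXI_ACLK)
            if (!S_AXI_ARESETN)
                S_AXI_BVALID <= 1'b0;
            else if (write_ok)
                S_AXI_BVALID <= 1'b1;   // a response is now owed
            else if (S_AXI_BREADY)
                S_AXI_BVALID <= 1'b0;   // master has accepted it

            assign S_AXI_BRESP = 2'b00; // always answer OKAY
        endmodule

    Note that BVALID, once raised, stays raised until the master accepts it--that's the part the broken demo logic got wrong.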
  25. Custom IP

    @PoojaN, Welcome to the wonderful world of graphical design. I avoid it like the plague, since I worry that it conceals key details from the beginning designer--but that's another story for another day. You can read about how to go about creating an AXI component here. You can then tear the guts out of that component and start over with something that works (Xilinx's was broken last I checked), perhaps something like this one. If AXI is too complicated for you, feel free to try AXI-lite. Again, Xilinx's demo AXI-lite core is broken, but you can find a non-broken one here that you can use. Either way, the process is similar. As for the GPIO core ... I think it's intended to connect to external ports only, with the feature that the I/Os can be redirected on command to be either inputs or outputs. Personally? I wouldn't use it. The interface offered by the two cores linked above would be superior, if you can use it. What do I mean by superior? I simply mean that there's been more than one person disappointed by how fast they can toggle an I/O from a CPU: the AXI GPIO core takes 5 clocks just to toggle an LED, in addition to any bus delays you might struggle with. Dan