D@n

Members
  • Content Count: 1913
  • Days Won: 136

D@n last won the day on August 29

D@n had the most liked content!

About D@n

  • Rank: Prolific Poster

Contact Methods

  • Website URL: http://zipcpu.com

Profile Information

  • Gender: Not Telling
  • Interests: Building a resource-efficient CPU, the ZipCPU!


  1. @chaitusvk, Here's my puzzle --- while I see several (common) bugs in your core (Vivado's AXI-lite demo core has known bugs in it), bugs I'd like to discuss and share, I haven't yet found the bug causing your problem. Let's try this ... set reg_data_out = -1 (independent of the axi_araddr), and let's just verify that you can read from your core in the first place. I suspect a couple of things. One possibility is that Vivado hasn't noticed you updated the design with the adder, and so it's still building the older design. (I think "run design automation" might help there ...) Another possibility is that you are accessing your core via the wrong address. If you can set an LED from the core on any write (if (S_AXI_AWVALID && S_AXI_WVALID) led <= !led;), that would also help determine the same thing. (There's a sketch of this below.) Dan
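
     To be a bit more explicit, here's a minimal sketch of that write-detect idea, assuming the S_AXI_* handshake and clock names from Vivado's demo core (the led output itself is a made-up debug port):

         // Debug aid: toggle an LED on every accepted AXI-lite write,
         // so a write to *any* address in the core becomes visible.
         reg led = 1'b0;
         always @(posedge S_AXI_ACLK)
         if (S_AXI_AWVALID && S_AXI_WVALID)
             led <= !led;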
  2. @chaitusvk, Try reading back your adder inputs after writing them. xil_printf("Inputs to adder: %08x\n", Xil_In32(XPAR_MYIP_ADDER_0_S00_AXI_BASEADDR)); What do you get there? Dan
  3. @chaitusvk, No, I got that much. What values are you adding that are giving you the wrong answer? When you add 1+1, you should get 2, right? When you add 7+7 you should get 14, right? When you add 3+1 you should get 4. Are you getting these answers? Similarly, when you add 0x53 + 1 you should get 84 (i.e., 0x54). Is this the result you are getting? Dan
  4. @chaitusvk, You haven't said what your problem was. What is it your design does that makes you believe it doesn't work? Dan
  5. D@n

    Basys 3 implemented design

     @Chirag, I currently only have one digital design tutorial. (Digilent has more.) It does not go into "how and why to use the (Xilinx) primitives". The problem with using the Xilinx primitives (if you don't need them) is that it renders your logic specific to a Xilinx device, so if you ever switch to a Lattice, Intel, or ASIC design flow, or even an open source simulator, you'd need to start over with a new set of primitives. The more you can do without them, the better. That said, I do use vendor-specific I/O primitives--especially when I need high-speed I/Os. As for enabling DSPs in your design, I'd recommend using a programming construct (from Verilog) that looks like:

         always @(posedge clk)
         if (enable)
             result <= ina * inb;

     Anything more than this may not necessarily map to a DSP. I'd also strongly discourage you from directly instantiating the DSP primitive: the interface isn't necessarily obvious, and the options are ... too numerous and confusing (IMHO). If you can instantiate the DSP with a simple statement, like the one above (a fuller sketch follows below), your code will be easier to read and it will likely work with other tool chains as well. Dan
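
     For context, here's a self-contained sketch of how that construct might sit in a module (the module and port names are illustrative, not from any particular core):

         // A registered, enabled multiply; synthesis tools will
         // typically infer a DSP slice from this on their own.
         module pipemul #(parameter W = 16) (
             input  wire                  clk,
             input  wire                  enable,
             input  wire signed [W-1:0]   ina, inb,
             output reg  signed [2*W-1:0] result
         );
             always @(posedge clk)
             if (enable)
                 result <= ina * inb;
         endmodule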
  6. D@n

    Basys 3 implemented design

     @Chirag, What you are looking at is an image of the underlying silicon die within the chip, showing the physical layout of its various parts. The six big squares are clock regions, IIRC. You can save some power and some wires if you can restrict a clock to a given region. That said, I tend to run all my designs with one giant system clock, so this hasn't helped me much.

     In your example above, the LUTs being used are in the middle row and colored light blue. There's more there than just LUTs, but that's where the LUTs are. What more is there? I can't be certain, but examples might be memory and DSP slices. If you zoom in on this area you'll see sub-pictures showing four LUTs to a slice, the FFs in the slice, and possibly the various mux-7 and mux-8 components as well. (It's been a while since I've done so.)

     The I/O banks are all on the edges of the design. You can use them via the various IBUF, OBUF, IOBUF, IDDR, ODDR, ISERDESE, OSERDESE, etc. primitives. A wire going to or from an I/O port will naturally use one of these, or they can be manually instantiated as well for more control of what takes place. (There's an example of this below.)

     Yes, you can change cell locations manually, but knowing what you are doing well enough to be successful at it can be a real challenge. I know some folks who have hand-placed their designs to great effect. I also know of at least one open source project to do the same. Indeed, at one time I tried to build a placer to map logic to FPGA elements--my efforts didn't work out so well. That said, I know my own limits and don't try to move components around myself. Dan
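
     For illustration, here's what manually instantiating two of those I/O primitives might look like (a sketch only, assuming a Xilinx 7-series part; the signal names are made up):

         // Explicit input buffer, instead of letting the tools infer
         // one from a plain top-level port assignment.
         wire i_pin_buffered;
         IBUF ibuf_i (.I(i_pin), .O(i_pin_buffered));

         // DDR output register: d_rise goes out on the rising edge
         // of clk, d_fall on the falling edge.
         ODDR #(.DDR_CLK_EDGE("SAME_EDGE")) oddr_i (
             .Q(o_pin), .C(clk), .CE(1'b1),
             .D1(d_rise), .D2(d_fall), .R(1'b0), .S(1'b0));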
  7. @Rockytop, I know @jpeyron helps customers with this sort of design methodology. (I don't normally use it. Mentioning him like that will help bring this to his attention ...) Dan
  8. Does the "valid" path have spaces in it? As I recall, Vivado had a problem with spaces in board file paths. Personally, "AXI-GPIO" is not where I'd recommend you begin. I'd recommend you begin here, but that's another story entirely. The problem with starting at AXI GPIO is ... there's so much background you don't yet have for debugging things when you start there. To provide a perspective, you might be curious to know that ... I've never used a "board file". Dan
  9. @Rockytop, Let me echo @xc6lx45's comment that if you can't pinpoint the bug, then your design methodology is wrong. Indeed, not knowing where the bug is within your design is a common FPGA problem, and there are several tools to keep it from happening.

     The most obvious tool is "desk checking". When you desk check your code, you read it over again to yourself to see what's wrong with it. If you can find a bug, cool. The problem with desk checking, though, is that you can't always find the problem when doing so.

     The next tool is "blinky"; this is the one @xc6lx45 has recommended. It's a great beginner tool, and will often help you find a lot of those early bugs. It's so powerful that the first thing I do when I bring up a new board is to get an LED to blink. You can then use that LED and whatever pattern it blinks to find bugs in your design. The problem with "blinky" is that you only get a single bit of information, and that single bit needs to be slowed down to the point where you can recognize it. It doesn't scale well.

     The easiest way to debug a design is by using the simulator. Sure, it's slow. Sure, it takes a bit to set up. But a trace from a simulator will give you access to every single wire within your design over the course of a simulation run. If you've come from the software world, you may find that (in many ways) it's better than a debugger. That said, traces can get rather long, and finding bugs and tracing them back to their source with a trace visualization tool can be a real brain twister. The biggest downside to the simulator is that it doesn't necessarily simulate any hardware your design might be connected to. Your simulation will only be as good as your model of the hardware you are working with. Still, any time I can get a bug to appear in the simulator, I consider it a success. A painful success, but a success nonetheless.

     Only a bit harder than the simulator are internal logic analyzers (ILAs). These are very powerful debugging tools that you should also learn to use early on. An internal logic analyzer is a soft core that you add to your design--i.e., you pay for it in terms of fabric on your chip that cannot be used for your user design. Because your design has to move over to make room for the ILA, you can't record everything. When using an ILA, it's important to pick the right signals to capture, the right trigger to start the capture, and the right size of the capture. Also, because the ILA is added to your design, if you don't pick the right signals or the right trigger, etc., you'll have to rebuild the design to try again.

     These are the traditional tools of the trade in the FPGA world. Using these tools, you should be able to home in quickly on whatever bug you might have and see why you are having it.

     There's also a newer tool available to the FPGA design engineer: formal verification. There's an open formal verification tool called SymbiYosys that can formally verify Verilog designs. (There's also a version that does both SystemVerilog and VHDL; it's just not free.) Formal is similar to simulation in that it's a tool you can run on your desktop, and it will give you access to every register within your design. It's different in that, unlike simulation, it doesn't examine a single trace through your design. Rather, it examines every possible set of inputs to your design in a breadth-first search for a bug. As a result, it finds more bugs than simulation does. Indeed, simulation can be shown to "mask" bugs, whereas formal tends to find them anyway. (A toy example follows at the end of this post.)

     The problem with the formal tools is also their great strength: because they work on a breadth-first search, they have (roughly) exponential complexity. This limits the size of the design you can apply them to, and the length of time you can apply the tool for. Even with that limitation, however, formal is known for finding all kinds of bugs simulation misses. You can read more about formal verification on my blog, zipcpu.com. You can also try out my tutorial--it goes through teaching simulation and formal verification from square one (this is a wire), all the way through serial ports.

     Hope this helps! Dan
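
     To make the formal idea concrete, here's a made-up toy example of the kind of Verilog a tool like SymbiYosys can check (the module and its bound are invented for illustration):

         // A saturating counter, with a formal property claiming it
         // never exceeds its bound.  The formal tool searches every
         // reachable state for a violation, not just one trace.
         module counter(input wire i_clk, input wire i_reset,
                 output reg [3:0] o_count);
             initial o_count = 0;
             always @(posedge i_clk)
             if (i_reset)
                 o_count <= 0;
             else if (o_count < 4'd10)
                 o_count <= o_count + 1;

         `ifdef FORMAL
             always @(*)
                 assert(o_count <= 4'd10);
         `endif
         endmodule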
  10. @saif91, I like your picture. You've picked a wonderful choice for many reasons. From your comments above, though, it sounds like you are stuck in FPGA Hell. Your biggest problem is not that you don't know where your bug is, though; your biggest problem is in your design process. Desk-checking like this is really the wrong way to debug a design. Why so? Because with desk checking alone, you aren't guaranteed to ever find your bug. This was one of the reasons why I wrote my beginner's tutorial--to introduce a beginner to a better design process from the very first lesson. That said, I found the first bug in your design:

          PROCESS (clk)
          BEGIN
              IF (clk'EVENT AND clk='1') THEN
                  pixel_clk <= NOT pixel_clk;
              END IF;
              -- ...
          END PROCESS;

      This is what's known as a "logic clock". You should never create logic clocks in your design: 1) the tools don't know how to handle them, 2) such clocks rarely get promoted to the global clocking network where they belong, 3) this leads to subtle and uncontrolled timing violations within the design that the tools may (or may not) detect, and 4) it tends to hide/obscure clock-domain crossing issues. (Have you seen my list of "rules for beginning designers"? There's a sketch of the usual fix below.) You really need to get this to work in a simulation first. Debugging is just *so* much easier in simulation. (It's even easier with formal methods ....) Dan
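
      The usual fix, sketched in Verilog (assuming, for illustration, a system clock running at twice the pixel rate): generate a clock *enable* rather than a new clock, and leave every flip-flop on the one system clock.

          // Pixel-rate clock enable: high every other system clock.
          reg pixel_ce = 1'b0;
          always @(posedge clk)
              pixel_ce <= !pixel_ce;

          // Pixel logic stays in the system clock domain, qualified
          // by the enable, so no logic-generated clock is created.
          always @(posedge clk)
          if (pixel_ce)
          begin
              // ... pixel-rate logic here ...
          end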
  11. @chaitusvk, You might find the GPS integration to be a bit of a challenge. It's doable, but it will be a bit of a challenge. I was able to synchronize my Arty A7 board to within about 250ns or so of true GPS time, as reported by the PMod GPS. Suppose we try this:
      • Generate a 32-bit counter (could be more bits, could be less), and increment it so that it rolls over 1000 times a second. (Don't count from 0 to any number other than 2^N-1--it'll make the next step harder.)
      • Also, generate a second signal that is one every time the counter is about to roll over. (Might be easier to delay the counter by one, and then use { o_stb, counter } <= counter + increment; together with o_counter <= counter.)
      • Use the top N bits of your counter to generate a sine wave via either a CORDIC or a table lookup.
      • If using a CORDIC, use the o_stb value from earlier to enable a logic circuit that selects between two magnitude values, using them as input to the CORDIC.
      • If you are just using a table lookup, then multiply the output of the lookup by the same magnitude value.
      That should get you pretty close, no? (There's a sketch of the counter below.) Now, if I wanted GPS synchronization, I'd start with a core I'd written for that purpose. This core keeps track of the timing since the top of a GPS second in a 32-bit counter (counting in units of 2^-32 seconds). If you multiply that counter by 1000 rather than adding the increment to it as we did above, then you should have a 1kHz tone synchronized to GPS. Dan
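
      Here's a minimal sketch of that counter-and-strobe idea (the 100MHz clock rate and the names are assumptions for illustration):

          // 32-bit phase accumulator rolling over ~1000 times/second.
          // With a 100MHz clock: increment = 2^32 * 1000 / 100e6 ~= 42950.
          localparam [31:0] INCREMENT = 32'd42950;

          reg [31:0] counter = 0;
          reg        o_stb = 1'b0;

          // The concatenation captures the carry out of the addition,
          // so o_stb is high for one cycle at each rollover.
          always @(posedge i_clk)
              { o_stb, counter } <= counter + INCREMENT;

          // The top N bits of counter can then index a sine table,
          // or feed the phase input of a CORDIC.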
  12. @TomF, Ok, so ... the flash works, the board works, but the SDK doesn't? This sounds like one of the SDK's files is still stuck thinking you are working with the old design, and hasn't gotten updated at all. Hence, I'd return to @jpeyron's advice and recommend that you rebuild the project to clean out any undocumented values contained in any Vivado/SDK-generated files. Dan
  13. @chcollin, If it helps at all, when I implemented HDMI on my own, I built an I2C core to handle the EDID. Yeah, I still used a ZipCPU, but mostly to copy the downstream EDID info to the source port, so that the EDID info I produced would match that of the downstream monitor. To be a bit clearer, my setup was: RPi -> Nexys Video board -> Monitor, and I wanted the RPi to be able to read the monitor's EDID values. Both EDID components are posted online, and you can find them as part of my project here. Dan
  14. @chcollin, I'm curious as to why a MicroBlaze was used for the EDID I2C connection. That seems like a rather heavy-hitting solution for a really simple interface. Are they doing more than just implementing the interface? Adjusting HDMI timing (pixel clock, screen size, etc.) for example? Or does the design use the same settings regardless? Just curious, Dan
  15. @TomF, One other quick question: If you load your design in SPI x1 mode rather than SPI x4 (QSPI) mode, does anything change? The Spansion flash chip has an internal bit within it that needs to be set before it will enable QSPI mode, so this might help. Others on Xilinx's forum(s) have struggled with missing this as well. Dan