Everything posted by D@n

  1. @bill maney, You shouldn't need to place the CCLK_0 pin in your XDC file at all, nor should it be mentioned in the port list of your top level. It's driven implicitly by the STARTUPE2 usage that already exists. Dan
  2. @kringg, Two insights: 1) Your resetn signal should be registered. A combinatorial input there might cause the MIG to reset when you aren't expecting it. 2) To avoid crossing clock domains, your user interface signals (write/read address, data, etc.) should all be in the ui_clk domain and not the clk100 domain. Dan
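     A minimal sketch of that first point, assuming an active-low board reset pin and a free-running clk100 (all signal names below are placeholders, not from your design):

         // Double-register the external reset in the clk100 domain so the
         // MIG never sees a combinatorial glitch on its reset input.
         reg [1:0] resetn_pipe = 2'b00;  // assert reset out of configuration
         always @(posedge clk100)
             resetn_pipe <= { resetn_pipe[0], resetn_pin };
         wire sys_resetn = resetn_pipe[1];  // feed this to the MIG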
  3. @zygot, The "no buffer" setting is part of the MIG setup. It controls whether you are passing raw clock signals straight from the input pins to the core, in which case the core should instantiate a buffer itself, or whether the clock has already been through a buffer, in which case the MIG IP doesn't need to instantiate one. Since I've always passed my clock signals from a PLL (under my own control) to the MIG, I've always had to set the "no buffer" flag. Incidentally, if you don't set "no buffer", it wants to know which pins to connect the clocks to directly. It's not the clearest interface out there, but it's what we've got to work with for the time being. Dan
  4. @bill maney, Yes, there's a way to do it, but you'll have to use the STARTUPE2 primitive. Basically, that pin is "special", because it needs to be used to load the FPGA in the first place. Since special hardware is connected to it, you can't connect to it through logic quite as easily as you might like. That will also prevent you from using other I/O primitives on the pin like OSERDES, ODDR, etc. (they don't exist in the actual hardware for this pin). Here's an example of what driving that pin through the STARTUPE2 primitive might look like. You might find this example of a home-grown QSPI flash controller useful as well. Dan
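     For reference, a minimal sketch of driving that pin through STARTUPE2 on a 7-series part (sck is assumed to be the SPI clock generated by your own flash controller):

         // Drive the dedicated CCLK pin from fabric logic via STARTUPE2.
         STARTUPE2 #(
             .PROG_USR("FALSE")
         ) startup_i (
             .CLK(1'b0),        // unused configuration clock input
             .GSR(1'b0),        // don't pulse the global set/reset
             .GTS(1'b0),        // don't tri-state the I/O
             .KEYCLEARB(1'b0),
             .PACK(1'b0),
             .USRCCLKO(sck),    // this is what actually drives CCLK
             .USRCCLKTS(1'b0),  // 0 = actively drive the CCLK pin
             .USRDONEO(1'b1),
             .USRDONETS(1'b1),
             .CFGCLK(),         // unused status outputs
             .CFGMCLK(),
             .EOS(),
             .PREQ()
         );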
  5. @wyingst, Thanks for the clarification! I was afraid you'd vanish and not complete the story. You are using the native interface to the core. I haven't used that interface (yet), so I don't have good working examples of how it works. All my stuff uses the bloated AXI interface. If you or someone else manages to build a good example of how that native interface works or should work, I'd love to review it. I just find the specification of the native interface to be light on details. Dan
  6. For all the forum users who might find this post ... @wyingst was able to get the DDR3 SDRAM working. Some key lessons learned:
     - The reset into the MIG core needs to be initially set and then synchronously released with the clock input to the core.
     - External resets, such as those from buttons, need to be synchronized before use. Even better, they should be debounced, but debouncing wasn't (yet) the issue.
     - The design that uses the MIG core needs to use the clock and synchronous reset outputs from the core for its logic, not the clock that was sent into the core. (This wasn't yet the bug, but it was going to be next.)

     Dan
  7. @wyingst, No, that's not a proper reset synchronizer, and you'll want to use clk100. If I remember right, the reset into the core also resets the PLLs that then generate the system clocks ... so you can't synchronize the reset using a clock that the reset itself will take away. You can find an example of a VHDL asynchronous reset synchronizer among the Sunburst Design papers by Clifford Cummings. Check out pages 16-17 of this paper for a good example. Dan
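     Cummings's paper shows this in VHDL; a minimal Verilog equivalent of the same assert-asynchronously, release-synchronously idea might look like the following (signal names are my own assumptions):

         // Reset asserts immediately, but only releases on a clk100 edge.
         module rst_sync (
             input  wire clk100,      // free-running input clock
             input  wire resetn_pin,  // active-low board/button reset
             output wire sys_resetn   // synchronized reset for the MIG
         );
             reg [1:0] sync;
             always @(posedge clk100 or negedge resetn_pin)
                 if (!resetn_pin)
                     sync <= 2'b00;                // asynchronous assert
                 else
                     sync <= { sync[0], 1'b1 };    // synchronous release
             assign sys_resetn = sync[1];
         endmodule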
  8. @wyingst, Your MIG configuration looks good. I didn't check the actual pinout --- but I'll trust you on that for now. A couple of comments: It looks like you are pulling the system reset into your design from an external port, but without guaranteeing that it is a) valid for at least a couple of clock ticks, and b) released synchronously. Xilinx I/O primitives can be finicky about this. While this isn't your current problem, it will be your next problem: when using the MIG controller, you need to use the clock that comes out of it for your design. (Not for the reset in the last step--lest you get yourself into a loop.) So, your logic should run off of ui_clk and ui_clk_sync_rst rather than clk100 and reset_n. Hope that gets you closer, Dan
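     As a sketch of that last point (app_en/app_rdy are the MIG's user-interface handshake; start_request is a placeholder for whatever triggers your transactions):

         // User logic clocked and reset by the MIG's outputs, not by
         // clk100/reset_n.
         reg app_en;
         always @(posedge ui_clk)
             if (ui_clk_sync_rst)
                 app_en <= 1'b0;
             else if (app_en && app_rdy)
                 app_en <= 1'b0;     // command accepted, drop the request
             else if (start_request)
                 app_en <= 1'b1;     // issue the next read/write command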
  9. You should be able to program the Basys3 directly from Vivado using the hardware manager. Dan
  10. @PapaMike, @zygot has just reminded me that there's less space in between our views than you might imagine. I actually have a mental note left over from our last discussion that I want to sit down with him and learn some more on this issue, so I really need to wait for that conversation. Perhaps once this virus passes we'll have a chance to share a glass at some table again. Either way, we're both grumpy old men who are just completely tickled pink that someone's willing to listen to us. "Verification is difficult to do well" is something I'm reminded of every time I end up chasing a design bug in hardware. It happens more often than I want to admit, although I've found that for some strange reason folks like listening to my stories of all the bugs I've ended up finding after the fact. Perhaps we should just chalk it up to the fact that kids all like to hear good "war" stories from those who've gone before them. Dan
  11. @PapaMike, There's a whole world of open source support out there, to include simulators. If you really want to stick with VHDL, you might want to look into ghdl. Beware, though: there are a lot of bugs that won't manifest in simulation at all! You can read about some of the ones I've missed here, or even here again. For these reasons, I've switched to using formal verification for most of my (initial) bench testing needs. There are free formal verification tools out there, but the open source VHDL support is still pretty new. I use Verilog, and I've loved it. Oh, and if you use Verilog, you can get access to the fastest simulator on the market. (Just don't tell @zygot I'm recommending a cycle-based simulator.) Here's my favorite list of (mostly) open source tools that might help get you off the ground. Dan
  12. 🤣 🤣 An Artix-7 35T device has an amazing amount of logic on it--as long as you don't fill it up with 1) the DDR3 memory controller, 2) a MicroBlaze CPU, and 3) an AXI interconnect, to the point where there's nothing left to play with. Logic bloat is a real thing, and many of the example demonstration designs suffer from it. While I used the DDR3 memory controller in the designs I have that require a DDR3 interface, it does require a sizable amount of the programming logic on the device to implement. (20-30% perhaps? I don't have the number in front of me.) Dan
  13. @PapaMike, Welcome to the forums, and welcome to digital design! As @zygot mentioned, I have done a lot with an Arty board. I also have the Nexys Video, and I'd recommend it for anyone interested in video.

     I also share @zygot's criticisms of the SOC (Zynq) chips. The tools just aren't really there to support them yet, IMHO. 1) They require that you interact with your design components using a very complicated bus, one so complicated that both Xilinx and Intel messed up their demos. (As of 2019.2, the automatically generated demos are still broken too.) 2) These chips very much depend upon on-board software/hardware integration. Sure, I can understand why this might be a good thing--you can do more faster/better/cheaper if you can put the CPU on the same hardware as your logic, and it gives you an awesome high bandwidth link between the programming logic and the CPU. It's also a very difficult link to debug. Indeed, most of the help requests I've examined are from individuals trying to figure out how to debug a design where all the key details are under the hood. Worse, as I just mentioned, 3) Xilinx's board design methodology tends to hide the bugs within your design in places where they become much harder to find, identify, and fix.

     I've also slowly been building a blog based upon the premise that hardware debugging is hard, and very different from software debugging. From what I've discovered, there are a lot of teaching materials out there that will teach you how to design hardware in VHDL or Verilog, but very few that will teach you how to find or fix the bugs in your design. I've therefore tried my hand at writing a tutorial that goes over not just design, but two basic debugging techniques as well: formal methods and simulation, while walking the student through a series of basic designs. So far, the tutorial has been well received enough that I'm now working on a second tutorial on how to deal with designs that interact with some kind of bus. Most of that tutorial has focused on the Wishbone bus so far, but there is a lesson on AXI-lite, and I expect more to come.

     As for your PDP work, did you know that there are some PDP simulators on OpenCores? Last I remember, they'd been using the Artix-7 to emulate them. There's also a very lively FPGA gaming community that's been resurrecting the ancient hardware game consoles. I just haven't (yet) had the opportunity to get involved in that world--even though it sounds like a lot of fun. I'd love to introduce my kids to Donkey Kong, Centipede, Crazy Climber, Moon Lander, and so much more ...

     You should be aware there are other forums out there. I really like Digilent's forums for beginners, but Xilinx has a forum as well. The questions there tend to be more complex than those here, and the response rate seems to be much lower as well. There's also an FPGA Reddit that may be worth looking into, and even an ##fpga channel on Freenode's IRC server that's fun to pay attention to. There are often very good live discussions there, and plenty of people asking for and receiving help in real time. (Not always; sometimes you might have to wait 24hrs for a response ...) There's also a lot of active development going into building open source tools for synthesizing, placing and routing, and loading designs, so that's something you may wish to keep an eye on as well. I have yet to use those methods on Xilinx devices, but so far they've worked pretty well for me on Lattice devices.

     Either way, welcome to the forums, and I look forward to hearing good things about how your design work is going. Dan
  14. @nkraemer, There's an ugly bug in the Xilinx Ethernet-Lite core that could show up as an intermittent failure under heavy usage. Is this the core you are using? Dan
  15. @Jonas_C, Welcome! The only "stupid" questions are the ones not asked. As for strange grammar, if you are going through a translator then know that you aren't alone. Again, welcome to the forums! Dan
  16. @RJ16, Let me try this again, then. Using Euler's formula, we know that cos(2 pi f t) = (1/2) e^(j 2 pi f t) + (1/2) e^(-j 2 pi f t), and sin(2 pi f t) = (1/(2j)) e^(j 2 pi f t) - (1/(2j)) e^(-j 2 pi f t). The FFT reports its results in frequency bins, where bin k corresponds to the frequency f = k * fs / N (fs being the sample rate). To map from cos and sin to complex exponentials is simply to do the mapping just described, on paper (not in the FPGA), to get an understanding of what the FPGA is supposed to return to you.

     I suppose you could. I've never done so. If you did so on an N-point FFT, then log_2(N) should be the number of bits in TUSER.

     Let's talk bits now for a moment. If you have B real and B imaginary bits going into this FFT, then IIUC you should have B real and B imaginary bits coming out. At full bit width, you should have B + log_2(N)/2 bits coming out--but most implementations truncate this and scale a bit every other stage to keep it so that you have B bits going in and B bits coming out. That means that there'll be a scale factor within your FFT that you may need to know about. But, assuming you don't have any overflows and you have the scaling right, then B bits come out of the FFT for each real and each imaginary component. This bit width is independent of TUSER. If you square these bits in order to form a magnitude, you'll end up with 2B bits (for the real value squared) and 2B bits (for the imaginary value squared). Adding these together to actually get your magnitude squared should give you 2B+1 bits. The number of bits in your magnitude will be completely independent of the number of TUSER bits, although if you aren't careful the two will be misaligned.

     Dumbness? You sound quite willing to learn. That's not dumb. Beginner, yes; dumb, no. I would recommend abandoning block design as soon as you can. In my own humble opinion, it doesn't translate well to real design and you just learn a lot of bad habits along the way. If you are at all interested, I'd recommend you work through an online tutorial--this one, for example--and learn sooner rather than later. Dan
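     As a sketch of that bit-width accounting (B = 16 is my assumption here; TUSER plays no part in this arithmetic):

         // Squaring B-bit components gives 2B bits each; summing the two
         // squares needs 2B+1 bits, exactly as counted above.
         module fft_mag #(parameter B = 16) (
             input  wire signed [B-1:0] re, im,  // FFT output components
             output wire [2*B:0] mag_sq          // |X|^2, 2B+1 bits wide
         );
             wire [2*B-1:0] re_sq = re * re;     // non-negative square
             wire [2*B-1:0] im_sq = im * im;
             assign mag_sq = {1'b0, re_sq} + {1'b0, im_sq};
         endmodule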
  17. @RJ16, To know the effect on the FFT magnitude of using just a sine vs a sine plus a cosine, try mapping the inputs to complex exponentials first. For complex exponentials of equivalent magnitudes, the magnitudes at the output of the FFT *should* be the same. Be careful of overflow in this calculation. With only 8-bits, you don't have a lot to work with. I don't normally use Xilinx's slice primitive. However, if you wanted to separate out the bits from an FFT output, I'd be surprised if you didn't have to specify which bits of the slice you were actually interested in. I'm not sure I follow your third question. Why would TUSER have any useful information in it at all? Here are some other things you might be interested in though. Here's an approximate log-magnitude block I built some time ago. It calculates FFT magnitude first. I recently wrote about debugging AXI streams, such as the FFT would either produce or consume. You might find the article and associated code valuable. Hope that helps! Dan
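     If it helps, here's that mapping worked out on paper (unit amplitudes assumed):

         \sin(2\pi f t) = \frac{1}{2j} e^{j 2\pi f t} - \frac{1}{2j} e^{-j 2\pi f t}

         \cos(2\pi f t) + \sin(2\pi f t) = \sqrt{2}\,\sin\!\left(2\pi f t + \frac{\pi}{4}\right)

     A sine alone puts magnitude 1/2 into each of its two bins; sine plus cosine puts sqrt(2)/2 into each. The shapes match, but the magnitudes differ by a factor of sqrt(2) unless you scale the two inputs to the same amplitude.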
  18. @Ankit Kumar, I don't get it. Why would you need an SD card? That sounds like an application dependent thing. You might want to ask if Digilent is now supporting the Pmod WiFi for its FPGAs. The last I recall, it was only supporting them for the microcontroller boards, since the interface for them was proprietary--but that information is a couple of years old by now. Dan
  19. @zygot, Have you looked at the ULX3S board at all? It comes with an integrated antenna for 433MHz already designed into the board. As far as I can tell, they just wired it directly to the board. I'm not sure if they intended to use it for receive as well as transmit, but sure enough that's what they built. As for receive vs transmit, transmit is the easy one. No need to worry about voltages or voltage standards, just dump the pin directly into the antenna. Oh, wait, you wanted it to get good performance? Now that's another task. Dan
  20. @Antonio Fasano, Not a problem. You can find it on GitHub here. It's part of a scrolling raster demonstration for a Nexys Video board (which comes with HDMI out) and a Pmod MIC3 attached as a microphone. There are actually two projects in that repo--one that uses a DDR SDRAM and HDMI, and a second that contains (an unreasonable amount of) block RAM and outputs VGA.

     The design works by sampling data coming from the microphone at 1Msps, downsampling (very significantly), windowing, applying an FFT, squaring the results, calculating an (approximate) logarithm, and then writing the results to RAM. A second part of the design (present in both projects, of course) reads from the RAM and produces a VGA (or HDMI) output. One of the key tricks to the design is that there are two images maintained in memory. That's how the "scrolling" is handled--the pointer to the memory image is just moved over time to make it appear as if the screen scrolls from right to left. Also, if you use Verilator and GTKMM, the C++ simulation/test bench will draw the scrolling raster of a swept tone onto a window on your machine.

     It was a fun project to build and test. Even better, other than the SDRAM (which you could break at the internal AXI interface ...), it's all RTL, so it shouldn't depend upon any Vivado versioning hell. Just a thought, Dan
  21. @Antonio Fasano, I do have an HDMI project, but ... it doesn't use any Xilinx IPs beyond the memory, and it doesn't use any CPUs, which is why I haven't offered it to you before. If that's something you are interested in, then we can chat further. Dan
  22. @Antonio Fasano, If you use Linux, would you have access to XYZ? That I don't know. Let's see if someone else on the forum can provide those answers to you. Dan P.S. I don't even work here ...
  23. @Alexzxcv, Could the PHY be faulty? I work on hardware designs for a living, so I have to allow that any component might be faulty. It's just part of the job description. You might also want to check that your network even supports 100M mode. If none of the other devices on the network support 100M mode, you might need to work in 10M mode. Check the MDIO registers to see what speed gets autonegotiated--that should tell you something there. You might also wish to check whether you have your I/Os mapped to the proper ports. I'm not really all that sure how they might map to the wrong ports--but it's at least a good guess at a second place where you might start debugging. Dan
  24. @Antonio Fasano, I'd assume the embedded Linux forum. You are using Linux on the PS, right? If this were a straight HDMI project, w/o the Linux or the Xilinx IP in the middle, I could probably offer some worthwhile help--otherwise let's see what advice you get there. Dan