Everything posted by [email protected]

  1. @jean, That might make the task *much* easier. Consider (instead) building your signal of interest at 50MHz (or so), and then multiplying by a sine wave whose period (in samples) evenly divides your oversample factor. For example, if you are oversampling by 32:1, multiply by a sine wave with four periods per every input sample--that is, one period per every eight output samples. That'll get you up to speed. Smaller frequency offsets can be handled in your original 50Msps domain if necessary. This operation will also introduce some aliasing products, just not from the sine wave. Whether or not those aliasing products are material to your application is another question. The alternative would be to first upsample, then multiply by the sine wave. Building a linear upsampler isn't that hard. Neither is a quadratic one. Beware, though, that any upsampler you build will have an impact on the passband of your signal. Something to think about. (A rough sketch of the multiply-while-upsampling idea follows below.) I should also point out ... up and downsamplers aren't that hard to build. Limiting yourself to Xilinx's tools would be .... well, limiting yourself. Dan
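     As a minimal sketch of that idea--the module name, the widths, the table values, and the assumption that the caller holds each 50Msps sample steady for all 32 output clocks are my own illustrative choices, not anything from Xilinx:

```verilog
// Multiply a held (zero-order-hold) input sample by a sine wave having
// one period per every eight output samples -- i.e., four periods per
// 50Msps input sample when oversampling by 32:1.
module upmixer #(
    parameter IW = 12,  // Input sample width
    parameter CW = 12   // Carrier (sine table) width
) (
    input  wire                    i_clk,     // Oversampled clock domain
    input  wire signed [IW-1:0]    i_sample,  // Held for 32 clocks by the caller
    output reg  signed [IW+CW-1:0] o_mixed
);
    reg [2:0] idx;  // Steps through the eight table entries
    initial idx = 0;

    // One full period of a sine wave, scaled to +/- 2047
    reg signed [CW-1:0] carrier [0:7];
    initial begin
        carrier[0] =     0; carrier[1] =  1448;
        carrier[2] =  2047; carrier[3] =  1448;
        carrier[4] =     0; carrier[5] = -1448;
        carrier[6] = -2047; carrier[7] = -1448;
    end

    always @(posedge i_clk) begin
        idx     <= idx + 1;                   // Wraps every eight samples
        o_mixed <= i_sample * carrier[idx];   // The mix itself
    end
endmodule
```

     Since the carrier period divides the oversample factor, the little table above is all the "DDS" this approach needs.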
  2. @jean, The answer to this question really depends upon your application. What kind of waveform will you be sending? What is its bandwidth? Do you just intend to transmit a narrowband tone? A carefully crafted pulse? A narrowband (<100MHz) communications signal? Or an ultra-wideband signal? Each application needs a different answer, and has a different associated cost. Dan
  3. @jean, If this is the kit, then it advertises eight 6.554Gsps 14-bit DACs. Assuming you can get the outputs up and running at that speed (there should be a demo of this to get you going), you'll probably want more than just a table lookup for the sin/cos generation. A table lookup is good for about 8 bits or so. I like using linear interpolation to get more than that (a rough sketch follows below). You'll want to calculate multiple sine waves at once, each at a lower rate, and then output all of these in series when sending to the DAC. To do this, you could either use the DDS compiler or build it yourself. One of the links I've already posted above will discuss how to handle generating multiple (related) phases at once--something you'll need to make this work. Dan
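     As a rough sketch of that lookup-plus-interpolation idea--the table size, the widths, and the "sintable.hex" file name are my own illustrative assumptions:

```verilog
// Sine via table lookup plus linear interpolation: more precision than
// the table alone would give, at the cost of a multiply.  Two-cycle latency.
module lininterpsine #(
    parameter PW = 24,  // Incoming phase width
    parameter LW = 8,   // log2 of the table length
    parameter OW = 12   // Output sample width
) (
    input  wire                 i_clk,
    input  wire [PW-1:0]        i_phase,
    output reg  signed [OW-1:0] o_sin
);
    localparam FW = PW - LW;  // Fractional phase bits

    // One full period of sine, pre-computed offline (assumed file name)
    reg signed [OW-1:0] tbl [0:(1<<LW)-1];
    initial $readmemh("sintable.hex", tbl);

    wire [LW-1:0] index = i_phase[PW-1:FW];
    wire [FW-1:0] frac  = i_phase[FW-1:0];

    // Stage one: fetch the table entry and the slope to the next entry
    // (two reads; expect the tools to infer a dual-port memory here)
    reg signed [OW-1:0] base;
    reg signed [OW:0]   slope;
    reg [FW-1:0]        r_frac;
    always @(posedge i_clk) begin
        base   <= tbl[index];
        slope  <= tbl[index + 1'b1] - tbl[index];  // Index wraps at the end
        r_frac <= frac;
    end

    // Stage two: o_sin = base + slope * frac, dropping the fractional bits
    wire signed [OW+FW:0] product = slope * $signed({ 1'b0, r_frac });
    always @(posedge i_clk)
        o_sin <= base + (product >>> FW);
endmodule
```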
  4. @jean, Let's start with the DAC then. What's the maximum sample rate, based upon its data sheet, that it can produce? Second question would be, how many bits wide is it? That should settle a lot of our questions. Dan
  5. @zygot, I ran a quick search for what frequencies a Xilinx FPGA might support. The MPSOC manual I have suggests support for 100GHz outputs. While this seems far-fetched to me, I haven't dug into the statement to see either how they are managing it or how to set up that speed under the hood. If 100GHz is doable, surely 2.6GHz is as well. Now, 2.6GHz into an 8-12 bit A/D? That'd be more challenging--but I noted that above. Perhaps that's not required--that'd be determined by his specification--which he hasn't (yet) shared. Without a specific FPGA to have a discussion with or about, it's hard to offer hard numbers here. @jean's question was not about how to go about outputting a signal this fast, but rather how to replace Xilinx's DDS in order to generate such a signal. To that question, I believe I've provided a reasonable answer. Dan
  6. @zygot, One other thought ... back in my service days, I remember a contractor presenting us with a proposal to build a receiver that would sample data at 8GHz and then bring that data into an FPGA. My point is: I think it's doable, but as I said above, I wouldn't recommend this problem to a beginner. Of course, we also had a separate problem with the whole concept: suppose you swallow 8Gsps of data into an FPGA ... what do you do next with it? Kind of like the picture below, no? Just because you can get an 8Gsps signal into an FPGA doesn't mean you're ready to do anything serious with it. Dan
  7. @zygot, I haven't mentioned you or your comments yet in this thread because I felt they were right on and didn't need any more mention. I have had no disagreements with what you've had to say so far. I still don't.

     I've spoken from a generic standpoint--if you want to go faster, you need to parallelize things. There's no rocket science to this, but there are costs to doing so. The links I've posted discuss how to handle the task in general, from which you would need to build parallel logic to handle things.

     Here, again, you have said things well. This was why I was trying to get @jean to describe how he was going to handle his I/O's. Running anything at 2.6GHz would be a serious challenge. I'm just not prepared to say it's impossible--that's a much harder statement to make conclusively, and I've been surprised too many times over the years. He's going to need to find an A/D that can run at this rate (somehow), and even then he's going to have a challenge to drive it. It'd be much easier to just drive a simple I/O pin (something I referenced above). Would that work for him? I have no idea--he never provided a specification for how good his sine or cosine needed to be.

     If you would like to criticize the statement from the mentor above, then please tell me what part he got wrong, so I may learn. The comment I tried to quote above was about general logic design, not the I/O design that'd be required to handle a 2.6GHz signal. I know I've managed to handle logic design at 200MHz, and I have no problem believing that you could parallelize operations at 200MHz in order to generate a 2.6GHz signal--at least, that'll work until you get to the I/O design. From the standpoint of parallelizing logic, this is nothing more than a straightforward observation.

     The other half of the problem--that of handling the various I/O's and communicating with something useful--is a valuable and even necessary piece of the puzzle that you've been bringing to the discussion. I have appreciated it, but it doesn't reflect on the mentor's observations above, since those were general logic development observations. Dan
  8. @jean, Help me out here, what is "the IP DDC DUC from xilinx"? and ... Why is it that it cannot help you? Dan
  9. @jean, This is the perfect place to start a journey. Just be prepared to make a lot of mistakes along the way. I've certainly enjoyed my own journey so far.

     One of my FPGA mentors once told me: pick a frequency, and build everything for that frequency. I have chosen 100MHz on a Basys3 (series-7 Xilinx FPGA), and so everything I tend to build runs at that frequency. This translates to (roughly) 80MHz on a Spartan 6, 50MHz on an iCE40 HX*, 25MHz on an iCE40 LP*, and perhaps 140MHz+ on a Kintex. With a little work, I can build a design for a Xilinx FPGA at 200MHz, but it'd be a *lot* of work to retool everything I've ever done for this speed--not that I haven't thought about it.

     So, let's allow a 200MHz clock as a reasonable logic speed. Using a 200MHz clock, you can generate a sine wave internally at up to 100MHz. If you want to go faster, you'll need to run multiple sine wave generators in parallel (a sketch follows below). You will pay for this in terms of area on your FPGA, but it's often quite doable. Theoretically, if you could bring a 2.6GHz signal into an FPGA (you'd need to sample at 5.2GHz or more), you could process 32 samples at a time and so handle the issue. You'll still have to deal with a nightmare I/O problem, but it's not beyond the realm of the possible. It might just be a lot cheaper to build some analog up-front processing first ...

     You should also know that Xilinx's DDS is not the only solution out there. At this speed, you might want a basic table lookup instead for your sin/cos generation. Phase processing is still pretty easy, so don't get attached to Xilinx's DDS for that--it's pretty easy to do on your own. Dan
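     For a flavor of what running generators in parallel looks like, here's a minimal sketch of my own, reusing the interpolated lookup sketched under #3 above (the module names and the 4x factor are illustrative assumptions):

```verilog
// Generate four consecutive sine samples per clock from one phase
// accumulator, for an effective sample rate of 4x the logic clock.
module quadsine #(
    parameter PW = 32,  // Phase accumulator width
    parameter OW = 12   // Output sample width
) (
    input  wire            i_clk,
    input  wire [PW-1:0]   i_step,    // Phase increment per output sample
    output wire [4*OW-1:0] o_samples  // Four samples, packed oldest first
);
    reg [PW-1:0] phase;
    initial phase = 0;
    always @(posedge i_clk)
        phase <= phase + (i_step << 2);  // Advance four samples per clock

    genvar k;
    generate for(k=0; k<4; k=k+1)
    begin : LANES
        // Each lane is offset by one more sample step than the last
        wire [PW-1:0] lane_phase = phase + k * i_step;

        lininterpsine #(.PW(PW), .OW(OW))
        tblk(.i_clk(i_clk), .i_phase(lane_phase),
             .o_sin(o_samples[k*OW +: OW]));
    end endgenerate
endmodule
```

     The four lanes share the same latency, so the packed output remains four consecutive samples per clock; a serializer (or the DAC interface) would then send them out in order.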
  10. @jean, This article might give you some things to think about. But as @zygot hinted, you are getting at a very hardware dependent portion of the design space. You are also getting towards the limits of what the FPGA can accomplish in the first place. There will be a lot of tricks required to design a system like this properly. If this is your first design, you should probably think about doing something simpler and working towards something this complex. Dan
  11. @jean, Might depend upon how much quality you need in that 2.6GHz output, and how you intend to output it. Most FPGA I/O's won't work at that speed. Keeping multiple I/O's together, so as to drive a high quality A/D, might also be a challenge. Just outputting a signal that toggles at that speed--not nearly as much of a challenge. Dan
  12. @tcmichals, Thank you. That's news for me, I'll have to check out those links. Dan
  13. @amenah89, In general, FPGAs are not computers. They don't run software, and so they aren't programmed. They can be configured with a hardware design. This is done using Vivado. If your chosen hardware design uses a CPU, such as a MicroBlaze CPU, then you can program that CPU within your design. This is now done with Vitis, not Vivado, but you will need a CPU design within your FPGA design to do it. That said, CPU's are notoriously slow for processing video, so I don't think you would want to do that at all. The first problem you need to solve is how you will get video into your FPGA board. Which board do you have that has a video input on it? That will help us get started. If your board does not have video input hardware, I suppose you could do some creative board design and build one, but it would be beyond the scope of what I might be able to help with. So, let me repeat the questions from above: which board are you using, and how do you intend to get video into your board in the first place? That will determine how to then answer the rest of your questions. Dan
  14. @HAMZA, I'm not sure why you are yelling (all caps). It's generally not a good way to encourage someone you don't know to help you out. That said, @hamster posted the link for his work above. I found the edge filter within it without any problems. Dan
  15. @newkid_old, @tcmichals, There is no ARM processor on the Artix-7. The ARM processor is a piece of physical real estate that is present within a Zynq FPGA, but not within an Artix-7 FPGA. The Arty-A7 boards are built around Artix-7 FPGA's, and so the processor isn't present to be configured in the first place. The ARM processor is very different from a MicroBlaze processor. The MicroBlaze processor may be built out of the fabric of an FPGA--out of look-up tables and flip-flops. Unlike MicroBlaze, the ARM processor is built onto the chip itself--if it is present at all. It shouldn't therefore surprise you if the tool can't allocate a (non-existent) ARM processor on your chip to fill the requirement you've created for it within your design. It's simply not present. Dan
  16. @Liur1996, I think my approach would be to first lock a PLL to a sampled copy of the 50Hz signal. You can then use the phase of the locked PLL to drive the sample rate: PHASE * 64 == sample number; when it changes, take a sample. (A sketch of this strobe generation follows below.) There are a lot of pitfalls associated with doing this, and I think @zygot is subtly hinting at some.

     To answer @zygot's why question, let's assume that a nation's 50Hz signal is (broadly) driven from a highly accurate clock. This is close to the case in the US, although in the US the tracking is probably not good enough for driving a sample clock, even though the long-term stability is quite good. Another useful reason might be locking disparate A/D's across a large distance, without needing to pass timing from one A/D to another. Yes, there are still more pitfalls here--like getting sample-to-sample timing lock--so that's still going to be a remaining issue you might struggle with.

     Here are some of the pitfalls I've struggled with when working a similar problem--locking a PLL to a GPS PPS input:

     1) Just because the incoming signal is perfectly accurate doesn't mean your local oscillator will be accurate enough from one 20ms interval to the next. (20ms * 50Hz = 1 ...) I've measured accuracy on the order of less than a microsecond when using an Arty-A7, so it's probably good enough for a 3.2ksps digitizer.

     2) You'll need to offload phase errors from the PLL across the whole 20ms interval. I know I offloaded the entire phase error in my case immediately on the next PPS rise. It made the logic easier to build, but that might not be good enough for driving an A/D. Dan
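     A minimal sketch of just the strobe-generation half--the PLL that adjusts the step input is left out, and the step value shown assumes a 100MHz logic clock:

```verilog
// Derive a 64x-per-cycle sample strobe from an NCO phase that an
// (external) PLL keeps locked to the 50Hz mains reference.
module samplestrobe (
    input  wire        i_clk,
    input  wire [31:0] i_step,   // ~50Hz: about 32'd2147 at a 100MHz clock
    output reg         o_sample  // One-clock strobe, 64x per mains cycle
);
    reg [31:0] phase;
    reg [ 5:0] last_top;

    initial { phase, last_top, o_sample } = 0;
    always @(posedge i_clk) begin
        phase    <= phase + i_step;
        last_top <= phase[31:26];
        // When the top six bits of the phase change, (PHASE * 64) has
        // ticked over to the next sample number: time to take a sample
        o_sample <= (phase[31:26] != last_top);
    end
endmodule
```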
  17. @saran, You do realize the reason why you got no response from the FPGA forum you first posted in? It wasn't because you posted in the wrong subforum, but rather because you are requesting help for a non-Digilent board on a Digilent forum. Since this is a Digilent forum, those posting here all have Digilent boards of some type. This does include other Zynq boards, so you may get lucky to find a sympathetic user who knows your answer. You might also wish to try posting on the forum your Chinese vendor hosts for your board--if they host any at all. (I wouldn't know.) I'll post any other thoughts I might have on the FPGA forum where you initially posted. Dan
  18. @jfdo, Let's see ... the lowest typical guitar frequency is about 82Hz, and the next one up is 110Hz. Figure each frequency will span (at best) two FFT bins, and that you want two FFT bins between them. That means you'll want an FFT resolution on the order of about 7.5Hz, so you'll be doing about 20 FFT's per second of ... well, what's the highest frequency range of interest? 20kHz? So, 5x/sec you'll be wanting to do an FFT of 16k samples? Dan
  19. @shlomishab, Absolutely! This is the job of the engineer: to know how to break a project up into subcomponents, and to measure which of those subcomponents are at fault. This is also what makes FPGA design difficult: it is a challenge to debug designs after they are placed on a circuit board. Indeed, debugging FPGAs is perhaps the hardest part of the job. It requires a methodology and discipline that isn't necessarily carried over when coming to FPGA design from other fields. Why? Because you can't "see" into the FPGA. Unlike software, where you can stop the program with a debugger at every step and examine every variable, you can't do that with an FPGA. (You can do it with simulation ...) Worse, it's hard to even examine variables from within an FPGA at all. Here are some good rules of thumb:

     Your first step in debugging is what's known as "static" checking, sometimes called "linting" in the software world. I like to use "verilator -Wall" for this--but that only works with Verilog modules. Vivado will also present warnings to you when it synthesizes your designs. (Verilator's warnings are more usable ...) When things aren't working, look over the warnings. It might save you hours of debugging.

     *EVERY* module within a design should have a "bench test" of some type--some way of determining that it works before it ever moves forward. In the larger companies that I've counseled over the years, a bug "escaping from unit test" is a *BIG* deal that gets a lot of engineering attention. It happens, but your goal should be to keep it from happening--it simply "costs" a lot more work to find a bug after a module has left unit test. I do most of my bench testing using formal methods. Others set up rather elaborate simulation scripts to find bugs in individual modules. The difficult part of building simulation scripts is that ... you can't always predict everything that will go wrong. Formal methods often help you pick out things that might go wrong.

     When you purchase IP from someone else, or otherwise acquire it, look for this bench test. Vivado tries to make this easier by building a simulation script for you when you choose to create a custom IP. I don't use this script. I don't trust it. Their simulation script misses too many bugs. For example, I know (and have blogged about) bugs in every one of their AXI slave examples--both AXI and AXI-lite--and their AXI-stream master is also broken. These bugs don't show up under their simulations, but they do show up under a formal methods check.

     SymbioticEDA makes a product called SymbiYosys which you can use to formally verify Verilog designs. They sell a SymbioticEDA Suite for handling SystemVerilog and VHDL designs. They've also posted a similar product called mcy (mutation coverage with Yosys) which can be used to determine whether your test bench is good enough: if some piece of logic gets mutated (i.e. broken), can your test bench catch it? Evaluating your test bench to determine if it is "good enough" is the purpose of mcy.

     Once *EVERY* component has been formally verified (or otherwise bench tested), and only then, should your methodology move on to an integrated test. Integrated tests are still simulation tests. Why? Because, in simulation, you can see every variable, every value, at every time step. Sure, it's slow. Sure, it's clunky. However, it's much easier to figure out what's going right or wrong in simulation than it is in hardware.
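     To illustrate the formal bench-test idea a few paragraphs back with a toy example of my own (not from any core mentioned here)--the property lives right next to the logic it constrains, and a tool like SymbiYosys will search for any way to break it:

```verilog
// A trivial one-shot timer, with a formal property alongside it
module bustimer #(
    parameter W = 16
) (
    input  wire i_clk,
    input  wire i_start,
    output reg  o_busy
);
    reg [W-1:0] counter;

    initial { o_busy, counter } = 0;
    always @(posedge i_clk)
        if (!o_busy && i_start) begin
            o_busy  <= 1'b1;
            counter <= {(W){1'b1}};   // Start a full-length count
        end else if (o_busy) begin
            counter <= counter - 1'b1;
            if (counter == 1)
                o_busy <= 1'b0;       // Done: release busy
        end

`ifdef FORMAL
    // The invariant: while busy, the counter must never (yet) be zero.
    // If any input sequence could violate this, the solver will find it.
    always @(*)
        if (o_busy)
            assert(counter != 0);
`endif
endmodule
```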
     Only after your design passes an integrated simulation test should you ever move forward onto real hardware. In the digital design field, this usually means FPGAs. For some of us, the design work stops at the FPGA level. For others, it goes on from FPGAs to ASICs, but FPGAs are usually a good first step before ASICs. Debugging a design within an FPGA is usually much harder than in simulation, but with the tradeoff that the FPGA can run at full speed (or close to it), whereas the simulation cannot.

     In order to debug a design from within an FPGA, you'll need a special piece of FPGA logic sometimes called an In-circuit Logic Analyzer (ILA). I like to call them "internal scopes". This will cost you logic and block RAM resources within your FPGA. Using a "scope", you can capture a limited number of values from within your design. As an example, I might capture 32 bits for 1024 clock cycles and read them back later (a bare-bones sketch follows below). Inside a device with thousands of flip-flops and millions of clock cycles per second, this is like trying to drink the ocean through a straw. There's an art and a science to getting the right values, and to capturing them at the right time.

     Sometimes even the scope fails. In these cases, I like to use LEDs to try to debug what's going on. Using an LED, you can often debug missing clock problems, problems with a clock not locking, and more. Sometimes an external scope helps, and Digilent's Digital Discovery has been invaluable to me.

     Returning to the idea of using graphics on an FPGA, feel free to check out my own video simulation here. Since that article was posted, I've written AXI versions of the various demonstrators. Once you can run your design in simulation, then feel free to try running it in actual hardware. Then, when/if it doesn't work, feel free to write back telling us what piece isn't working--whether it's failing in simulation or in hardware--and be specific: isolate the problem as much as you can, so we can then help you. Or, if you can't isolate it, tell us what you've tried, and we might be able to offer suggestions--similar to those above. Dan
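     For a feel of what such an internal scope involves, here's a bare-bones sketch of my own (far simpler than Xilinx's ILA or anything production-ready): once triggered, capture 32 bits per clock for 1024 clocks into block RAM, then read the buffer back at leisure:

```verilog
// Capture 32 bits per clock for 1024 clocks following a trigger
module minscope (
    input  wire        i_clk,
    input  wire        i_trigger,
    input  wire [31:0] i_data,
    // Readback port -- would connect to a bus interface in practice
    input  wire [9:0]  i_raddr,
    output reg  [31:0] o_rdata,
    output reg         o_done
);
    reg [31:0] mem [0:1023];
    reg [9:0]  waddr;
    reg        armed;

    initial { armed, o_done, waddr } = { 1'b1, 1'b0, 10'h0 };
    always @(posedge i_clk)
        if (armed) begin
            if (i_trigger)
                armed <= 1'b0;        // Trigger seen: start capturing
        end else if (!o_done) begin
            mem[waddr] <= i_data;     // One sample per clock into block RAM
            waddr <= waddr + 1;
            if (waddr == 10'h3ff)
                o_done <= 1'b1;       // Buffer full: stop and hold
        end

    always @(posedge i_clk)
        o_rdata <= mem[i_raddr];      // Read the capture back out later
endmodule
```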
  20. @rashimkavel7, Welcome to the fun! I'll second @JColvin's responses above. There are lots of ways you can go about this. Don't forget proper engineering discipline: be sure to break the project into parts, and verify each of the parts separately before trying to do everything at once. Failing to do so seems to be the most common problem folks have when using FFTs. (Well, that and AXI-stream signaling, generating a proper clock, etc ...) I'm also not sure what you want with Linux support. Do you want to run Linux on the FPGA? Or do you want to interact with the design from a nearby Linux host? Both are quite reasonable, although the former is a bigger challenge than the latter. Dan
  21. @shlomishab, Can anyone help you? Perhaps, but you'll have to do some more digging into what's going on. From what you've given above it'd be hard to know where to start looking. Let me recommend you start by breaking the problem up in a proper engineering fashion. Break up the operation into steps, and then check for success or failure at the end of each step. Incidentally, this is much easier to do in simulation ... Dan
  22. @ank, Hard to say. Depends upon your algorithm. Are you intending to run the algorithm in the FPGA portion of the board? Does it use fixed point or floating point? How many multiplies will you need? etc., etc. Dan
  23. I think I'm also going to have to disagree here. I've written several FIFO's over the course of the years. Rather than trying to maintain several FIFOs, each with a slightly and subtly different purpose, it helps to parameterize them (a cut-down sketch follows below). I will agree that you can go overboard with parameterization--such as Xilinx, whose FIFO generator has nearly 100 parameters--but I haven't gotten near that point.

     Another example: having built several designs, each with a wonderful purpose, I often want to get those same designs to run on multiple pieces of hardware. Even with RTL coding, there are hardware differences. For example, iCE40's don't have distributed RAM and require all RAM reads to be registered--unlike Xilinx. To be able to use the same design across both iCE40s as well as Xilinx chips, therefore, I need to support subtle changes between the two designs.

     Another example: some hardware has more logic, other chips have less. Building a CPU that will run on both a Spartan 6 LX4 as well as an Artix-7 200T requires either that the CPU be limited by the extremely few resources of the Spartan 6, or that it be parameterized so that it can support both the Spartan 6 (no caches, no pipelining) as well as an Artix-7, where I have lots of logic to spare. Being able to adjust a design for the hardware space you have available is a productive use of parameters.

     There is a challenge when using parameters, however: verification. If you have 20 boolean parameters, that roughly means you need a million test benches to check all combinations of them to know they all work. So ... it's a trade-off. Still, I find them quite valuable. Dan
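     To make that concrete, here's a cut-down sketch of such a parameterized FIFO--my own illustration, with far fewer options than a real one would carry:

```verilog
// One FIFO, parameterized for width, depth, and target hardware
module sfifo #(
    parameter DW = 8,      // Data width
    parameter LGFLEN = 4,  // log2 of the FIFO depth
    // iCE40-style targets require registered reads (one cycle of read
    // latency); targets with distributed RAM can read combinationally
    parameter [0:0] OPT_REGISTERED_READ = 1'b1
) (
    input  wire          i_clk,
    input  wire          i_wr,
    input  wire [DW-1:0] i_data,
    output wire          o_full,
    input  wire          i_rd,
    output wire [DW-1:0] o_data,
    output wire          o_empty
);
    reg [DW-1:0]   mem [0:(1<<LGFLEN)-1];
    reg [LGFLEN:0] wraddr, rdaddr;

    initial { wraddr, rdaddr } = 0;
    always @(posedge i_clk)
        if (i_wr && !o_full) begin
            mem[wraddr[LGFLEN-1:0]] <= i_data;
            wraddr <= wraddr + 1;
        end
    always @(posedge i_clk)
        if (i_rd && !o_empty)
            rdaddr <= rdaddr + 1;

    // Classic extra-MSB trick: empty when the pointers match, full when
    // they differ in their top bit alone
    assign o_empty = (wraddr == rdaddr);
    assign o_full  = (wraddr == { ~rdaddr[LGFLEN], rdaddr[LGFLEN-1:0] });

    generate if (OPT_REGISTERED_READ) begin : REGISTERED_READ
        reg [DW-1:0] r_data;
        always @(posedge i_clk)
            r_data <= mem[rdaddr[LGFLEN-1:0]];
        assign o_data = r_data;
    end else begin : COMBINATORIAL_READ
        assign o_data = mem[rdaddr[LGFLEN-1:0]];
    end endgenerate
endmodule
```

     The OPT_REGISTERED_READ option is exactly the kind of hardware-driven parameter I mean: the same FIFO, with the read path adjusted to suit the chip it lands on.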
  24. @JColvin, Thank you! Dan
  25. @macethorpe, Exactly! Although I wouldn't reinstall Vivado, personally--I think you'd be spinning your wheels to do so. I've been disappointed before to notice that some menu options only become available depending upon which stage of the design process you are in. You might want to check both pre- and post-implementation. One correction to my comment above--I see all the options before ever running synthesis, not after. I got there by right-clicking "Generate Bitstream" and then working through the options presented there. Incidentally, it's really easy to convert a .bit file to a .bin file--you just throw away the header of the .bit file. I think it's about 36 bytes, if I recall properly. It's usually pretty obvious upon inspection. Dan