Everything posted by [email protected]

  1. @Alexzxcv, If you'll accept that I've never used either a MicroBlaze or this lwIP TCP echo server project, but have still built my own Nexys Video ethernet project, then let me ask the following: Let's start at the basics. When you plug the ethernet cable into your board, do the ethernet lights light up? How about when you plug the cable into your switch? Once the link is made, board to switch, both sets of lights should light up. Do they? I know that on my own switches, Gbps is signaled by lighting both lights on each end of the cable, and 100Mbps is signaled by lighting only one. A bad cable is easily signaled by neither lighting up, but there might be other reasons as well--such as never taking the ethernet PHY out of reset.

     If you unplug your cable from the board and plug a laptop into it, does it work? If so, then we've verified the rest of the hardware works, so let's then look at your design.

     I usually start my debugging by reading and processing the MDIO register values. Have you looked at what the MDIO registers are for your board? They should give you a link status that you can use. I really wouldn't want to move forward with a message, above, saying the link is down--is it really down? Or saying that it is at 100Mbps, when the Nexys network is supposed to support Gbps. Does the design include a "tri-mode" ethernet controller? I know the hardware will support it, but does the design? A good test of your MDIO access is whether or not you can toggle the ethernet LEDs. If you can, you should be able to read from the MDIO circuit.

     Do you have Wireshark up and running? You will want/need it to debug any FPGA network interactions. There's a lot that can go wrong, and Wireshark really helps. The first protocol to verify when using Wireshark is ARP, not TCP. When you send a TCP request to your board, your computer will first send an ARP packet to the board. Without an ARP response, nothing continues. Does Wireshark show this interaction taking place?

     Let me stop at this point, because from what you've shown above, getting this far will likely solve your problem. If I were to continue, I'd double check that ICMP (ping) works next, but realistically if ARP works then your software is probably working. Let me know how far you get here. Dan
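     P.S. As an illustration of that MDIO step, below is a minimal Verilog sketch of a Clause-22 MDIO register read. The module name, ports, and divider value are hypothetical--check your PHY's datasheet for its strapped address, and note that register 1, bit 2, is the standard link-status bit.

        // Hypothetical minimal Clause-22 MDIO read: 32 preamble ones,
        // ST=01, OP=10 (read), 5 PHY address bits, 5 register bits,
        // turnaround, then 16 data bits driven by the PHY.
        module mdio_read_sketch #(
            parameter       CLK_DIV  = 20,     // 100MHz/(2*20) = 2.5MHz MDC
            parameter [4:0] PHY_ADDR = 5'h01,  // assumption: check your board
            parameter [4:0] REG_ADDR = 5'h01   // basic status register
        ) (
            input  wire        i_clk,
            input  wire        i_start,
            output reg         o_mdc,
            output reg         o_mdio_out,     // drive MDIO when o_mdio_oe
            output reg         o_mdio_oe,
            input  wire        i_mdio_in,
            output reg         o_done,
            output reg [15:0]  o_data          // the 16-bit register contents
        );
            reg [7:0]  divcnt = 0;
            wire       mdc_tick = (divcnt == CLK_DIV-1);
            reg        busy = 0;
            reg [6:0]  bitcnt = 0;
            reg [45:0] shift;                  // the 46 host-driven bits

            initial begin o_mdc = 0; o_done = 0; end

            always @(posedge i_clk)
                divcnt <= mdc_tick ? 0 : divcnt + 1;

            always @(posedge i_clk) begin
                o_done <= 1'b0;
                if (!busy && i_start) begin
                    busy   <= 1'b1;
                    bitcnt <= 0;
                    o_mdc  <= 1'b0;
                    shift  <= { 32'hffff_ffff, 2'b01, 2'b10, PHY_ADDR, REG_ADDR };
                end else if (busy && mdc_tick) begin
                    o_mdc <= !o_mdc;
                    if (!o_mdc) begin          // rising edge: sample data bits
                        if (bitcnt >= 48)
                            o_data <= { o_data[14:0], i_mdio_in };
                    end else begin             // falling edge: advance one bit
                        bitcnt <= bitcnt + 1;
                        shift  <= { shift[44:0], 1'b1 };
                        if (bitcnt == 63) begin
                            busy   <= 1'b0;
                            o_done <= 1'b1;
                        end
                    end
                end
            end

            always @(*) begin
                o_mdio_oe  = busy && (bitcnt < 46); // release for TA + data
                o_mdio_out = shift[45];
            end
        endmodule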
  2. @Antonio Fasano, I see this message. I might've responded to some of your others, but I don't normally use Xilinx's IP, so I usually let the staff handle those sorts of questions. Well, that and although I now have my first Zynq board, it's not really working well (yet). That said, you are also posting to what's really the wrong channel. This is more the PIC channel than the FPGA channel, and you might be more interested in the embedded channel within the FPGA channel. It may be that some good folks aren't noticing it for that reason. If you see someone who might know the answer, feel free to summon them by name too. @jpeyron has been a good helper for some of these topics, so ... maybe he might know what it would take to help you out? I'm not sure; we'll see. Dan
  3. @ZyboB3, Have you figured out the DRC error? You shouldn't be getting *any* DRC errors when building a design, so that's where I'd start. Dan
  4. The WebPACK now includes an internal logic analyzer, and has for some time. This thread is actually quite old. That said, I capture blocks of data and send them over the UART interface to help me debug as well. Dan
  5. @zygot, I missed the question at first. You might be missing it now. @Bobby29999 has no intention of building this design. This is just an academic exercise. Notice his question, "what should be the main design points that have to be considered w.r.t. FPGA"? I think by this point we've already given him his answer too: data rate, precision, data flow, etc. Those are several of the "design points that have to be considered." He's not asking for the solution, but is rather just looking to answer a homework problem regarding the tradespace.

     Be careful about where you ask questions ... yep! Someone on a Xilinx forum asked what the easiest way was to dynamically control (either a PLL or an ADC) via its AXI port. One prolific poster replied that the "easiest way" would be to use a MicroBlaze CPU. Not a small AXI-capable state machine, but a full-up CPU. A CPU that would then need flash (ROM), RAM or SDRAM memory, software, chances are a serial port, and most definitely an AXI interconnect, if not also some internal AXI upsizers/downsizers and an AXI to AXI-lite converter. Yeah. That's the easiest way to control a hardware element with an AXI interface? I mean, maybe if you already have the CPU on board then adding another peripheral isn't all that hard, but if you are doing straight RTL design then an FSM would've been the easier way to meet his requirements. Yeah, "be careful about where you ask your questions" is definitely good advice. Dan
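     P.S. For what it's worth, that "small AXI-capable state machine" doesn't need to be much. Here's a minimal Verilog sketch of an AXI-lite write master--the AXI signal names follow the usual conventions, but the start/done handshake and widths are hypothetical illustration, not a drop-in core:

        module axil_write_sketch #(
            parameter AW = 32, DW = 32
        )(
            input  wire            i_clk, i_reset,
            input  wire            i_start,
            input  wire [AW-1:0]   i_addr,
            input  wire [DW-1:0]   i_data,
            output reg             o_done,
            output reg             M_AXI_AWVALID,
            input  wire            M_AXI_AWREADY,
            output reg  [AW-1:0]   M_AXI_AWADDR,
            output reg             M_AXI_WVALID,
            input  wire            M_AXI_WREADY,
            output reg  [DW-1:0]   M_AXI_WDATA,
            output wire [DW/8-1:0] M_AXI_WSTRB,
            input  wire            M_AXI_BVALID,
            output reg             M_AXI_BREADY
        );
            assign M_AXI_WSTRB = {(DW/8){1'b1}};
            wire idle = !M_AXI_AWVALID && !M_AXI_WVALID && !M_AXI_BREADY;

            always @(posedge i_clk) begin
                o_done <= 1'b0;
                if (i_reset)
                    { M_AXI_AWVALID, M_AXI_WVALID, M_AXI_BREADY } <= 3'b000;
                else begin
                    if (i_start && idle) begin
                        // Launch the address and data beats together
                        M_AXI_AWVALID <= 1'b1;  M_AXI_AWADDR <= i_addr;
                        M_AXI_WVALID  <= 1'b1;  M_AXI_WDATA  <= i_data;
                        M_AXI_BREADY  <= 1'b1;
                    end
                    // Drop each VALID once its beat has been accepted
                    if (M_AXI_AWVALID && M_AXI_AWREADY) M_AXI_AWVALID <= 1'b0;
                    if (M_AXI_WVALID  && M_AXI_WREADY)  M_AXI_WVALID  <= 1'b0;
                    // The write response ends the transaction
                    if (M_AXI_BVALID && M_AXI_BREADY) begin
                        M_AXI_BREADY <= 1'b0;
                        o_done       <= 1'b1;
                    end
                end
            end
        endmodule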
  6. @RJ16, ... and how does that sampling rate compare to your system clock rate? That is, the clock rate that's running all of this logic? 10MHz seems kind of low. I think we can both agree at this point that the test you are providing this filter with is leaving you confused. Let me suggest a different test: why not run an impulse into the system? The result should look *exactly* like the filter coefficients you fed it with. Any deviation should point you directly at what's going right or wrong with the filter. Debugging from there might involve setting only one or two coefficients and leaving the rest at zero, and then applying an impulse to the filter again. Practically, the biggest problems most folks have with these things are not the filter itself, but rather the data display, the handshaking, and figuring out how to simulate things properly. The output of your addsub suggests a display problem. The 48kHz, and now 10MHz, clock rate suggests a handshaking problem. Not being able to dig into the system to see what's happening and how to fix it is another issue. Other filters don't have that problem. Dan
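     P.S. If it helps, here's a sketch of what that impulse test might look like as a Verilog testbench. The fir module and its i_valid/i_data/o_valid/o_data ports are hypothetical placeholders--rename them to match however your filter is actually wrapped:

        `timescale 1ns/1ps
        module tb_impulse;
            reg                clk = 0, reset = 1;
            reg                i_valid = 0;
            reg  signed [15:0] i_data = 0;
            wire               o_valid;
            wire signed [31:0] o_data;

            // "fir" and its ports stand in for your actual filter
            fir dut(.clk(clk), .reset(reset),
                .i_valid(i_valid), .i_data(i_data),
                .o_valid(o_valid), .o_data(o_data));

            always #5 clk = !clk;    // 100MHz system clock

            integer n;
            initial begin
                @(posedge clk); reset <= 0;
                // A single nonzero sample, followed by zeros: the output
                // should replay the filter's coefficients, one per sample
                @(posedge clk); i_valid <= 1; i_data <= 16'sd1024;
                for (n = 0; n < 64; n = n + 1) begin
                    @(posedge clk); i_data <= 0;
                end
                i_valid <= 0;
                repeat (16) @(posedge clk);
                $finish;
            end

            always @(posedge clk)
                if (o_valid)
                    $display("y = %d", o_data);  // compare against your taps
        endmodule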
  7. @RJ16, One other question ... on your filter design page, why is the sample rate set at 48kHz, if you are going to be placing a 1MHz signal into the design? I would note that you've ignored the TREADY signals, so any filter you might've built that depends upon a slower sample rate of the input could clearly be expected to produce junk at the output. The rule is: if ever TVALID && !TREADY, then TVALID must stay high and TDATA cannot be allowed to change. Filters that run at the FPGA's full clock rate, say at about 100MHz or above, can be expected to use one multiply per filter coefficient. If you go that road, you'll also need to double check that your FPGA has enough multiply elements to support such a filter. Dan
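     P.S. To illustrate, here's a minimal sketch of such a full-rate filter, one multiply per coefficient, able to accept a new sample every clock (so TREADY could be tied high upstream). The names, widths, and boxcar coefficients are all hypothetical, and a real design at speed would pipeline the adder chain:

        module fir_fullrate #(
            parameter NTAPS = 16, IW = 16, CW = 16,
            parameter OW = IW + CW + $clog2(NTAPS)
        )(
            input  wire                 clk,
            input  wire                 i_valid,
            input  wire signed [IW-1:0] i_data,
            output reg                  o_valid,
            output reg  signed [OW-1:0] o_data
        );
            // Coefficients: a (hypothetical) boxcar, purely for illustration
            reg signed [CW-1:0] taps  [0:NTAPS-1];
            reg signed [IW-1:0] delay [0:NTAPS-2];
            reg signed [OW-1:0] acc;
            integer k;
            initial for (k = 0; k < NTAPS; k = k + 1) taps[k] = 1;

            always @(posedge clk) if (i_valid) begin
                delay[0] <= i_data;
                for (k = 1; k < NTAPS-1; k = k + 1)
                    delay[k] <= delay[k-1];
            end

            // One hardware multiply per coefficient.  Summing
            // combinationally keeps the sketch short; pipeline this
            // in anything real.
            always @(*) begin
                acc = i_data * taps[0];
                for (k = 1; k < NTAPS; k = k + 1)
                    acc = acc + delay[k-1] * taps[k];
            end

            always @(posedge clk) begin
                o_valid <= i_valid;
                if (i_valid) o_data <= acc;
            end
        endmodule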
  8. @RJ16, The addsub signal is still displayed wrong. It might help to understand this signal, so that you know what sort of waveform to expect on the output. Dan
  9. @RJ16, My guess is that you are plotting the result as unsigned data when it should be signed two's complement. It's a common mistake, and the signature of it in your plot is quite distinctive. Dan
  10. @Bobby29999, Ahm, no, that's not a "physical reality". Pseudorandom was an understood input to the algorithm, if you had wanted true randomness that would've been a topic in and of itself. By "physical reality" I'm referencing something physical--something that you can see, touch, and either measure or interact with in the real world--not in the computer algorithm world. The "physical reality" should define your data rate requirement and the number of bits you need both in terms of inputs and outputs. The "physical reality" would also explain why you are using an FPGA in the first place. Is the equivalent software function too slow, for example? Is this part of a larger data processing or simulation algorithm? Is it part of a data co-processing application, where you are processing externally generated files absent of any real time requirement? All of this you haven't shared. So be it, your call. You also haven't mentioned how you intend to get data into and out of this routine. You might find that to be as challenging as the algorithm itself--especially if you are this new to FPGA processing. Dan
  11. @Bobby29999, It depends on your algorithm. Which algorithm you choose depends upon your timing requirements, and the bit-level precision you require. Usually some physical reality is driving these things, but you haven't (yet) shared any such reality with us. Perhaps this is a homework problem (that's okay too ... I'm not writing your solution for you). But the "right" answer is really dependent upon these external realities. Of course! Otherwise what input are you going to use for your mapper? Or have I misinterpreted your problem statement? Dan
  12. @zygot, The table method actually works quite well across a wide range of numbers--if you start with a floating point representation of 1.xxxxx * 2^e. The xxxxx makes a great table lookup index that then works across a wide range of values. For better precision, use a linear table. For better precision, use a quadratic. For better ... did you notice I was already walking down the Taylor series here? The cool part is that you can easily bound the error in the lookup--provided the number starts with the right form.

     @Bobby29999, A bit count of some number of random binary bits will be approximately Gaussian, by the binomial distribution. One of my mentors suggested 12 bits was a good choice. Be aware of scale issues, so that you can accurately match the mean and standard deviation you are attempting to achieve.

     An even simpler solution, for 16 bits (or less) of pseudorandomness, would be a lookup table: place the inverse transform itself into the lookup table, and just use your uniform pseudorandom value as the index into such a table. This would spare you the pain of trying to calculate a logarithm--you could just go straight to the solution in one clock cycle. Using this approach, though, you won't hit every value of a 16.16 fixed-point representation with an appropriate probability. You'll hit a subset of those values with a uniform probability, only the subset won't be evenly distributed across its range. How big your table would need to be is a measure of both your requirements and how much hardware you can afford. Again, linearly interpolating the table might give you more capability--but I wouldn't jump for it unless the problem required it.

     Now, coming back to your bit width, 64-bit math is overkill unless you are trying to do something like count the number of atoms in the known universe, or measure the circumference of the known universe to within one hydrogen atom. Let's be honest and real here: what bit width do you really need? Be aware in your answer that the more width you want, and the faster you want it, the more you will pay for it. Pay? Yes. Your hardware will only support so much logic, and I'm guessing you want to do other things with it as well. At some level, you'd need to build your own ASIC; at another level you could do the problem in a cheap PIC microcontroller. Somewhere in between, your PC would do a better job than the FPGA. Your FPGA is likely to be faster than the PC, but you will also struggle to get that 64-bit double-precision floating point math on the FPGA. It's doable, but you'll pay for it. Are you sure that's what you want? Dan
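     P.S. To make the bit-counting idea concrete, here's a minimal Verilog sketch: a maximal-length 16-bit LFSR and a count of twelve of its bits. The names are hypothetical, and a real design would advance the LFSR twelve bits per sample (or run several LFSRs) so that successive samples aren't correlated:

        module gauss_popcount_sketch (
            input  wire       clk,
            output reg  [3:0] o_sample    // 0..12: binomial, mean 6
        );
            // Maximal-length 16-bit Fibonacci LFSR, taps 16,14,13,11
            reg [15:0] lfsr = 16'hACE1;
            wire fb = lfsr[15] ^ lfsr[13] ^ lfsr[12] ^ lfsr[10];

            integer k;
            reg [3:0] ones;
            always @(*) begin
                ones = 0;
                for (k = 0; k < 12; k = k + 1)
                    ones = ones + lfsr[k];
            end

            always @(posedge clk) begin
                lfsr <= { lfsr[14:0], fb };
                // Subtract the mean (6) and scale in a real design,
                // to match your target mean and standard deviation
                o_sample <= ones;
            end
        endmodule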
  13. @Bobby29999, Yea, I saw your post on Xilinx's forums as well. Some of your question depends upon your timing requirements. How often do you need a new random number? That might determine whether you use a state machine or a deep pipeline.

     You'll also need a nearly uniform pseudorandom number to start your processing off. That's the easy part. If you need anything more complicated, AES isn't that hard to write in Verilog. I think there's even a core on OpenCores that handles AES, so that should at least get you started with some nice randomness.

     For calculating the logarithm, I might recommend using either a lookup table or a lookup table with a linear interpolator between points. I've written about linear interpolators before. I thought about writing out how to build one that used a table for lookup, but the design ended up being so simple there wasn't much to it. Probably not anything worth blogging about. If you wanted, you could count the number of leading zeros in a prior step and calculate an exponent--that would help the log table be more exact, but I'm not sure what you'd then do with the exponent in the next step.

     Your biggest challenge will probably be the fixed point representation you will be forced to work within, so that's probably something you want to work out from the beginning. What are your *requirements*? Those will likely be determined by your ultimate ADC (if using one) or floating point representation (if you are just building a co-processor). The ultimate bit width you need and scale factor (if any), together with the speed at which you need to produce these results, will then drive the rest of your design choices. Don't forget to be aware of overflow along the way.

     Once you have a stream of results, you might find generating a histogram and checking it to be a valuable part of knowing if you've done your job correctly. Good luck! Dan

     P.S. If your goal is just to generate Gaussians, and you only need a small bit width, there are much easier ways to do it.
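     P.P.S. As a sketch of that leading-zeros step, the fragment below finds the position of the leading one, then normalizes the input so the fraction bits can index a log table (or an interpolator between table entries). Widths and names are hypothetical, and it assumes a nonzero input:

        module log2_norm_sketch (
            input  wire [15:0] i_x,     // assumed nonzero
            output reg  [3:0]  o_exp,   // floor(log2(x)), the "exponent"
            output wire [7:0]  o_index  // fraction bits: the table index
        );
            integer k;
            always @(*) begin
                o_exp = 0;
                for (k = 0; k < 16; k = k + 1)
                    if (i_x[k]) o_exp = k[3:0];  // leading one's position
            end

            // Shift the leading one up to bit 15, so bits [14:7] hold
            // the top of the 1.xxxx fraction; then
            // log2(x) ~= o_exp + table[o_index]
            wire [15:0] shifted = i_x << (4'd15 - o_exp);
            assign o_index = shifted[14:7];
        endmodule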
  14. @tikitiki, I seem to still be quite confused. What type of DRAM chip have you made, and how is it now connected to the Nexys DDR board? Dan
  15. @tikitiki, Is your goal to write a DRAM device controller? Or ... something more esoteric? If your goal is to write a DRAM device controller, then my advice would be to start with the DDR2 SDRAM specification. There's a lot of logic involved before you can bring a controller up to speed to the point where you can interact with it on a "square wave" level. I know of open source examples of SDRAM (not DDR) drivers, and DDR3 SDRAM drivers, but ... I'm not familiar with any DDR2 SDRAM drivers to point you towards. Dan
  16. @tikitiki, I was going to say "Yes" until you said "DRAM measurement". Can you explain any more of what you are looking for? Dan
  17. You will then need a synchronization character--something that can't be confused with any of your 16 bits--that says the 16 bits will follow starting now. You might also wish to send only "printable" characters as well. If you choose printable characters only, the interaction will be much easier to debug. Dan
  18. How will you know which byte is the first one? How will you transmit non-printable characters? What software will you use to communicate this information with your FPGA? Dan
  19. How would that help? Dan
  20. Ahh, okay, now I think I get it. How to send 16 bits over an 8-bit channel ... You'll need some kind of synchronization, and that'll force you to send 24 bits instead of 16. I discussed something similar on my blog, here. In that case, I was trying to send a 32-bit value over an 8-bit channel. I used values I didn't recognize as synchronization (start/end of word) values, non-hex values as command values, and then that left my hexadecimal values to send four bits at a time. It wasn't all that pretty, but it worked.

     In another implementation I have, I use one bit for synchronization of a 7-bit stream. I think I used printable ASCII '0'-'9', 'A'-'Z', and 'a'-'z', plus '%' and '@'. Anything not in that set was a synchronization byte to tell me when to start my mapping. Hence, if you sent a newline followed by '000000', the code would generate a 36-bit word from it, but if the word was interrupted by a newline, space, or something else it wasn't expecting, then it would throw the whole thing away and start over. If you'd like to see an example, you can find the character decoding here, and the character to word mapping here. Dan
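     P.S. As one more illustration of this kind of framing, here's a minimal Verilog sketch that sends a 16-bit word as a newline (the synchronization byte) followed by four printable hex digits. The o_valid/i_ready byte handshake is a hypothetical stand-in for whatever feeds your UART transmitter:

        module word_to_hex_sketch (
            input  wire        clk,
            input  wire        i_valid,   // a new 16-bit word to send
            input  wire [15:0] i_word,
            output wire        o_busy,
            output reg         o_valid,
            output reg  [7:0]  o_byte,
            input  wire        i_ready    // downstream accepts a byte
        );
            reg [2:0]  state = 0;         // 0 idle, 1 sync sent, 2-5 nibbles
            reg [15:0] sreg;

            function [7:0] hexchar(input [3:0] nib);
                hexchar = (nib < 10) ? ("0" + nib) : ("A" + nib - 10);
            endfunction

            assign o_busy = (state != 0);

            always @(posedge clk)
                if (state == 0) begin
                    o_valid <= 1'b0;
                    if (i_valid) begin
                        sreg    <= i_word;
                        o_byte  <= "\n";  // any non-hex byte resynchronizes
                        o_valid <= 1'b1;
                        state   <= 1;
                    end
                end else if (o_valid && i_ready) begin
                    if (state == 5) begin
                        o_valid <= 1'b0; // all four digits sent
                        state   <= 0;
                    end else begin
                        o_byte <= hexchar(sreg[15:12]); // high nibble first
                        sreg   <= { sreg[11:0], 4'h0 };
                        state  <= state + 1;
                    end
                end
        endmodule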
  21. That's enough new information that it changes the question completely, but not enough information (yet) to know where I might start helping you. Dan
  22. @Marcio, Rule number one: hardware design isn't programming. Hardware designers generate components and designs. Software designers build functions, objects, and programs. In a similar manner, you will not be using a compiler when working with hardware. FPGA design requires synthesis and implementation, not compiling. Now that we have that out of the way, feel free to take a look at my Verilog tutorial. Perhaps it might answer some of your questions. Then feel free to come back when you have an example hardware design that does (or doesn't) work--something you wish to discuss--and I might be able to help you further. Dan
  23. @Vishnuk, Looks like you got roughly the same answer from Xilinx's forums as well. Dan
  24. @jliv, The board manufacturer should provide you with a master constraints file that identifies every I/O pin connected to your FPGA and the associated voltage standard for that pin. You'll primarily edit this file in two ways: 1) you'll comment out those pins that you aren't using, and 2) you'll rename the pins that you are using to something more reasonable. For example, PMODA[2] doesn't really make that much sense, but o_uart_tx might be much easier to read and comprehend. What you don't want to do is add your own lines describing I/O's that are connected to pins the board doesn't have, or I/O standards that are incompatible with the rest of the bank. You may later need to add lines to the file describing timing false paths or some such. That will come into play when you need to start crossing clock domains. Dan
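     P.S. For example, a renamed entry in the master XDC file might look something like the fragment below--the pin location and I/O standard here are placeholders, not real board values:

        ## Original master-file line, commented out:
        # set_property -dict { PACKAGE_PIN A1 IOSTANDARD LVCMOS33 } [get_ports { PMODA[2] }]
        ## Same pin, renamed to match the design's port name:
        set_property -dict { PACKAGE_PIN A1 IOSTANDARD LVCMOS33 } [get_ports { o_uart_tx }]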
  25. @Vishnuk, The IP cores Xilinx provides are not intended to be "modified for use"; they're intended to be used as is. The UART 16550 core is one such core. Some cores offer parameterization, some can be configured, but "modified" is typically beyond the scope. This leaves you with a couple of options:

     If you want to build a serial port echo using the UART 16550 core, you need to connect an AXI master to it. Most of the instructional material will discuss creating a MicroBlaze CPU and then creating a bus for it to get instructions and data from. You could then connect the UART 16550 to that bus and write a program to query the UART 16550 in a loop and write the results to the transmit half of the core. This would create a serial port "echo". It's also a horrendously complex place to start, simply from the number of things that could go wrong when setting it up. If you are a beginning FPGA designer wanting to learn how to write VHDL, this is not where I would recommend you start.

     You could also build your own AXI master to communicate with the UART 16550 core--something less than a MicroBlaze, perhaps even much less than a CPU.

     Xilinx also offers some other UART cores as well. Perhaps one has a simpler interface that would be easier to interact with from VHDL. Perhaps not. You might be able to find something better by searching through their offerings.

     A serial port is actually a fairly simple core to build. If you are a beginner wanting to learn VHDL, this is where I would advise you begin. The other options above aren't really that great for teaching someone logic design--they're better options for someone who already knows and understands logic design. I would recommend you build your own serial port using the 8N1 serial protocol (8 data bits, no parity, 1 stop bit), since that's the most common serial port line coding. The baud rate you choose is somewhat up to you. 115,200 is common, but I'm aware of at least one terminal emulator that only goes up as high as 9.6k.

     If you go for building your own, then it should be very easy to connect the receiver directly to the transmitter to build an echo device. Indeed, you can see how I did it myself using Verilog here. That file is a bit curious, simply because it was first written to support a serial port that would operate on any protocol (5, 6, 7, or 8 data bits; mark, space, odd, even, or no parity; 1 or 2 stop bits; arbitrary baud rates; etc.) and could switch protocols at run time. The resulting core turned out to be too complex for some of the FPGAs I've since needed to work with, and so I had to come back and write "lite" versions of my transmitter and receiver that only supported 8N1 and a constant baud rate. A macro within the file controls whether the full or lite versions of the underlying serial port implementations are used. My point? Starting with the 8N1 protocol really isn't such a bad thing to do.

     The one other thing I'd point out is that in my own Verilog beginner's tutorial, one of the tutorial's exercises is that of building a serial port. It's a good task to learn from early on. Indeed, it's a good task to learn long before you start playing with anything "AXI" related. If you are struggling to get your serial port to work in hardware, you can use blinky to your advantage to find out why or where it's failing too.

     Once you get your own serial port working, debugging gets easier, since you can then send debugging data over the serial port--but you sort of have to get past this task first in order to get there. Hence the reason why I like teaching how to build a serial port early on. Dan
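     P.S. In case it helps anyone walking this same path, here's a minimal Verilog sketch of the 8N1 transmitter half, assuming a 100MHz clock and 115,200 baud (100e6/115200 is roughly 868 clocks per baud interval). The names are hypothetical, and my actual implementation linked above differs in its details:

        module txuart_sketch #(
            parameter CLOCKS_PER_BAUD = 868   // 100MHz / 115200 baud
        )(
            input  wire       i_clk,
            input  wire       i_wr,           // strobe: send i_data
            input  wire [7:0] i_data,
            output reg        o_busy,
            output reg        o_uart_tx
        );
            reg [9:0]  shift = 10'h3ff;
            reg [3:0]  nbits = 0;
            reg [31:0] count = 0;
            initial begin o_busy = 0; o_uart_tx = 1'b1; end

            always @(posedge i_clk)
                if (!o_busy) begin
                    if (i_wr) begin
                        // {stop, data[7:0], start}: LSB goes out first
                        shift  <= { 1'b1, i_data, 1'b0 };
                        nbits  <= 10;
                        count  <= 0;
                        o_busy <= 1'b1;
                    end
                end else if (count == CLOCKS_PER_BAUD-1) begin
                    count     <= 0;
                    o_uart_tx <= shift[0];
                    shift     <= { 1'b1, shift[9:1] };
                    nbits     <= nbits - 1;
                    if (nbits == 1)   // stop bit is now on the wire
                        o_busy <= 1'b0;
                end else
                    count <= count + 1;
        endmodule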