D@n

Members
  • Content Count: 1743
  • Joined
  • Last visited
  • Days Won: 126

D@n last won the day on September 26

D@n had the most liked content!

About D@n

  • Rank: Prolific Poster

Contact Methods

  • Website URL: https://github.com/ZipCPU

Profile Information

  • Gender: Not Telling
  • Interests: Building a resource-efficient CPU, the ZipCPU!


  1. D@n

    Hello All

    @Miakatt, You mean something like this? Or maybe this one? Dan
  2. D@n

    Hello All

    @Miakatt, Welcome to digital design! You can find a lot of valuable digital design blog posts at ZipCPU.com. Most recently, I've been working on an article discussing how to build an I-cache, so you may see that post there soon enough. I'm also working on a (Verilog + simulation + formal verification) tutorial, although it is far from complete. Feel free to wander over and try it. If you've never heard of formal verification, you can also read about my first experiences with it here. Based upon those (and subsequent) experiences, I've found it quite valuable.

    As for physics, I was once taught that ...

      If it stinks, it's chemistry,
      If it crawls, it's biology, and
      If it doesn't work, it's physics.

    Cheers, and welcome to the forum! Dan
  3. D@n

    Voice-activated

    @Junior_jessy, I see your question, but while I've done signal, audio, and even voice processing before, I've never done speech recognition. For most of my career, that has been in one of those "too hard" categories. This was one of the reasons why I suggested you get your algorithm working off-line first. I'm sorry I can't help much more than that; it's not something I know how to do. (I know more about how to implement a given algorithm within either embedded s/w or an FPGA than how to build a voice recognition algorithm.) That's why I was silent. Dan
  4. @Josef, I've gotten burned too often by trying to do math in public and getting it wrong, so forgive me if I don't comment on what is and isn't possible for a given clock rate. The HyperRAM Pmod I cited above can handle one 16-bit transaction every 10ns once you get it going. Not sure if that gets you what you need or not. I will say that DDR controllers can be quite hard, SDRAM is complex but (eventually) quite doable, and SRAM and HyperRAM are both easier. Dan
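A quick back-of-envelope check of the HyperRAM figure quoted above (one 16-bit transfer every 10 ns, once the burst is running). The VGA frame-buffer comparison is my own illustration for scale, not from the original post:

```python
# Sustained throughput of one 16-bit transfer every 10 ns.
word_bits = 16
period_ns = 10

bytes_per_sec = (word_bits / 8) / (period_ns * 1e-9)
print(f"{bytes_per_sec / 1e6:.0f} MB/s sustained")   # 200 MB/s

# For scale: a 640x480 @ 60 Hz, 16-bit-per-pixel frame buffer streams
# one pixel per 25.175 MHz clock -- just over 50 MB/s.
pixel_clock_hz = 25_175_000
fb_bytes_per_sec = pixel_clock_hz * 2
print(f"{fb_bytes_per_sec / 1e6:.2f} MB/s for the frame buffer")
```

So the sustained rate comfortably covers a VGA-class pixel stream; whether it meets a specific clock-rate requirement is, as the post says, another question.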
  5. @Josef, If you've never built an SDRAM controller, it can be a challenge. The HyperRAM interface is a bit simpler to use, and the performance is just as good if not better. Dan
  6. @Josef, I had roughly the same problem on the Basys3 board: there wasn't enough on-board RAM for a frame buffer. I chose one solution, and have since discovered others who have used two other solutions.

    My solution was to store the image in flash. Since the flash was only questionably fast enough to drive a 25 MHz pixel stream, I compressed the images using both a small number of bits per pixel (expanded with a programmable colormap) and run-length encoding. This worked great for static images and even a business slide show. If you are at all interested in this approach, one of the keys to my success was the VGA simulator. You can find an article discussing both how it works and how to use it here.

    I know others (at Digilent even!) have created a sprite-based capability. This allows video generation as part of a pipeline that "adds" items to the display as it moves through. In this manner, they were able to build things like Pac-Man without using a frame buffer.

    A third approach that I've thought of using is to purchase some HyperRAM. 1BitSquared is selling a HyperRAM that takes two Pmod ports, yet gives you access to higher-speed memory than your Basys2 will likely be able to use. (You can slow it down, though, and still use it.) The logic to drive a HyperRAM isn't all that complex, and so quite doable.

    Hope this helps, Dan
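The colormap-plus-run-length idea above can be sketched in a few lines. This is my own illustrative model, not the actual flash controller: function names, the 8-bit run cap, and the sample row are all made up for the example.

```python
# Sketch of the frame-buffer-in-flash idea: pixels are stored as small
# palette indices, and runs of identical indices collapse to
# (count, index) pairs. Run length is capped at 255 so a run fits a byte.

def rle_encode(indices):
    """Collapse runs of identical palette indices into (count, index) pairs."""
    runs = []
    for idx in indices:
        if runs and runs[-1][1] == idx and runs[-1][0] < 255:
            runs[-1] = (runs[-1][0] + 1, idx)
        else:
            runs.append((1, idx))
    return runs

def rle_decode(runs):
    """Expand (count, index) pairs back into the flat index stream."""
    out = []
    for count, idx in runs:
        out.extend([idx] * count)
    return out

# A static slide compresses very well: long runs of one background color.
row = [0] * 200 + [7] * 40 + [0] * 400
runs = rle_encode(row)
assert rle_decode(runs) == row
print(len(row), "pixels ->", len(runs), "runs")  # 640 pixels -> 4 runs
```

The same trade-off the post describes falls out directly: static business slides shrink enormously, while busy photographic content would not.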
  7. D@n

    High speed output on PMOD ports

    @DPA, Have you looked at @zygot's Differential PMod Challenge at all? It may be exactly what you are looking for. Dan
  8. D@n

    Voice-activated

    @Junior_jessy, Ditto @xc6lx45's advice. I've spent decades of my life working on signal processing issues, to include voice processing. The basic rules were: 1) get it running off-line, and then 2) get it running within whatever special-purpose hardware (microcontroller, FPGA, etc.) is required. This lets you debug the algorithm where the debugging is easy (Matlab or Octave), so that you only need to debug the hardware implementation afterwards. As an added benefit, you can send the same samples through both designs (off-line software, on-line hardware) and look for differences, which you can then identify as bugs. Trust me, this will be the fastest way to get your design working in VHDL. If you run to the FPGA too fast, you'll 1) spend hours (days or weeks even!) debugging your algorithm, and 2) cement parts of your algorithm before you know that they work, resulting in more rework time.

    Now, let's discuss your voice cross-correlation approach: it won't work. Here's why.

    Problem one: voice has pitch. You can think of the "pitch" as a fundamental frequency of which there are many harmonics. Pitch is not a constant; it is a varying function. The same word can be said many different ways, while the pitch subtly shifts around. A cross-correlation will force you to match pitch exactly, which it will never do.

    Problem two: vocal cadence. You can say the word "Hello" at many different speeds and it will still be the same word. Hence, not only does your comparison need to stretch or shrink in frequency to accommodate pitch, it also needs to stretch or shrink in time to accommodate cadence.

    Problem three: your mouth shapes the sound you make based upon the position of your jaw and your tongue (and probably a bit more). This acts upon the voice as a filter in frequency that doesn't scale with pitch. That is, as the pitch goes from deep bass to upper treble, the same mouth and tongue shape will filter the sound the same way. (This assumes you could say the same word twice and get the *same* mouth shape.)

    Problem four: sounds composed of fundamentals with harmonics tend to do a number on cross-correlation approaches. Specifically, I've gotten a lot of false alarms using cross-correlations which, upon investigation, had nothing to do with what I was trying to correlate for. A flute (or other instrument), for example, might give a strong cross-correlation score if you are not careful.

    Four problems of this magnitude should be enough to suggest you try your algorithm in Matlab or Octave (I'd be boneheaded enough to do it in C++ personally) before jumping to the FPGA. Computers today have enough horsepower to do this task in real time, so you don't need an FPGA for it. (FPGAs are still fun, though, and I'd be tempted to implement the result in an FPGA anyway.)

    Were I you, having never worked with speech before, I'd start out not with the FPGA but rather with a spectral raster of frequency over time. I'm partial to a Hann window, but then the 50% overlap (or more) is required, not optional, without incurring the wrath of Nyquist. FFT lengths of about 20-50 ms are usually good choices for working with voice and seeing what's going on within it.

    Then, when returning to the FPGA, I would simulate *EVERYTHING* before touching your actual hardware. Make recordings while working with Octave, prove your algorithm in Octave on those recordings, then feed those recordings into your simulation to prove that the simulation works. Only at that point would I ever approach hardware.

    Oh, and ... I'd also formally verify everything before moving to hardware. Once formally verified, it's easy to make changes to your implementation and then re-verify that they will do what you want. You might need this if you get to the hardware only to find you need to shuffle logic from one clock tick to another because you aren't meeting timing. In that case, re-verifying what you are doing would be quite valuable.

    Those are just some things to think about, Dan
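The "spectral raster" the post recommends (Hann window, 50% overlap, 20-50 ms frames) can be prototyped in a few lines. This is a minimal sketch of that idea, not the author's code; the 16 kHz sample rate, 32 ms frame, and function name are my own assumptions:

```python
# Minimal spectral raster (spectrogram): Hann window, 50% overlap,
# ~32 ms frames at 16 kHz -- inside the 20-50 ms range suggested above.
import numpy as np

def spectral_raster(samples, fs=16_000, frame_ms=32, overlap=0.5):
    frame_len = int(fs * frame_ms / 1000)        # 512 samples at 16 kHz
    hop = int(frame_len * (1 - overlap))         # 50% overlap -> hop of 256
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        seg = samples[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(seg)))  # magnitude spectrum
    return np.array(frames)                      # shape: (n_frames, bins)

# A pure 440 Hz tone should peak in the bin nearest 440 Hz.
fs = 16_000
t = np.arange(fs) / fs                           # one second of audio
raster = spectral_raster(np.sin(2 * np.pi * 440 * t), fs=fs)
peak_bin = raster.mean(axis=0).argmax()
print("peak near", peak_bin * fs / 512, "Hz")
```

On real recordings you would plot `raster` as an image (time across, frequency up) and watch the pitch harmonics and cadence effects described above appear directly.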
  9. D@n

    Hello!

    @zygot, Here's some light reading on the topic of the speed of the DDR3 SDRAM on the Arty A7: first, on Xilinx's forums, then here again on Digilent's. (Yes, I was dense, I needed to hear the answer twice before it finally registered.) Dan
  10. Start by downloading the spec for the HC06 Bluetooth module you have. A quick Google search suggests this might be it. Actually connecting the module looks like it may require a bit of soldering, though it looks like all you need to solder up are the two serial-port wires. Dan
  11. D@n

    Hello!

    @oliviersohn, If you want to start simple, you might wish to try the tutorial I've been working on. I have several lessons yet to write, but you may find the first five valuable. They go over what it takes to make blinky, to make an LED "walk" back and forth, and then what it takes to get the LED to walk back and forth on request. The final lesson (currently) is a serial port lesson. My thought is to discuss how to get information from the FPGA in a later lesson. Dan
  12. D@n

    Hello!

    @oliviersohn, Is it possible to have a faster clock for the design? Yes. However, it can be so painful to do in practice that you won't likely do so. You'll need special circuitry within your design every time you cross from one clock "domain" into another. Single bits can cross clock domains; multiword data requires an asynchronous FIFO. This circuitry costs time (two clocks of the new domain) to do. Hence you'll lose two slow clocks going from your faster clock speed to the slower one, and two fast clocks going in the other direction. There be dragons here. It's doable, don't get me wrong, but ... there are some very incomprehensible bugs along the way.

    What speed are you hoping to run at? When I first picked up FPGAs, I was surprised to discover that the "posted" speed from the vendor had little to no relationship with the speeds I could actually accomplish. For example, despite the 500 MHz+ vendor comment, a 200 MHz design is really pushing things, and 100 MHz tends to be "comfortable". However, you may find that the difference between 100 MHz and 82 MHz isn't all that sizable. Dan
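The "two clocks of the new domain" cost for a single-bit crossing comes from the standard two-flop synchronizer. In hardware this is two registers of Verilog; the toy Python model below (entirely my own illustration) just shows where the two-tick latency comes from:

```python
# Toy model of a two-flop synchronizer: a bit launched from another
# clock domain is only trusted after the receiving clock has
# registered it twice, so it emerges two ticks late.

def synchronize(sampled_bits):
    """Clock a sampled bit stream through two flip-flops."""
    ff1 = ff2 = 0
    out = []
    for bit in sampled_bits:
        out.append(ff2)          # registered output, before this edge
        ff2, ff1 = ff1, bit      # both flops update on the same clock edge
    return out

print(synchronize([1, 1, 1, 1, 0, 0]))  # -> [0, 0, 1, 1, 1, 1]
```

The rising edge at the input shows up two receiving-domain clocks later at the output, which is exactly the per-crossing penalty described above.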
  13. D@n

    Hello!

    @oliviersohn, Let me start out by disappointing you: the Arty's memory chips will run faster than the interface will, so follow the interface speed. Xilinx's MIG will then limit your design speed to about 82 MHz or so. In each 12 ns clock, the memory controller will allow you to read 16*8 = 128 bits. That's the good throughput number. The bad number is that it will take about 20 clocks from request to response. Yes, I was disappointed by the SDRAM when I first got it working.

    My OpenArty project includes instructions for how to set up the SDRAM interface if you want to use it from logic (i.e., without the MicroBlaze). (I'm still working on fixing the flash controller since Digilent swapped flash chips on the newer Artys ... but at this point the needed change works in simulation, needs to be tested on actual hardware, and then re-integrated with the SDRAM ... that'll be working again soon.)

    You may find this blog post discussing how to perform a convolution on "slow" data (like audio) valuable to your needs. Dan
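Putting the MIG numbers above together shows why throughput is fine but latency disappoints. A quick check of the arithmetic (the audio comparison is my own, for scale):

```python
# MIG figures quoted above: ~82 MHz user clock (12 ns period),
# 128 bits per clock, ~20 clocks from request to response.
clock_ns = 12
bits_per_clock = 16 * 8            # 128 bits

throughput_MBps = (bits_per_clock / 8) / (clock_ns * 1e-9) / 1e6
latency_ns = 20 * clock_ns

print(f"~{throughput_MBps:.0f} MB/s peak, ~{latency_ns} ns request-to-data")
# Plenty of bandwidth for audio work: 48 kHz stereo 24-bit streams need
# well under 1 MB/s -- it's the ~240 ns random-access latency that hurts.
```

Hence designs that stream long bursts do well, while logic issuing isolated single-word reads pays the 20-clock penalty every time.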
  14. D@n

    Using Convolution Encoder

    @Ahmed Alfadhel, Convolutional encoders are easy things to build. As in *REALLY* easy things to build. Why not build your own? By using the library, you run the risk of not understanding how the library is built, how it works, or the connectivity it needs. Worse, you'll never be able to debug it if it goes wrong. On the other hand, if you build your own, you'll be using, what, 10-15 lines of Verilog? How hard could that be to debug? Dan
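To show how little logic is involved, here is a convolutional encoder modeled in Python: a shift register plus two XOR-tap parities, which is exactly the "10-15 lines of Verilog" structure. The rate-1/2, K = 7, 171/133 (octal) polynomial pair is a common textbook choice, and the newest-bit-at-LSB tap orientation is one convention among several; both are my assumptions, not from the post:

```python
# Rate-1/2, constraint-length-7 convolutional encoder: one 7-bit shift
# register and two parity (XOR-reduction) taps per input bit.
G1, G2 = 0o171, 0o133   # 7-tap generator polynomials, octal

def conv_encode(bits):
    """Shift each input bit in; emit two coded bits per input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0x7F           # 7-bit shift register
        out.append(bin(state & G1).count("1") & 1)  # XOR of the G1 taps
        out.append(bin(state & G2).count("1") & 1)  # XOR of the G2 taps
    return out

msg = [1, 0, 1, 1, 0, 0, 1]
print(conv_encode(msg))  # two coded bits out for every message bit in
```

In Verilog the whole thing reduces to a shift register and two `^sreg & POLY` reduction-XOR lines, which is why debugging your own is so tractable.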
  15. D@n

    UART communication control with CMOD A7

    I've had mixed success with the Digilent chips above 1 Mbaud. I've used 1 Mbaud, 2 Mbaud, and even 4 Mbaud. 4 Mbaud works on one of my boards, but other boards require me to drop down a bit. I can't tell if this is a manufacturing-dependent observation (crystals with different tolerances, for example), or if there are actually different FTDI parts on my boards that affect this. I do think you'll find dividing your 100 MHz oscillator by 25, 50, or 100 easier than whatever you are doing to create a 1,843,200-baud stream. Dan
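The divider arithmetic behind that last suggestion is easy to check: from 100 MHz, the 1/2/4 Mbaud rates divide evenly, while 1,843,200 baud does not, so a simple counter-based UART accumulates phase error across each bit:

```python
# Integer dividers from a 100 MHz oscillator for a counter-based UART.
clk = 100_000_000

for baud in (1_000_000, 2_000_000, 4_000_000, 1_843_200):
    div = clk / baud
    err = abs(round(div) - div) / div * 100
    print(f"{baud:>9} baud: divide by {div:.3f} "
          f"({err:.2f}% error with a {round(div)}-count divider)")
```

The round rates give exact 100/50/25-count dividers with zero error, while 1,843,200 baud needs a divide-by-54.253, leaving roughly half a percent of drift per bit with a plain integer counter (workable for a UART, but needlessly fiddly).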