All Activity


  1. Past hour
  2. Hi, you may have some luck with this. It may reach 30 MBit/s, which is the limit for one channel of the FTDI chip. It uses the MPSSE mode instead of UART, overcoming the above-mentioned UART limits by using a dedicated clock signal: busbridge3 link. I've used it for real-time data acquisition, in one case at 900+ kSPS for dual-channel 12-bit ADC data (which is about 24 MBit/s, with some other overhead on the interface). However, writing "proper" code to shuffle the data from FIFO to PC is not completely trivial. One solution goes like this (assuming the "busbridge3" interface with its 32-bit address/data bus on the FPGA side):
     - Design a FIFO that collects the real-time data.
     - Put a read-sensitive status "register" on the bus, e.g. at 0x80000000, that queries the fill level of the FIFO. On a read event, the same value is copied to a hidden register "A".
     - Put a "FIFO pop" register on the bus, e.g. at 0x80000001, with this function: if A is non-zero, decrease A and pop a value from the FIFO to the bus; otherwise, keep A at zero and return dummy data.
     Your software then does this:
     - Queue a single-word read from 0x80000000.
     - Queue an arbitrary-length block read, e.g. 100 reads from 0x80000001 (with address increment 0, reading the same address over and over).
     - Fire the USB transaction.
     - In the result, the first value is the number of valid words (from "A"). Use that many data values from the readback block and discard the rest (dummy data).
     If you pack two consecutive 12-bit ADC frames into one 24-bit word, you don't waste capacity on padding (busbridge3 allows 8/16/24/32-bit data width). A sketch of the two bus-side registers follows below.
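     A minimal Verilog sketch of the status/pop register pair described above. The bus signal names (bus_addr, bus_rd, bus_rdata) and the FIFO ports are illustrative assumptions, not busbridge3's actual interface, and a first-word fall-through FIFO is assumed so that fifo_dout is already valid when the pop strobe fires:

         // Sketch of the read-sensitive status register and the "FIFO pop"
         // register. Port names are assumptions for illustration only.
         module fifo_readout #(
             parameter ADDR_STATUS = 32'h80000000,
             parameter ADDR_POP    = 32'h80000001
         ) (
             input  wire        clk,
             input  wire [31:0] bus_addr,
             input  wire        bus_rd,      // one-cycle read strobe
             output reg  [31:0] bus_rdata,
             input  wire [31:0] fifo_count,  // current fill level of the FIFO
             input  wire [31:0] fifo_dout,   // assumes first-word fall-through
             output reg         fifo_pop
         );
             reg [31:0] a = 0;               // the hidden register "A"

             always @(posedge clk) begin
                 fifo_pop <= 1'b0;
                 if (bus_rd) begin
                     if (bus_addr == ADDR_STATUS) begin
                         bus_rdata <= fifo_count;   // report the fill level ...
                         a         <= fifo_count;   // ... and latch it into A
                     end else if (bus_addr == ADDR_POP) begin
                         if (a != 0) begin
                             a         <= a - 1;
                             bus_rdata <= fifo_dout; // pop one word to the bus
                             fifo_pop  <= 1'b1;
                         end else begin
                             bus_rdata <= 32'hDEADBEEF; // dummy data, host discards
                         end
                     end
                 end
             end
         endmodule

     The host-side sequence from the post maps onto this directly: the single-word read of 0x80000000 latches A, and the fixed-length block read of 0x80000001 then returns at most A valid words followed by dummy data.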
  3. I solved my problem using my Verilog module. The FPGA can receive the correct keycode from the keyboard through the PIC. The only concern is that the PIC performs a self-test after a period of idle status (no key pressed). I understand this as an internal operation of the PIC. Anyway, I can confirm that my Verilog design functions, so there are no more questions. Thank you for your support!
  4. Today
  5. I have a Zybo Z7 board, and I want to be able to dynamically load my .bit file after the system has booted. I am using petalinux-v2017.4-final-installer.run. In earlier versions I was able to do: cat file.bit > /dev/xdevcfg to have the bitstream loaded into the FPGA. However, /dev/xdevcfg does not exist for me. Is there something I have to configure when building the project and configuring the kernel to get this device to show up, or is there some other way that I should be doing this?
  6. Yesterday
  7. @Juliana, You might want to do the math on this. The PMod MIC3 can produce one sample every 1us (1MHz). This sample will contain 12 bits of valid information, so you will need to transfer to your PC something running at a data rate of nearly 12Mbps. Although the FTDI USB->UART chip is rated for 12Mbaud, I've only ever gotten it as high as 4Mbaud in practice, and I feel more comfortable using it at 1MBaud. (It starts struggling at 4MBaud, and seems to be rock solid at 1MBaud. That and 1MHz divides my favorite FPGA clock rate of 100MHz, while 12MHz doesn't.) Now, let's throw away all but 8-bits per sample, and assume you were running at 1MHz. You'd then need 10MBaud to send all of that data to your computer. (Remember, there are 10 baud intervals per byte.) It's not going to happen. However, there's no reason why you can't either 1) Read out random samples, missing/dropping anything when you can't deal with it, 2) Filter, downsample, and read out the filtered results, or even 3) store data to memory and then flush it to the UART. Hope this helps, Dan P.S. You can find a MIC3 controller here, if you want something that's FPGA proven. There's also a scrolling raster FFT demo here, but that requires the Nexys Video board.
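     Dan's option 2 could look something like this minimal Verilog sketch (my illustration, not the linked MIC3 controller): a boxcar average of 32 consecutive 12-bit samples, taking 1 MSPS down to 31.25 kSPS, which fits comfortably inside 1 MBaud even at two bytes per sample (31.25k x 2 bytes x 10 baud intervals = 625 kBaud). A boxcar is a crude low-pass filter; a real design might want something sharper before decimating.

         // Boxcar (running-sum) decimator: averages 32 consecutive 12-bit
         // samples into one 12-bit output. Port names are illustrative.
         module boxcar_decimator (
             input  wire        clk,
             input  wire        sample_valid,   // one pulse per ADC sample
             input  wire [11:0] sample_in,
             output reg         out_valid,
             output reg  [11:0] sample_out
         );
             reg [16:0] acc   = 0;   // 12-bit data + log2(32) = 17 bits
             reg [4:0]  count = 0;

             always @(posedge clk) begin
                 out_valid <= 1'b0;
                 if (sample_valid) begin
                     if (count == 5'd31) begin
                         sample_out <= (acc + sample_in) >> 5;  // average of 32
                         out_valid  <= 1'b1;
                         acc        <= 0;
                         count      <= 0;
                     end else begin
                         acc   <= acc + sample_in;
                         count <= count + 1;
                     end
                 end
             end
         endmodule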
  8. I'm a high school teacher trying to buy the right software for use with Basys3 boards. I know Xilinx software can be used. I'm interested in Multisim, so I can use it for analog circuit simulation as well as integrated logic simulation and schematic capture with the Basys3 board. Multisim Education is about $628/seat. They also have site licenses that make sense at about 25+ seats, for a yearly subscription. But on the Digilent website, they also offer this: https://store.digilentinc.com/ni-multisim-student-edition-circuit-design-and-simulation-software which is a $40 student Multisim. NI states these are for students only to purchase, and I don't know if it is a limited version. The Digilent website doesn't mention usage restrictions for installing them in a computer lab used only by students. Does anyone know: 1) Is this $40 version a permanent license? 2) Is it OK for a high school to purchase it and install it in a student-used computer lab? 3) Is it more limited in some way? Thanks for any expertise you might have.
  9. Hello All, Had an Analog Discovery 2 for maybe a year, and I've found it very useful and fun to work with. Last couple of days, I finally noticed the Real Analog course materials and they're great. So I purchased the parts kit (not yet arrived) and began to study. However, I'm now trying to find out the location of the answers to the Homework Assignments. Does anyone know if they exist, and if so, where? Great hardware and educational materials, Roland
  10. xc6lx45

    Board for OpenCL

    Have you considered a graphics card? For getting started, even built-in graphics acceleration can be useful, with limitations (e.g. no double precision). You can also run OpenCL code on a standard PC. Performance won't be as wild as on dedicated hardware, but functionality is the same (e.g. as a "software rendering" fallback option).
  11. ftrujillo

    Board for OpenCL

    Hi, I want to work with OpenCL but I don't know what board is recommended for that. Could someone tell me which is the best Digilent board for OpenCL, please? Thanks in advance.
  12. I'm using a Basys3 board with the Pmod MIC3, and I would like to write the 'real-time' output data (12 bits per sample, according to the reference manual) to a PC (a txt file, maybe?). Is that possible? Please advise. Thanks in advance! J
  13. Maybe one comment: in the ASIC world, "floorplanning" is an essential design phase, where you slice and dice the predicted silicon area and give each design team their own little box. The blocks are designed and simulated independently, and come together only at a fairly late phase. ASIC differs from FPGA in some major ways:
      - ASIC IOs have a physical placement, e.g. along the pad ring. We don't want to run sensitive signals across the chip, RF may follow some voodoo rules to minimize coupling, etc. In comparison, the FPGA IOs are physically routed en masse to the center of the die (this is more complex for large devices, but the first restrictions I'll run into are logical, e.g. which clock is available where, not geometrical).
      - For ASICs, we need the floorplan to design the power distribution network as its own sub-project (and many a bright-eyed startup has learned electromigration the hard way).
      - In the ASIC world, we need to worry about wide and fast data paths regarding both power and area - transistors are tiny but metal wires are not.
      You might have a look at "partial reconfiguration", where the geometry of the layout plays some role.
  14. I had this same problem with 2018.3. This solution works there too.
  15. Please help me understand the operation of the DDR on the Zybo Z7 (CLG400), and also explain the process for getting audio in and out through the audio codec.
  16. Hi @attila, You never fail to amaze me with what WaveForms and the Analog Discovery can do! Thank you again!
  17. Floorplanning is where you start before designing a new board. Once you've assigned pins and created a PCB, your options for meeting timing for a particularly complex, dense, and high-clock-rate design are limited. Of course, you will need to have a reasonably 'close to final' version of your FPGA design to start with, so that the tools can select the best pin locations. For a general-purpose development board like the one you are using, only a few interfaces need to be 'optimized' for speed; and of course the speed grade of the parts on the board has a large impact on limiting the performance of any design. It is not always possible to select an arbitrary clock rate for any application on a particular board and always meet timing. On the other hand, it's easy to create a design that doesn't have a chance of operating at a desired clock rate when a better-conceived design might. Providing the tools with good guidance in the form of constraints is often the key to achieving a particular performance goal, though don't expect Vivado to turn a poor design into a great design.
  18. Hi @Andras, The UART is not suitable for RGB LED control. You can use a custom Patterns signal and a script like this to build the data sequence for WS2812 RGB LEDs. Here you have the project: WS2812.dwf3work The default Analog Discovery configuration allocates 1k samples for each Patterns channel; with this you can control a 14-LED array (1024/24/3). With the 4th AD device configuration you will have 16k samples and can control 227 LEDs, and with the Digital Discovery's 32k samples, 455 LEDs... The supported frequencies are positive integer divisions of the 100 MHz base frequency: 100 MHz, 50M, 33.3M, ... ~3.226M, 3.125M ...
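     The three-samples-per-bit encoding behind the (1024/24/3) arithmetic, sketched here in Verilog for anyone driving WS2812 LEDs from an FPGA rather than from the Patterns generator (an illustrative sketch with assumed names, not part of the WS2812.dwf3work project): each bit is emitted as a fixed high sample, then the data bit itself, then a fixed low sample, at ~320 ns per sample (100 MHz / 32 = 3.125 MHz), so a 0 bit is ~320 ns high / 640 ns low and a 1 bit is ~640 ns high / 320 ns low.

         // Serializes one 24-bit GRB word, MSB first, three samples per bit.
         // sample_clk_en must pulse at ~3.125 MHz (e.g. 100 MHz / 32).
         // After the last word, the host must hold the line low (>50 us)
         // for the WS2812 reset latch. All names are illustrative.
         module ws2812_bit_encoder (
             input  wire        clk,           // e.g. 100 MHz system clock
             input  wire        sample_clk_en, // one pulse per 320 ns sample
             input  wire        load,          // latch a new 24-bit GRB word
             input  wire [23:0] grb,
             output reg         dout = 1'b0,
             output wire        busy
         );
             reg [23:0] shift = 0;
             reg [4:0]  bits  = 0;   // bits left to send
             reg [1:0]  phase = 0;   // which of the 3 samples per bit

             assign busy = (bits != 0);

             always @(posedge clk) begin
                 if (load && !busy) begin
                     shift <= grb;
                     bits  <= 5'd24;
                     phase <= 0;
                 end else if (sample_clk_en && busy) begin
                     case (phase)
                         2'd0: begin dout <= 1'b1;      phase <= 2'd1; end // always high
                         2'd1: begin dout <= shift[23]; phase <= 2'd2; end // the data bit
                         default: begin                                    // always low
                             dout  <= 1'b0;
                             shift <= shift << 1;
                             bits  <= bits - 1;
                             phase <= 2'd0;
                         end
                     endcase
                 end
             end
         endmodule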
  19. revathi

    xadc_zynq

    Hi @jpeyron, I noticed one thing today: if I reduce the frequency, the ADC code increases, like this. But it really makes no sense to me how this happens, or why the code is inversely proportional to the frequency and voltage. It may be due to anti-aliasing filter effects; I don't know exactly. I am also still not clear on the Vn offset. I would like you to look at what I understood from the manual, UG480. Kindly refer to the figure below.
  20. Hi, I'm trying to implement a custom protocol for NeoPixel LED arrays using a script. I need to represent the 0 and 1 data bits as signals of 0.4us high + 0.85us low and 0.8us high + 0.45us low, respectively. I have already implemented the custom protocol with a script, and the data pulse widths look alright. I thought I could use UART and its Protocol.UART.Send(uartMessage,true) method, but then I realized that I can't disable the UART's start bit, so the receiver would misinterpret my data. Is there a way to disable the start bit in the Digital Protocol? Also, when I was investigating the bus with the Logic Analyzer, I found something interesting: when I tried to set the rate to 3.2 MHz, it always jumped back to 3.125 MHz. Is this normal? Thanks, Andras
  21. Last week
  22. Hello @jpeyron, There is a picture in your answer of a block design for Zynq (the generated "hello world" example). But you wrote that you used Vivado 2018.3. I use Vivado 2017.4; when I connect the blocks as in your picture and start implementation, everything is OK. But when I start "generate bitstream", I get an error. Could you give more pictures that explain how to edit the properties of each of those blocks (Zynq core, AXI Interconnect, GPIO, proc reset)?
  23. Hi, reading between the lines of your post, you're just "stepping up" one level in FPGA design. I don't do long answers, but here's my pick of the "important stuff":
      - First, take one step back from the timing report and fix asynchronous inputs and outputs (e.g. LEDs and switches). Throw in a bunch of extra registers, or even "false-path" them. The problem (assuming this "beginner mistake") is that the design tries to sample them at the high clock rate, which creates a near-impossible problem. Don't move further before this is understood, fixed, and verified.
      - Speaking of "verified": read the detailed timing analysis and understand it. It'll take a few working hours to make sense of it, but this is where a large part of "serious" design work happens.
      - Once the obvious problems are fixed, I need to understand the so-called "critical path" in the design and improve it. For a feedforward-style design (no feedback loops) this can be done systematically by inserting delay registers. The output is generated e.g. one clock cycle later, but the design is able to run at a higher clock, so overall performance improves.
      - Don't worry about floorplanning yet (if ever) - it comes in when the "automatic" intelligence of the tools fails. But they are very good.
      - Do not optimize on a P&R result that fails timing catastrophically (as in your example - there are almost 2000 failing paths). It can lead into a "rabbit hole" where you optimize non-critical paths (which is usually a bad idea for long-term maintenance).
      - You may adjust your coding style based on the observations, e.g. throw in extra registers where they will "probably" make sense (even if those paths don't show up in the timing analysis, the extra registers allow the tools to essentially disregard them in optimization and focus on what is important).
      - There are a few tricks, like forcing redundant registers to remain separate. For example, I have a dozen identical blocks that run on a common fast system clock and are critical to timing. Step 1: I sample each block's 32-bit input into a register to relax timing. Step 2: I declare these registers as DONT_TOUCH, because the tools would otherwise notice they are logically equivalent and try to use one shared instance (see the sketch after this post).
      - For BRAMs and DSP blocks, check the documentation where extra registers are needed (they get absorbed into the BRAM or DSP using a dedicated hardware register). This is the only way to reach the device's specified memory or DSP performance.
      - Read the warnings. Many relate to timing, e.g. when the design forces a BRAM or DSP to bypass a hardware register.
      - Finally, 260 MHz on Artix is already much harder than 130 MHz (very generally speaking). It's usually feasible, but you need to pay attention to what you're doing and design for it (e.g. a MicroBlaze with the wrong settings will most likely not make it through timing).
      - You might also have a look at the options ("strategy"), but don't expect any miracles on a bad design.
      Oops, this almost qualifies as a "long" answer...
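     The DONT_TOUCH trick from the list above might look like this minimal sketch (module and signal names are assumed for illustration; the attribute itself is standard Vivado Verilog syntax):

         // Per-block input resampling register, kept separate with DONT_TOUCH
         // so the tools don't merge the logically equivalent copies from the
         // dozen identical blocks back into one overloaded shared register.
         module block_input_stage (
             input  wire        clk,
             input  wire [31:0] shared_in,   // fast signal fanning out to many blocks
             output reg  [31:0] local_copy
         );
             (* DONT_TOUCH = "true" *) reg [31:0] resample = 0;

             always @(posedge clk) begin
                 resample   <= shared_in;   // step 1: relax timing at the block input
                 local_copy <= resample;    // downstream logic sees the local copy
             end
         endmodule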
  24. Hi, I am using the Arty 7 kit to implement my design. At first I used a clock frequency of 130 MHz, and the timing was met. Then I increased the clock frequency to 260 MHz, but the timing constraints were not met. Please see the attached picture. I read about the issue, and I concluded that I have to do some floorplanning for my design. How do I do floorplanning? What is the first step I have to take? Thanks.
  25. revathi

    xadc_zynq

    Hi @jpeyron, I have attached my unipolar XADC output signal. Kindly check it; the same deformation of the voltage is there. The Vp input signal is 0 to 1V, with an offset of 500mV. Vn is from DAC B; its default value is 0V. The frequency from the generator is 10 kHz. I don't understand why there is only 0.93V maximum and 0.3V minimum.
  26. revathi

    xadc_zynq

    Hi @jpeyron, The sampling rate is 961.54 kSPS. I have attached my SDK code for bipolar mode; kindly go through it. Recently I got the waveform with the proper shape and smoothing; the only thing is that there is some deformation of the voltage in the output. Fig 3: test signal for bipolar mode, no offset. Fig 4: input signal (Vp) connection from the AWG to the FPGA kit. Fig 5: test signal for unipolar mode. 1. Is configuring bipolar/unipolar mode in the XADC wizard enough? 2. Or do I need to make some changes in the SDK code? 3. My assumption is that I am making some error in the SDK code. I will attach my SDK code; I followed Adam Taylor and the attached link for the coding, and referred to xsysmon.h, lab 3. 4. The boxed part of the SDK code was added by myself for the bipolar and unipolar modes. 5. Is it necessary to include the code which I have given inside the text box? 6. Am I making any error in the code? sdk code.docx
  27. Hi @Carlos Posse, Based on the output from dmesg I can tell you that this is not an issue with drivers or software configurations. It's a hardware issue of some sort. I don't see any issues with your schematic. Have you tried using a different USB cable or connecting the cable to a different port on your PC? What are you loading for L8 and L9? Perhaps try loading shunts (0 ohm resistors) instead of ferrites or inductors and see if that makes any difference. Thanks, Michael
  28. mishu

    DDR3 input clock source

    Hi, I am wondering why a 100 MHz clock is present on the Arty S7 to be used for the DDR3 clocking, but not present on the Arty A7. Why not use only a single 100 MHz clock source on the Arty S7 as the main clocking source? Or was this the idea from the beginning? I suppose that in both cases a single-ended 100 MHz clock source placed on an MRCC FPGA pin can be used to clock both the FPGA resources and the external DDR3 device. Cheers, Mishu
  29. Dear all, I'm looking to purchase a LabVIEW license for personal/non-commercial use. Apparently, LabVIEW 2014 Home Edition should be the thing I need, but I have some additional questions. Is 'LabVIEW 2014 Home Edition' based on 'LabVIEW 2014'? I know this question almost answers itself, but nevertheless... NI is currently at LabVIEW 2019, so LabVIEW 2014 is getting old, and there are some nice-to-haves/upgrades that LV19 offers. If you are a student, the 'LabVIEW Student Edition' is maybe something to look at, because it offers more add-ons. But, since you cannot be a student for the rest of your life, I presume this is a yearly license. Is the 'LabVIEW Student Edition' a one-year license, and is 'LabVIEW 2014 Home Edition' a lifelong license? Regards