Leaderboard


Popular Content

Showing content with the highest reputation since 11/02/18 in all areas

  1. 2 points
    Hi @attila Thank you again for all the support you've provided me over the past weeks. I am now capable of receiving more than 409 characters using the Wrapper I created based on your example. It uses the Record acquisition mode and I set the buffer size to 3 million for now. I'll increase it when the need arises. I used 1 UART controller and branched out its Tx pin to 2 DIO pins of the AD2 (DIO #0 & 1). I transmitted 500 characters: (If Record mode is not the acquisition mode, the received result will be blank) For DIO #0, it received: with a length of: For DIO #1, it received: with a length of: I could not have done it without your guidance. Thank you again and more power to you and Digilent. Best regards, Lesiastas
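    For readers who want a starting point in code: below is a minimal Python (ctypes) sketch of a record-mode capture with the WaveForms SDK, loosely based on the SDK's DigitalIn record sample. The sample rate, record length and the later UART-decode step are assumptions; Lesiastas' actual wrapper is not reproduced here.
      from ctypes import *
      import sys

      # load the WaveForms SDK library
      dwf = cdll.dwf if sys.platform.startswith("win") else cdll.LoadLibrary("libdwf.so")

      hdwf = c_int()
      dwf.FDwfDeviceOpen(c_int(-1), byref(hdwf))

      acqmodeRecord = c_int(3)
      nSamples = 3000000                     # large record length, as in the post
      hzSys = c_double()
      dwf.FDwfDigitalInInternalClockInfo(hdwf, byref(hzSys))
      dwf.FDwfDigitalInAcquisitionModeSet(hdwf, acqmodeRecord)
      dwf.FDwfDigitalInDividerSet(hdwf, c_int(int(hzSys.value / 1e6)))  # 1 MHz sample rate
      dwf.FDwfDigitalInSampleFormatSet(hdwf, c_int(8))                  # DIO 0..7, one byte per sample
      dwf.FDwfDigitalInTriggerPositionSet(hdwf, c_int(nSamples))        # number of samples to record
      dwf.FDwfDigitalInConfigure(hdwf, c_bool(False), c_bool(True))

      rgbSamples = (c_ubyte * nSamples)()
      cSamples = 0
      sts = c_byte()
      cAvail, cLost, cCorrupt = c_int(), c_int(), c_int()
      while cSamples < nSamples:
          dwf.FDwfDigitalInStatus(hdwf, c_int(1), byref(sts))
          dwf.FDwfDigitalInStatusRecord(hdwf, byref(cAvail), byref(cLost), byref(cCorrupt))
          cSamples += cLost.value
          if cAvail.value == 0:
              continue
          if cSamples + cAvail.value > nSamples:
              cAvail = c_int(nSamples - cSamples)
          dwf.FDwfDigitalInStatusData(hdwf, byref(rgbSamples, cSamples), c_int(cAvail.value))
          cSamples += cAvail.value

      dwf.FDwfDeviceCloseAll()
      # rgbSamples now holds the DIO levels (bit 0 = DIO #0, bit 1 = DIO #1);
      # these can be decoded into UART frames (start bit, 8 data bits, stop bit) in software.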
  2. 2 points
    Hi @Blake, I was struggling with the same problem. There is a mistake in Adam's project which results in the FMC-HDMI module not being recognized by other devices. The reason is that no EDID is sent at all, and the cause is an incorrectly initialized EDID map. In Adam's example the EDID is initialized by: but the correct way is: the body of iic_write2 is from the LK example: By the way, in LucasKandle's example the initialization is done in the same way as in Adam's example, which is why it did not work in your case. I hope this helps. If you want, I will post my working code for a ZedBoard with FMC-HDMI once I clean it up, because at the moment it is kind of messy.
  3. 2 points
    kwilber

    Pmod DA3 clocking

    It seems to me the AXI Quad SPI block is sending address + data. Looking at the .xci file again, I see C_SPI_MEM_ADDR_BITS set to 24 bits. So 24 bits of address and 16 bits of data would yield 40 bits.
  4. 2 points
    Hi @neocsc, Here is a verified Nexys Video HDMI project updated from Vivado 2016.4 to Vivado 2017.4. You should be able to find the updated project in the proj folder. Here is a GitHub project done in HDL using the clocking wizard, DVI2RGB and RGB2DVI IP cores for another FPGA. Here is an unverified Nexys Video Vivado 2017.4 HDMI pass-through project made from the linked GitHub project. In the next few days I should have the bandwidth to verify this project. thank you, Jon
  5. 2 points
    The warning you pasted is benign and simply means there are no ILAs present in your design. The real issue could be your clock. You should review the datasheet for the dvi2rgb. Table 1 in section 5 specifies that RefClk is supposed to be 200 MHz. Also, your constraint should follow the recommendation in section 6.1 for a 720p design. Finally, @elodg gives some great troubleshooting information in this thread.
  6. 2 points
    Hi @akhilahmed, In the mentioned video tutorial the LEDs are controlled using the "xgpio.h" library, but that application is standalone. If you want to use a Linux-based application you have to use Linux drivers for the control. In the current Petalinux build, which is used in the SDSoC platform, the UIO driver is the best approach. Steps:
    1. Vivado project generation:
    - Extract the .dsa archive from /path_to_sdsoc_platform/zybo_z7_20/hw/zybo_z7_20.dsa
    - Launch Vivado
    - In the Tcl Console: cd /path_to_extracted_dsa/prj
    - In the Tcl Console: source rebuild.tcl
    - At this point you should have the Vivado project which is the hardware component of the SDSoC platform. Open the Block Design and change to the Address Editor tab. Here you will find the address for the axi_gpio_led IP: 0x4122_0000
    2. Petalinux UIO driver:
    - Launch SDx
    - Import the zybo-z7-20 SDSoC platform
    - Create a new SDx Linux-based project using a sample application (e.g. array_zero_copy)
    - Build the project
    - Copy the files from /Debug/sd_card to the SD card
    - Plug the SD card into the Zybo Z7. Make sure that JP5 is set in the SD position. Turn on the board
    - Use your favorite serial terminal to interact with the board (115200, 8 data bits, 2 stop bits, no parity)
    - cd to /sys/class/uio
    - If you run ls you will get something like: uio0 uio1 uio2 uio3 uio4 uio5
    - Now you have to iterate through all these directories and search for the above mentioned axi_gpio_led address: 0x4122_0000
    - For example: cat uio0/maps/map0/addr will output 0x41220000, which means that the axi_gpio_led can be accessed through the Linux UIO driver via the uio0 device.
    - Code:
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/ioctl.h>
      #include <sys/mman.h>
      #include <stdint.h>
      #include <unistd.h>
      #include <fcntl.h>

      #define UIO_MEM_SIZE 65536
      #define UIO_LED_PATH "/dev/uio0"

      void UioWrite32(uint8_t *uioMem, unsigned int offset, uint32_t data)
      {
          *((uint32_t*) (uioMem+offset)) = data;
      }

      uint32_t UioRead32(uint8_t *uioMem, unsigned int offset)
      {
          return *((uint32_t*) (uioMem+offset));
      }

      void led_count_down(uint8_t *ledMem)
      {
          uint8_t count = 0xF;
          uint8_t index = 0;
          for (index = 0; index < 5; index++)
          {
              UioWrite32(ledMem, 0, count);
              count = count >> 1;
              sleep(1);
          }
      }

      int main()
      {
          // Open the UIO device and map the GPIO registers
          int led_fd = open(UIO_LED_PATH, O_RDWR);
          uint8_t *ledMem = (uint8_t *) mmap(0, UIO_MEM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, led_fd, (off_t)0);
          UioWrite32(ledMem, 4, 0x0); // Set all leds as output
          while(1)
          {
              // Start led count-down
              led_count_down(ledMem);
          }
          return 0;
      }
    - Build the project and copy the content of Debug/sd_card onto the SD card
    - Power on the board and connect to it using a serial terminal
    - Run the following commands:
      mount /dev/mmcblk0p1 /mnt
      cd /mnt
      ./project_name.elf
    - Result: a countdown should be displayed on the LEDs.
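    Since scanning /sys/class/uio by hand gets tedious, here is a small helper (a sketch, assuming the sysfs layout described above, the 0x41220000 base address from the Address Editor, and that Python is available in your Petalinux image) that finds which uioN device maps the axi_gpio_led controller:
      # Sketch: find the /dev/uioN device whose map0 covers a given physical address.
      import glob

      TARGET_ADDR = 0x41220000   # axi_gpio_led base address from the Vivado Address Editor

      def find_uio(target):
          for addr_file in glob.glob("/sys/class/uio/uio*/maps/map0/addr"):
              with open(addr_file) as f:
                  if int(f.read().strip(), 16) == target:
                      return "/dev/" + addr_file.split("/")[4]   # .../uioN/maps/map0/addr -> uioN
          return None

      print(find_uio(TARGET_ADDR))   # e.g. /dev/uio0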
  7. 2 points
    JColvin

    Arty A7 flash chip

    Hi @D@n, I believe the new part that is used in the Arty A7 boards (and other A7 boards) is now a Spansion S25FL128SAGMF100; based on old schematics, I believe this was added in Rev D of the Arty A7 (dated August 2017), though I do not know when that particular Rev was then released (or if it even was released) to the public. I confirmed that the Arty S7 also uses this part and I wouldn't be surprised if most of our other Artix 7 based boards use it now as well. I've requested that the chip name and images are updated in any appropriate tutorials and requested that the pdf version of the reference manual (updated wiki) is updated as well. Thanks, JColvin
  8. 1 point
    xc6lx45

    I bricked my CMOD-A7

    Thinking aloud: Is it even possible to "brick" an Artix from Flash? On Zynq it is if the FSBL breaks JTAG, and the solution to the problem without boot mode jumpers is to short one of the flash pins to GND via a paper-clip at power-up. But on Artix? Can't remember having seen such a thing. Through EFUSE, yes, but that's a different story. If you like, you can try this if it's a 35T (use ADC capture at 700 k, it stresses the JTAG port to capacity). For example, it might give an FTDI error. Or if it works, you know that JTAG is OK.
  9. 1 point
    Hi @P. Fiery Thank you for the observations.
  10. 1 point
    D@n

    Verilog

    @Ahmed Alfadhel, Perhaps the most complete tutorial out there is asic-world's tutorial. You might also find it the most vacuous, since although it tells you all the details of the language it doesn't really give you the practice or the tools to move forward from there. There's also a litexsoc (IIRC) by enjoy-digital that I've heard about, but never looked into. An alternative might be my own tutorial. Admittedly, it's only a beginner's tutorial. It'll only get you from blinky to a serial port with an attached FIFO. That said, it does go over a lot of FPGA Verilog design practice and principles. It also integrates learning how to use a simulator, in this case Verilator, and a formal verification tool, such as SymbiYosys, into your design process so that you can start learning how to build designs that work the first time they meet hardware. I'm also in the process of working to prepare an intermediate tutorial. For now, if you are interested, you'd need to find most of the information that would be in such a tutorial on my blog. (It's not all there ... yet, although there are articles on how to create AXI peripherals ..) Feel free to check it out. Let me know what you think, Dan
  11. 1 point
    Hi @P. Fiery The Views can't be opened/closed from Script. The FFT.Window refers to data windowing. You could have 2 Scopes open, one with and the other without FFT, and control them from Script as Scope1 and Scope2.
  12. 1 point
    Hey Paolo, I'm glad you found my videos helpful! I've been working on other projects, but if you have any other ideas for videos that you would find helpful let me know. Kaitlyn
  13. 1 point
    Hi @cfatt7 Yes, you can use FDwfAnalogOutConfigure(..., -1, ...) to start the channels synchronized. You can also use FDwfAnalogOutMasterSet to specify the master channel; starting the master channel will then also start the slave channels. This is important in case you are using external triggering or cross-triggering with other instruments. Specifying a finite run length is useful to keep different frequencies phase aligned, using the minimum frequency or greatest common divisor. For example, 1kHz might be generated as 0.9999999kHz and 2kHz as 2.000000001kHz, which could shift slowly over time. In this case use a 1ms (1/1kHz) run time. FDwfAnalogOutRunSet(..., ..., 1.0/min_freq); FDwfAnalogOutRepeatSet(..., ..., 0); See the WF SDK/ samples/ py/ AnalogOut_Sync.py example
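    For anyone looking for the corresponding code, here is a minimal Python (ctypes) sketch of the master/run/repeat settings described above; the channel indices, frequencies, amplitude and the 1 ms run time are example values, and the AnalogOut_Sync.py sample in the SDK remains the reference.
      from ctypes import *
      import sys

      dwf = cdll.dwf if sys.platform.startswith("win") else cdll.LoadLibrary("libdwf.so")
      hdwf = c_int()
      dwf.FDwfDeviceOpen(c_int(-1), byref(hdwf))

      funcSine = c_ubyte(1)
      AnalogOutNodeCarrier = c_int(0)
      for ch, hz in ((0, 1e3), (1, 2e3)):
          dwf.FDwfAnalogOutNodeEnableSet(hdwf, c_int(ch), AnalogOutNodeCarrier, c_bool(True))
          dwf.FDwfAnalogOutNodeFunctionSet(hdwf, c_int(ch), AnalogOutNodeCarrier, funcSine)
          dwf.FDwfAnalogOutNodeFrequencySet(hdwf, c_int(ch), AnalogOutNodeCarrier, c_double(hz))
          dwf.FDwfAnalogOutNodeAmplitudeSet(hdwf, c_int(ch), AnalogOutNodeCarrier, c_double(1.0))
          dwf.FDwfAnalogOutRunSet(hdwf, c_int(ch), c_double(1.0/1e3))   # run time = 1/min_freq
          dwf.FDwfAnalogOutRepeatSet(hdwf, c_int(ch), c_int(0))         # repeat forever
          dwf.FDwfAnalogOutMasterSet(hdwf, c_int(ch), c_int(0))         # channel 0 is the master

      dwf.FDwfAnalogOutConfigure(hdwf, c_int(0), c_bool(True))          # starting the master starts both
      # alternatively, FDwfAnalogOutConfigure(hdwf, c_int(-1), c_bool(True)) starts all channels together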
  14. 1 point
    Hi @m72 After adding the Order option in the Logic Analyzer (splitting the Input selection in two) I forgot to update the Protocol/Logic Analyzer to set the Order option automatically. Thank you for the observation; it is fixed for the next release.
  15. 1 point
    zygot

    Using tera term for two pmods

    Well, I think that this is better stated as saying that most serial terminal applications can only connect to one COM port at a time. It is possible to have multiple UARTs in your FPGA design and connect to multiple serial terminal applications. I like PuTTY myself, but there are other options. Another possibility is to look around in the Digilent Project Vault and see at least 3 projects with source code that might accomplish what you want to do. If you instantiate your own UART you can access any number of internal registers or memory.
  16. 1 point
    Hi @m72 The pulse preview is not correct. I will look into this. Thank you for the observations. You could use a custom bus or signals to easily create/modify such patterns.
  17. 1 point
    jpeyron

    GPS Pmod

    Hi @cepwin, I'm glad you were able to get to the bottom of the issue. Thank you for sharing what happened. cheers, Jon
  18. 1 point
    D@n

    Custom IP

    @PoojaN, You're not the first person who has asked this. If you just want to blink an LED, then I'd recommend a different approach that avoids all the pain with AXI in the first place. (You don't need AXI ...) If you want to start interacting with AXI cores, then you'll need to learn AXI. Sadly, this isn't as simple as it sounds.
    Xilinx picked the AXI bus to connect all their components with. This may have something to do with their ARM integration, since if I understand correctly AXI is an ARM creation. AXI is not a simple bus to work with. Unlike Wishbone, it has five channels associated with it, each of which can stall. These are the read address channel, the write address channel, the write data channel, the read response channel and the write response channel. One bus failure, and your device will lock up. In my experience, using an ARM+FPGA chip, lockups could only be fixed by cycling the power, leaving you ever wondering what had caused the problem. Part of the problem is that the AXI standard has no way of recovering following a dropped response other than a total system reset. As I've implemented Wishbone, you can just adjust one wire (the cycle line--but that's another story) and start over. You can even use a timeout to clear the bus if a peripheral has not responded within an expected period of time. Not so with AXI.
    AXI is so difficult to work with that not even Xilinx could get it right. (See the links above) When I first discovered these bugs, I wondered why no one had found them before. For example, two writes in a row would lose a response and lock up the bus if ever there was the slightest amount of backpressure on the return channel. (Something Wishbone doesn't have to deal with, since there's no way to stall a Wishbone acknowledgement) It would seem as though very few individuals ever simulated their cores with backpressure (i.e. either BREADY or RREADY signals low), and so they never noticed these bugs. Similarly, some configurations of the interconnect might trigger the bugs while others wouldn't. Imagine adjusting the glue that holds your design together only to find your design starts failing. What would you blame? The interconnect, right? When in fact it was their demonstration core logic at fault that everyone was copying.
    I've now fielded several questions in the last several months alone on Xilinx's forums from users who've struggled with these bugs. If you do searches, you'll discover that folks have been struggling with these sorts of problems ever since Xilinx started using AXI. In one recent post, a software engineer posted that his FPGA engineer had left, leaving them with a "working" design. He then adjusted the software within the design and the whole design now froze any time he tried to write to their special IP core twice in succession. I'm hoping Xilinx will fix these bugs (soon). I haven't checked their latest release since reporting them, but I do expect them to fix the bugs in the near future.
    It's not just Xilinx either. I'm currently verifying the (ASIC) soft core of a major (unnamed) vendor. Much to my surprise, despite a team of highly paid professional engineers working to produce this amazingly complex core, and despite the fact that they created a simplified subset of the AXI interface standard to work with ... they still didn't get the AXI interface right.
    Realizing how difficult this was, I tried to simplify the task by creating a couple of cores: one showing how to build a bug-free AXI-lite slave (link above), another showing how to build a bug-free AXI slave (link above again). I also shared an AXI bridge implementation that, if you place your core downstream of it, you'd be guaranteed to meet the AXI protocol--even if it slowed you down a touch. I also shared the code for verifying that an AXI-lite component works--you are free to try it out yourself to know if your core still works after changing it. If you like using Wishbone, I've posted an AXI-lite to Wishbone bridge, or even a Wishbone to AXI bridge in case you want to access your DRAM memory. I also think you'll find that all of these cores, save perhaps the bus fault isolator core, will have better performance than Xilinx's logic ever had.
    Whether or not you use these options (or give up on AXI as I've tried to do) ... well, that's up to you. Forget what the sales brochures tell you, we aren't playing with legos here. There's more required to hook things together than just plugging them into each other--especially if you want something that works reliably when you are done. Just want something simple? Learn Verilog or VHDL. At least then you'll be the one responsible for your own bugs. Dan
  19. 1 point
    Yes, for an application with basic requirements like receiver gain control this will probably work just fine (it's equivalent to an analog envelope detector). It does need a fairly high bandwidth margin between the modulation and the carrier, though, and that may make it problematic in more sophisticated DSP applications (say, "polar" signal processing where I try to reconstruct the signal from the envelope) where the tolerable noise level is orders of magnitude lower.
  20. 1 point
    Hi @Ahmed Alfadhel I had the C code handy because I have been working on an atan2(y,x) implementation for FPGAs, and had been testing ideas. I left it in C because I don't really know your requirements, but I wanted to give you a working algorithm, complete with proof that it does work, so you can tinker with it, see how it works, and make use of it. Oh, and I must admit that it was also because I am also lazy 😀 But seriously:
    - I don't know if you use VHDL or Verilog, or some HLS tool
    - I don't know if your inputs are 4 bits or 40 bits long,
    - I don't know if you need the answer to be within 10% or 0.0001%,
    - I don't know if it has to run at 40 MHz or 400 MHz,
    - I don't know if you have 1000s of cycles to process each sample, or just one,
    - I don't even know if you need the algorithm at all!
    But it has been written to be trivially converted to any HDL, as it only uses bit shifts and addition/subtraction. Maybe more importantly, you can then use it during any subsequent debugging to verify that you correctly implemented it. For an example of how trivial it is to convert to HDL:
      if(x > 0) { x += -ty/8; y += tx/8;} else { x += ty/8; y += -tx/8;}
    could be implemented as
      IF x(x'high) = '0' THEN
          x := x - resize(y(y'high downto 3), y'length);
          y := y + resize(x(x'high downto 3), x'length);
      ELSE
          x := x + resize(y(y'high downto 3), y'length);
          y := y - resize(x(x'high downto 3), x'length);
      END IF;
    My suggestion, should you choose to use it, is to compile the C program, making the main() function a sort of test bench, and then work out exactly what you need to implement in your HDL. You will then spend very little time writing, debugging and improving the HDL because you will have a very clear idea of what you are implementing.
  21. 1 point
    attila

    Getting Input Phase Programmatically

    Hi @jamesbraza
    "I constantly see the prefix `rg` in your programs. What is the meaning of the `rg` prefix in all array namings?"
    These are so-called Hungarian notations, originating from physics, to help identify variable kinds, like: rg Array, sz String, i Index, c Count.
    "Why does the gain term = V_C1 / V_C#? I would think it's the inverse... gain = output / input = V_C2 / V_C1"
    This is how the function returns it. You can convert it using 1.0/gain.
    "Does the formula you listed, M = gain2 - 1.0, come from a simplification of M = (V_C1 - V_C2) / (V_C2 - 0)?"
    Yes.
    "Also, please see the attached image. It's of input phase. Note sometimes the points are flipped about 360°. My final question is, do you know why this might be happening?"
    The phase should be normalized to +/-PI. The next software version will correct this, but you can correct it in your script/application like this:
      if phase2.value > math.pi : phase2.value -= 2.0*math.pi
      if phase2.value < -math.pi : phase2.value += 2.0*math.pi
    Thank you for the observation.
  22. 1 point
    Hi, I just opened a new terminal and launched minicom from it, which works the same way as the SDK terminal, but I have to close the SDK terminal before connecting with minicom. Thanks @D@n and @jpeyron
  23. 1 point
    Nothing to worry about if only one is up at a time. It would mean that the frequencies of adjacent oscillators affect each other if they are running at the same time ("injection pulling"), to the point that they agree on a common frequency ("locking"). Consider the oscillator as an amplifier with a feedback loop. The feedback path plus phase shift leads to a fairly narrow frequency response around the oscillation frequency or harmonically related frequencies. Weird things can happen with the gain - while it is unity in average steady-state operation, the circuit can get highly sensitive to external interference that is (near-)correlated with the oscillator's own signal. Wikipedia: "Perhaps the first to document these effects was Christiaan Huygens, the inventor of the pendulum clock, who was surprised to note that two pendulum clocks which normally would keep slightly different time nonetheless became perfectly synchronized when hung from a common beam."
  24. 1 point
    jpeyron

    Pmod da3 reconstruction filter

    Hi @lwew96, We have not used a reconstruction filter. I did find a paper that discusses a reconstruction filter with the AD5541 here. Hopefully one of the more experienced community members will have some input for you as well. best regards, Jon
  25. 1 point
    D@n

    Noisy Output from FIR Compiler

    @Ahmed Alfadhel, You have a couple of options available to you:
    - It's not clear, from your pictures above, whether or not the -40dB stop band was achieved. Some amount of noise is to be expected due to truncation errors, etc. Without seeing an estimated PSD, I can't tell. It may be that it's doing exactly what you required of it.
    - -40dB is only so good. With more taps, you should be able to go deeper. How deep depends upon your requirements. How good do you want the signal to look?
    - You may also need to provide more bits to both your signal and coefficient values in order to do better. You did prescale your coefficients so that, when rounded to integers, the taps were useful, right?
    - Also, be aware, the filter will be specified for full scale. You'll want to measure it against a full scale input. Anything less will introduce additional truncation error. This is one of those reasons why the dynamic range (i.e. number of bits) of the input and output signals are so important.
    Enjoy! Dan
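    To make the coefficient-prescaling point concrete, here is a small numpy/scipy sketch (the tap count, bit width and band edges are example values, not taken from the design above) that quantizes a filter to integer taps and checks how much stopband actually survives:
      import numpy as np
      from scipy import signal

      h = signal.firwin(63, 0.2)                     # 63-tap lowpass, cutoff at 0.2 x Nyquist
      COEF_BITS = 16
      scale = 2**(COEF_BITS - 1) - 1
      h_int = np.round(h / np.max(np.abs(h)) * scale)         # prescale, then round to integers

      w, H = signal.freqz(h_int / np.sum(h_int), worN=4096)   # normalize for unity DC gain
      stop = H[w > 0.3 * np.pi]                               # stopband taken above 0.3 x Nyquist here
      print("worst-case stopband: %.1f dB" % (20 * np.log10(np.max(np.abs(stop)))))
    Running the same check with fewer coefficient bits (or fewer taps) shows the stopband floor coming up, which is the truncation effect described above.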
  26. 1 point
    Hi, For the SW part I use the Xilinx DMA driver (the interface to the VDMA IP core) and a modified ADI AXI HDMI DRM driver for exposing a frame buffer device to GUI SW (e.g. Qt). You can see the driver bindings in the zyboz7-20.devicetree-1.zip (pl.dtsi) attached above. All video memory transfers to the FPGA are managed by these two drivers.
  27. 1 point
    jomoengineer

    Howdy from NorCal

    Thanks Jon. And thanks for the links. Cheers, Jon
  28. 1 point
    The example I posted would work for Linux or Mac with "common" tools installed. As to Windows... can't really help much there. git's not part of Python, it's used for managing code; you can achieve the same end result here by downloading the ZIP from https://github.com/bdlow/dlog-utils-portable/archive/master.zip and unzipping to a folder. Virtual environment support is a standard part of Python 3; you can skip that if you like but without virtual environments eventually your Python installation will end up like this: https://xkcd.com/1987/ Ah, of course, in Windows `activate` is a batch script not a shell script: https://www.techcoil.com/blog/how-to-create-a-python-3-virtual-environment-in-windows-10/
  29. 1 point
    Hi, as I may not have time for FPGA work for a while - just started in a fascinating new role related to high-speed digital diaper changing - I decided to post this now. Here's the Github repo (MIT-licensed). The project provides a very fast (compared to UART) interface via the ubiquitous FTDI chip to a Xilinx FPGA via JTAG. Most importantly, it achieves 125 us response time (roundtrip latency), which is e.g. 20..1000x faster than a USB sound card. It also reaches significantly higher throughput than a UART, since it is based on the MPSSE mode of the FTDI chip. Finally, it comes with a built-in bitstream uploader, which may be useful on its own. I implemented only the JTAG state transitions that I need, but in principle this can be easily copy-/pasted for custom JTAG interfacing. So what do you get:
    - On the software side, an API (C#) that bundles transactions, e.g. scattered reads and writes, executes them in bulk and returns readback data
    - On the RTL side, a very basic 32 bit bus-style interface that outputs the write data and accepts readback data, which must be provided in time. See the caveats.
    - In general, a significant increase in complexity over a UART. The performance comes at a price. In other words, if a UART will do the job for you, DO NOT use this project.
    For more info, please see the repo's readme file. For CMOD A7-35, it should build right out-of-the-box. For smaller FPGAs, comment out the block RAM and memory test routines, or reduce the memory size in top.v and Program.cs. I hope this is useful. When I talked to the FTDI guys at Electronica last week I did not get the impression that USB 3.0 will make the FT2232H obsolete any time soon for FPGA: they have newer chips and modules but they didn't seem nearly as convenient, e.g. the modules are large and require high density connectors. In FPGA-land, I think USB 2.0 is going to stay... Cheers Markus
  30. 1 point
    attila

    Logic Analyzer Counter Function

    Hi @Lars Lindner You can perform a recording and see the pulses using quick measurements or measurements like this:
  31. 1 point
    jpeyron

    hdmi ip clocking error

    Hi @askhunter, I did a little more searching and found a forum thread here where the customer is having a similar issue. A community member also posted a pass through zynq project that should be useful for your project. best regards, Jon
  32. 1 point
    @longboard, Yeah, that's really confusing isn't it? At issue is the fact that many of these chips are specified in mega BITS not BYTES. So the 1Gib is meant to refer to a one gigabit memory, which is also a 128 megabyte memory. That's what the parentheses are trying to tell you. Where this becomes a real problem is that I've always learned that a MiB is a reference to a million bytes, 10^6 bytes, rather than a megabyte, or 2^20 bytes. The proper acronyms, IMHO, should be Gb, GB, Mb, and MB rather than GiB or MiB, which are entirely misleading. As for the memory, listed as 16 Meg x 8 x 8, that's a reference to 8 banks of 16 mega-words of memory, where each word is 8 bits wide. In other words, the memory has 16MB*8 or 128MB of storage. You could alternatively say it had 1Gb of memory, which would be the same thing, but this is often confused with 1GB of memory--hence the desire for the parentheses again. Dan
  33. 1 point
    Hi @Phil_D The gain switch is adjusted automatically based on the selected scope range. At 500mV/div (5Vpk2pk, ~0.3mV resolution) or lower the high gain is used, and above this the low gain (50Vpk2pk with ~3mV resolution). In case you specify a trigger level outside of the screen (5Vpk2pk), or an offset higher/lower than +/- 2.5V, the low gain will be used for the trigger source channel. This will be noted on the screen with red warning text. The attenuation is a different thing. This option lets you specify the external attenuation or amplification on the signals which enter the scope inputs, and the data is scaled accordingly. For example, if you use a 10x scope probe, the scope input will actually get 1/10th of the original signal, but by specifying 10x attenuation the signal is scaled to show the values at the probe. In this case the 500mV/div (5Vpk2pk) low/high gain limit moves up to 5V/div (50Vpk2pk) and the low gain up to 50V/div. If you have an external 100x amplifier on the scope input you can specify 0.01x attenuation. With this you will have 5mV/div (50mVpk2pk, ~0.003mV resolution) for high gain.
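    For reference, here is where those resolution numbers come from, assuming the 14-bit converters of the Analog Discovery 2 (the ADC width is my assumption, not stated above):
      # Sketch: per-step resolution of the high/low gain ranges, assuming 14-bit ADCs.
      ADC_BITS = 14
      for name, full_scale_v in (("high gain, 5 Vpk2pk", 5.0), ("low gain, 50 Vpk2pk", 50.0)):
          lsb_mv = full_scale_v / 2**ADC_BITS * 1e3
          print("%s: ~%.2f mV per step" % (name, lsb_mv))
      # With a 10x probe (10x attenuation setting) the displayed values scale by 10,
      # so the same two ranges appear as 50 Vpk2pk and 500 Vpk2pk on screen.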
  34. 1 point
    Hi xc6lx45: Well, to my surprise, when I got home and loaded the .BIT file onto the board...it works perfectly. [1:0]sw is properly changing the frequency the led is blinking at. So this tells me that I don't quite have my testbed code done properly. I tried to attach it into this text but it kept getting reformatted so I've simply attached the actual file. If somebody could look at it and tell me what (if anything) I've done wrong I'd greatly appreciate it. THANKS! NOTE: In the actual module code, above, I had changed the CASE choices to the 0th, 1st, 2nd and 3rd flip-flops in order to better see the led changing value on the wave panel. However I've changed the code back to the actual flip-flops I wanted: the 26th, 25th, 24th and 23rd flip-flops. As I said...the board is working perfectly now and the switch settings are appropriately changing the led blinking frequency. It HAS to be something wrong with the TestBench code...or me not using the simulator properly. THANKS MUCH! clock_divider.tb
  35. 1 point
    Hi @Phil_D Try calling WaveForms to load the workspace and to run the script, one after the other:
      subprocess.Popen(['C:/Program Files/Digilent/WaveForms3/WaveForms.exe', 'phase_noise_237.dwf3work'])
      subprocess.Popen(['C:/Program Files/Digilent/WaveForms3/WaveForms.exe', '-runscript'])
  36. 1 point
    Hi @Jaraqui Peixe, Unfortunately, Digilent does not have the ability to obtain these licenses for you with regards to Xilinx negotiations. I do not doubt that the Spartan 3E Starter Boards you have are as good as new and work as such, but the reality is that the last variant of ISE (14.7) that could support the FPGA chips on the Basys 2 and the Spartan 3E (both over 10 years old) was released by Xilinx back in 2013, so active support on these boards is limited as the required software will not install on newer OS's (at least the Windows variants anyway). As @xc6lx45 noted, it is possible to make it work though. What I would probably recommend is looking into the newer 7 series boards, such as the Basys 3 (the most similar to the Basys 2) or, if you would want access to more memory than is provided in BRAM, both the Arty A7 and the Nexys A7 have on-board DDR memory. All of these boards work with MicroBlaze and are supported by the free Vivado WebPACK from Xilinx (which is license-free, if that is a factor for you, and includes MicroBlaze). Naturally, there is no guarantee that the Vivado software that supports these Artix 7 FPGA chips won't eventually be end-of-life'd as well, but I can at least say from Digilent's end that I have not heard of this happening in the near future. Thanks, JColvin
  37. 1 point
    Hi, >> We are forced to work in assembly with picoblaze. You might have a look at the ZPU softcore CPU with GCC. The CPU is just a few hundred lines of code, but most of its functionality is in software in the crt.o library in RAM. I understand it's quite well tested and has been used in commercial products. Not surprisingly, using an FPGA to implement a processor that then kinda emulates itself in software (aka RISC :) ) is maybe not the most efficient use of silicon - I'm sure it has many strong points but speed is not among them... Unfortunately, the broken-ness of Xilinx's DATA2MEM utility (to update the bitstream with a new .elf file) spoils the fun, at least when I tried in ISE 14.7 (segfaults). When it works, the compile/build cycle takes only a second or two. Long-term, porting the processor to a new platform would be straightforward, or even fully transparent if using inferred, device-independent memory. This would also work for a bootloader that is hardcoded into default content in inferred RAM. I might consider this myself as a barebone "hackable" CPU platform strictly for educational purposes.
  38. 1 point
    jpeyron

    Nexys 2 - transistor part number

    Hi @CVu, Glad to hear that replacing the transistor fixed the issue. Thank you for sharing what you did. best regards, Jon
  39. 1 point
    You might have a look at the Trenz Electronic "Zynqberry". I think they managed to get one of the cameras to work (not sure). What I do remember is that the board has some custom resistor circuitry on additional pins for the required low-speed signaling.
  40. 1 point
    D@n

    Conflicting Voltages in Bank Arty-A7

    @zygot, @Ahmed Alfadhel is not using a Basys3 board, and so this is really a bad example of attaching one question to another post. @Ahmed Alfadhel appears to be using an Artix-A7 board. In that case, the sys_clk is properly constrained, but he may well have some of the DDR3 I/O pins improperly constrained. These are the pins located on Bank 35. I think the problem in this case is that @Ahmed Alfadhel has improperly constrained the DDR DQS pins. For example, ddr3_dqs_[0] should be set to pin N2, not to A6. Compounding the problem is the way these pins are hidden in a "board definition file" rather than in the XDC file, making it likely to have conflicting pin definitions. @Ahmed Alfadhel, If you are following Digilent's instructions, you might want to double check that you have the appropriate board definition file. If you are trying this on your own, using only an XDC file, then you might find these instructions valuable. Also, I would recommend you not attach unrelated issues to old posts. Perhaps the Digilent staff might be kind enough to separate these two issues into separate forum posts--since they really are quite different. For example, the Basys3 board doesn't have the DDR3 memory which is the source of your pin-connection troubles. Dan
  41. 1 point
    Are you maybe using a low-speed analog output with a 200 ohm series resistor? Check the schematic of the board for a direct output.
  42. 1 point
    Well that's a pretty horrible looking 5 MHz signal coming directly out of an MMCM. It does remind me of the characteristic response of a particular passive component to a pulse, from decades ago when I took my intro electronics course. What do you think? Remind you of anything? I didn't mention the idea of scope probe compensation. It sure doesn't look like something that even a cheapo compensated probe would present for a low frequency signal out of a functioning FPGA pin into a high impedance load. Past that there are a number of usual suspects... but something is fundamentally wrong with your test setup.
  43. 1 point
    Hi @ebattaglia42, What operating system are you currently on? If you are on Windows, can you attach a picture of what is shown in the Windows Device Manager and what you see in the WaveForms Device Manager (it should pop up when you initially connect the EE Board)? The other thing I would suggest trying would be a different USB cable (make sure it's not a charging-only cable) and/or a different USB port on your computer, as that is another source of error that is easy to check. Thank you, JColvin
  44. 1 point
    You can get the SDK to add a few example projects for any device in the system. Open the system.mss and click on the OS (the default is standalone, but you may have chosen another one when you created your BSP). Scroll down to the uart_x that you run through the PL and click on the demonstration examples. There is a nice variety of demonstrations and you probably want to add them all. The SDK will build these for the UART you selected. This is one nice feature of the SDK. If you chose another OS, such as an RTOS, I'm not sure if examples are available. You likely want to use the interrupt-driven example as a basis for your design (depending on how you designed your overall software control). Of course, there are a lot of ways to arrange your communication protocol, so I hope that you've spent some time thinking about how it will work. The simpler the better. Understand that the purpose of the example code is to show you the basic requirements to implement a particular interface and not to solve your problems... that is, they are there for you to pore over and understand how they work. I can't send you code because your application is unique to you. If your SDK OS has a hardware abstraction layer then you will likely need to find other sources for example code. I rarely need (or want) a full-up OS like Linux for embedded applications. [edit] I should have mentioned that since you have at least two FPGA boards (and only you know what else) you have a system. The basic system definition and design approach should be the first thing to flesh out. This includes inter-board communication; for instance, are the boards peer-to-peer or is there a hierarchy? You can always tweak the system design if the lower level considerations demand it once you start fleshing out the actual implementation. If you haven't given any thought to the system interactions and structure then you are in for a lot of unnecessary work as the project nears integration.
  45. 1 point
    kotra sharmila

    sdsoc_opencv error

    Hi, Thank you very much for this platform; the video I/O demo shows and builds perfectly. I will try it with my own project, and if I have any doubts I will ask you. Regards, K Sharmila
  46. 1 point
    xc6lx45

    FFT / iFFT / RS - Basys3

    OK, that starts to make more sense. So one channel is the reference signal, e.g. the transmitted signal, and one channel the received reflection. Capture both, FFT, multiply (don't forget the conjugate), iFFT. On the bright side, in this specific case you can solve the circularity issues mentioned above with sufficient zero padding on the transmit signal (rule of thumb: add enough zeros until all reflections have died down to a negligible level). This may be easier said than done with a hardware FFT, though... Resolution is limited to the sample rate. If you want to do better, you can interpolate by stealing lines 315..345 here. Needless to say, this calculation needs to be done on a microcontroller or the like. In double precision it's usually accurate to 1 % of a sample. For a reference algorithm, have a look here (this is more complex and somewhat heuristic but has proven itself over the years). With noise-free data this can be accurate to about one nanosample.
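    A minimal numpy sketch of that flow (zero padding, FFT, conjugate multiply, inverse FFT, then a parabolic fit around the peak for sub-sample resolution); the signals, sample rate and padding factor are placeholders:
      import numpy as np

      fs = 100e6                                   # sample rate
      n = 1024
      t = np.arange(n) / fs
      tx = np.exp(-((t - 2e-6) / 0.2e-6) ** 2)     # transmitted pulse (reference channel)
      rx = 0.5 * np.roll(tx, 137) + 0.01 * np.random.randn(n)   # delayed, noisy reflection

      nfft = 4 * n                                 # zero padding avoids circular wrap-around
      X = np.fft.rfft(tx, nfft)
      Y = np.fft.rfft(rx, nfft)
      xcorr = np.fft.irfft(Y * np.conj(X), nfft)   # cross-correlation via the FFT

      k = int(np.argmax(xcorr))                    # integer-sample delay
      a, b, c = xcorr[k - 1], xcorr[k], xcorr[k + 1]
      frac = 0.5 * (a - c) / (a - 2 * b + c)       # parabolic interpolation for sub-sample delay
      print("delay = %.3f samples = %.1f ns" % (k + frac, (k + frac) / fs * 1e9))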
  47. 1 point
    attila

    external p/s for analog discovery 2

    Hi @GaborG Unfortunately the Analog Discovery and Digital Discovery do not work with the Raspberry Pi.
  48. 1 point
    Hi @jli853, I reached out to one of our design engineers about this forum thread. They responded that "Unless you do a non-blocking (overlapped) transfer, the time it takes to execute the function will include not only the time to transfer the data over USB but also to shift it onto the JTAG scan chain. When the function returns, all data has been transferred to the target JTAG device. How long that takes is going to vary with the TCK frequency, as well as the PC side hardware and operating system. I don't have any measured data to provide." thank you, Jon
  49. 1 point
    xc6lx45

    Diving in

    >> it seems like it's very desirable to have a pure sin wave, Welcome to the world of radio engineering :) Very quick answer: many modern receivers (e.g. take your cellphone) use a digital divider for LO generation that outputs a square LO signal. It actually gives higher mixer gain (which is good for noise) since the "switches" in the mixer conduct 100 % of the time, and it improves balance issues. The downside is that you get strong spurious responses at n times the LO frequency, which should be suppressed by filtering at the antenna side, before the mixer. But this is one problem from a very long list that you can probably ignore for a while. Generating a square LO is straightforward - simply use the clocking wizard to instantiate an MMCM/PLL. The chip does include LC oscillators (of which the Colpitts is a textbook example) and they are digitally programmable. They can also provide 90 degree phase shifted outputs from a built-in divider. BTW, if you downconvert the ADC signal in software: you need a _decimating_ lowpass filter. Either that, or the number of MAC operations skyrockets (calculating samples that are mostly discarded).
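    To illustrate the decimating-filter remark, here is a small numpy/scipy sketch of a software downconverter (the IF, decimation factor and tap count are arbitrary); the polyphase form computes only the output samples that are kept, which is where the MAC savings come from:
      import numpy as np
      from scipy import signal

      fs = 10e6                  # ADC sample rate
      f_lo = 1.0e6               # digital LO frequency
      M = 16                     # decimation factor

      n = 1 << 16
      x = np.cos(2 * np.pi * (f_lo + 5e3) * np.arange(n) / fs)   # test tone 5 kHz above the LO

      lo = np.exp(-2j * np.pi * f_lo * np.arange(n) / fs)        # ideal complex LO (a square LO adds harmonics)
      bb = x * lo                                                # mix down to (near) DC

      taps = signal.firwin(128, 0.8 / M)           # lowpass at 80% of the post-decimation Nyquist
      y = signal.upfirdn(taps, bb, up=1, down=M)   # polyphase: only every M-th output is computed
      print(len(x), "->", len(y), "samples at", fs / M / 1e3, "kHz")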
  50. 1 point
    shahbaz

    How to read from SD card on ZYBO

    Hi @jpeyron, I followed the guide on GitHub under the Readme in PmodSD. Can you please guide me step by step on how to start from the block design, then go to the SDK and run the demo? I have added the PmodSD and Zynq PS IPs; after running auto connection and generating the bitstream I get the following error. I need your guidance on this.