Leaderboard


Popular Content

Showing content with the highest reputation since 11/12/18 in all areas

  1. 2 points
    You're welcome! I am glad it is working. Yes, this thread should help others trying to run the Pcam 5C demo on Zybo Z7. Best Regards, Ionut.
  2. 1 point
    Connecting the Intel (Altera) and Xilinx worlds with a cheap cable. I've been doing FPGA development using Altera and Xilinx development tools for many years now. This has produced a lot of years-long itches that I've found hard to make go away. Generally, these irritations are caused by obstacles thrown in my way by vendors wanting to get money out of me. It's really hard to find inexpensive Altera-based development boards with an Ethernet PHY not connected to an ARM PS, or with a decent UART port or any useful USB port. However, you can find ways to connect Altera-based development boards to ADC/DAC devices with reasonable performance. In the Xilinx world it's the other way around. Both vendors have made playing with transceivers very difficult, especially for the non-premium devices. Both vendors try to push their soft-processor-based development flow as the only way to do anything useful with their development boards. The HSMC has long been the standard IO interface, providing a reasonable number of IO for both low-speed and high-speed uses. But try and find a reasonably priced Xilinx development board with an HSMC connector. For too many years the 8-signal PMOD was the only IO available in the Xilinx world, until recently when boards with an FMC connector became available. Recently, expensive Altera boards with an FMC connector have also become available. So, I have a lot of hardware that can do a lot of things... except what I want. What to do... what to do...
    Recently, I released an Ethernet test tool to the Digilent Project Vault. If view counts are any measure, there hasn't been much interest. I've recently made a demonstration project that resolves a few of the previously mentioned itches. Below is a brief description. The project connects my ATLYS board to two channels of 100 MHz ADC and DAC interfaces. The ATLYS uses the high-speed USB 2.0 Adept interface to connect to a C program for downloading DAC waveforms to, and uploading ADC samples from, the DDR2. DAC waveforms can be of arbitrary length. All of this data goes through the ATLYS Ethernet PHY to an Altera Cyclone V GT based development board with 2 HSMC connectors and the rare Ethernet PHY - FPGA fabric connections. One of the HSMC connectors has a Terasic DDC board with 2 250 MHz DACs and 2 150 MHz ADCs. At best, Gigabit Ethernet supports 125 million bytes/s full-duplex data rates... but the good news is that this is, unlike USB, a sustainable rate with very low latencies. Currently, the project runs all 4 converters at a 100 MHz sample rate. The sample rates supported through the Ethernet cable are 25 MHz. DAC samples from the ATLYS go through a 4X interpolating filter in the Cyclone FPGA to create 100 MHz samples. ADC samples are decimated to 25 MHz sampling rates. DAC data is sourced from 2 16 KBx16 block RAM DPRAM waveform buffers in a ping-pong arrangement so that I can write new waveform data without disturbing the DAC outputs. Whenever the read pointer crosses from one half of the buffer to the other half, the Cyclone sends an ADC packet to the ATLYS with 8192 samples. The start of the packet is used as a synchronizing signal to the ATLYS to know when to send the DAC packet. The Ethernet PHYs transfer 100 million bytes/s continuously for DAC waveforms longer than 16384 samples. That's the overview. Why bother to post this? I'm not the only one with an itch problem. Hopefully, this project will spark some interesting solutions to their problems. I've provided 2 pictures to show what's going on. In both, CH1 and CH2 are the DAC outputs. CH3 is the ADC packet and CH4 is the DAC packet. Notice the latency between the packets in the blowup image.
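    The bandwidth figures above can be sanity-checked with a little arithmetic. The sketch below assumes 16-bit samples (inferred from the "16 KBx16" block RAM buffers; that width is my inference, not stated outright) and confirms that two channels at the 25 MHz cable rate land at the quoted 100 million bytes/s, inside Gigabit Ethernet's 125 MB/s ceiling:

```python
# Sanity check of the data rates described in the post above.
# Assumption: 2-byte (16-bit) samples, inferred from the 16 KBx16 buffers.

GIGABIT_PAYLOAD_CEILING = 125_000_000  # bytes/s: 1 Gb/s divided by 8, ignoring framing

def link_rate(channels: int, sample_rate_hz: int, bytes_per_sample: int) -> int:
    """Bytes per second needed to stream the given converter channels."""
    return channels * sample_rate_hz * bytes_per_sample

# Two DAC channels at the 25 MHz rate carried over the cable
dac_stream = link_rate(channels=2, sample_rate_hz=25_000_000, bytes_per_sample=2)
print(dac_stream)                               # matches the 100 million bytes/s quoted
print(dac_stream <= GIGABIT_PAYLOAD_CEILING)    # fits within Gigabit Ethernet
```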
  3. 1 point
    Hi @Iman, I have sent you a PM about this. cheers, Jon
  4. 1 point
    They are there to get a negative supply out of the positive digital output from the uC, since the output of the uC is between 0 and 3.3 V max. VREF1V5 is the node determining at which point IC10A will switch from positive output to negative and vice versa. The opamp will always attempt to keep the difference between the inverting and non-inverting inputs zero. VREF3V3 is a pullup of the inverting input: if the inverting input is pulled below 1.5V, the IC10A output will become positive in order to bring the inverting input back to 1.5V. On the other hand, when the inverting input is above 1.5V, the output of IC10A will become negative to bring the inverting input back to 1.5V. I guess VREF3V0 could have been a higher voltage as well. But VREF1V5 should be as close to the center of the uC supply as possible in order to achieve a symmetric output and the best resolution thereof. I'm also guessing they were not willing to rely on a stable supply voltage from the linear 3.3V regulator, and probably already required a precision 3V reference for other purposes.
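    A rough way to see the sign flip described above is to model the ideal op-amp as a resistive summing junction held at VREF1V5. The resistor values below are my own placeholder assumption (all equal, pullup tied to a 3.0 V reference); the real schematic will differ, but the threshold behaviour around 1.5 V comes out the same:

```python
# Idealized model of the inverting stage described above.
# Assumption: equal resistors from the uC output, the pullup, and the feedback
# path to the inverting input; pullup tied to a 3.0 V reference. Illustrative only.

VREF = 1.5   # non-inverting input: the virtual-ground level
VPULL = 3.0  # pullup reference on the inverting input (assumed)

def vout(vin: float) -> float:
    """Ideal op-amp output: currents into the virtual node sum to zero.
    (vin - VREF)/R + (VPULL - VREF)/R + (vout - VREF)/R = 0, with equal R."""
    return 3 * VREF - vin - VPULL

print(vout(0.0))  # positive output when the uC output is below 1.5 V
print(vout(3.3))  # negative output when the uC output is above 1.5 V
```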
  5. 1 point
    1) If you want to use your HDL and bitstream, then yes. And I can promise you that you want this, because using the processing unit without any hardware won't make sense and won't work. You need the board files so that your hardware and software know the declaration of all kinds of pins and where they are connected. 2) There is a button on top of SDK, something like "flash FPGA". The other possibility is to do this with the open Hardware Manager. greetings,
  6. 1 point
    D@n

    Hello!

    @oliviersohn, If you want to start simple, you might wish to try the tutorial I've been working on. I have several lessons yet to write, but you may find the first five valuable. They go over what it takes to make blinky, to make an LED "walk" back and forth, and then what it takes to get the LED to walk back and forth on request. The final lesson (currently) is a serial port lesson. My thought is to discuss how to get information from the FPGA in a later lesson. Dan
  7. 1 point
    zygot

    Nexys Video DPTI transfer timeout

    Well, the good news is that your CPU is either in programmable logic or connected to it. Since you mention Wishbone, I assume that it's a soft processor. If you can't get data out of a FIFO quickly enough to support a full USB packet, then you don't have a lot of options other than to make a larger FIFO. But 4KB should be plenty large enough for this job, unless your software design needs a re-think. You can always DMA bursty data into a large DDR buffer... but eventually your CPU will have to process all of the data that the Adept interface is tossing its way... so my thought is that you're avoiding addressing the real bottleneck for some reason. Most of the time, when I figure that I can work around a thorny and complicated issue by being clever, I end up doing exactly what I always knew had to be done, but only after expending a lot of time and energy proving to myself that I'm not as clever as I want to be, and that being lazy or avoiding things I'd rather not deal with never works out.
  8. 1 point
    jpeyron

    Voice-activated

    Hi @Junior_jessy, Are you referring to the VHDL Pmod MIC3 project linked above, done by @hamster? What range of acoustic sound were you testing? The Pmod MIC3 has a Knowles Acoustics SPA2410LR5H-B MEMS microphone and a Texas Instruments ADCS7476 ADC. I would guess the LEDs are staying the same due to the acoustic range you are testing. thank you, Jon
  9. 1 point
    Thanks for all the help! With your instructions I was able to get the demo working on the Z7-10 board. Hopefully this thread will help others who run into similar issues.
  10. 1 point
    Thanks @attila, I missed the AC coupled input. It is working as expected now.
  11. 1 point
    Hi @Amin, You are trying to use an SDSoC platform designed for SDx 2017.4 in SDx 2018.2. There could be incompatibilities between different versions; try switching to SDx 2017.4. Digilent has not yet released an SDSoC platform for SDx 2018.2. Regarding the build time, it could take between 30 minutes and 3 hours (in my experience), depending on your system configuration (OS, CPU, RAM, HDD) and project complexity. Best regards, Bogdan
  12. 1 point
    Hi @Amin, Welcome to the forums. I have not worked with SDSoC enough to be able to speak on this topic. I have reached out to see if any of my co-workers have additional input for this thread. We do have an SDSoC platform made in 2017.4 for the Zybo Z7-20 here. I did find a Xilinx forum thread here that might be useful for this issue. thank you, Jon
  13. 1 point
    jpeyron

    Voice-activated

    Hi @Junior_jessy, Here is a VHDL project for the Pmod MIC3 using the Basys 3, done by community member @hamster. I would suggest using the Basys 3's XDC as a reference for the XDC of the Nexys 4. Please attach your XDC and HDL code. We have not had time to create an IP core for the Pmod MIC3. If you want to use MicroBlaze and an IP core, you can use the Pmod DPG1 and alter the SDK code to interact with the Pmod MIC3. You can download the Vivado library here. Here is a tutorial on using the Digilent Pmod IP cores. You will need to use the Digilent board files; here is a tutorial on how to install the board files.
  14. 1 point
    Hello @Blake, I've created for you an image that tests your FMC-HDMI adapter. It does a basic data transfer between the HDMI output of the ZedBoard and both of the adapter's HDMI inputs. Prior to this it also exercises all the I2C lines. Please check the attached .rar file. In order to recreate the test, please follow these steps:
    1. Make sure that you have everything in place; check the instructions below and the first image. Connect a USB cable from the PC to the ZED USB PROG port (J17). Connect a USB cable from the PC to the ZED UART port (J14). Connect the FMC-HDMI board to FMC connector J1 (of ZED). Connect the power cable to J20 (of ZED). Set the mode jumpers for JTAG programming (all to GND). Set the J18 (of ZED) jumpers to 3V3 or 2V5; I've tested both variants. Create a loop between HDMI-OUT J9 (ZED) and FMC-HDMI IN1 of the adapter. Turn the ZED board on.
    2. Open Vivado (I used Vivado 2017.4) and click on Open Hardware Manager within the Welcome Page. After this, click on Auto-Connect. You should see the Zed in the upper left panel. Check the image below.
    3. Add a Configuration Memory Device. Right-click on xc7z020_1 and choose "Add Configuration Memory Device". Check the image below.
    4. Choose the right memory device for ZED. Please choose "s25fl256s-3.3v-qspi-x1-dual_stacked" from the list. Click to program the device. Check the images below.
    5. Program the device with the files attached to this message. For "Configuration file" choose BOOT.bin. For "Zynq Fsbl" choose fsbl.elf. Click OK.
    6. Wait until it gets programmed. After it finishes, click OK.
    7. Prepare the board for testing. Open a serial terminal (Termite, PuTTY, Tera Term, etc.). Find the COM port and choose 115200 for the baud rate. Set the jumpers for QSPI boot (MIO5 on 3V3 and SIG, the others on 3V3 and GND). Power OFF the board, then power ON the board. The image should boot. See the image below.
    8. Do the actual test. Make sure that HDMI-OUT (ZED) is connected to HDMI-IN1 of the FMC-HDMI adapter. Press ENTER. Wait for the test to finalize. Then make sure that HDMI-OUT (ZED) is connected to HDMI-IN2 of the FMC-HDMI adapter and that your adapter is not loose. Press ENTER. Wait for the test to finalize.
    9. Check the results, and give me an update. image.rar
  15. 1 point
    Hi @mrpackethead, I completed and verified a FreeRTOS lwIP echo server on the Arty-A7-35T in Vivado 2018.2 here. As you mentioned above, I had to replace "xadapter.c"; the "xadapter.c" file is attached to this forum here. I also had to change timers.h to timeouts.h in "xemacliteif.c", as discussed in the Xilinx forum thread you linked above. The echo server works, but it takes almost a minute to echo back the text sent. We would suggest that you reach out to Xilinx and FreeRTOS about the extreme delay in this echo server template. thank you, Jon
  16. 1 point
    Hi @JColvin, Thanks very much for the information! I followed Jon's advice and it worked! Thanks again, Min
  17. 1 point
    Hi @spri Using the channels in master mode/synchronized, configured with -1, these will be in the same state at the same time. You won't have different wait-run times. The previous WF application screenshot translates to WF SDK code like:
    dwf.FDwfAnalogOutNodeEnableSet(hdwf, c_int(0), AnalogOutNodeCarrier, c_bool(True))
    dwf.FDwfAnalogOutNodeEnableSet(hdwf, c_int(1), AnalogOutNodeCarrier, c_bool(True))
    dwf.FDwfAnalogOutTriggerSourceSet(hdwf, c_int(-1), trigsrcExternal1)
    dwf.FDwfAnalogOutRepeatSet(hdwf, c_int(-1), c_int(0)) # infinite repeat
    dwf.FDwfAnalogOutRepeatTriggerSet(hdwf, c_int(-1), c_int(1)) # wait for trigger event in each cycle
    dwf.FDwfAnalogOutIdleSet(hdwf, c_int(-1), c_int(1)) # idle output offset
    dwf.FDwfAnalogOutWaitSet(hdwf, c_int(0), c_double(0)) # start immediately
    dwf.FDwfAnalogOutWaitSet(hdwf, c_int(1), c_double(sDelay)) # wait after trigger
    dwf.FDwfAnalogOutRunSet(hdwf, c_int(0), c_double(sTotal)) # run time
    dwf.FDwfAnalogOutRunSet(hdwf, c_int(1), c_double(sTotal-sDelay)) # channels will stop at the same time
    dwf.FDwfAnalogOutNodeFunctionSet(hdwf, c_int(-1), AnalogOutNodeCarrier, funcSquare)
    dwf.FDwfAnalogOutNodeFrequencySet(hdwf, c_int(-1), AnalogOutNodeCarrier, c_double(1))
    dwf.FDwfAnalogOutNodeAmplitudeSet(hdwf, c_int(-1), AnalogOutNodeCarrier, c_double(5))
    dwf.FDwfAnalogOutNodeSymmetrySet(hdwf, c_int(-1), AnalogOutNodeCarrier, c_double(0)) # 0V while running, +5 offset, -5 amplitude
    dwf.FDwfAnalogOutNodeOffsetSet(hdwf, c_int(-1), AnalogOutNodeCarrier, c_double(5)) # +5V in idle
    dwf.FDwfAnalogOutConfigure(hdwf, c_int(0), c_bool(True))
    dwf.FDwfAnalogOutConfigure(hdwf, c_int(1), c_bool(True))
  18. 1 point
    Hi @Min, Adding to @JColvin's post, looking at your projects path on your block design you have a space. Vivado/SDK has issues with spaces in the path to the project you are using. Please remove the space and try generating the bitstream again. cheers, Jon
  19. 1 point
    Ciprian

    Zynq book - tutorial 5 Zybo Z7

    Hi @n3wbie, I had a similar problem; for me it was the fact that I did not have enough space allocated to the stack in the linker script. If you changed the dimension of the RecSamples variable, then that might be the issue. Regarding the sine wave, I'm not sure what you want to do, but if you want to generate a sine wave from within the code and then play it back to the headphones, then you can simply use the sin function in C (you need to add the math.h library and make sure you activate the library in the project settings, described here). Otherwise you can set Mic or Line In, connect the jack to your PC, and play a sine-wave video from YouTube; then you can look at the recorded samples. I'm guessing you are more familiar with MATLAB; you can try that too. The idea is that as long as you are feeding it the right samples, either way works. Hope this helps, Ciprian
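    The generate-it-in-code suggestion above translates directly; here is the same computation sketched in Python rather than C (the sample rate, tone frequency, and buffer size below are placeholder values of my choosing), just to show what the generated 16-bit samples look like:

```python
# Generate one buffer of 16-bit sine samples, as you would with sin() from
# math.h in C. Sample rate and tone frequency are illustrative assumptions.
import math

SAMPLE_RATE = 48_000   # Hz, a typical audio codec rate (assumption)
TONE_HZ = 440          # test tone frequency (assumption)
AMPLITUDE = 32767      # full scale for signed 16-bit samples

def sine_buffer(n_samples: int) -> list[int]:
    """n_samples of a TONE_HZ sine wave, scaled to the int16 range."""
    return [round(AMPLITUDE * math.sin(2 * math.pi * TONE_HZ * i / SAMPLE_RATE))
            for i in range(n_samples)]

buf = sine_buffer(1024)
print(max(buf) <= 32767 and min(buf) >= -32768)  # samples stay within int16
```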
  20. 1 point
    zygot

    Nexys Video HDMI in problems

    To ALL: Read the Series 7 SelectIO reference manual. HDMI uses the TMDS_33 standard, which requires 50 ohm pull-ups to 3.3V on the terminus end. TMDS is a current sink, not a current source, similar to open-collector (open-drain). While the FPGA pins may power up in a high-impedance state before configuration, there is nothing preventing external devices, such as a monitor, from driving current into your FPGA board power rails while the board is un-powered. The HDMI port isn't the only means to create this kind of problem. This is why anyone connecting an external circuit to FPGA pins needs to understand the IO buffering configuration before powering their equipment. It would be nice for FPGA board vendors to alert users to the issue, but ultimately it's the user who has to be aware of what they are using. For any logic standard it is certainly possible to design hardware that safely meets a particular interface requirement, such as HDMI, that presents the possibility of connection issues, but the cost may be prohibitive.
  21. 1 point
    Despite the fact that I may have left many trying to read my last reply in a semi-comatose state, it occurred to me that I forgot to address the second diagram in the original post. This diagram refers to a 50% threshold for buffers and logic. Depending on the logic family, the decision point at which buffers and logic determine whether an input is a logic high or a logic low may not be halfway between the minimum and maximum levels. In fact, most MSI and LSI families have ranges for both logic high and logic low, and a third range in the middle where the state of the input is undetermined. It's quite possible for a gate to see an input that is lower than the defined logic high range and higher than the defined logic low range. In such cases, what is the input logic state? ... exactly. From the previous discussion of timing analysis you can see how things can get complicated quickly as you widen your scope of analysis. If nothing else, perhaps I've proven that such questions as asked by tip cannot be properly addressed in this kind of forum. Trust me when I say that I've only scratched the surface.
  22. 1 point
    I'm not so sure that there is a universally precise answer to your question. If you want to analyze the timing of a circuit, you need to define the terms that you use to do it. In general, propagation delay, at least to me, has meant the delay incurred due to combinatorial logic gates, buffers, wires, etc. This delay is specific to any two points in the schematic. It is also temperature dependent. Propagation delay is important not only for knowing when a switching transition will occur relative to switching transitions in related logic, but also for how long a signal might remain in one state before switching to the other. Obviously, clock signals have delay across a design as well. Clocks don't go through logic gates, but certainly can go through buffers and wires. Whatever terms you choose to use, any timing analysis of a clocked circuit has to account for the relative time delay of the clock edges everywhere that clock is used in the circuit, as well as the delay of the combinatorial logic relative to an edge(s). If the rising edge of a clock is used throughout a circuit, skew is generally used to define the delay of that edge between any two points in the circuit. In a large system with a lot of clock buffers and multiple circuit boards, minimizing skew can be a real headache. Usually, clocked logic involves combinatorial logic that is sampled by a clock edge(s). All of the delays are important to analyze from a timing perspective. If you have a very wide clocked signal, say 256 bits, there will be a time delay in transition between any of the 256 bits from clock edge to clock edge. These delays can increase or decrease along a circuit path, depending on logic or just propagation delay down a wire connection. When a delay exceeds the clock period, you've got trouble. A good rule of thumb for clocked logic is to keep the combinatorial logic between clock edges simple, to minimize the delay through it.
Generally, I think of latency in a clocked signal as when data is valid; it has to do with how many levels of clocking the signal goes through between any two points in a circuit. An example is a RAM that might have one or more clock levels for the address and incoming data, as well as one or more levels of clocking on the output data. In order to know when data is valid from address to data output, you need to know the latency, which hopefully is fixed. For pipelined designs you often have to keep track of pipe depth to know where the data is at any portion of the circuit. The clock tree that you show is generally not what you will find in programmable logic devices. These devices route clock signals differently than logic signals. Usually there are a limited number of clock lines that can reach logic anywhere in a device with very low delay. Some devices have clock regions that limit their reach in order to control delay. FPGA devices generally use LUTs instead of logic gates, which involves a different analysis relative to, say, using MSI logic gates. Anyone using a particular FPGA device should read the vendor reference manuals for that device to understand the clocking, logic, and IO resources. You can't design with MSI and LSI gates effectively just using the logic table, nor can you do FPGA development effectively without understanding the basic structures involved. At least with MSI and LSI gates you have complete control over where a particular portion of your circuit will reside and how the interconnections are made. In a very large system this can get very hard to manage. In an FPGA, your control over where portions of your logic reside is much less. Here's where it's important to have some level of understanding of how your vendor's synthesis, timing, and place and route tools work.
In very large devices, where resource utilization percentages are very high and clock rates are high, getting consistent, repeatable results across an environmental temperature range can be a point of extreme frustration. If your design methodology (source HDL code) is fighting the tools' preferences, then your miseries will be compounded appropriately. Sorry if this was too long-winded; I stepped into a puddle that was deeper than first glance...
  23. 1 point
    Hi @JColvin, I'll write a detailed article on our website during the next month (with more screenshots/video). Below are some more details: - The OpenScopes are connected to a BeagleBone via a powered hub. On the same board there's a service that connects to the 3 scopes and exposes them via a REST API. The service basically forwards commands (Digilent Instrumentation Protocol) to the scope via USB and sends back the responses, making sure valid responses are sent/received before forwarding. The REST API also offers some control commands (e.g. check status). The idea is similar to the "Digilent Agent", but it is multi-device and it is written entirely in Python; aiohttp (asyncio) is used for the server code. The firmware has been slightly adapted so that the trigger from device 1 is sent to the LA of devices 2 and 3, the trigger from device 2 to 1 and 3, etc. This allows configuring any device's analog channel as a trigger source for all devices. - The desktop application interacts with the REST API (also via aiohttp), offering a unified experience so that the user feels there's a single device with 6 channels. It is written in Python, using PyQt/PySide for the GUI part. It offers two functionalities, the "scope view" (screenshot above) and the "recorder view" (we're still polishing the last bits of it). It also has a dark/light theme and it's multilanguage. Best regards, Gerard
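    The forwarding service described above is essentially a multiplexer in front of three scopes. Here is a stripped-down, stdlib-only sketch of that idea; the device names, command shape, and "status" control command are all my invention, and the real transport (USB, the Digilent Instrumentation Protocol, and the aiohttp REST layer) is stubbed out:

```python
# Minimal sketch of a hub that presents several scopes behind one endpoint.
# Transport and protocol details are stubbed; names here are hypothetical.
import json

class ScopeStub:
    """Stands in for one OpenScope reachable over USB."""
    def __init__(self, name: str):
        self.name = name

    def send(self, command: dict) -> dict:
        # A real implementation would forward the JSON command over USB and
        # validate the response before returning it to the REST layer.
        return {"device": self.name, "command": command.get("op"), "statusCode": 0}

class Hub:
    """Forwards a command to one scope, or answers hub-level control commands."""
    def __init__(self, scopes: list[ScopeStub]):
        self.scopes = {s.name: s for s in scopes}

    def handle(self, request_json: str) -> dict:
        req = json.loads(request_json)
        if req.get("op") == "status":        # hub-level control command
            return {"devices": sorted(self.scopes)}
        return self.scopes[req["device"]].send(req)

hub = Hub([ScopeStub("scope1"), ScopeStub("scope2"), ScopeStub("scope3")])
print(hub.handle('{"op": "status"}'))
print(hub.handle('{"device": "scope2", "op": "acquire"}'))
```

    In the real service each `handle` call would be an aiohttp request handler; keeping the routing logic in a plain class like this makes it testable without the server running.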
  24. 1 point
    OK thanks. Yes, updating that tutorial would save a lot of time and confusion. I later noticed that Xilinx's page for 2017.2 has a bit more description relating to free WebPACK than the page for 2017.3, though it's still not clear how to invoke the free aspect. Further confusion is added by the Xilinx page you arrive at from Vivado's License Manager, as that page omits the Activation-based licenses, and the licenses it does show include a Free one for pre-2015, as though you can't license 2016 and later for free. Evidently that doesn't mean you can't use 2016 and later, it means that no license is required, and you don't need to be using the License Manager at all!
  25. 1 point
    My design works! It receives the bits at 1.485 Gb/s, and then sends them out again. http://hamsterworks.co.nz/mediawiki/index.php/Artix_7_1080p_passthrough That leaves the dynamic sampling phase adjustment, symbol alignment and decoding as an exercise for the reader :). One thing that stumped me for a while was that I needed to explicitly use a BUFIO for the fast clock, not letting it default to a BUFG.