All Activity

This stream auto-updates

  1. Past hour
  2. I'd like to run RISC-V with very basic (command line only, obviously) Linux on a Nexys A7 just to play around with it. Since I'm a beginner I find it hard to adapt steps for other boards. If anyone could suggest a tutorial that either matches Nexys A7 exactly, or would be as easy as possible for me to adapt, that would be great.
  3. Today
  4. Hello forum, I am a master's student with a project involving impedance measurements. I was thinking about buying the Analog Discovery 2 with the Impedance Analyzer, but I would like to use BNC probes, and I was wondering if I could use the BNC adapter to take the measurements instead of the input connections provided by the impedance analyzer. Thanks in advance.
  5. @attila This seems to be a bug in WaveForms. I successfully captured data with the same configuration (see screenshot above) on WaveForms 3.8.2 (Linux/Windows), while WaveForms 3.10.9 (Windows) and 3.11.4 beta (Windows) lost digital samples.
  6. I want to create a BSP for the Zybo Z7-10 with PetaLinux and Vivado 2018.3. I found out that you released BSPs only up to 2017.4, so I am trying to create a BSP for 2018.3 myself. I referenced some other guides; they said that after creating the HDF in Vivado I need to add the PetaLinux EDK repository in XSDK, but I cannot find that repository in PetaLinux 2018.3 under /petalinux/2018.3/components. I am stuck at this step. Should I continue from this step, or is there another way to create a BSP file?
  7. Thanks for the answer. I also tried without data compression and that doesn't fix the issue.
  8. Hi @Vroobee, I experienced this when the WiFi signal of our local network was not strong enough where the OpenScope was located. Also check whether you have a MAC filter or static IP addresses enabled in your local network. I think the scope will also show that error when the connection has been refused by the other side for any reason. Regards, Fabian
  9. Hi @Phil_D, Where do you see the ADG612? The OpenScope MZ uses a TS3A5017 to switch between 4 gains (0.075, 0.125, 0.25, 1). And as far as I can see from the schematic, there is no gain switch on the OpenLogger. Are you sure you are asking your question in the correct (sub-)forum? Regards, Fabian
  10. Sduru

    AXI4 and Vivado ILA

    In the link, it says that this error is related to the XDC file: "The following are some common causes of this issue. XDC constraints are case-sensitive. These warnings can occur if the case of the object name in the XDC is not the same as the signal in the RTL code." But in my constraints file there is no case-sensitivity problem. I cannot solve the problem. Please help...
  11. Sduru

    AXI4 and Vivado ILA

    Thank you @zygot. I've created the block design without AXI4 Stream, like in the following: But I am getting the following errors: [Common 17-55] 'get_property' expects at least one object. Resolution: If [get_] was used to populate the object, check to make sure this command returns at least one valid object. [BD 41-1273] Error running post_propagate TCL procedure: ERROR: [Common 17-55] 'get_property' expects at least one object. ::digilentinc.com_ip_MIPI_D_PHY_RX_1.3::post_propagate Line 6 Do you have any idea about the errors above?
  12. Am I correct in assuming that there is no testbench and that rgb2grey is your top-level entity? That is, you just let Vivado decide how to simulate rgb2grey? Did you have a timing constraint for the clk period when you implemented the design? Was the timing score 0? Now that I'm trying to read your code, the organization needs work. Why is all of your logic in one process? Why is the process creating output when active_i is de-asserted? Also, I'm curious about your reasoning for changing your internal registers to type integer from std_logic_vector. Lastly, I'm having a hard time correlating the simulation waveforms to your code. I don't see a rst or a 24-bit output in your code.
  13. I see in the documentation that the ADG612 switches the gain between high and low into the ADC. I see in the WaveForms Spectrum Analyzer that there are many gain options: 0.01x, 0.1x, 1x, 10x, 100x. Which hardware gain setting is used in WaveForms for those gain settings? Thanks!
  14. I was able to recreate the image you show above following your instructions, so maybe all hope is not lost. I am still getting really strange readings, though, trying to do my project. I'm using the AD2 to power a microcontroller with positive V+ set to 3.3 V and with the grounds tied together. To get an idea of how much power the microcontroller is consuming, I placed a 10 ohm resistor in series with the V+ output from the AD2, and I'm trying to use the scope to measure the voltage drop across that resistor to estimate the current being drawn. Below is the view I am getting from the scope, which doesn't make any sense to me. Why would it be centered around -3.212 V? The voltage drop across the resistor should be only a few mV. The square wave does make sense, because the program on the microcontroller flashes an LED at that frequency. Any ideas?
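As a sanity check for the shunt measurement described above, Ohm's law gives the expected drop across the 10 ohm resistor. The current values in this sketch are made-up examples (the post does not state the microcontroller's actual draw):

```python
# Expected voltage drop across a 10-ohm series shunt (Ohm's law: V = I * R).
# The current values below are hypothetical examples, not measured data.
R_SHUNT = 10.0  # ohms

def shunt_drop_mV(current_mA: float) -> float:
    """Voltage drop in millivolts for a given current in milliamps."""
    return current_mA * 1e-3 * R_SHUNT * 1e3  # V = I*R, scaled to mV

# A microcontroller drawing a few mA should drop only tens of mV:
print(shunt_drop_mV(5))   # 5 mA  -> 50.0 mV
print(shunt_drop_mV(20))  # 20 mA -> 200.0 mV
# Nothing here should put the reading anywhere near -3.212 V, which points
# to a scope offset/range or wiring issue rather than the shunt itself.
```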
  15. Yesterday
  16. Hi JColvin, I borrowed a Xilinx ZC702 board and brought it up. I built a Vivado project and generated the bit file and SDK. I can run LED tests via GPIO on the ZC702. We really want to use the Zedboard since it is smaller. How much work is involved in bringing up the Zedboard? What is included in the Zedboard package? How many usable I/Os does the Zedboard have? Thanks, Shuguang
  17. These two photos may be a little more revealing about my problem.
  18. @askhunter It's not clear from your pictures what it is that you are referring to, since the time scales are different. The purpose of post-place-and-route timing simulation is to show the relative signal path delays in your implemented design, as well as possible inferred latches or registers hidden by IP. The RTL simulation merely indicates whether your behavioural logic is performing as you intended (assuming that the testbench is well designed). It is merely a simplified (no delays, no setup, no hold times) idealistic representation of simple logic. If the timing simulation doesn't give the same results as the RTL simulation, then it's unlikely that your hardware will behave as you intend either.

    In the typical professional setting, a lot of people are working on parts of a large design effort simultaneously. No one can afford to schedule a design effort where everything is done sequentially. In such a case, timing simulations become a very important indicator of the risk of projects not making deadlines. It simply isn't possible to create a lot of hardware, software, test protocols, etc. sequentially, or even in parallel, and then, 2 weeks before shipment, throw all that stuff together for the first time and figure out why things don't work. So we have a lot of ways to do simulation that offer increasingly more accurate, and hence more reliable, views of how our design (after it's been optimized, re-worked, and reduced to LUT equations) might actually work in a system before having to run it in hardware.

    When there are 10 engineers doing parts of 1 large FPGA design and all of those parts are integrated, it's not uncommon for some of them to start failing due to limited routing resources and clock line limitations.
  19. Hi, I am currently running the Pmod WiFi TCP demo on two Zedboards and have encountered an issue, as below: For the server, it gave me a status of 0x10003A00. For the client, it gave me a status of 26845304. Unfortunately, I cannot figure out the definition of the IPstatus in the demo. I checked my Vivado project and Zedboard setup, which seemed to be correct. I am using USE_WPA2_KEY with the same IPv4 address, SSID, and password, by the way. Is there any hint for debugging based on the current information? Thanks in advance.
  20. My my... I'm not sure what LFSR you are using, but mine have a shift-enable input so that I can use any clock that's available but update the LFSR output at almost any update rate needed. You can create a counter to control the shift enable so that it's synchronous with whatever logic is running at 8 kHz and needs data at that rate. It's typical in a design to have lots of parts of the logic changing states at lots of different frequencies. You don't want separate clock domains for all of those rates, even if those clocks are derived and phase coherent. Sometimes, for high-speed applications, you do need a higher, phase-coherent clock, like in video, where there might be a reference clock and a higher but synchronous pixel clock. In general, it's best to have the minimum number of clock domains in a design that you can get away with.

    FPGA devices don't have long analog or combinatorial delay lines on the order of microseconds or milliseconds. The Series 7 devices do allow adding very small delays to signals coming into FPGA pins via the IDELAYE2 primitive. If your device has outputs on pins on an HP bank, you can also add similar small delays to output signals using the ODELAYE2 primitive. Synchronous delay lines using counters and enables, as I mentioned before, are the normal way to achieve the equivalent of the analog delay line that used to be part of some digital logic long, long ago.
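The counter-driven shift-enable idea above can be sketched as a behavioural model. This is a software simulation, not HDL, and it assumes a 100 MHz system clock (common on the Arty boards, but an assumption here): one single-cycle enable pulse every 12,500 clock cycles yields an 8 kHz update rate without a second clock domain.

```python
# Behavioural model of a clock-enable generator: instead of creating an
# 8 kHz clock domain, keep the (assumed) 100 MHz clock and assert an
# enable pulse for one cycle every CLK_HZ / EN_HZ cycles.
CLK_HZ = 100_000_000  # assumed system clock frequency
EN_HZ = 8_000         # desired LFSR update rate
DIVIDE = CLK_HZ // EN_HZ  # 12500 cycles between enable pulses

def enable_stream(n_cycles):
    """Yield the shift-enable value (0 or 1) for each clock cycle."""
    count = 0
    for _ in range(n_cycles):
        if count == DIVIDE - 1:
            count = 0
            yield 1  # shift the LFSR on this cycle only
        else:
            count += 1
            yield 0

# Simulate 10 ms worth of 100 MHz clock cycles: expect 80 enable pulses.
pulses = sum(enable_stream(1_000_000))
print(pulses)  # -> 80
```

In HDL the same structure is a counter plus a comparator feeding the LFSR's shift-enable port, all clocked by the one system clock.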
  21. Hi, I need to use 8 kHz as a clock signal for my LFSR IP core in my block design. But this low rate cannot be implemented on the Arty 7, as shown in the attached picture! What other choices do I have in order to achieve the output of the LFSR at the low rate that I want? I read about delays in FPGAs, but I found that delays are not synthesized in FPGAs! Looking for your help, Thanks
  22. @jpeyron If I ignore the critical warnings, generate the project, and start SDK, I have problems making the board support package. See the screenshots below. Interestingly, the KYPD also has a wrong board setting, like the SD IP (arty instead of nexys-a7-100t), but it works in SDK. Thank you for your help.
  23. Thanks Jon and Zygot, I appreciate the explanation. Regards
  24. Hi, I designed a module for RGB-to-grey conversion. When I start the post-implementation timing simulation, I get the following result (gryodata). Everything was seamless until the post-implementation timing simulation, but here I came across something like this. I would be very happy if someone could help me with this subject.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;
    use work.types.all;

    entity rgb2grey is
        Port (
            clk      : in  std_logic;
            active_i : in  std_logic;
            active_o : out std_logic;
            rgb_in   : in  std_logic_vector(23 downto 0);
            gry_o    : out std_logic_vector(7 downto 0)
        );
    end rgb2grey;

    architecture Behavioral of rgb2grey is
        signal active, active2, active3, active4 : std_logic := '0';
        --signal temp_gry,temp_gry2,temp_gry3,temp_gry4 : std_logic_vector(7 downto 0):=(others=>'0');
        signal temp_gry, temp_gry2, temp_gry3, temp_gry4, tempy_reg, temp_son : integer := 0;
        --temp_gry <= std_logic_vector(to_unsigned((to_integer(unsigned(rgb_in(7 downto 0))) + to_integer(unsigned(rgb_in(15 downto 8))) + to_integer(unsigned(rgb_in(23 downto 16))))/3, 8));
    begin
        process(clk)
        begin
            if rising_edge(clk) then
                if active_i = '1' then
                    temp_gry  <= to_integer(unsigned(rgb_in(7 downto 0))) / 3;
                    temp_gry2 <= to_integer(unsigned(rgb_in(15 downto 8))) / 3;
                    temp_gry3 <= to_integer(unsigned(rgb_in(23 downto 16))) / 3;
                    active <= '1';
                else
                    active <= '0';
                end if;
                temp_gry4 <= temp_gry + temp_gry2;
                tempy_reg <= temp_gry3;
                temp_son  <= temp_gry4 + tempy_reg;
                gry_o     <= std_logic_vector(to_unsigned(temp_son, 8));
                active2 <= active;
                active3 <= active2;
                active4 <= active3;
            end if;
        end process;
        -- gry_o <= temp_gry3;
        active_o <= active4;
    end Behavioral;
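A side note on the arithmetic in the code above, separate from the timing-simulation question: dividing each channel by 3 before summing truncates three times, so the result can be up to 2 less than the usual average. A quick check of the two orderings:

```python
# Integer truncation: per-channel division by 3 before summing (as in the
# posted VHDL) vs. summing first and dividing once.
def grey_per_channel(r, g, b):
    return r // 3 + g // 3 + b // 3   # what the posted code computes

def grey_sum_first(r, g, b):
    return (r + g + b) // 3           # the usual integer average

print(grey_per_channel(100, 100, 100))  # -> 99
print(grey_sum_first(100, 100, 100))    # -> 100
# Summing into a wider intermediate and dividing once avoids accumulating
# three separate truncation errors (the difference is at most 2).
```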
  25. Thank you @jpeyron for your answer. I tried using the on-board SD and the external one; neither of them was working. In the following setup I use a PmodOLEDrgb and a PmodSD (on-board config) IP block. The OLED IP works fine; when I add the SD IP, problems occur. I validate the design: the following warnings appear: When I press Generate Bitstream, the following critical warnings appear: I use the hotfix PmodOLED_RGB library: When I go check the files, for the working OLED and the non-working SD, I see that in the SD files the wrong board (arty instead of nexys-a7-100t) is set: So can I just change the lines in the files to make it work? Is it a problem in the hotfix library that Digilent needs to solve? Thank you for your help!
  26. Hello, I have a problem concerning the Digital Discovery using the WaveForms SDK. I thought it was possible to use the full 256 MBytes of on-board recording memory for 8-bit sampling, but I am only able to read 64M samples, no matter whether I use 32-bit, 16-bit, or 8-bit sampling (which I configure with the function FDwfDigitalInSampleFormatSet). The maximum size I can set via FDwfDigitalInBufferSizeSet() is 64M (i.e. 67108864), no matter what sample size I set via FDwfDigitalInSampleFormatSet(). That is, if I try to set a higher value for the buffer size, FDwfDigitalInBufferSizeGet() will still return 67108864, regardless of which sample size is set (8, 16, 32). So I thought this buffer size value was the value for 32-bit sampling, and therefore 4 times as big when using 8-bit sampling. But unfortunately, when I try to read the data via FDwfDigitalInStatusData() with a countOfDataBytes value that is higher than 67108864, all bytes after the 67108864th byte are zero. I am using single acquisition mode, and I start an acquisition with FDwfDigitalInConfigure(handle, true, true) and then wait until the device state becomes DONE. I would be very thankful if someone could explain what I am doing wrong here. The WaveForms GUI seems to be able to do this (64M samples for 32-bit sampling, 256M samples for 8-bit sampling), so I thought it was possible with the SDK, too. Thanks.
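For reference, the arithmetic behind the expectation in the post above: if the whole 256 MiB buffer were usable at each sample width, the sample counts per format would be as follows (the 256 MiB and 64M figures come from the post itself):

```python
# Expected sample counts if the Digital Discovery's 256 MiB recording
# buffer were fully usable at each sample width (figures from the post).
BUFFER_BYTES = 256 * 1024 * 1024  # 268435456 bytes

for bits in (8, 16, 32):
    samples = BUFFER_BYTES // (bits // 8)
    print(f"{bits}-bit samples: {samples}")
# 32-bit -> 67108864, which matches the 64M cap the post observes;
# 8-bit  -> 268435456, the count the WaveForms GUI apparently achieves.
```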
  27. @PG_R Typically, Ethernet PHY devices such as the Marvell 88E1111 are optimized to work with Cat5e cables. The interface between the PHY and the FPGA is immaterial; it doesn't make any difference whether it's RGMII, GMII, or SGMII. As long as the PHYs on both boards are set up properly to communicate at a particular speed, say Gigabit, and auto-negotiate, you can just connect a cable between the boards and the PHYs will establish a connection without any assistance from the FPGA logic or ARM cores. Of course, when the PHY is connected to FPGA logic, then RGMII and SGMII are a bit more complicated, especially if you want to support 10/100/1000 rates. Most PHYs can be set to automatically switch input/output signals on the cable side so there isn't a driver conflict (you needn't worry about using a cross-over cable).

    The bad news is that programming many Ethernet PHYs is not always easy; the Marvell products require an NDA to see the register functionality. The good news is that for Xilinx and Digilent FPGA boards the PHYs are typically initialized on power-up or reset into the proper mode. For Intel-based FPGA boards this is not the case, as Intel wants you to be dependent on their IP to use the Ethernet PHY; this extends to their partners in crime. I've used Ethernet PHYs to pass data between FPGA boards for many years.

    The next question then becomes: do you want to use a MAC? For Zynq-based boards the Ethernet PHY is connected to the ARM core through a MAC in the PS; no getting around the MAC. For boards where the PHY is connected to logic, you can do anything that you want. Once the PHYs on your boards have auto-negotiated a connection, you will be responsible for the actual passing of packets and supporting particular protocols.
  1. Load more activity