Search the Community

Showing results for tags 'nexys4 ddr'.

Found 15 results

  1. When I ported the w11 CPU design from the Nexys4 to the Nexys A7, I didn't use the SRAM-to-DDR component but wrote my own interface layer, which queues writes and includes a 'last row buffer'; see sramif_mig_nexys4d and sramif2migui_core. I had a look at the Nexys 4 DDR Xilinx MIG project and was a bit astonished to see that SYS_CLK was 200 MHz: <TimePeriod>3333</TimePeriod> <PHYRatio>2:1</PHYRatio> <InputClkFreq>200.02</InputClkFreq> I really wonder why Digilent recommends this. It is possible to use 100 MHz, to use the board clock directly, and to avoid a PLL/MMCM to generate 200 MHz. In my design the MIG runs at 100 MHz and seems to work. So my question: what was the reason to use 200 MHz (and thus an additional PLL/MMCM)?
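For context on the "additional PLL/MMCM" mentioned above, here is a minimal sketch of how a 200 MHz MIG system clock is typically derived from the 100 MHz board clock with an MMCME2_BASE primitive (the Clocking Wizard generates equivalent code). The entity and signal names are illustrative, not taken from any Digilent project.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

-- Sketch: 100 MHz board clock in, 200 MHz out for the MIG's sys_clk_i.
entity clk200_gen is
  port (
    clk100_in  : in  std_logic;   -- 100 MHz board oscillator
    clk200_out : out std_logic;   -- 200 MHz for the MIG
    locked     : out std_logic
  );
end entity;

architecture rtl of clk200_gen is
  signal clkfb, clk200_unbuf : std_logic;
begin
  mmcm : MMCME2_BASE
    generic map (
      CLKIN1_PERIOD    => 10.0,  -- 100 MHz input (10 ns period)
      CLKFBOUT_MULT_F  => 10.0,  -- VCO = 100 MHz * 10 = 1000 MHz
      CLKOUT0_DIVIDE_F => 5.0    -- 1000 MHz / 5 = 200 MHz
    )
    port map (
      CLKIN1   => clk100_in,
      CLKFBIN  => clkfb,
      CLKFBOUT => clkfb,         -- internal feedback loop
      CLKOUT0  => clk200_unbuf,
      LOCKED   => locked,
      PWRDWN   => '0',
      RST      => '0'
    );

  -- Buffer the MMCM output onto a global clock net.
  bufg_200 : BUFG port map (I => clk200_unbuf, O => clk200_out);
end architecture;
```

Running the MIG at 100 MHz directly from the board clock, as the poster describes, removes exactly this extra block.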
  2. sara1993

    SRAM to DDR component

    I want to create a project using the DDR interface. From my searching, the straightforward way is to use the Digilent-provided DDR-to-SRAM adapter module, which instantiates the memory controller and presents an asynchronous SRAM bus to the user logic. I followed this link, and now I need to know about the protocol. Would anyone please tell me which protocol the mentioned program uses?
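The "protocol" in question is an asynchronous SRAM-style bus. As a sketch only, a typical user-side port list for such an adapter looks like the following; the signal names mirror common async-SRAM pinouts and the ram_* names quoted elsewhere in this thread, and are illustrative rather than the module's exact interface.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical SRAM-style user-side interface with active-low
-- controls. A read drives the address with cen/oen low; a write
-- pulses wen low while driving data; ub/lb select bytes of the word.
entity sram_iface is
  port (
    ram_a   : in    std_logic_vector(26 downto 0); -- byte address
    ram_dq  : inout std_logic_vector(15 downto 0); -- bidirectional data
    ram_cen : in    std_logic;  -- chip enable, active low
    ram_oen : in    std_logic;  -- output (read) enable, active low
    ram_wen : in    std_logic;  -- write enable, active low
    ram_ub  : in    std_logic;  -- upper-byte select, active low
    ram_lb  : in    std_logic   -- lower-byte select, active low
  );
end entity;
```

The adapter translates transactions on this bus into MIG user-interface commands internally, so user logic never sees the DDR protocol itself.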
  3. Hi, I recently discovered that the wrong scan codes are sent for certain keys. This is tested with my own PS/2 keyboard controller, and the same behaviour is present with the official demo. I have tested two keyboards, a Logitech K120 and a Microsoft Comfort Curve 3000, and the following behaviour is exhibited:
     • Left arrow: set 2 scan code should be E0 6B / E0 F0 6B. Actual: 6B / F0 6B.
     • Up arrow: should be E0 75 / E0 F0 75. Actual: 75 / F0 75.
     • Down arrow: should be E0 72 / E0 F0 72. Actual: 72 / F0 72.
     • Curiously, the right-arrow scan code is correct.
     • Numpad division: should be E0 4A / E0 F0 4A. Actual: 4A / F0 4A.
     Can anyone confirm? Online Documentation for Altium Products - PS2 Keyboard Scan Codes - 2017-09-13.pdf
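The pattern above (the E0 byte missing from all the reported codes) is what one would see if the receiver forwards only the final scan-code byte and drops prefix bytes. As a sketch, assuming a byte-level PS/2 deserializer producing rx_byte/rx_valid (not shown, and all names illustrative), prefix tracking for set-2 codes can look like this:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Sketch: track the E0 (extended) and F0 (break) prefix bytes of
-- PS/2 set-2 scan codes, so extended keys stay distinguishable.
entity ps2_prefix is
  port (
    clk        : in  std_logic;
    rx_byte    : in  std_logic_vector(7 downto 0);
    rx_valid   : in  std_logic;
    key_code   : out std_logic_vector(8 downto 0); -- extended flag & code
    key_break  : out std_logic;                    -- '1' on key release
    key_strobe : out std_logic
  );
end entity;

architecture rtl of ps2_prefix is
  signal extended, break_code : std_logic := '0';
begin
  process(clk)
  begin
    if rising_edge(clk) then
      key_strobe <= '0';
      if rx_valid = '1' then
        if rx_byte = x"E0" then
          extended <= '1';        -- next byte is an extended key code
        elsif rx_byte = x"F0" then
          break_code <= '1';      -- next byte is a key release
        else
          -- A key's identity is the pair (extended, code): e.g.
          -- E0 6B is the left arrow, while plain 6B is numpad 4.
          key_code   <= extended & rx_byte;
          key_break  <= break_code;
          key_strobe <= '1';
          extended   <= '0';
          break_code <= '0';
        end if;
      end if;
    end if;
  end process;
end architecture;
```

Without the extended flag, E0 6B and 6B collapse into the same 8-bit code, which matches the symptom reported.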
  4. Hello. I wish to write assembly code for the ADXL362 accelerometer on the Nexys4 DDR, to be assembled for the PicoBlaze soft-core processor. Are there any assembly routines I can use to establish communication with the accelerometer? Any links to code would be appreciated.
  5. fLx

    Nexys4 ddr Ethernet

    Please, I need help identifying the function that the Nexys4 DDR Ethernet echo_server example uses to send data back to the client (Tera Term).
  6. ched

    nexys4 ddr ram pin

    Hi, I am trying to find the pins of the RAM on the Nexys4 DDR so I can update my constraints, but I can't find them in the provided XDC file. Please help.
  7. Hello, I want to send an image from the Nexys4 DDR to a Basys3 through Pmod. Is it possible? If yes, how do I connect them? The Nexys4 DDR will be the master device.
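Since Pmod headers are plain GPIO, a point-to-point link between two boards is possible. As a sketch only, here is the sending (Nexys4 DDR) side of a simple parallel link: 8 data pins plus a toggling strobe on a ninth pin. All names and pin assignments are illustrative; the Basys3 side must double-register the strobe in its own clock domain and detect its edges before sampling the data.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Sketch: drive one image byte per transfer onto Pmod pins and
-- toggle a strobe to mark "new data present".
entity pmod_tx is
  port (
    clk         : in  std_logic;
    byte_valid  : in  std_logic;                    -- new image byte ready
    pixel_byte  : in  std_logic_vector(7 downto 0);
    pmod_data   : out std_logic_vector(7 downto 0); -- e.g. Pmod JA pins
    pmod_strobe : out std_logic                     -- e.g. one Pmod JB pin
  );
end entity;

architecture rtl of pmod_tx is
  signal strobe : std_logic := '0';
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if byte_valid = '1' then
        pmod_data <= pixel_byte;   -- byte onto the Pmod pins
        strobe    <= not strobe;   -- toggle marks a new byte
      end if;
    end if;
  end process;
  pmod_strobe <= strobe;
end architecture;
```

An SPI-style serial link over a single Pmod would also work and needs fewer wires, at the cost of lower throughput.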
  8. Hi, I am using Vivado 2016.4 to program the Nexys4 DDR 7-segment display. I have a very simple VHDL project, which works as follows:
     • the 100 MHz clock is used to increment an 8-bit counter;
     • when this counter overflows, it inverts the value of a local signal called "slowclk"; hence "slowclk" is "clk" divided by 512;
     • "slowclk" is used to increment another 8-bit counter, the output of which is assigned to the 7-segment display segment-selector pins on the board.
     Complete VHDL source: Note: I understand that, given such division, the effect on the digit segments will still not be visible; I just want to demonstrate the timing problem. However, the design fails to meet timing constraints, as shown in the attached pictures. Timing constraint failures in more detail, including the full VHDL source: Clock routing on the FPGA: The following is the .xdc constraints file (commented-out definitions are omitted): From what little I know about FPGA clock routing and resources, I understand this to be caused by the high-frequency clock and its associated logic being placed in different regions, requiring the implementation run to route the clock signal through awkward paths; as a consequence, the total signal propagation time is such that, before the logic relevant to the current clock pulse is evaluated, the next clock edge is already present. Am I correct in this thinking? And in either case, how can I fix the timing issues that Vivado warns about?
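Timing trouble in a design like the one described usually comes from using a logic-generated signal ("slowclk") as a clock, since it is routed on general fabric rather than a dedicated clock tree. A common fix is to keep everything on the single 100 MHz clock and use a one-cycle clock enable instead. A sketch under that assumption (entity and signal names are illustrative, not the poster's code):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch: replace a divided "slowclk" with a clock enable.
entity sevenseg_ce is
  port (
    clk     : in  std_logic;                      -- 100 MHz board clock
    seg_sel : out std_logic_vector(7 downto 0)
  );
end entity;

architecture rtl of sevenseg_ce is
  signal div     : unsigned(8 downto 0) := (others => '0');
  signal tick    : std_logic := '0';
  signal counter : unsigned(7 downto 0) := (others => '0');
begin
  process(clk)
  begin
    if rising_edge(clk) then
      div  <= div + 1;
      tick <= '0';
      if div = 0 then
        tick <= '1';               -- one pulse every 512 clk cycles
      end if;
      if tick = '1' then
        counter <= counter + 1;    -- advances at clk/512, no extra clock
      end if;
    end if;
  end process;
  seg_sel <= std_logic_vector(counter);
end architecture;
```

Because every flip-flop is clocked by the same 100 MHz clock, the tools can place and route the design without treating "slowclk" as a second, poorly routed clock domain.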
  9. Hi all. I would like to ask a question regarding the RAM/DDR controller of the Nexys4 DDR. I would like to access the DDR memory (IP parameters in the MIG as in ), whose component is shown below, using 16-bit-wide data. For this, if I am correct, the address is handled as RANK_BANK_ROW_COLUMN. So I do not understand why, in the code provided by Mihaita Nagy, the user-interface address is created like this:

     mem_addr <= ram_a_int(26 downto 4) & "0000";

     Here, mem_addr is 27 bits wide. Similarly, I wonder why, in the RAM control, the mask and the read data use the following LSBs of the address rather than the MSBs:

     case ram_a_int(3 downto 1) is
       when "000" =>
         if ram_ub_int = '0' and ram_lb_int = '1' then -- UB
           ram_dq_o <= mem_rd_data(15 downto 8) & mem_rd_data(15 downto 8);
     ...

     Would it not make more sense to store the 16-bit words contiguously, starting at address 0? And if so, what would the VHDL code look like? Thank you very much for your time, and regards. The component used looks like:

     component ddr_xadc
       port (
         -- Inouts
         ddr2_dq             : inout std_logic_vector(15 downto 0);
         ddr2_dqs_p          : inout std_logic_vector(1 downto 0);
         ddr2_dqs_n          : inout std_logic_vector(1 downto 0);
         -- Outputs
         ddr2_addr           : out std_logic_vector(12 downto 0);
         ddr2_ba             : out std_logic_vector(2 downto 0);
         ddr2_ras_n          : out std_logic;
         ddr2_cas_n          : out std_logic;
         ddr2_we_n           : out std_logic;
         ddr2_ck_p           : out std_logic_vector(0 downto 0);
         ddr2_ck_n           : out std_logic_vector(0 downto 0);
         ddr2_cke            : out std_logic_vector(0 downto 0);
         ddr2_cs_n           : out std_logic_vector(0 downto 0);
         ddr2_dm             : out std_logic_vector(1 downto 0);
         ddr2_odt            : out std_logic_vector(0 downto 0);
         -- Inputs
         sys_clk_i           : in std_logic;
         sys_rst             : in std_logic;
         -- User interface signals
         app_addr            : in std_logic_vector(26 downto 0);
         app_cmd             : in std_logic_vector(2 downto 0);
         app_en              : in std_logic;
         app_wdf_data        : in std_logic_vector(127 downto 0);
         app_wdf_end         : in std_logic;
         app_wdf_mask        : in std_logic_vector(15 downto 0);
         app_wdf_wren        : in std_logic;
         app_rd_data         : out std_logic_vector(127 downto 0);
         app_rd_data_end     : out std_logic;
         app_rd_data_valid   : out std_logic;
         app_rdy             : out std_logic;
         app_wdf_rdy         : out std_logic;
         app_sr_req          : in std_logic;
         app_sr_active       : out std_logic;
         app_ref_req         : in std_logic;
         app_ref_ack         : out std_logic;
         app_zq_req          : in std_logic;
         app_zq_ack          : out std_logic;
         ui_clk              : out std_logic;
         ui_clk_sync_rst     : out std_logic;
         -- device_temp_i    : in std_logic_vector(11 downto 0); -- not used, inside the core
         init_calib_complete : out std_logic
       );
     end component;
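One reading of the quoted code (offered as an interpretation, not a confirmed description of Digilent's design): the 16-bit words are stored contiguously; the MIG user interface simply moves 128 bits (eight 16-bit words) per transaction, so the burst-aligned part of ram_a_int goes to app_addr, and the low address bits select a lane inside app_rd_data / app_wdf_data rather than part of the DRAM address. A sketch of the read-lane selection (a fragment; ram_a_int, app_rd_data, and rd_word are assumed declared as in the quoted component):

```vhdl
-- Sketch: one UI transaction carries eight contiguous 16-bit words,
-- so ram_a_int(26 downto 4) picks the 128-bit burst and
-- ram_a_int(3 downto 1) picks the 16-bit lane within it
-- (ram_a_int(0) is the byte within the word, handled by ub/lb).
process(ram_a_int, app_rd_data)
begin
  case ram_a_int(3 downto 1) is
    when "000"  => rd_word <= app_rd_data( 15 downto   0);
    when "001"  => rd_word <= app_rd_data( 31 downto  16);
    when "010"  => rd_word <= app_rd_data( 47 downto  32);
    when "011"  => rd_word <= app_rd_data( 63 downto  48);
    when "100"  => rd_word <= app_rd_data( 79 downto  64);
    when "101"  => rd_word <= app_rd_data( 95 downto  80);
    when "110"  => rd_word <= app_rd_data(111 downto  96);
    when others => rd_word <= app_rd_data(127 downto 112);
  end case;
end process;
```

Under this reading, the addressing already is contiguous from address 0; the LSBs (not MSBs) are used precisely because they index within the 128-bit burst.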
  10. I've done quite a bit of work with the Cortex-A9 on the ZedBoard, and as such am at home in the tools flow. I also have a Nexys4 DDR and need a processor for a project I'm working on, so I figured I'd give the MicroBlaze a whirl. I did the Hello World tutorial that is on the Digilent site...pretty straightforward, except for one strange bit of behavior: when I run my hello world app, main runs twice, in both debug and release mode. I stepped through the code, and it appears that somewhere in cleanup_platform(), main gets invoked again (recursively). This ends up running init_platform() again, as well as printing my message to the screen twice. I am going to see if I can track down where in cleanup_platform() this is happening, but was wondering if somebody else has seen this and can offer an explanation and/or fix. My design is completely according to the tutorial right now...I haven't added any of my own HW logic or additional SW code yet. UPDATE: I did some further debugging, and it looks like this happens during the call to Xil_ICacheInvalidate() (xil_cache.c). Getting more specific is difficult, as it appears to happen some time during the loop in microblaze_invalidate_icache.S line 70. I'm no expert with the MicroBlaze ISA yet, but can certainly deduce that this is walking each cache line and, I'm assuming, marking them as invalid. While doing this debugging, I actually got the processor into a state where this recursive call into main would continue as long as I'd let it, printing out a new message each time. Very strange. Thanks, Dave
  11. I was trying to run the echo server example provided on the official Digilent website. Validation of the block diagram succeeds, the HDL wrapper is generated, bitstream generation completes, and the design is exported to SDK without error. However, while testing the program on hardware, it does not show any reception of packets. When we ping the board, the status seems to be connected, but the console window in SDK does not show any details about the connection. What are the possible causes and solutions? console status: ping status: it should be like this: Thanks and regards, Imran
  12. For a piece of research I am looking to conduct, I want to compare the efficiency of hardware multipliers against various designed multipliers. As I have a Nexys4 DDR, I am looking to use the on-board 25x18 multiplier as a basis for comparison. Is there any reference describing the hardware design of the on-board multiplier, so that I have a better frame of reference for what I am comparing against (i.e. is it a Wallace-tree multiplier)? --RG
  13. Which USB device drivers should be installed? What are their names? Where do they come from: Xilinx, Digilent, Microsoft, Arduino? When I plug the Nexys4 DDR board into a Windows 10 computer, it installs FTDI drivers called "USB Serial Converter", with two ports, A and B. These drivers are obviously wrong, because the computer makes a sound as if the USB device is detaching and reattaching, over and over again. In my experience this happens when the board draws too much power or the cable is bad. The port works fine with other devices, and the cable is the one that shipped with the Nexys4 DDR. Here is a YouTube video of the problem with more detail:
  14. Hi, I've been having this problem since I got the board and finally decided I want to get rid of it. I'm using a Nexys4 DDR board and have downloaded the XDC file from the board's website. In my projects I have the board selected in my project settings, as you can see. Nevertheless, I always get these warnings, which are really annoying. They refer to a completely different board, and I have no idea where they are coming from. Does anybody know how to get rid of them? By the way, in the Power tab of the Project Summary window, my confidence level always shows as Low. I assume that's because most inputs are missing user specifications, as stated when you click on it. What setting should I use for unused pins? At the moment the unused ones are simply commented out in the XDC file. Thanks!
  15. Pedro

    Nexys4 DDR & MIG

    Hi, I am trying to understand how to generate a MIG-based memory controller for the Nexys4 DDR board. I am reading UG586 (Memory Interface Solutions), but I am not sure about the system clock and the reference clock. Should I mark them as "No Buffer" in the GUI and connect them in the top-level source, with a 200 MHz clock driving both? Thanks in advance for your help. Best regards, Pedro