  1. Thanks. That makes sense. In the way the tutorial is configured up until step 6.4, the only path to the cellular RAM is via M_AXI_DP. Whoever developed the tutorial could not also connect M_AXI_IP to the same AXI Interconnect, so they added a second AXI Interconnect (and used the two cache connections) -- but didn't put that in the instructions. (My guess is that it originally used just the local memory with an earlier release of lwIP, but then lwIP got bigger / had more buffers by default, so it was changed, though not entirely consistently.) I want to emphasize how extremely helpful you have been. Every step of the way you have provided guidance that has increased my understanding of how this is all put together -- and not at all in a condescending way. I will remember that, and continue to speak highly of Digilent. I have learned a lot, including what the AXI Interconnect is really all about, which is a good thing, because that seems to be the right way for me to interface the rest of my project with the MicroBlaze. Just in case you are curious, here is a link to my project. (I expect to use the MicroBlaze to support an Ethernet connection to a PC to provide the blinkenlights and switches. If it doesn't fit on the Nexys4, then one of the newer boards from you guys will be in order. Before learning about MicroBlaze I had supposed I would use SPI or I2C over to a separate microcontroller board; this way I ought to be able to fit it all on the one board.)
  2. I did see that after I replied. "Futzing" underway -- for the last couple of tests, doofus here programmed the FPGA but then forgot to actually start the application. <8)

Update: The application will run fine with all of the data-related segments assigned to the cellular RAM, but the instruction segments (.text, .init and, presumably, .fini, though I suppose that never actually runs) seem to insist on being in the BRAM. With that, my BRAM is down to 128KB, which is perfectly acceptable -- even 256KB would be OK (the lwIP echo server application currently needs about 79KB of instruction space). In the linker script generator, the Basic and Advanced tabs are unwilling to map the code sections into anything other than BRAM, and when I tried forcing .text into the cellular RAM in the linker script editor, the application did not run.

I think I know what is going on. In step 5 of the tutorial (the non-DDR version), no AXI memory interconnect block is specified, and in the associated diagram the two cache ports on the MicroBlaze, M_AXI_DC and M_AXI_IC, are left unconnected. (In my case, where I used the Microcontroller preset, those cache connections are not present at all, of course.) Curiously, by step 6.3 the connection automation offers to add an AXI memory interconnect, but the instructions say NOT to. HOWEVER, by step 7 of these same instructions, an AXI memory interconnect IS present and connected to the aforementioned ports on the MicroBlaze, and the AXI EMC block connects to that memory interconnect instead of to the peripheral AXI interconnect. Indeed, although the graphics are fuzzy, it looks to me like that AXI memory interconnect is already present by step 6.4. So, in essence, steps 6.3 and 6.4 of that (old old) tutorial look inconsistent to me.
So, by step 6.9 that AXI memory interconnect is there, and thus it is possible to map the instruction space to axi_emc_0, the interface to the cellular RAM. In my design, when I get to the address editor, the interface to axi_emc_0 under Instruction is just not there, because there is no path to it. (See attached image.) My guess (I am too lazy / preoccupied / busy / whatever to test my hypothesis) is that the MicroBlaze won't use the AXI peripheral interconnect path to the cellular RAM for instruction space without a cache, and the cache would require the addition of the AXI memory interconnect block. (I'm actually surprised it is willing to do that for data, but that clearly works.) I looked briefly for confirmation of the hypothesis in the MicroBlaze documentation, but didn't find it in a 10-minute search.
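The section-to-memory split that does work (code in BRAM, data in the EMC-attached cellular RAM) can be sketched as a fragment of the generated lscript.ld. This is only an illustration, not the generated file: the region names, origins, and lengths below are stand-ins and will differ per design.

```
/* Hypothetical lscript.ld excerpt; region names and addresses are illustrative. */
MEMORY
{
   /* 128KB local BRAM behind the dlmb/ilmb controllers */
   local_bram     : ORIGIN = 0x00000050, LENGTH = 0x0001FFB0
   /* cellular RAM behind axi_emc_0 (data-side path only in this design) */
   axi_emc_0_Mem0 : ORIGIN = 0x60000000, LENGTH = 0x01000000
}

SECTIONS
{
   /* instruction sections stay in BRAM: no I-cache, no M_AXI_IC path */
   .text : { *(.text*) } > local_bram
   .init : { *(.init)  } > local_bram
   .fini : { *(.fini)  } > local_bram

   /* data-related sections can live in the cellular RAM */
   .data : { *(.data*) } > axi_emc_0_Mem0
   .bss  : { *(.bss*)  } > axi_emc_0_Mem0
}
```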
  3. Thanks again for the tip. I think I didn't get that right because 1) the name in the SDK supplied with Vivado 2018.2 is slightly different from the tutorial's -- these things happen -- (axi_emc_0_MEM0_BASEADDR_Mem0 in 2018.2 vs. axi_emc_0_S_AXI_MEMO_BASEADDR) and 2) the address editor did not show axi_emc_0 under the Instruction space, just under the Data space. Perhaps that was because I selected the Microcontroller preset -- I want to minimize the footprint, as my project will need a lot of LUTs. And then, when I went to step 11, I did have that entry there (albeit with a slightly different name), so I supposed it was OK. I clicked there, where the two memory regions are listed at the top, but that doesn't actually do anything. And I didn't notice the entries under Section to Memory Region Mapping -- which is what *really* matters -- and missed that it differed from the tutorial. Those have individual pull-downs to change which memory is used, but I didn't spot those pull-downs until today, when I clicked on one (the pull-downs were invisible, after all). So, tomorrow or so I'll play around in that area and see if I can change them all, or perhaps all but .text. So basically, I got confused and wasn't paying enough attention. Thanks a bunch for the tip, because I will need about 100KB of RAM of some sort for my project (with the ability to preload maybe 10K of it), so learning how this all works, and how to keep my PL portion clear of the memory the MicroBlaze portion needs, will be useful. (Too bad there are so darn many sections, and there isn't a better way to pick a default memory region in the linker script GUI. Of course, that is a Xilinx thing, not a Digilent thing.) I have seen at least one other post on the Xilinx forum of someone else falling into that same trap and using BRAM for this tutorial -- I will go back and point that out tomorrow.
[PS: I got a kick out of names like .bss still being used -- they go back to at least the PDP-11 days.]
  4. Thanks for your suggestion. It allowed me to understand what that define was all about, which led me to look more closely at my block diagram, and I discovered that instead of routing the axi_timer_0 interrupt pin to the microblaze_0_xlconcat In1[0:0] port per the tutorial, I had run the UARTlite interrupt over there (the block connection GUI leaves something to be desired). The code doesn't need that UART interrupt, I presume (it probably just polls), but it does quite reasonably insist on the one from the timer block. So, that took care of that issue. Progress. The SDK make is now failing because it has overflowed the allocated BRAM by a pretty substantial 700K+ bytes. That I can probably figure out on my own (I may have to replace the BRAM with the cellular RAM, I suppose, if it isn't just something mis-specified somewhere).

Thanks again -- all working. I had to adjust the BRAM up to nearly the maximum (512KB; total BRAM is 4860Kb, or about 607KB) and adjust the lwIP pbuf count down from 256 to 64 to get it to fit. (Also, the STDIO tab described in the tutorial is no longer present. Instead, one should use another terminal client to access the UARTLite virtual COM port.)
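For reference, the pbuf count mentioned above ends up as a plain lwIP option. A minimal sketch, assuming the Xilinx lwIP port's BSP setting pbuf_pool_size maps to the standard PBUF_POOL_SIZE define (check the generated lwipopts.h for your version):

```c
/* Hypothetical lwipopts.h excerpt: shrink the receive pbuf pool so the
   echo server's data fits in 512KB of BRAM. The default in this port
   was 256. */
#define PBUF_POOL_SIZE 64
```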
  5. Vivado 2018.2, Windows 10, 64-bit. I am trying to follow the example provided by Digilent for their Nexys4 development board (which has no DDR) for generating the lwIP echo server. I first had an issue where the recommended BRAM (64K) was too small; bumping that up to 128K for dlmb and ilmb fixed it. I also found and fixed an issue in the lwip202_v1_1 file xadapter.c (thanks to a forum post on the Xilinx website). However, now I have one I can't figure out, probably due to being a complete newbie with respect to IP (other than getting the UART/button example working fine). The block design has an AXI Timer block, a Concat block, and an AXI Interrupt Controller, amongst other things, with a MicroBlaze processor. This generates a platform.h with the following lines (each #define is all on one line):

#define PLATFORM_EMAC_BASEADDR XPAR_AXI_ETHERNETLITE_0_BASEADDR
#define PLATFORM_TIMER_BASEADDR XPAR_AXI_TIMER_0_BASEADDR
#define PLATFORM_TIMER_INTERRUPT_INTR XPAR_MICROBLAZE_0_AXI_INTC_AXI_TIMER_0_INTERRUPT_INTR <<<<<<<<<<<<<<<<<<< This one
#define PLATFORM_TIMER_INTERRUPT_MASK (1 << XPAR_MICROBLAZE_0_AXI_INTC_AXI_TIMER_0_INTERRUPT_INTR)

And platform_setup_interrupts() has these lines:

#ifdef XPAR_INTC_0_EMACLITE_0_VEC_ID
#ifdef __MICROBLAZE__
    XIntc_Enable(intcp, PLATFORM_TIMER_INTERRUPT_INTR); <<<< USED HERE
#endif
    XIntc_Enable(intcp, XPAR_INTC_0_EMACLITE_0_VEC_ID);
#endif

The problem is that XPAR_MICROBLAZE_0_AXI_INTC_AXI_TIMER_0_INTERRUPT_INTR is not defined anywhere. (I tried various substrings to see if maybe one of the underscores was supposed to be "|", but came up empty.) Any thoughts? TIA. JRJ
  6. Thanks for the offer, but I think I will pass on it. I have tons and tons of T-shirts, and no room even for the ones I have.
  7. The Digilent RS232 UART reference component (downloads found on the PmodRS232 and Nexys2 pages, at least) does not do what many (including myself) might expect. If one looks at the documentation, one can see on page 4 that it expects to see 8 data bits PLUS a parity bit. That is quite unusual: most devices sending data over RS232 send a TOTAL of 8 bits (not including the start and stop bits) -- either 7 data bits plus parity, or 8 data bits with no parity.

Now, if one doesn't bother to check the parity and does not send characters one right after the other, or if one includes two stop bits (this last one I have not actually tried), it may appear to work fine: the first stop bit gets interpreted as a parity bit, and the second stop bit (or the idle time between characters) works just like a stop bit -- though if one is using parity, parity errors are likely to be raised by the reference component. BUT, if one sends two characters one right after the other, then parity errors and/or framing errors are quite likely. Also, if one is only expecting 7 bits of data, one might be surprised by the extra bit.

It's not a huge deal, and not difficult to fix (one can just rip out everything having to do with parity and take in 8 bits with no parity, for example [or do the parity computation once you have all 8 bits], or add an interface flag to choose between 7 bits plus parity and 8 bits with no parity, and adjust accordingly), but I thought I would put a posting out here -- I spent several hours debugging the input section of a little 12-bit computer (whose design goes back to 1973) before I caught on to what the problem actually was.