Everything posted by zygot

  1. @JColvin So... I looked at both schematics again and was surprised to see that the same Rev B.1 schematic displays differently on Win7 using Foxit Reader than it does in Centos using the native Gnome poppler application. If I drag the cursor over the empty blocks for IO Banks 34, 35 and 16 while holding down the left mouse button on Centos the hidden text appears in a blue background. Thanks (really!) for forcing me to figure this out. I guess that PDF files aren't all things to all people.
  2. @eric_holtzclaw, If you need to use an external clock as part of an interface there are pins that are appropriate. Unfortunately, the current version of the schematic for the CMOD-A7 doesn't show the pin names for all of the IO banks connected to PIOxx pins. Any _MRCC or _SRCC pins will work. Absent a customer-friendly schematic you can find pin names for all Xilinx devices in UG475. According to an older version of the CMOD-A7 schematic, module pins PIO36, PIO46, PIO43, PIO37, PIO18, and PIO19 are on MRCC type pins.
  3. I use Centos 6 with ISE 14.7 and Adept without issues. I don't have an answer for your problems with VirtualBox hosts but I will point out that Centos 6 uses Linux 2.6.32 which has fundamental differences from modern Ubuntu releases based on later Linux kernel releases. It wouldn't surprise me if this is a problem for USB drivers.
  4. Ok... possibly... probably not to write a driver for Linux on a dual boot Windows/Linux machine... It depends.... C# has worked for me for multi-threaded USB applications in terms of speed. I don't know if you can even write a Windows driver with it. (I gave up trying to write drivers for Winwoes long ago as half of the versions have absolutely no purpose other than to create revenue... vista...m-u-s-t... c-o-n-t-r-o-l... s-e-l-f...) The problem is that I write application code so infrequently these days that I never fit into the "right hands" category any more.
  5. Oh my, I've been trying to keep myself from posting to this question for a while now and just failed to resist. The only purpose for asking such a question without context is to engender ill-will with your pals while having beers at a bar. [OK, it might also be a test question to see if you've been attending classes... shame on your professor] The following commentary might be a bit more useful. I suppose that you really want to know if you should choose learning one over the other to create an application that has higher performance. This question, while a bit more defined, is not all that much better at stating a context. It depends on the application, it depends on the target, it depends on the compiler, it depends on the optimization choices, it depends on the third party libraries that your application uses, it depends on how you implement your algorithm, it depends on your skill at understanding the algorithm and your language of choice, it depends on what you've had to eat in the last 12 hours, it depends on what time it is when you write your code, etc, etc. It depends on what you mean by faster. Do you mean faster time to develop a robust application? Do you mean faster time to execute one particular algorithm on the same hardware? Do you mean faster time to execute all code on the same hardware? Just for the sake of argument, let's assume that C generally results in faster code execution times for a particular application on particular hardware that doesn't use any third party libraries. Would you think that you can't write poor code in C that performs slower than better code in C++? Is executing a particular algorithm in 25 us better than executing it in 24 us? Maybe. Probably not. Do you think that being expert in one language, so that all of your work is so much faster than if you chose the other, is a marketable feature? I wouldn't bet on it.
Besides, usually customers want something that works reliably and accomplishes something rather than runs really fast and sometimes accomplishes a task. Here's my answer and I bet that it's true for any context; "It depends". Back in the days when the i386 was a hot microprocessor you could get 'hand-crafted' assembly libraries to perform certain tasks, especially for video ( back before there were GPUs and video was pretty much just a buffer ). A lot of the techniques that have made the modern microprocessor faster ( and inherently unsafe ) have rendered spending time crafting such libraries useless. That doesn't mean that you can't add inline C or C++ code to your application if it makes sense. Of course the PC is but one (perhaps I should say one universe of) hardware platform for which you can write applications. For a PC your 'fast' application might not be so notably fast running on any particular OS.
  6. Well, this statement is correct but... what would happen if you operated the HCSR04 at 3.3V? I've done that with pretty good results up to a few feet. The sensor has a small micro that seems fine with a lower operating voltage. I haven't tried to characterize the module at 3.3V as far as accuracy near the long end of its range. There's no harm in experimenting as long as you supply 3.3V to the sensor power supply pins. Though somewhat limited, these are cheap and nifty little sensors to play around with. It would be swell if there was a version that allowed more control over the transmit characteristics and presented a different output signal, but hey, I encourage you to go ahead and play with them.
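For anyone wanting to turn the echo pulse into a distance, the arithmetic is simple enough to sketch. This is a minimal illustration, not tied to any particular host board; the speed-of-sound figure is the usual room-temperature approximation, which is one reason accuracy drifts at the long end of the range:

```python
# Sketch: converting an HC-SR04 ECHO pulse width to distance.
# The module holds ECHO high for the round-trip time of the ultrasonic
# burst, so distance = (pulse_width * speed_of_sound) / 2.
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate, at ~20 C

def echo_to_distance_cm(pulse_width_us: float) -> float:
    """Distance in cm from an ECHO pulse width in microseconds."""
    round_trip_m = (pulse_width_us * 1e-6) * SPEED_OF_SOUND_M_PER_S
    return (round_trip_m / 2.0) * 100.0

# A 1165 us pulse corresponds to roughly 20 cm.
```

At 3.3V operation you'd want to verify numbers like these against a ruler, since the lower drive level may affect the usable range before it affects the timing.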
  7. As my previous post was getting a bit long-winded... I hastily cut it short, forgetting the most important part of any introductory course in modern digital design, or for that matter any self-guided journey into the subject. That would be verification, which at the least should include some exploration of simulation and writing effective testbench code. Verification is really part of the design process but is complex enough to deserve its own special discussion.
  8. So @Shaw, forgive me if the following commentary seems like a lecture; it's just some hastily collected random thoughts that seem to be relevant. Since I have no idea what the goals for your course are, these thoughts are not particularly directed to your situation... more so for any beginners, not taking a formal course, who happen to read them. I understand how difficult it is to construct a stand-alone course and present material that is relevant to current technology and has a depth that is useful. More decades ago than I care to admit, I took an undergrad course in digital design that was mostly just the basics of logic design concepts and reducing complicated logic expressions to minimal form. Nothing wrong with that as an introduction to the material. There was no mention of actual LSI or MSI logic devices that existed at the time, or even a hint about analysing real physical phenomena in the coursework. Again nothing wrong with that, except for anyone finding themselves doing this as a profession. In those days companies had a commitment to training young engineer hires and provided mentors to help guide them through the real education. This is how I learned digital design: from doing, from example, from critical reviews of my work, and from expectations that I provide proper analysis of why I thought that my work was worth integrating into a project. I suspect that those days are long gone for most engineering graduates. Again, I understand that introductory coursework can't replace actual training for creating marketable skill in most technical fields. These days almost no one uses those LSI and MSI components to do logic design; we have much more flexible options in the form of programmable logic devices. Most of the concepts that I learned in my digital design course were not needed, to a large extent, in my education on designing real circuits with real components on real materials for real environmental conditions.
Even more so since FPGA devices became normative. Still, that one course in logic design might have been a bit more realistic in terms of doing actual digital design. This brings me to what I'm trying to say. I suspect that most readers of this user forum are trying to learn this stuff for themselves, with or without a few text books for guidance. I suspect that very few even consider getting a text on digital design because FPGA is generally presented as a software development enterprise. I learned quite a lot from poring over the few good application notes and texts offered by logic vendors, particularly for emitter coupled logic, but if you don't have the old paper versions these tomes are long gone ( along with the devices ) or at least very hard to come by in a different form. Since the FPGA has become the de facto platform for digital logic design, the distance between those basic logic concepts in the course I took and what's needed to be competent has widened significantly. Now, in spite of the fact that all those physical boolean logic devices have been replaced by look-up tables, I have a significant advantage in being able to use those FPGAs over someone starting from the same position that I was in when I got my first job, because of years of training from sources that no longer exist for the most part. The FPGA newbie ( including anyone taking an introductory course in logic design ? ) has a few obstacles to overcome: The perception that FPGA design involves learning a "digital design language" like VHDL or Verilog and that using them is just like using a software language like C or Rust... or worse, that it can be done using a GUI and cans of IP. The perception that FPGA design doesn't require the same kinds of analysis and understanding of digital design basics and real device physics that I had to learn on the job.
The perception that digital design with FPGAs is independent of the actual architecture, resources, and development tools that vendors provide in order to implement a design. Fortunately, Xilinx does offer a lot of information that can help with using the devices, tools, and design source flows such as HDLs. For those taking introductory courses I hope that they are at least introduced to the complexities of devices and tools beyond simply writing "code" modules illustrating a few basic concepts. It's easy to get a badly simplistic view of digital design without some knowledge of the details. For anyone trying to learn this for themselves, without a supporting group, your workload is heavy. The beginning starts with the material offered in the Xilinx Documentation Navigator in the form of user's manuals, reference manuals, application notes, and more. Without a good foundation in designing logic for the devices and technology used in the '80s, grasping, even finding, the necessary basic concepts will be hard work. I realize that there is a broad spectrum of people who want to use FPGA devices, from those who need to have a good understanding of everything to those who simply want to treat the FPGA as a black box; so I can't offer guidance to suit everyone. I do feel comfortable suggesting that if you are like most people intending to do substantial FPGA development, you can't afford to ignore any of the components. Learn basic logic design and timing analysis. Learn how to understand the AC and DC data sheet specifications. Learn an HDL in the context of digital design. Learn the vendors' tool flows. Learn the device architectures and limitations. There is no deadline. There is no finish line. There's just training and retraining. To borrow from a quote I recently ran across attributed to Mark Twain; "Don't let schooling get in the way of your education".
  9. Well you've just discovered a few of the reasons why this is a bad idea. Aside from the ones that Dan mentioned there are plenty more. Having said that, there is a case for creating signals that are used as edge-triggered clocks by external devices on IO pins. For low speed interfaces like I2C, SPI, etc this is fine. You do have to make sure that the positioning of the clock edge used by the external device ( rising or falling ) has a large enough window on either side of the imaginary edge transition where the data is not allowed to change states, to account for all manner of slop which includes, among other things, place and route timing delays (which you should expect to vary from build to build), varying delays due to the logic that creates your data, etc. In this case Vivado is totally unaware that your 'clock' signal is a clock. Dan is correct that Vivado and the FPGA devices that it works with assume that clock signals are special and not the same as other signals in a design. All of the timing analysis and steps in synthesis and place and route depend on this assumption. As you should suspect, doing this only works well for SDR data interfaces and clock periods that are much, much longer than the total of all delays that could be encountered in creating the signals ( the actual analysis is a bit more complicated than that... ). Some external devices might be sensitive to jitter, which even at low clock rates can render this technique unusable. Also, if this isn't clear from what I've said so far, you can't use these signals internally in the FPGA. For high speed interfaces you need to play according to the rules and make sure that all clocks are sourced by an external high quality clock module, or derived from one using one of the available clock management modules that all FPGA devices now have.
This is because there just isn't enough control over delays to account for the worst case variations in when signals transition relative to each other, even for simple logic implementations. I'm assuming that you've made a bigger and more common beginner's mistake by not reading the Xilinx literature and understanding FPGA architecture and resources or how the tools work. Assumptions are deadly in FPGA design and development. Vivado will not fix your mistakes in either conception or logic, so you need to have a good understanding of all of that before writing your first module. I'm curious, why did you decide on SystemVerilog as your HDL rather than one of the 'older' standbys like Verilog?
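To make the "window" argument above concrete, here is a back-of-envelope sketch. All of the numbers are invented for illustration, not taken from any datasheet; the point is only how quickly the slack disappears as the forwarded clock speeds up:

```python
# Sketch: slack budget for a forwarded (fabric-generated) SDR clock.
# The external device needs the data stable for setup + hold around its
# sampling edge, and everything left over must absorb build-to-build
# routing variation, output delay spread, jitter, etc.

def forwarded_clock_margin_ns(period_ns, setup_ns, hold_ns, uncertainty_ns):
    """Slack (ns) left in a half-period SDR window after the device
    requirements and worst-case uncertainty; negative means unusable."""
    half_window = period_ns / 2.0  # SDR: data toggles on the opposite edge
    return half_window - (setup_ns + hold_ns + uncertainty_ns)

# At 1 MHz (1000 ns period) there is plenty of slack left over:
slow = forwarded_clock_margin_ns(1000.0, 5.0, 5.0, 10.0)  # 480 ns
# At 100 MHz (10 ns period) the same slop leaves negative margin:
fast = forwarded_clock_margin_ns(10.0, 5.0, 5.0, 10.0)    # -15 ns
```

The real analysis is more involved (as the post says), but the shape of it is this: the fixed costs don't shrink as the period does.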
  10. There are a few things that you don't mention... like what board you already own. I might be wrong but from reading your list of requirements I'm guessing that this list will change once you have a basic design working. You don't mention a constant data rate or latency requirement. These are the kinds of things that you should spend time anticipating up front. It's rarely satisfying to get 80% of a goal accomplished with no way to get to 100%. This is one of those projects where you can build a solution before you actually spend money, since there are a variety of boards available, depending on unmentioned details, that might suit your needs. There is certainly no harm in picking a board supported by the free Vivado tools and building and simulating your design without actual hardware. You can always spend your money once you have a pretty good sense that a particular board works for your design. In my experience there are two ways to develop FPGA projects. One involves a limitless budget and time expenditure with many restarts, and the other involves some planning and creative approaches to getting more from less. You can probably guess which one I'd recommend.
  11. Welcome to a new world. My personal opinion is that your software background will, in the long run, be an obstacle to getting proficient with FPGA development. FPGA vendors would like you to believe that you can use their tools and create any kind of project that you want without having to learn the details about logic design and modern programmable logic tool design flows. To an extent, as long as you are content to view painting in coloring books as art, it is possible to believe that this is correct. In the long run though you will find that there are a lot of conceptual challenges and re-learning to do in order to be able to create personal expressions of FPGA art that accomplish meaningful objectives. So I'm in the camp that will advise you to learn an HDL like Verilog or VHDL, preferably in a formal or quasi-formal way to start. There are other approaches like SystemC but I'd recommend those for people already expert in one of the two previously mentioned. My background is hardware design branching into software with a few decades of FPGA development. I mention this because it's easy to confuse software development concepts as being the same as FPGA development concepts, and this will be a problem. Not everyone will agree with me on this but my opinion is not just speculation. As for OpenCL, this is one of those 'platforms' that tries to make integrating FPGA development into software development a seamless experience. Again, I don't believe that such an animal exists. As has been pointed out in the previous post, be aware of the costs for using OpenCL. As far as I know you will need to buy a licence to use OpenCL with Xilinx tools. Intel will extract a significant annual subscription fee to even develop with all but the older low end devices, and in general tools like MATLAB integration or OpenCL cost extra.
Xilinx is willing to sell you a node-locked license to work with a specific device supporting a particular board, and that license is permanent as long as it is used with a particular version of the tools. Intel is not that generous and you might even find yourself having to purchase more than one version of Quartus to develop for two different device families. There was a time when Altera would sell you a USB dongle that allowed using a licensed version of Quartus on any machine in perpetuity, but this is no longer the case. As a rule Xilinx and most other FPGA vendors will allow anyone to work with any device without spending $3000/yr on tool licenses but Intel will not. Intel is just more selective about potential customer relationships in terms of financial resources. A generally universal rule of technology is that you pay for convenience or simplicity, and the cost (money being the least problematic part of the cost) is usually higher in the long run than you would initially ever agree to. In FPGA logic we use a logic simulator to test logic designs, not emulators, and they support Verilog and VHDL very well ( Verilog better than VHDL, because Verilog has better support for simulating hierarchical designs than VHDL ). You needn't, I'd argue shouldn't, need hardware to get started with FPGA development; just the free version of Vivado or Quartus or whatever other vendor tool is required. Be aware that simulation is in itself a skill requiring some expertise in an HDL and logic design. I personally try, and advise everyone else, to build a project, to the extent possible, using the available tools and demo projects for a particular board before getting committed to it by spending a lot of money. By 'extent possible' I mean that if your board has a device not supported by the free version of the tools you will not be able to do much.
At the least this means that you can't create a bitstream to configure a device ( you don't care because you don't have the hardware to configure anyway... ), but often you can't even simulate or get through synthesis, much less place and route, for an unsupported device. The bottom line is that before spending money do your 'due diligence' in researching what you are getting yourself into. You can do this without OpenCL using a board with both a PCIe interface and Ethernet PHY and all Verilog or VHDL source code. I've done this using a Cyclone V GT board ( which happens to be supported by the free Quartus tools and is a very capable but not terribly expensive board ) as well as the KC705 board and a few others... It is not a good project for a complete FPGA newbie. If you have more money than you know what to do with, you can probably do this to satisfy your initial needs using OpenCL or another platform without having to become skilled in an HDL or FPGA tool flow details, but it isn't something that I'd want to try, knowing what I know about FPGA development. About the part where the FPGA stores data in the PC RAM... current FPGA devices will require a software driver to pass data through a PCIe or USB interface. Intel recently announced a new Agilex family of devices with CXL that might someday allow just what you want to do... but I don't envision having the money to get past the bouncers guarding the door to the dance so that I can try it out for myself. You can also do the same thing using a board with an Ethernet PHY and a USB 3.0 interface, by the way, and it could be fairly easy to do if your Ethernet traffic is mostly one way. [Added thought] Many years ago Altera announced that it was developing an optical interface as an alternative to those messy, costly, slow, limited IO pins. My second thought after "WOW! how cool is that?" was gee, it won't be long before Altera gets bought out by the biggest, baddest ape in the jungle.
It seems that the technical challenges to such a scheme were more than Altera could resolve, but my second thought turned out to be prophetic, if somewhat inaccurate. I'm still hoping for that optical interface to show up one day... a guy can dream, can't he?
  12. While not wrong, I'd caution that this advice might be overly optimistic. There's a reason why termination was given its name; it generally needs to be as close as possible to the source or terminus of a driven signal, depending on the type of termination. It's one thing to lay out an FPGA PCB with the smallest available components to implement most inter-standard conversion schemes. It's quite another to get it to work... on the first try... when you don't have past experience doing this successfully. While it's possible to implement AC coupled termination for such connections, it's a risky business for those who don't know how to analyze and understand the design. Starting with a board that has its FPGA pins already assigned isn't going to work in your favor. Finally, if your source transfers data at a rate higher than the reference clock you need to understand all of the issues and limitations of using ISERDES2. None of this is to say that you can't do what you want to do, but you can damage your board and expend a lot of time and effort trying to fix the unfixable. 250 Mbps isn't an extreme data rate for Artix but it isn't trivial either. It's a lot easier to violate AC and DC IO specifications near logic switching events than you probably realize.
  13. Floorplanning is where you start before designing a new board. Once you've assigned pins and created a PCB, your options for meeting timing for a particularly complex, dense, and high clock rate design are limited. Of course you will need to have a reasonably 'close to final' version of your FPGA design to start with so that the tools can select the best pin locations. For a general purpose development board like the one you are using only a few interfaces need to be 'optimized' for speed; and of course the speed grade of the parts on the board has a large impact on limiting the performance of any design. It is not always possible to select an arbitrary clock rate for any application for a particular board and always meet timing. On the other hand it's easy to create a design that doesn't have a chance to operate at a desired clock rate when a better conceived design might. Providing the tools with good guidance in the form of constraints is often the key to achieving a particular performance goal, though don't expect Vivado to turn a poor design into a great design.
  14. @jpeyron Thanks. This seems to be one of those topics that needs a special home... other than a question and answer forum. My formatting is bad and I didn't quite do justice to the topic but I thought that I'd throw it up and see what happens. Any ideas on having a FAQ section for special interests? I'm thinking about newbies, FMC, tools, etc. Some questions seem to be repeated fairly often and might be resolved if people can find answers. I haven't given a lot of time to the idea so far.
  15. I have a few random thoughts on the subject ( is anyone surprised? ). I looked over an old project where just for fun I used a 128Kx32 single clock FIFO built with BRAM. It was for the Nexys Video Artix device which has the same 36Kb BRAMs. It used 116 BRAMs and worked at 100 MHz with a mid-range speed grade part. 36Kb/9 = 4096 bytes plus parity, 131072/4096 = 32 9-bit BRAMs, 4x32 = 128 BRAMs to implement a 128Kx32 FIFO, so Vivado must have found some way to save 12 BRAMs. If you have 18-bit data that's fine, as the BRAMs can be organized as 2Kx18 where the extra bits are meant for parity. From experience I can tell you that using the parity bit for data can get tricky but is entirely possible. If you need a dual clock FIFO then expect to use more BRAMs. If there isn't much else in your design, timing won't be a problem. If you are trying to place 116 BRAMs into a complicated high speed design then you will find yourself needing to learn about timing closure strategies. If I don't have to worry about resource usage or timing issues I'd use an HDL to implement RAM or FIFO structures as it's portable, more or less. I tend to just bite the bullet and use the vendors' tools to implement resources like block memory, PLLs, and such, as these resources aren't really that compatible between vendors and I usually do care about resource usage and timing. Also, vendor IP 'wizards' sometimes create constraints for them and take care of a lot of little details that ultimately save time.
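The BRAM arithmetic above, spelled out as a quick sanity check. This models the unoptimized worst case, building the FIFO from 4Kx9 byte lanes (the standard Series-7 view of a 36Kb block as 8 data bits + 1 parity bit per entry), which is why the hand count of 128 comes out higher than Vivado's 116:

```python
# Worst-case BRAM count for a deep, wide FIFO built from 4Kx9 byte lanes.
import math

BRAM_BITS = 36 * 1024                       # one 36Kb block RAM
ENTRY_BITS = 9                              # 8 data bits + 1 parity bit
DEPTH_PER_BRAM = BRAM_BITS // ENTRY_BITS    # 4096 entries per BRAM

def brams_for_fifo(depth: int, width_bits: int) -> int:
    """Unoptimized BRAM count for a depth x width FIFO: one 9-bit lane
    per byte of width, cascaded to the required depth."""
    lanes = math.ceil(width_bits / 8)
    per_lane = math.ceil(depth / DEPTH_PER_BRAM)
    return lanes * per_lane

# 128K x 32 -> 4 lanes x 32 BRAMs = 128; Vivado managed it in 116.
```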
  16. Well, power consumption in an FPGA is related to clock rates and output pin toggling rates. So the lower your clock rate, the lower the power consumption. Does that mean that your design can run at some arbitrarily low clock rate to achieve some minimum power dissipation? I don't know. It depends. I do remember when there was an effort to commercialize clockless FPGA devices using delays for synchronization and skew management; it didn't last long. If low power consumption is the most important specification then there are FPGA families designed for those applications. Choosing the right device for a particular application is part of hardware design. Xilinx Series7 devices do support clock enables so that you can "power down" parts of a design when not needed, similar to ASIC devices. You can learn more about optimizing FPGA designs for a particular need by reading the vendors' reference manuals and user's guides for devices and tools. That would be your best option. [edit] Your question is about dynamic voltage and frequency management. Having the capability to do something is one thing but being able to do it is quite another thing. As a purely intellectual exercise I suppose that trying to manage voltages could be interesting, but you had better understand the specifications in the data sheet for your device before trying any ideas on hardware.
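As a first-order illustration of why power tracks clock rate, the classic CMOS dynamic power relation P = C * V^2 * f * alpha can be sketched. The component values below are arbitrary illustrative numbers, not device data; real estimates come from the vendor's power estimation tools:

```python
# First-order CMOS dynamic power: P = C * V^2 * f * alpha, where C is the
# switched capacitance, V the supply voltage, f the clock frequency, and
# alpha the toggle (activity) rate. Static/leakage power is ignored here.

def dynamic_power_w(c_farads, v_volts, f_hz, activity):
    return c_farads * v_volts**2 * f_hz * activity

# Same made-up design at two clock rates: dynamic power scales linearly.
p_100mhz = dynamic_power_w(1e-9, 1.0, 100e6, 0.25)  # 25 mW
p_10mhz = dynamic_power_w(1e-9, 1.0, 10e6, 0.25)    # 2.5 mW
```

This is also why the V^2 term makes voltage scaling so attractive, and why the post cautions that you must stay inside the data sheet's supply specifications before experimenting with it.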
  17. What exactly is it that you want to accomplish?
  18. I've not been too keen to help debug jrosengarden's code for a variety of reasons but this comment from Dan does beg for commentary. Being overly optimistic or dependent on automated tools is a dangerous way to live. Even if the tools are doing their job correctly there are so many ways to subvert their purpose. There is simply no substitute for using that grey matter that uses so much of the energy that your body produces. Of course training those neural networks to perform well takes time and practice. But it's essential to have the skill to do competent design and verification. Formal verification methods certainly have their place but they are not a substitute for learning how to analyse logic issues. Even seasoned engineers create code that appears to work at first blush but has hidden issues ready to torpedo your efforts. This is why all HDL code needs a testbench. And, as you get more hours of experience finding corner cases of failure, your testbenches will get better at finding them early rather than later. Many years ago I was implementing an FEC by re-writing Fortran simulation code into DSP assembly code. The customer was willing to pay for verification so all modules had a software 'testbench'. (The software development process and the digital logic development process are very similar in many regards.) Anyway, part of the Reed-Solomon design appeared to me to need exhaustive testing, so instead of relying on finding errors with a simple software testbench I designed a test to run on hardware. The test rig ran for hours without an error and then all of a sudden I started getting an error or cluster of errors, but at really low rates. It took a few days thinking about the few errors that I found to come up with a plausible explanation, which pointed me to a sometimes-used bit of Fortran where there was a simple logic error, probably a typo or cut and paste error. The guy who wrote the Fortran algorithms had done proper testing and not found the error.
I had done proper testing in software and not found the error but throwing massive amounts of data at the code in hardware did. Finding the error using software would likely have taken days or weeks of computer simulation. Sometimes you can anticipate the 'usual suspects' and sometimes you can't. Sometimes your automated tools will find issues that they are designed to find and sometimes your problem isn't one of them. And sometimes, even very smart people do dumb things and reach some bad conclusions. The truth is that debugging, whatever it is that you are testing, involves debugging the analysis and thinking of the person running the show. Don't try to take yourself out of the verification process because you are part of the failure both in terms of design and verification. I should point out that rarely do people write HDL modules that are supposed to do all things for every possible application, at every possible clock rate, etc, etc, etc. We write modules that have limitations and restrictions and are meant to accomplish a limited functionality. This isn't an error. Being unaware of those restrictions, limitations and details that haven't been addressed in the code is an error. Not documenting those restrictions, limitations and details is a very costly error.
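A toy version of that story: an invented 'parity' function whose bug fires on exactly one of 256 inputs. A handful of hand-picked test vectors passes cleanly, while the exhaustive sweep catches it. The function and its trigger value are made up for illustration; the point is the methodology, not the code:

```python
# A deliberately buggy 8-bit even-parity function: wrong for exactly
# one input value, mimicking a typo that a sampled testbench can miss.

def buggy_parity(x: int) -> int:
    if x == 0xA4:                      # the hidden corner case
        return 0                       # wrong answer for this one input
    return bin(x).count("1") % 2

def reference_parity(x: int) -> int:
    """Golden model: count the one bits, take modulo 2."""
    return bin(x).count("1") % 2

# Sampled test: a few 'representative' vectors -> passes, bug undetected.
sampled_ok = all(buggy_parity(x) == reference_parity(x)
                 for x in (0x00, 0x01, 0x0F, 0x80, 0xFF))

# Exhaustive test: sweep all 256 inputs -> the corner case surfaces.
failures = [x for x in range(256)
            if buggy_parity(x) != reference_parity(x)]
```

Of course a real Reed-Solomon input space is far too large to sweep in simulation, which is exactly why throwing massive amounts of data at the hardware found what days of software testing did not.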
  19. Sounds like an interesting project. In addition to what I've already mentioned, don't forget to verify that the FPGA platform FMC traces are suitably length matched for each differential pair. Also be sure that your IO banks have the correct Vcco to support the IOSTANDARD of your requirements. I'd also do an analysis of the FPGA platform power supply. It might be helpful to use one of the application note projects set up as close to your needs as possible as a test and see what Vivado guesses the power requirements are for this kind of application. FPGA development boards are not usually designed to handle bleeding edge applications. Since your data is being driven externally you should be OK, but I'd want to have a warm feeling before trying to make hardware work. Make sure that you understand cascaded ISERDES2 limitations for SDR and DDR data. Hopefully your camera can supply DDR LVDS data to avoid a cascaded ISERDES2 implementation. Unfortunately, there aren't many cascaded ISERDES2 or OSERDES2 example projects to play with. Don't plan on getting lucky... extensive and solid prep work before committing to a PCB interface board is essential.
  20. @cucchi I suspect that your only option is the FMC connection. In theory, even the LPC FMC offers enough differential pins to do 16 channels of SERDES. Theory and practice are rarely the same in the FPGA development board sphere. You say that you don't have much expertise with SERDES. That will have to change. You must read the Xilinx Series 7 SelectIO and Clocking user's manuals. Understand what the various clock buffer options are and what the limitations for using them are. I assume that you intend to have a source synchronous interface, and you must understand the limitations for clocking SERDES. I don't know of any FMC mezzanine cards that provide 16 channels of differential signalling plus clocks routed to the right pins for any FMC equipped FPGA board that I own ( and I have quite a few ). I have no idea if the Zedboard even routes the FMC signals as differential pairs. If you are prepared to design your own FMC mezzanine card you might be able to use the Zedboard FMC connector... you'll have to trace through the pin and IO Bank assignments to be sure. If you have a low reference clock to SERDES bit rate ratio then IOSERDES isn't too complicated. If you want to do 14X-16X, things get complicated. Read through applicable application notes from Xilinx such as XAPP524, XAPP585, XAPP595 etc for some insight as to what you are getting yourself into. Unfortunately, having an FMC connector on a board is no guarantee that it will work for every application... it depends on the PCB routing and pin assignments. I would definitely start off with 1 channel TX and 1 channel RX in a loopback configuration as a starting point. Do I understand correctly that you want to use the Zedboard as a data sink for a 16 channel SERDES interface? Knowing what the data source will be is necessary to make intelligent commentary about any specific implementation. Opal Kelly has a couple of boards with Syzygy specification ports.
The standard ports support 8 channels of differential pairs, but you have to confirm that the differential and clock pins are all on the same IO bank. This might be the easiest way to create a few projects to learn about SERDES. Opal Kelly did a good job with Syzygy. Trust me, you want to develop your skills starting with a simple project and building up to a higher-performance design. Your goals are pretty demanding in terms of low-end FPGA performance and complexity. How do you intend to process 2 GB/s of data? You will have to have some pretty wide buses somewhere in your design.
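To put that last question in perspective, here's a back-of-envelope sketch in Python. The per-channel bit rate and the deserialization ratio are my assumptions for illustration, not numbers from the original question:

```python
# Back-of-envelope throughput check. The per-channel rate and the 8:1
# deserialization ratio are assumed values, chosen only to illustrate
# where the 2 GB/s figure and the wide internal bus come from.
channels = 16
bit_rate_per_channel = 1_000_000_000   # 1 Gb/s per differential pair (assumed)
deser_ratio = 8                        # 8:1 ISERDES deserialization (assumed)

total_bytes_per_s = channels * bit_rate_per_channel // 8
bus_width_bits = channels * deser_ratio
fabric_clock_hz = bit_rate_per_channel // deser_ratio

print(total_bytes_per_s)  # 2000000000 -> the 2 GB/s figure
print(bus_width_bits)     # 128-bit internal bus
print(fabric_clock_hz)    # 125000000 -> 125 MHz fabric clock
```

Even at a modest 8:1 ratio the fabric still has to move a 128-bit bus at 125 MHz, which is exactly the "pretty wide buses" problem.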
  21. So, if I am getting the point of the previous two posts from xclx45 and Dan: make your own filter in Verilog or VHDL and figure out the details (signed, unsigned, fixed point, etc.). You can instantiate DSP48E primitives (difficult) or just let the synthesis tool infer them from your HDL code (easier). Debating how things should be versus how they are when using third-party scripts to generate unreadable code seems like a waste of time to me... If you don't like what you get, then design what you want. If you can make the time to write your own IP (it would be nice to not depend on a particular vendor's DSP architecture) you'll learn a lot and save a lot of time later. If a vendor's IP doesn't make timing closure for your design a nightmare and you don't have the time to figure out all the details, just let the IP handle it. I suspect that trying to optimize the FIR Compiler will be frustrating at best. I once had to come up with pipelined code to calculate signal strength in dB at a minimum sustained data rate. My approach to converting 32-bit integers to logarithms used a combination of Taylor series expansion and look-up tables. I had a few versions**. One was straight VHDL so that I could compare Altera and Xilinx DSP tiles. One instantiated DSP48 tile primitives for a particular Xilinx device. These were fixed-point designs. There's theory and there's practical experience... they are usually not the same. ** I played with a number of approaches based on extremely limited specifications, so there were quite a few versions. Every time I presented one, the requirements changed and so did the complexity and resource requirements. I should mention that my intent in mentioning this experience is not to denigrate the information presented by others or to claim superiority in any way. When getting advice it's important to put it into context. A lot of times facts aren't necessarily relevant to solving a particular problem.
If I haven't made this clear, I've never had the experience that vendor IP optimizes resource usage... in fact, quite the opposite. This is why, in a commercial setting, companies are willing to pay to develop their own IP. Sometimes FPGA silicon costs overshadow development costs.
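As a rough illustration of the Taylor-series-plus-LUT idea mentioned above, here's a hypothetical Python model. This is not my original VHDL or the DSP48 pipeline; the table size and 16.16 fixed-point format are arbitrary choices for the sketch:

```python
import math

# Hypothetical fixed-point log2 sketch: the integer part comes from the MSB
# position, a small look-up table covers the top mantissa bits, and a
# first-order series term refines the remainder. Returns log2(x) of a
# 32-bit unsigned integer in 16.16 fixed point.
LUT_BITS = 6
FRAC = 16
LUT = [round(math.log2(1 + i / 2**LUT_BITS) * 2**FRAC) for i in range(2**LUT_BITS)]
INV_LN2 = round(2**FRAC / math.log(2))  # 1/ln(2) in 16.16 fixed point

def fixed_log2(x):
    assert 0 < x < 2**32
    msb = x.bit_length() - 1                       # integer part of log2(x)
    frac = ((x << FRAC) >> msb) - (1 << FRAC)      # mantissa - 1, 0.16 format
    idx = frac >> (FRAC - LUT_BITS)                # top bits address the table
    rem = frac - (idx << (FRAC - LUT_BITS))        # residue below table step
    m0 = (1 << FRAC) + (idx << (FRAC - LUT_BITS))  # mantissa at table entry
    corr = rem * INV_LN2 // m0                     # first-order series term
    return (msb << FRAC) + LUT[idx] + corr
```

This toy version tracks math.log2 to better than about 1e-3 over the full 32-bit range; a pipelined hardware version spreads the same steps across clock stages.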
  22. Xilinx encrypts the FIR Compiler code, so it's difficult to figure out what's going on with your experiments. You should read PG149, which is the IP Product Guide for the FIR Compiler. It does indicate when coefficient symmetry optimizations might or won't be made. It also has a link to a Resource Utilization "Spreadsheet" for some FPGA family devices depending on your customized specifications. Oddly, that bit of guidance doesn't show any usages where significant numbers of BRAMs are ever used. The guide does have a section titled "Resource Considerations" that has some interesting information, but not enough to answer your questions. I suspect that you'll have to understand actual resource utilization for your application empirically (SOP from my experience when I really need optimal performance). It's always a good idea to be familiar with all device and IP documentation, though in my experience the IP documentation rarely addresses all of my questions when I'm deep into a design and committed to IP that's not mine. Again, my guess is that when you specify a clock rate, sample rate, data and coefficient widths, etc., resource utilization increases with increasing throughput. The same thing happens if you pipeline your code to maximize data rates. But I'm only guessing. I don't use the FIR Compiler, so my views are an extrapolation of experience with other IP from all the FPGA vendors that I have used.
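To make the coefficient-symmetry point concrete, here's a toy Python model of the folding trick PG149 alludes to. This is my illustration, not the FIR Compiler's encrypted implementation: for a symmetric impulse response, pre-adding the mirrored samples halves the number of multiplies (in hardware, the DSP48 pre-adder feeding one multiplier).

```python
# Toy model of symmetric-coefficient folding. For an N-tap symmetric filter
# only ceil(N/2) multiplies are needed per output sample because mirrored
# taps share a coefficient and can be pre-added.
def fir_folded(samples, coeffs):
    n = len(coeffs)
    assert coeffs == coeffs[::-1], "folding requires symmetric coefficients"
    out = []
    for i in range(len(samples) - n + 1):
        win = samples[i:i + n]
        acc = 0
        for k in range((n + 1) // 2):
            j = n - 1 - k
            pre = win[k] + win[j] if j != k else win[k]  # pre-adder stage
            acc += coeffs[k] * pre                       # one multiply per tap pair
        out.append(acc)
    return out
```

A 5-tap symmetric filter needs 3 multiplies per output instead of 5; the saving is why the guide cares whether your coefficient set is symmetric.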
  23. Or send a Carriage Return and get a Shift Out... or send a Line Feed and get a Vertical Tab... just fair warning...
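For anyone puzzled by that warning: check an ASCII table and you'll see each received character in those pairs is exactly one code above the one sent, the sort of pattern worth recognizing when a serial link misbehaves:

```python
# ASCII codes for the pairs mentioned above: each corrupted byte arrives
# incremented by one relative to what was sent.
CR, SO = 0x0D, 0x0E  # Carriage Return -> Shift Out
LF, VT = 0x0A, 0x0B  # Line Feed -> Vertical Tab
assert SO - CR == 1 and VT - LF == 1
print(format(CR, "08b"), format(SO, "08b"))  # note: not a single-bit flip
```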
  24. I'm assuming that you've done your homework on CCD sensor data formats.
  25. I was about to suggest looking in the Project Vault, where there are a few examples of UART transmitters... and then up popped the Verilog one from xc6lx45. If VHDL is more to your liking there are several in the Vault. The UartDebuggerR3.zip has mine (you also get a testbench example for simulation if you've never done that). I'm not making a negative comment about the implementation above (it does present the idea in an easy-to-understand manner) but some free stuff costs less than other free stuff. While you might only need a transmitter to do the project, a full-blown UART with a receiver might be useful for debugging your design. I agree that the UART is the easiest way to get a communications link going between a PC and your FPGA board and is more than adequate for most student projects. Python with PySerial makes creating a serial UART application easy. Have fun! [edit] you can instantiate Verilog modules in your VHDL code and vice versa, so find something that works for you regardless of the HDL
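If it helps to see the framing a transmitter has to produce, here's a minimal Python sketch of one standard 8N1 frame (my assumed format: 1 low start bit, 8 data bits LSB first, 1 high stop bit). It's handy for sanity-checking a simulation waveform against what the PC side will decode:

```python
# Build the bit sequence for one 8N1 UART frame: start bit (0),
# eight data bits LSB first, stop bit (1). Line idles high before
# the start bit, so the falling edge marks the frame boundary.
def uart_frame_8n1(byte):
    bits = [0]                                    # start bit
    bits += [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    bits += [1]                                   # stop bit
    return bits

print(uart_frame_8n1(ord("A")))  # 0x41 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

Each bit is held for one baud period; a PySerial session at the same baud rate, 8N1, will reassemble the byte on the host side.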