
Which kinds of projects tend to need a lot of PL? (which Zybo to choose)


yottabyte

Question

Hi,

at the moment I'm pretty sure my board of choice will be a Zybo Z7, but I'm still not sure which one. Whether a bigger FPGA is needed depends more on the kind of project than on its size as such, which is why I can't simply extrapolate how much logic I will probably need.

Could you please describe for which kinds of projects the PL of the Zynq-7010 could be too small, so that the larger Zynq-7020 would be recommended or needed? Or could you maybe give me some examples of projects that wouldn't fit into a Zynq-7010?

Zynq-7010: 28K logic cells

Zynq-7020: 85K logic cells

What sort of projects will I do? I don't know yet at all. Getting into SoC/Zynq in the first place will keep me busy for a while. After that... something that takes advantage of parallelism would be nice: real-time image processing, neural networks, spanning-tree algorithms...

If I want to experiment in different fields, will I most probably run into situations from time to time where the bigger Zynq would be an advantage, avoid problems, or even make an implementation possible at all? Or is the bigger Zynq-7020 more of an industrial-scale part, so that a single private hobbyist or student developer won't ever come close to filling even the smaller Zynq-7010?

Thank you a lot!

5 answers to this question

Hi @jpeyron,

thanks for your answer. This is exactly the information I was looking for.

To be honest, the first thing I thought was: of course it's a Digilent demo that doesn't fit on the bargain board... :)

I'm just kidding (better to mention it, since the smileys are really small here). Anyway, this is interesting. What makes a project like this so voluminous? Is the reason something like there being an equivalent module for every pixel?

So far, it doesn't seem advisable to save the €50/$60 difference between the Zybos, even for people to whom it's painful. This example doesn't seem to be very exceptional...

Hi,

on the PL side, you'll find the air gets thinner when you run low on resources, in the sense that place-and-route slows down and it gets harder to close timing. This is because a lack of resources adds additional constraints compared to a sparse design.

Most likely, you won't run into space constraints if you write the logic from scratch. If you manage to use up tens of thousands of LUTs with code hand-written from scratch by a single person within a few weeks, chances are high that something is fundamentally wrong with the architecture or methodology. This is just meant as something to think about; it's easy to come up with counterexamples, e.g. highly parallel designs like bitcoin mining or a DIY GPU.

On the other hand, clicking through a few IP wizards will easily create something huge. This isn't surprising if you keep in mind that FPGA vendors sell silicon by the square meter...

For me, the most likely bottleneck is BRAM, because keeping memory accesses within the FPGA, within a single clock domain, and at a constant, small latency can dramatically reduce complexity. It also provides crazy memory bandwidth, but you need to re-think the algorithm around a few corners. For example, in the neural network you mentioned: dedicated BRAMs for the input / hidden / output layers, weights and biases, MACs for parallel computation of row-column products, and a mux or two for the data routing (a mild understatement...).
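
To make the BRAM point concrete, here is a back-of-the-envelope sketch (not from the thread; the layer sizes and the 16-bit weight width are assumptions chosen for illustration) that estimates how many 36 Kb block RAMs a small fully-connected network would need just to keep its weights on-chip, compared against the BRAM counts of the two Zynq parts:

```python
# Rough BRAM budget check for a hypothetical fully-connected network.
# Assumes 16-bit fixed-point weights; ignores activations, biases and any
# BRAM the rest of the design (video buffers, a softcore, ...) already uses.

BLOCK_BITS = 36 * 1024                  # one 36 Kb block RAM primitive
BRAM_BLOCKS = {"Zynq-7010": 60,         # ~2.1 Mb of BRAM in total
               "Zynq-7020": 140}        # ~4.9 Mb of BRAM in total

def weight_bits(layer_sizes, bits_per_weight=16):
    """Bits needed to keep all weight matrices on-chip."""
    return sum(rows * cols * bits_per_weight
               for rows, cols in zip(layer_sizes, layer_sizes[1:]))

layers = [784, 128, 10]                 # example: 784 inputs, 128 hidden, 10 outputs
blocks_needed = -(-weight_bits(layers) // BLOCK_BITS)   # ceiling division

for part, blocks in BRAM_BLOCKS.items():
    print(f"{part}: {blocks_needed}/{blocks} BRAM blocks "
          f"({100 * blocks_needed / blocks:.0f}%) just for the weights")
```

Under these assumptions the weights alone occupy roughly three quarters of the 7010's BRAM but only about a third of the 7020's, and that is before the rest of the design gets a say.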

Thinking back, my first "serious" FPGA adventure was to put a Lua interpreter on a softcore processor. It felt like "wow, wasn't this too easy?". The next day I found that it had indeed been too easy, realizing that I was using up most of the BRAM of a USD 7k part...
On the other hand, +$50 for a low-end FPGA can be the difference between a one-day hack and something in need of a project plan.

If you're serious about FPGAs, saving $50 isn't worth it, at any level of competence.

On the other hand, if you're not sure and just want to try, there's nothing wrong with getting the cheapest board. Just consider it disposable (and having a cheaper board I'm not afraid to damage may sometimes even be the better choice).

Hi @xc6lx45,

oh, thank you a lot for explaining these facts to me, very interesting and certainly conclusive. I will order the bigger Zybo Z7 from Trenz next week; in this case, saving €50 is obviously not worth it. I already put some stuff on eBay for this. I'm really glad I didn't find out about these things afterwards.

:)

Just to warn you, if the money is important to you: the learning curve is much (much, much) steeper than most people anticipate. The FPGA alone is a brain-twister; we're used to sequential programming, while an FPGA operates in parallel. Combine that with a state-of-the-art CPU, and consider that most people aren't used to situations where the documentation alone is measured in shelf-meters. Just be aware that I have spent working months just learning, and my mental "map" still consists largely of blank spots (for example, ARM security is a huge topic one needs to be at least aware of).

If you have the patience, it may be the most efficient $100 you have ever invested into your career.
