@Notarobot,

As to your newly bestowed title of "Prolific Poster": before you go out and buy a few rounds for everyone at the local tavern in celebration, you should look at the little + number below where your picture should be. If this site were run by a certain reputation-based anti-virus application, you and I would be silently uninstalled. Unfortunately, for those having the least knowledge upon which to judge the "worthiness" of anyone's statements, what's left is very unreliable. That makes those able to present themselves as infallible authorities the most believable. The best salesmen are the ones who believe everything they say, because they give no tells otherwise. So, I have no idea how this particular rating is calculated, or what it implies, but I doubt that it is a service to anyone who might put stock in it. I've got no solution, except to say that if you can't do something correctly, then perhaps you shouldn't be doing it... at least as far as providing reputation goes.


The future of FPGA development is here... if you are unfortunate enough to be an Intel (Altera) customer. Warning! The title of this thread is "Rants..." and this post is true to the title. I was just looking at a nifty Cyclone 10 development board. For $1200 it has almost everything one could want: Gen2 PCIe, FMC, 2 SFP+ interfaces, 12 Gbit transceivers... nice! The development kit User's Guide starts off with this:

"The Intel® Cyclone® 10 GX FPGA Development Kit is a complete design environment that includes both hardware and software you need to develop and evaluate the performance and features of the Intel Cyclone 10 GX FPGA device."

And then in the getting started section this:

"The Intel Cyclone 10 GX FPGA is only supported on Intel Quartus Prime Pro Edition. There is no paid license fee required for Intel Cyclone 10 GX support in Intel Quartus Prime Pro Edition."

So off you go to see what's involved in getting the tools ( which evidently aren't needed to develop and evaluate the performance and features of the Intel Cyclone 10 GX FPGA device ). Oh, the Quartus Prime Pro edition costs at least $4000. But it gets better. The Pro edition only supports a few of the many Intel FPGA devices. Need to develop for the Cyclone 10 GX and the Arria V? Then you also need to buy the Quartus Standard Edition for another $3000, because the "Pro" edition only supports 3 devices. Just great...

Now, it was clear when Intel bought Altera that things were going to become a lot more painful for small companies and individuals, but now we're beginning to get a glimpse of what's ahead.

Goodbye to the days when anyone could develop a product using a Cyclone... that device family is now tiered to extract maximum $$$ from whatever market segments Intel imagines exist out there. To add insult to injury, the Cyclone 10 is limited to internal clock network rates of about 300 MHz.

Goodbye to the days when small companies that needed to develop products shipping in small quantities, some with low-cost FPGAs and a few with high-cost FPGAs from multiple vendors, could consider using an Altera FPGA.

Goodbye Altera.

I know that the reason Intel bought Altera is that someday there will be embedded Intel SoCs with programmable logic, but now I don't want to know how much it will cost to develop for those parts.

So the future of FPGA development is this: if you want to play, you'd better have a lot of money to get in the door.

 

Edited by zygot


@zygot,

Based upon what I heard from the industry leaders at DVCon last week, FPGAs are not a dead field at all, but needing money to get in the door makes a lot of sense.

The new explosive market for FPGAs is in verifying ASIC logic before tape-out.  The reality is that simulations just aren't powerful enough to keep up with the ultra-high-speed chips, so companies are using FPGAs instead of simulations to make certain that your next-generation phone, GPU, CPU, etc., works before they actually pay the $M+ to build the parts.  A small $10k (or worse) license fee is just chump change in this new market.

Dan


@D@n,

Then again, you haven't worked for a lot of small companies as I have. Most of them don't have the money ( a non-capital expense ) to spend on development tools. And understand that this isn't a one-time expense; it's a recurring annual expense. As an independent contractor I once had to buy annual subscriptions for both Altera and Xilinx... back when each was around $2000/yr. Ouch, that was painful. I've worked for a lot of companies that won't invest in tool subscriptions, or will buy maybe just one to be shared.

I didn't make the argument that FPGA development is dead... just that the little guy is being cut out of the game. Yeah, you can run a blog and make tons of money ( next time we meet for lunch you can buy... :> ) using free tools that target a few obsolete parts, but if you want to do contract design work or small-quantity production you'll need to be able to use whatever vendor's tools and devices the customer requires.

After years of hearing industry sales pitches I have pretty reduced expectations where the hype is concerned. FPGAs have always been used to verify ASIC designs, but not the FPGAs that you or I can afford. The big thing for FPGAs was (still is) bitcoin.  Once the non-physical-money industry collapses it will be something else. I know of companies who use FPGAs to gain a millisecond in the high-speed stock trading game.

My point was (is) that you and I can't play in these areas. There aren't going to be a lot of jobs for new engineering grads in these specialized fields. The small enterprises that aren't funded by mega-wealthy investors are being squeezed out of access to technology. And with those small companies will go the jobs. And with the jobs will go the skills. BTW, go out and apply for a job with a company. One of the first things you will be asked is "describe the project that you did with such-and-such a device", and that device is usually the largest and most expensive part available. Oh, you don't have experience ( if you haven't completed a project with that part, you have no experience )? Goodbye. I've lost count of how many of those experiences I've had.

It's not the big companies who pay their way in society ( they make their fortunes figuring out how to extract value out of society ); it's the many, many small companies who pay their way, and their taxes ( without negotiating tax breaks ), and provide most of the jobs.

Edited by zygot


Long-time FPGA applications engineer here. I've been growing more and more agitated with the state of FPGAs and vendor tools over the past decade. Back in the good ol' days, you got an FPGA that was a complete blank slate, and you did the entire design 100% yourself (with the exception of a few clock and IO modules). It was basically the "assembly code" era of FPGAs. It was still a lot of work to make a large, complex system, but everything could be broken down into smaller subtasks, and you put your fundamental knowledge of Verilog/VHDL to work. Over time, you'd end up with your own personal library of HDL modules that you had written, tested, and proven out, and you could eventually just instantiate them and link them together. Life got better as you matured as an FPGA developer. There was basically no problem that you couldn't handle. Engineers had to be smart enough to develop their own IP to tackle any situation.

Now the industry has moved on to the SoC, because people want newer, emerging technologies, components that run at gigahertz speeds, hard IP built into the silicon, and 3rd-party IP that would, in theory, save you the development time. There's nothing wrong with these things in general; if they are done right, they save you a ton of time and effort.

The problem is that utilizing these high-tech things has become waaaaaaay too locked down to the vendor tools. It's almost like you don't write HDL code anymore; you just click around in tools that try to automate everything. They jumped way too fast from "writing assembly code" to something more aligned with "developing on a .NET platform." That would be fine if the tools actually worked; it'd just be a different way to solve a problem. But the main issue is that the vendor tools suck, are borderline broken, and constrain you to their pre-canned configuration choices. You no longer have the freedom to design an FPGA whatever way you want, to do whatever you want. You are no longer in control.

This isn't really too surprising, as being able to utilize a complex component in a relatively short amount of time means the tools are going to have to do a lot of the heavy lifting for you; it's very difficult to develop these complex interfaces from scratch. Well, I take that back. I think if you are brave enough, you can try to go back to old-school code from scratch, but you are still going to have to interface with the hard IP at some point, and trying to decipher the vendor documentation is a nightmare. It never really reads like an ICD, but rather like a marketing spec sheet. Reverse-engineering the vendor's low-level code is impossible as well, as it's automatically generated, contains no comments, and reads like garbage. It's obvious the vendors had no intention of supporting anything except point-and-click, which means using the tools is going to be very dependent on the user guides, tutorials, and support... all of which are terrible.

For one, the user guides/tutorials/walk-throughs are all version-specific. Try to take the ONE tutorial for doing something complicated and follow the instructions step by step on a newer version of the tool, and you are almost guaranteed to come to a crashing halt with some extremely obscure, impossible-to-debug error that contains auto-generated labels and never really tells you what the source of the problem is (it's always more of a "here's a symptom of the problem" type of message). And the vendor never bothers doing regression testing on these old walk-throughs or making version-specific updates. You get one version of the walk-through that works with exactly one version of the tools, and that's it. So if you run into problems, can't you just go to the forums and ask for help?

Second, the vendor staff who interact with the forums are under-skilled for their jobs. They are basically just script readers, trained to identify key words and then find an "answer record" related to those key words. Nine times out of ten they don't really understand your problem, the "answer record" they present has no relevance or helpfulness, and you'll be scratching your head wondering how on Earth they thought it had any resemblance to your original problem. I've had times when I was sent a link to the exact same user guide that I was originally having problems with (and had stated that I was having problems with).

So now I'm stuck in a world where I'm forced to use broken tools, IP that is a complete black-box mystery, and tutorials that don't work. It's a miracle to get anything working, and it tends to take me a loooooooong time, something that brings me a lot of grief from management.

And by the time you finally learn the tricks and pitfalls and are finally able to start being productive and making these newer FPGAs do stuff with relative ease, the vendors completely pull the rug out and come out with a brand-new software tool so you can relearn everything. You won't be able to use the older tools on newer FPGA parts, and you won't be able to use newer tools on older FPGA parts. Once again, your hands are tied.

 


And they call it progress...

As a tiny bit of good news, inference allows some vendor-independent coding, e.g. mapping of multipliers and block RAM with the necessary pipeline registers. Well, to be honest, I haven't put it to the test going from X to A or vice versa.
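For anyone unfamiliar with what's meant here, a minimal Verilog sketch of that kind of inference (module and signal names are mine, not from any vendor template; whether the tool actually maps these to hard macros depends on the device and settings):

```verilog
// Pipelined multiplier: with the extra output register, synthesis
// tools will usually pack this into a hard DSP block.
module mult_pipe #(parameter W = 18) (
    input  wire             clk,
    input  wire [W-1:0]     a, b,
    output reg  [2*W-1:0]   p
);
    reg [2*W-1:0] p_raw;
    always @(posedge clk) begin
        p_raw <= a * b;   // multiply stage
        p     <= p_raw;   // pipeline register helps DSP packing
    end
endmodule

// Simple synchronous-read RAM: this coding style typically infers
// block RAM on both the X and A tool chains.
module simple_ram #(parameter AW = 10, DW = 32) (
    input  wire          clk, we,
    input  wire [AW-1:0] addr,
    input  wire [DW-1:0] din,
    output reg  [DW-1:0] dout
);
    reg [DW-1:0] mem [0:(1<<AW)-1];
    always @(posedge clk) begin
        if (we) mem[addr] <= din;
        dout <= mem[addr];   // registered read port (required for BRAM)
    end
endmodule
```

The same two files should drop into either vendor's flow untouched, which is the whole point of inference over instantiating vendor primitives.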

Still, I think the hard-macro approach is the only way to go if performance matters. An FPGA is not a "standard cell" ASIC, and those two worlds are drifting further apart.


@sittinhawk,

Waayy too locked down?  Buggy too.

It's not a total loss.  I'm still using Verilog.  The Borg has not gotten to me yet.

Once you got off the ground, Verilog never had these problems.

Dan

On 1/5/2019 at 11:45 AM, sittinhawk said:

Back in the good ol' days, you got an FPGA that was a complete blank slate, and you did the entire design 100% yourself (with the exception of a few clock and IO modules).

Well, the good ol' days are still here... if your sources are all HDL. FPGA vendors have always used soft-processor IP as a way to entrap users into a dependent relationship with their own tool sets. To a degree this is understandable. What isn't understandable is why they have to break your hardware and software efforts with each new tool set version release. I suppose that the Open MIPS initiative is a reasonable work-around for those not wanting to be tied to one vendor, though I haven't tried to implement a MIPS core in any of the FPGA vendors' devices. If you have to have a soft-core processor then I suppose that the approach taken by @D@n could work. Personally, I like the ARM-based FPGA devices. Both Intel and Xilinx have made using these devices more difficult than necessary, for the same reasons that they make their soft-core design flows unduly painful, that they make using transceivers ( particularly in low-end families ) difficult, and that they make using certain IO interfaces like Ethernet and UARTs harder than they should be. Intel (Altera) has imposed its will on virtually every design using an Ethernet PHY by coercing its third-party vendors to make sure that a device like the 88E1111 comes out of reset configured into a mode that is almost impossible to use without depending on Intel's own MAC IP. Even HSMC boards with a GMII interface program the PHY to be in RGMII mode. There just isn't a justification for such behaviour.

My approach to designs that have to have an embedded processor ( ARM cores ) is to use Block RAM, GPIO, or another known simple interface in a minimalist block design, and to instantiate the whole thing as a component in a larger HDL design. For the most part this has worked out well for me ( relatively speaking ). I can use a high-performance AXI bus interface between the Block RAM and the ARM PS, and a much simpler interface to the top-level HDL entity. I don't need a Wishbone bridge or any other kind of bus bridge to get data between the PL and the PS. Unlike D@n, I don't have a problem with overly complicated buses like AXI, as long as I can constrain the usage. If you want a high-performance interface between your PL and the PS, why not use a bus that the core understands? This does not mean that your own IP has to use a complicated bus.
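To sketch the idea in Verilog (every name here is illustrative; "ps_wrapper" stands in for whatever wrapper your tool generates around the block design, and the port names are my guesses, not a real generated interface):

```verilog
// Top level: the block design (ARM PS + AXI BRAM controller + a
// true-dual-port BRAM with port B exported to the PL) appears as one
// component; everything else is plain hand-written HDL with no AXI.
module top (
    input  wire        sys_clk,
    input  wire [31:0] sample_in,
    input  wire        sample_valid
);
    reg  [9:0]  wr_addr = 0;
    wire [31:0] unused_rdata;

    // Tool-generated wrapper: the PS talks AXI to the BRAM inside the
    // block design; the PL side sees only a plain RAM port.
    ps_wrapper ps_i (
        .bram_clk_b (sys_clk),
        .bram_en_b  (1'b1),
        .bram_we_b  (sample_valid),
        .bram_addr_b(wr_addr),
        .bram_din_b (sample_in),
        .bram_dout_b(unused_rdata)
    );

    // Ordinary HDL: stream samples into the shared BRAM for the
    // processor to pick up at its leisure.
    always @(posedge sys_clk)
        if (sample_valid)
            wr_addr <= wr_addr + 1;
endmodule
```

The AXI complexity stays quarantined inside the generated block design; the hand-written logic only ever sees a BRAM port.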

It is true that the newer high-performance ARM-based devices have gotten ridiculously complicated. The basic demo design for the ZCU106 is completely dependent on IP that isn't free, so there's no guarantee that you can complete any particular design using the basic capabilities of a particular device if your application has to be free of paid IP costs.

There are more alternatives to the pain imposed by FPGA vendors for those wanting a processor-based FPGA design. I've used Delfino DSP devices with my FPGA designs, connected via SPI, UART, and bus interfaces. All you need is a Delfino module and a simple interface board. I've also connected x86 single-board computers to my FPGA boards using similar interfaces, plus Ethernet or USB. No one has to be enslaved.

Edited by zygot

20 hours ago, zygot said:

My approach to designs that have to have an embedded processor ( ARM cores ) is to use Block RAM, GPIO, or another known simple interface in a minimalist block design, and to instantiate the whole thing as a component in a larger HDL design. For the most part this has worked out well for me ( relatively speaking ). I can use a high-performance AXI bus interface between the Block RAM and the ARM PS, and a much simpler interface to the top-level HDL entity. I don't need a Wishbone bridge or any other kind of bus bridge to get data between the PL and the PS. Unlike D@n, I don't have a problem with overly complicated buses like AXI, as long as I can constrain the usage. If you want a high-performance interface between your PL and the PS, why not use a bus that the core understands? This does not mean that your own IP has to use a complicated bus.

@zygot,

This is very fascinating; thank you for sharing!  I had been wondering how others had managed to simplify the ARM+FPGA design methodology.

I used a different approach on a Cyclone-V.  I created a bus bridge that I connected to the CPU.  Since it was a Cyclone-V, I could connect from an Avalon bus to a WB bus, with library code connecting the ARM's AXI interface to the Avalon interface.  I had two problems with this approach.  First, any bug in my Avalon->WB bridge would hang the ARM hard, requiring a power cycle to correct.  This left me with no effective means of debugging the problem in hardware.  ("No effective means" is a reflection of the fact that I never tried to use the vendor's JTAG scope interface...)  I wouldn't have found the bug if I had not managed to create a simulation of my design (not cycle accurate, mind you, but enough to find the bug).  Second, the throughput was horrible.  (I was using the "low-speed" bus interface port.)  Because of the difficulties I had, I wouldn't recommend such chips to others.  Perhaps your approach gets around those difficulties?

Separately, I'd be curious to know whether you knew about the bugs in Xilinx's demo AXI-lite peripheral code.  It seems as though the interface is so complicated that even Xilinx struggled to get it right.  Score another point for formal verification.
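For anyone who hasn't seen what such a formal check looks like, here is a tiny sketch in the SymbiYosys style.  Only the AXI signal names come from the spec; the module name, counters, and the particular properties are my own illustration, not Xilinx's code or my full property set:

```verilog
// Sketch of two formal properties for an AXI-lite slave's write side.
module axil_write_props (
    input wire clk, resetn,
    input wire awvalid, awready,
    input wire bvalid,  bready
);
    reg f_past_valid = 0;
    always @(posedge clk) f_past_valid <= 1;

    // AXI rule: once VALID is raised it must hold until READY accepts.
    always @(posedge clk)
        if (f_past_valid && resetn && $past(resetn)
                && $past(awvalid && !awready))
            assert (awvalid);

    // Count accepted write addresses and completed responses...
    reg [7:0] f_aw_acks = 0, f_b_acks = 0;
    always @(posedge clk)
        if (!resetn) begin
            f_aw_acks <= 0;
            f_b_acks  <= 0;
        end else begin
            if (awvalid && awready) f_aw_acks <= f_aw_acks + 1;
            if (bvalid  && bready ) f_b_acks  <= f_b_acks  + 1;
        end

    // ...so a response may only be offered for a write that was
    // actually accepted.  (The W channel is omitted for brevity.)
    always @(posedge clk)
        if (resetn && bvalid)
            assert (f_b_acks < f_aw_acks);
endmodule
```

Bind a module like this to the peripheral under test and the solver will hunt for any trace that drops VALID early or returns an unearned response.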

I'm still hoping to return to this problem in order to create a full AXI-to-WB bridge (burst support, ID support, etc.).  So far, however, the interface protocol has been so complex that I have yet to succeed at it.

Thank you again for sharing,

Dan


@D@n,

I read your reference about using SymbiYosys to exhaustively check Xilinx's AXI-lite demo code. Very well presented and detailed; you might make me a reader of your blog with presentations like this. Since I'm not a 'native' Verilog user, nor do I have any experience with the tools that you use, it will take some effort to fully grasp what's going on, though I do see the big picture. I agree that complicated bus standards like AXI, which try to encompass all possible transaction scenarios, are overly complicated... and I limit my use of them to the extent possible, or at least to what makes sense to me. Why have all of that logic when you don't need it? In logic design, a minimalist, elegant design does a lot more than make your source code shorter.

If nothing else, you've convinced me that I need to build up my Verilog confidence so that I can try out your tools and fully grasp what you are saying. Perhaps it's just me, but reading about others' experiences isn't sufficient to gain understanding... I have to do it for myself.

As for finding flaws in demonstration code provided by the big FPGA vendors, I stopped being surprised and horrified long ago.

Anyway, everyone who thinks that they understand FPGA development ( or wants to get a realistic perspective ) should read the reference above. This is complicated stuff, and just when you become complacent about your understanding of any subset of the design/verification process, someone will come along and knock that illusion into the distance.

