Feedback on a register file design?


CurtP

Question

Hey everyone,

I've done the initial design of a register file (16x 32-bit registers, two write ports, four read ports) in VHDL as part of a larger project, but seeing as I am a relative newcomer to HDLs, I was hoping to get some feedback on my design, any errors I may have made, or any improvements I might want to make.

Here is the VHDL:

-- Register file

-- Two write ports, four read ports.
-- For performance reasons, this register file does not check for the same 
-- register being written on both write ports in the same cycle. CPU control
-- circuitry is responsible for preventing this condition from happening.

library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all;

use work.cpu1_globals_1.all;
use work.func_pkg.all;

entity registerFile is
    port
    (
        clk : in std_logic;
        rst : in std_logic;
        writeEnableA : in std_logic;
        writeEnableB : in std_logic;
        readSelA, readSelB, readSelC, readSelD, writeSelA, writeSelB : in std_logic_vector(3 downto 0);
        data_inA, data_inB : in std_logic_vector(DATA_WIDTH - 1 downto 0);
        data_outA, data_outB, data_outC, data_outD : out std_logic_vector(DATA_WIDTH - 1 downto 0)
    );
end registerFile;

architecture behavioral of registerFile is

    type regArray is array (0 to 15) of std_logic_vector(DATA_WIDTH - 1 downto 0);
    signal registers : regArray := (others => (others => '0'));

begin

    data_outA <= registers(to_integer(unsigned(readSelA)));
    data_outB <= registers(to_integer(unsigned(readSelB)));
    data_outC <= registers(to_integer(unsigned(readSelC)));
    data_outD <= registers(to_integer(unsigned(readSelD)));

    registerFile_main : process(clk)
    begin

        if(rising_edge(clk)) then

            if(rst = '1') then

                registers <= (others => (others => '0'));

            else

                if(writeEnableA = '1') then
                    registers(to_integer(unsigned(writeSelA))) <= data_inA;
                end if;

                if(writeEnableB = '1') then
                    registers(to_integer(unsigned(writeSelB))) <= data_inB;
                end if;

            end if;

        end if;

    end process;
    
end behavioral;

This design is intended for use on FPGAs, hence the use of default values for the registers.

I appreciate any feedback you might have!

Thanks,

 - Curt


Recommended Posts

It looks fine. I would normally use a different port order and put each port on a different line for easy copy/paste. I prefer to group interfaces -- readA, doutA, readB, doutB, etc. I also place interfaces in output, input, config, infrastructure order. In industry, the order infrastructure, input, output, config is more common.

For implementation, this probably infers registers.  It is possible to construct this with distributed memory, although it is more complex.  It isn't clear to me if the added complexity results in a better design at this size.

The design actually will have priority logic giving data_inB the win if the same address is used on both write ports. This is because the last assignment executed in the process takes effect.

Also, the logic has unregistered outputs. Normally this isn't something that is desired, because critical timing paths can then span logic in multiple modules. Not sure if there is anything that can be done here, though.

You can also add asserts for the "write to same address" case. This can be helpful in simulation.
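A simulation-only assertion for that case might look like the following sketch (signal names match the posted registerFile entity; the translate_off/translate_on pragmas keep it out of synthesis):

```vhdl
-- Simulation-only check: flag simultaneous writes to the same register.
-- Signal names match the posted registerFile entity.
-- pragma translate_off
sameAddrCheck : process(clk)
begin
    if rising_edge(clk) then
        assert not (writeEnableA = '1' and writeEnableB = '1'
                    and writeSelA = writeSelB)
            report "Both write ports target the same register in one cycle"
            severity warning;
    end if;
end process;
-- pragma translate_on
```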


15 hours ago, kc5tja said:

The bill came due because of laziness. 

My, how this thread has evolved into a life of its own... and I can't resist egging it on.

In my experience, stupidity and malfeasance at the corporate level rarely have to do with laziness. In order of precedence it would be: ego, attaining or defending status, greed, the fact that most corporations mimic the armed services' structure (decision-making goes down the chain of command and facts rarely travel up it), and, in large companies, departmental tribalism. In my experience, merely holding beliefs that contradict prevailing commands (or failing to cheer on views supporting those commands) is viewed as mutiny and a threat.


1 hour ago, zygot said:

A comment on your latest code.

You chose to use the ieee.numeric_std library. Usually when I do this it's because I know that I'm going to deal with signed and unsigned types; this helps keep my focus on that pesky sign bit. So I generally assign signals to either signed or unsigned types when using ieee.numeric_std. In general, I use the ieee.std_logic_unsigned or ieee.std_logic_signed libraries. The details involved in signals that carry signed or unsigned values can trip you up if you aren't careful. This is especially true if you use multipliers or fractional arithmetic (a whole fascinating subject in its own right). Here's where a good textbook or coding guidelines come into play. A std_logic_vector can always hold signed values, but you have to do the bookkeeping. The takeaway is that nothing is chosen randomly; you need to know what the consequences of your choices are before you have a large piece of code with hundreds or thousands of lines. The prep work is more important to success (or at least to how long it takes to achieve success) than the implementation, in my experience. Here is where Dan might point out why he prefers Verilog, which has a lineage more like C, over VHDL, which is Ada-like. A lot of nitty-gritty details that your friendly (or perhaps not so friendly...) C compiler will take care of for you become your responsibility in logic design.

PS: I've avoided directly commenting on your original question until now, leaving that up to others.

Oh, and one little problem with using good code as a guide is that it probably doesn't have the commentary that points out why the code was written the way that it was... just a fair warning.

I view signed and unsigned as just different ways to look at a bucket of bits. My current practice is to favor the use of std_logic_vector for moving and storing data, and converting to unsigned for arithmetic and logical operations. I try to minimize the use of casting or conversion to types that don't preserve 9-level logic, as this can hide problems in simulation that might pop up in implementation. So if I need to perform arithmetic on a set of std_logic_vectors, I convert them to unsigned, perform the operation, and then store the results as std_logic_vectors.
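That convert / operate / convert-back pattern might be sketched like this (sigA, sigB, and sigSum are hypothetical std_logic_vector signals of equal width, with numeric_std in scope):

```vhdl
-- Sketch of the convert / operate / convert-back pattern.
-- sigA, sigB, and sigSum are hypothetical std_logic_vector
-- signals of equal width; ieee.numeric_std is assumed in scope.
sigSum <= std_logic_vector(unsigned(sigA) + unsigned(sigB));
```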

Wherever possible, I try to design modules that output correct results without respect to sign, leaving it up to the user or higher-level functions to interpret the values as signed or unsigned. However, I'm aware that some operations -must- be sign-aware (e.g., multiplication, which requires sign extension).

For example, the key line of code for the "add" pathway in my ALU is designed to perform sign-agnostic addition and subtraction based on a set of parameters:

tempResult := unsigned("0" & operandA) + unsigned("0" & opB_adjusted) + unsigned'("" & (invert_opB xor tempCarry));

Where:
tempResult -- self-explanatory
operandA -- self-explanatory
opB_adjusted -- operand B, which is either pre-inverted (logical NOT), or passed through unchanged based on the value of:
invert_opB -- a std_logic parameter where '1' specifies to invert operand B and '0' specifies to pass it through
tempCarry -- a std_logic value representing the carry-in ANDed with the carry_enable parameter

Given that:
 - The carry-in bit is represented as '1' = there was a carry or a borrow, and '0' = there was no carry or borrow, and
 - Two's complement negation is effectively one's complement negation + 1, and
 - A - B = A + -B

The line of code above provides results, with or without carry, that will be correct regardless of interpretation as signed or unsigned. By XOR-ing invert_opB with tempCarry, if carry/borrow is enabled, the carry bit (if set) will effectively be added during addition with carry, or subtracted during subtraction with borrow (by withholding the "+ 1" normally used in two's-complement negation). If carry/borrow isn't enabled, tempCarry is guaranteed to be '0' by the earlier AND operation, and invert_opB will be '1' only if subtraction is being performed (completing the two's-complement negation).

I'm sure this has been implemented in a better way by someone else, but it's the solution I've chosen for now at least.
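For reference, the subtraction cases of that line can be annotated like this (a sketch; result and carryFlag are hypothetical output signals, and 32-bit operands are assumed, making tempResult 33 bits with the carry in bit 32):

```vhdl
-- Sketch: the subtraction cases of the posted add/sub line.
-- operandA and opB_adjusted are assumed 32-bit; result and
-- carryFlag are hypothetical outputs.
tempResult := unsigned("0" & operandA)
            + unsigned("0" & opB_adjusted)
            + unsigned'("" & (invert_opB xor tempCarry));
-- A - B, no borrow-in:  invert_opB='1', tempCarry='0'
--   -> A + not B + 1              (two's-complement subtraction)
-- A - B, borrow-in='1': invert_opB='1', tempCarry='1'
--   -> A + not B + 0 = A - B - 1  (the +1 is withheld to apply the borrow)
result    <= std_logic_vector(tempResult(31 downto 0));
carryFlag <= tempResult(32);
```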


44 minutes ago, CurtP said:

I view signed and unsigned as just different ways to look at a bucket of bits.

I might go with the notion that STD_LOGIC_VECTOR is a bucket of ULOGIC bits. I've done a lot of signed and unsigned VHDL projects, and I can't say that I can support the quote above. In C, abstraction is a concept to be embraced; in logic design you are responsible for any abstraction you want to imply. VHDL, like Ada, is a strongly typed language and will give you lots of error messages. You can still get into trouble, even with VHDL, especially with carries, overflow, comparison, etc.

Now that you've brought it up

44 minutes ago, CurtP said:

tempResult := unsigned("0" & operandA) + unsigned("0" & opB_adjusted) + unsigned'("" & (invert_opB xor tempCarry));

The ':=' implies instantaneous value assignment. It is not the same as the "<=" gets assignment. The two are not interchangeable. Confusing the two will result in "you := are_in_trouble" in a hurry. I rarely use ":=" except for simulation and special circumstances.

You should understand that VHDL was not designed for synthesis. It is a simulation language. Most, but not all, of its statements can be synthesized into logic or are supported by vendors' synthesis tools. Both Altera and Xilinx will tell you what statements are supported.
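The ':=' vs. '<=' difference can be sketched in a toy process (sig is a hypothetical unsigned signal declared elsewhere):

```vhdl
-- Sketch: variable (:=) vs. signal (<=) assignment inside a process.
-- 'v' updates immediately; 'sig' is a hypothetical signal that
-- updates only after the process suspends (a delta cycle later).
process(clk)
    variable v : unsigned(7 downto 0) := (others => '0');
begin
    if rising_edge(clk) then
        v := v + 1;   -- takes effect immediately, on the next line
        sig <= v;     -- scheduled; not visible within this process run
    end if;
end process;
```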


26 minutes ago, CurtP said:

I view signed and unsigned as just different ways to look at a bucket of bits. My current practice is to favor the use of std_logic_vector for moving and storing data, and converting to unsigned for arithmetic and logical operations.

You can also import just "+" and "-" from std_logic_signed as well as the conversion functions from std_logic_arith.  This way you still are required to specify signed/unsigned for "<", "*", etc...  Importing "-" from std_logic_signed will give you the unary "-", which can be used for logical induction in expressions like (-x) and (x).
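The selective import Piasa describes can be written with selected-name use clauses (a sketch; note that std_logic_signed and std_logic_arith are Synopsys packages conventionally compiled into the IEEE library namespace, not IEEE-standard packages):

```vhdl
-- Sketch: importing only specific operators and conversions.
-- std_logic_signed / std_logic_arith are non-standard Synopsys
-- packages, commonly available under the IEEE library name.
library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.std_logic_signed."+";   -- binary "+" and unary "+"
use IEEE.std_logic_signed."-";   -- binary "-" and unary "-"
use IEEE.std_logic_arith.conv_integer;  -- one conversion function
```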


1 hour ago, xc6lx45 said:

There is no "business case" for high-end CPUs on FPGAs.
Period.

@xc6lx45,

You must like experimenting with the reaction of Africanized bee colonies to jack-hammers. I'm in a feisty mood today so what the heck...

There is absolutely a place for hard-CPU-core FPGA devices like the Zynq; I don't even feel the need to support that statement. For almost all low-power applications the FPGA can't compete with a commercial uC or DSP. I tend to be more sympathetic with you on soft CPU cores using FPGA resources. The exception is when you are pursuing a project that is a labour of love. Implementing a full-stack Ethernet interface in HDL makes no sense to me. There are times when post-configuration programmability might push me toward a soft processor, but then I'd use an Atmel clone that has someone else's software toolchain. If someone (I can think of someone) makes a great soft processor that is compatible with the gcc toolchain, I might be interested. By and large, HDLs get almost everything done that needs to be done.

BTW, there's a thread in another section of the Digilent forum dedicated to just this topic, which would be a better place to post your argument.


42 minutes ago, zygot said:


The ':=' implies instantaneous value assignment. It is not the same as the "<=" gets assignment. The two are not interchangeable. Confusing the two will result in "you := are_in_trouble" in a hurry. I rarely use ":=" except for simulation and special circumstances.

You should understand that VHDL was not designed for synthesis. It is a simulation language. Most, but not all, of its statements can be synthesized into logic or are supported by vendors' synthesis tools. Both Altera and Xilinx will tell you what statements are supported.

tempResult is a 33-bit variable, used for convenience within the process. Its use is appropriately expanded upon elaboration and synthesis. It isn't used to register a value between cycles. The result signal becomes tempResult(31 downto 0) and the carry flag bit on the flag signal output becomes tempResult(32). Using a simple container module for clocking and I/O, I have synthesized and verified its operation on a Spartan7 board.


49 minutes ago, Piasa said:

You can also import just "+" and "-" from std_logic_signed as well as the conversion functions from std_logic_arith.  This way you still are required to specify signed/unsigned for "<", "*", etc...  Importing "-" from std_logic_signed will give you the unary "-", which can be used for logical induction in expressions like (-x) and (x).

Very good to know! I feel like there are a million little things I don't know about VHDL, and the worst part is that I don't know what I don't know, haha.


6 minutes ago, CurtP said:

tempResult is a 33-bit variable, used for convenience within the process. Its use is appropriately expanded upon elaboration and synthesis. It isn't used to register a value between cycles. The result signal becomes tempResult(31 downto 0) and the carry flag bit on the flag signal output becomes tempResult(32). Using a simple container module for clocking and I/O, I have synthesized and verified its operation on a Spartan7 board.

There are a lot of things that you can do and not have problems for any given entity... if you fully understand what you are doing. There are hours or days of debugging ahead for those who don't when all of a sudden what seemed to be good practice for one project doesn't work out so well on another. Especially when entities become components in large hierarchical designs.

More advice that you should feel free to ignore: I spent the first couple of years restricting my HDL to basic constructs and still had plenty of surprises to learn about. As time wore on I became more adventurous. Not everyone needs to operate this way... but probably more should than do. You will, no doubt, find a coding style that suits the way that you work. Using individual operators from libraries would probably not work out for me the way it does for Piasa.


Ummm. A quick reality check, so that no one asks later, "why did no one tell me?"

There is no "business case" for high-end CPUs on FPGAs.
Period.

Meaning, it will always be dramatically slower, more expensive, and less energy-efficient than a dedicated chip.
Maybe pick a Raspberry Pi as a reference.

Yes, as a learning experience it's a great idea - probably the best that has shown up in amateur electronics in the last decade or so - and I don't even want to rain on anybody's parade. But a general-purpose CPU on a general-purpose FPGA just won't fly; it can never be competitive against an ASIC. It's like mining bitcoins on a PC: the basic facts are against you.

So just keep that in mind before heading out into a dead end. With CPUs on FPGAs, the journey is the destination.


Tell me that I'm an idiot if you want (I really don't mind), but... I predict a lot less cursing and frustration if you develop the FPGA craft skills before pouring hours into an implementation whose initial product is supposed to rival current state-of-the-art processors. The last 20 years have given us hardware optimizations like out-of-order execution, speculative branching, and the like, and it's only recently that we've been served the bill (from a security perspective). I get the passion. I like it. I don't get masochism. I don't get wanting an end product without wanting to understand the process to achieve it. So I'll channel your moms... "well dear, as long as it makes you happy..."


2 minutes ago, D@n said:

Thank you, @kc5tja!

@CurtP, I asked @kc5tja's perspective because I think it might help you put things in perspective.  Building a CPU is fun, @kc5tja describes it as addictive ;), but it will also be quite a long and frustrating journey.

Dan

Talk of pipelines is poignant for me, as one of the biggest differences between the Kestrel-2DX's KCP53000 and the Kestrel-3's KCP53010 will, in fact, be that the latter has a 5-stage (maybe 6-stage, not sure yet) pipeline.  They should otherwise be software compatible with each other.  (The other being that some form of memory protection will be introduced; probably in the form of software-managed TLBs.)


20 minutes ago, D@n said:

@kc5tja,

Wow!  That's a nice status update, and I'm glad to hear you are moving along!

Would you offer any words of wisdom to someone just starting out with their own CPU design?

Dan

Yes.

You are going to fail.  You are going to fail hard.  You are going to fail so hard, you'll want to flip your table, walk away, curse everything as a waste of time, and never look back.

Do all of these things; except, I'd recommend not flipping that table.  I find the cursing to be cathartic, and the walking to be mind-clearing.  Maaaaaaaybeee try not to be as public about the cursing as *I* have been.  I have a reputation.  You might not, and it could damage yours.  But if you must, curse into an empty room.  Scream loud if you must.  Get it off your chest; then, get back on the wagon.

Walk away; walk far, far away.  Never look back; if you do, you'll tag some of that baggage along with you.  Drop it like a moldy sack of hot potatoes.  However, as I said before, don't flip that table!  Even though you might not look back, that doesn't mean you won't *be* back.  Life finds a way.  It always does.  It just takes longer than you'd like sometimes.

Instead, strive for small victories.  Remember where things last worked.  You are exploring a multi-faceted design *space*, not a single path on a 2-dimensional map.  My 14-step development plan I wrote above?  It's just my current vision.  It WILL change.  And so will yours.  Accept this as normal.  Frustrating!!  Absolutely!  But definitely normal!

Because after you walk away, eventually, you'll want to return.  And when you do, you can wipe the table clean, and go back to the last thing you know worked.  Pick up the pieces from there and build upon your successes.  Your progress will be a slog, but eventually, you'll find a way towards your goal.

I'll let you know when I've found mine.

 


3 hours ago, D@n said:

Boy, I'd love to hear @kc5tja's comments on this line.  It'd be fun to hear a status on his project too, since he was last set back.  Judging from his project log since then, though, it looks like he's managed to recover from his set back.  However, as a lesson for new CPU developers, you might wish to look at the date stamps on his log.  Things like this take time.  They can also be a test of patience.

Dan

In order of mention...

Status on Kestrel Project.  I went back to working on the Kestrel-2 and creating a refinement of this architecture.  Instead of the 16-bit stack architecture CPU, however, I replaced the core with my KCP53000 CPU, a M-mode only RV64I RISC-V processor.  This has allowed me to expand the design of the computer rather significantly relative to the original design.  Kestrel-2's address space was laid out like so:

$0000-$7FFF : Program RAM
$8000-$BFFF : I/O space
$C000-$FFFF : Video RAM

The block RAMs were pre-loaded with software to run at synthesis time.  There is no ROM, and the video display was driven at 640x200 monochrome (bitmapped).

The Kestrel-2DX, the modern incarnation of the basic concept, is substantially renovated.  As indicated above, the CPU is now a 64-bit RISC-V core, with a memory map as shown here: http://chiselapp.com/user/kc5tja/repository/kestrel-2dx/wiki/Memory Map

It has a proper ROM (which is implemented in Verilog as a giant case-statement because I don't have enough block RAMs to use as a ROM) which holds a very minimal BIOS-like thing.  This frees up quite a bit of space from RAM, where I am currently writing a dialect of Forth to serve as its system software.

This design is, however, pushing the limits of the Digilent Nexys-2 FPGA board.  Although I have plenty of logic left, the fact that the ROM is synthesized from LUTs is enough of a burden to drop the maximum clock speed to just above 26MHz, which is dangerously close to the 25MHz it's designed to run at.

Of all the computer designs I've made, I've been especially happy with this one.  Despite not being finished yet, I'm having a total blast with it, which is exactly what I wanted from my neo-retro computer designs.  It looks, feels, and behaves like a classic computer, despite having a modern 64-bit core.  I've won.  (I just need to finish Forth for it!)

The Kestrel-3 will be a new computer design with somewhat more modern capabilities.  First and foremost, it'll be my first design based around the Chisel-3 DSL.  I've finally learned enough to feel comfortable with it.  (Another personal victory!)  The K3 will be built using only open-source FPGA boards though (e.g., BlackIce and/or icoBoard Gamma), which can be targeted with the Yosys development chain.  There are several reasons for this, not the least of which is because I want to support that community.  I'm planning on a computer with two boards: one comprising the CPU and RAM, and another comprising "the chipset" of the rest of the computer (e.g., video, SD card, keyboard, sound, etc.).

Originally, I wanted to target the Altera/Terasic DE-1 FPGA board (since it's available for dirt cheap these days), but I've received enough feedback from my friends and followers of the project that they wanted to follow along but were hesitant to install Altera's ginormous IDE on their box.  They wanted something that could run reliably on a Raspberry Pi, and right now, that means Yosys.  This fundamentally changes my plans for this computer, and it's not clear I have a good design for it yet.

One thing is clear though -- the Kestrel-2DX will end up being an early development terminal for the Kestrel-3.  I eat my own dogfood.

The Set Back.  This problem still exists.  The Nexys-2's PSRAM chip remains dead to the world for me.  I've long since given up with this chip.  Near as I can tell, the *only* project that successfully reports success with it is the Nexys-2 BIST bitstream, which leads me to simply not trust this BIST.  I *have*, however, written designs to access the SRAM on the icoBoard and have successfully confirmed my ability to read and write to that board's SRAM chip.  So I'll be going that route.  Another reason to use these boards instead of the DE-1; anything more complex than basic SRAM is straight-up frightening to me.  I've been burned enough to never want to use them again.

Once I get a working platform that boots on its own with SRAM but without SDRAM, then I have a basis on which I can tweak the design and run software to exercise the SDRAM chips.  With luck, things will work.  But I want a known-good platform first and foremost.

The Future.  I never made progress with my original Kestrel-3 design or intentions.  Reverting to working on the Kestrel-2 and upgrading it to the new Kestrel-2DX design has restored my interest and faith in my abilities as a hobby hardware designer.  While I still have plans for the Kestrel-3 (see http://chiselapp.com/user/kc5tja/repository/kestrel-3/wiki/Base Specs), it's not clear how I'll achieve these goals just yet.

My current plans are to perform the following broad steps for development:

  1. Develop a dumb GPIO adapter.  If I stick with Wishbone B.4/Pipelined, this is already done.  I've been strongly considering switching to TileLink TL-UL though.  This might give me wider access to parts written by others for the RISC-V ecosystem.
  2. Develop a debug controller, where I can send read/write byte/half-word/word/double-word requests to.  Since I have access to raw GPIO on the Kestrel-2DX, this is not likely to use RS-232 framing or anything.  It'll probably be bit-banged, for simplicity's sake.  A few PMODs will be needed for this.  This will serve as a surrogate for the final CPU design that I intend.
  3. Make sure I can toggle LEDs using the debug port interactively from the Kestrel-2DX.
  4. Port my Serial Interface Adapter core to the Kestrel-2DX.  Confirm it works in loop-back mode.
  5. Port my Serial Interface Adapter to both the Kestrel-3 designs.
  6. Interactively confirm that the serial link works on the Kestrel-3 in loop-back mode.
  7. Interactively confirm that the serial link works between the 2DX and the 3.
  8. Develop final SRAM interface.
  9. Make sure I can perform basic RAM tests interactively from the Kestrel-2DX.
  10. Develop a "ROM" system using block RAMs.  (from CPU's perspective, it's ROM; from debug interface, it's RAM.)
  11. Make sure I can write to and read back from the "ROM" interactively from the Kestrel-2DX.
  12. Port the KCP53000 to run on the new platform.
  13. Write first-boot firmware that writes "Hello world" to the SIA or something.  Upload it from the Kestrel-2DX.
  14. Boot the Kestrel-3 for the first time, and hope for the best.

This will likely change as I learn more about the design.  Note how none of this even concerns itself with the graphics, sound, or other goodies I've been looking for.  Unlike the Kestrel-2DX, it doesn't even have the MGIA to fall back on.  This is because the CPU will consume the overwhelming majority of the iCE40HX8K part; I'll probably need to off-load the niceties to a slave peripheral that's PMOD-accessible.


Regarding pipelining and registered outputs ( or inputs ).

If your logic were implemented in basic gates, as in the "old days" (and, or, not, etc.), then the more complex the logic, the more levels of delay you would inherit. Once the combinatorial delay exceeds the target clock period, you need to pipeline. Pipelining introduces a number of complexities that have to be accounted for, debugging not being the least important. Of course your Series 7 FPGA uses LUTs instead of discrete gates, but a similar fate transpires.

To achieve the highest repeatable performance from your synthesis tool (as the routing resources get used up and small changes no longer cause a major change in placement strategy), the solution is to add registers separating less complex combinatorial logic structures. If you are using an ASIC or gate array you might have other strategies. At the minimum, this helps the synthesis and place-and-route tools figure out how to implement your design.

An interesting experiment is to do a few designs with block memory, selecting various input and output registering strategies within a somewhat complex design, and see what happens to your timing and the placement of related logic. Try using block memory as asynchronous RAM. If you try creating a simple controller that can be scaled hierarchically with generic assignments and want a high clock rate, you will see what I mean. I've done this. Just inserting a small high-data-rate pipelined structure into a very complex design, and having it maintain the required timing, can be difficult.

This is a basic concept to master before embarking on complex projects. As to when pipelining is necessary, there is a bit of an art (experience) to making the correct decision at the beginning of a project. It is not uncommon to get 80% of the way to completing a project only to find that the fundamental strategy is flawed and the way forward involves a lot more redesign and restructuring than you care to do. SOP in a deadline-driven commercial setting (that is, projects that take 8 months that were supposed to take 2, when getting it right the first time would have gotten you finished in 4).
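As a toy illustration of the register-insertion idea (hypothetical unsigned signals a, b, c, stage1, and result; one pipeline register splits the combinatorial path in half at the cost of one cycle of latency):

```vhdl
-- Sketch: splitting a long combinatorial path with a pipeline register.
-- a, b, c, stage1, and result are hypothetical unsigned signals.
process(clk)
begin
    if rising_edge(clk) then
        stage1 <= a + b;        -- first half of the computation
        result <= stage1 + c;   -- second half, one cycle later
    end if;
end process;
```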


1 hour ago, CurtP said:

Are there any specific pros/cons to registering the outputs of the register file that I should be aware of?

Probably not.  You just need to be aware that the register outputs have some delay from the logic that is in the register file.  Registering outputs is "generally good" design.  However, it isn't always needed or possible.  It is up to you to decide the logical impact in this case and then compare against any performance benefits.
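For reference, registering the read ports of the posted register file would move the reads into a clocked process, at the cost of one cycle of read latency (a sketch; signal names match the posted registerFile entity):

```vhdl
-- Sketch: registered read ports (one cycle of read latency).
-- Signal names match the posted registerFile entity.
-- Note: reads return the register values from before any write
-- occurring in the same cycle.
process(clk)
begin
    if rising_edge(clk) then
        data_outA <= registers(to_integer(unsigned(readSelA)));
        data_outB <= registers(to_integer(unsigned(readSelB)));
        data_outC <= registers(to_integer(unsigned(readSelC)));
        data_outD <= registers(to_integer(unsigned(readSelD)));
    end if;
end process;
```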


8 minutes ago, CurtP said:

I get where you're coming from. If one isn't careful, they can extinguish their own enthusiasm for something by taking on way too big of a task, way too early.

Boy, I'd love to hear @kc5tja's comments on this line.  It'd be fun to hear a status on his project too, since he was last set back.  Judging from his project log since then, though, it looks like he's managed to recover from his set back.  However, as a lesson for new CPU developers, you might wish to look at the date stamps on his log.  Things like this take time.  They can also be a test of patience.

Dan


5 minutes ago, zygot said:

@CurtP,

I've been following this thread much the same way as people who watch soaps... or fireworks.... the thread is throbbing with excitement.

So, I don't know if you're a genius or agog with ideas that will never be built and debugged, or someone who just wants to get to the part where tomatoes get plucked from the garden. I understand; inquiring minds want to know. I've got my own afflictions in that regard. The following comments have nothing to do with CPU design, or the excitement of discussing interesting aspects of any particular project. I'm just putting the discussion into the context of a guy who started a thread wanting feedback on a rather simple logical structure.

If I were going to build a personal manned rocket I'd want to attempt and succeed at smaller projects before strapping myself into a piece of home-built hardware and pushing the big red button. But then I'm just a pedestrian engineer. Along with succeeding at smaller but increasingly complex projects you gain a lot of skill at understanding the basic but peripheral knowledge and skills in using the tools that are needed to accomplish a complex project. This includes the Vivado tools, the languages, the bugs in the tools, best practice in implementing complex logic elements, timing, constraints, etc. I do realize that your goals and choice as to how you get there has to be your decision alone.

Were you to start a Project Vault project with the end goal of achieving a unique CPU with unique objectives, it might be very popular and instructive to a wide audience. I'm thinking of a project consisting of a series of smaller projects culminating in one big flourish. It would produce not just code and techniques but convey the development complexities in an incremental and natural manner. I'm not saying that this is necessarily a good idea or something that you ever thought about doing... just that it's interesting, and it sure would expose a lot of those peripheral issues and strategies. My suggestion admittedly asks a lot of you, and you may have no intention of publishing any of your hard work. But it would be a unique project, interesting to a lot of people and instructive to many more.

Mostly, don't let anything that I say restrain your ambition or enthusiasm.

I get where you're coming from. If one isn't careful, they can extinguish their own enthusiasm for something by taking on way too big of a task, way too early.

 

@D@n has strongly suggested that I build the peripherals for my CPU before building my CPU. This is probably the course of action I'll ultimately take (maybe building small parts of the CPU along the way). This approach could kill two birds with one stone: setting me up with a proper debug environment, and giving me some more simple projects to work on first.


17 minutes ago, zygot said:

@xc6lx45,

You must like experimenting with the reaction of Africanized bee colonies to jack-hammers. I'm in a feisty mood today so what the heck...

Laughing out loud. I thought, gate crashing the local motorcycle club with a "Harley s*cks" T-shirt.

Labor of love, that's the point. Sometimes it's just healthy if someone states the obvious.

And Zynq, agree fully. But that's essentially an ASIC part on the die.

 

