steddyman

Cmod A7 massive GND noise

Question

I have created a circuit to replicate the functionality of an old 8-bit chip using the Artix-7, and for development I am prototyping with a Cmod A7.

I have found I am getting a lot of noise on the ground plane of the circuit (around 1.2 V p-p). I have decoupling capacitors around my ICs and a voltage regulator with the correct capacitors.

I thought at first the issue was something I had missed in my circuit, but if I unplug the Cmod A7 from the DIP48 socket, the noise drops to around 200 mV p-p.

Do I need to do extra work to decouple the Cmod A7 itself? I can see it has a lot of decoupling capacitors on-board already.

Thanks


Recommended Posts


I've managed to capture the noise that is causing my problem, as per the picture below. These spikes are spaced at exactly the period of my generated clock (1 MHz). This clock is created by generating a 40 MHz clock in the MMCM, then dividing that clock down using a shift register. I have the SLEW rate set to SLOW on all pins and the drive strength set to 4. It looks like my generated clock is causing ringing in the circuit.
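For reference, the divider arrangement described above can be sketched roughly like this (module and signal names are illustrative, not my exact code):

```verilog
// Sketch of the 1 MHz generation: a 40-bit ring register
// clocked by the 40 MHz MMCM output (40 MHz / 40 = 1 MHz).
module clk_div_ring (
    input  wire clk40,   // 40 MHz from the MMCM
    input  wire rst,
    output wire clk1m    // 1 MHz, 50% duty cycle
);
    reg [39:0] ring;

    always @(posedge clk40)
        if (rst)
            ring <= {20'hFFFFF, 20'h00000};  // 20 high, 20 low
        else
            ring <= {ring[38:0], ring[39]};  // rotate left one bit

    assign clk1m = ring[39];
endmodule
```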

 

scope_noise.jpg

16 hours ago, steddyman said:

It looks like my generated clock is causing ringing in the circuit.

I wouldn't jump to conclusions... no, really, I wouldn't do that. Yes, I see the spikes that appear to be about 500 ns apart. I also see missing spikes that one would think should be there if your clock is always running. I'm more inclined to investigate along the lines of simultaneous switching, or some other driver-related issue, assuming that your probing is pristine. Frankly, I'm not sure that your one picture is sufficient to figure this out.

There are no rules (OK, so few rules) in debugging. You can turn off some, or all, of the output drivers to test a hypothesis. Who cares if a test makes the entire circuit not work? Divide and conquer is the debugger's mantra.

Along with the SLOW slew rate, I assume that you've set your outputs to the lowest current drive.

As @hamster has pointed out, identifying and correcting oversights and mistakes will certainly resolve those issues. Some things you don't have control over, but you might be able to lessen the magnitude of some issues with insightful design fixes.

For some projects the CMODs are fine; I have two in use most of the time. A few simple tweaks on a re-spin could make them excellent for the use cases that they are aimed at:

  • Providing better ability to power the CMODs from an external power supply without losing USB connectivity
  • Allowing users to optionally power IO banks with a lower Vccio
  • Providing a few matched IO suitable for LVDS. I get the point about connector signal integrity, but really, this class of device doesn't require the ultimate performance (not saying that users aren't going to want that). 2 mm pin spacing or even 2.54 mm spacing would be just fine.
  • Providing more GND and power pins.

The list above is by no means complete.

Everything is a trade-off. One suggestion might be to have whoever specs and designs the next spin have a lot of experience under their belt incorporating the current design into embedded projects. They'll figure out what needs to be done and the resulting balancing of all of those trade-offs will result in a terrific product.

 

Edited by zygot


Are you sure you are registering your combinational logic before the output driver? See my "digital hazards" comment a few posts back...

It is a fairly common misunderstanding where reality differs from simulation (*): synchronous design methodology guarantees correct signal levels at clock edges ONLY. It gives NO GUARANTEES WHATSOEVER between clocks. For example, when you have a sequence of 0 bits, there may be spikes between clock edges and the circuit is still functioning correctly as designed.

(*) It will show up to some extent if you run e.g. a "post-implementation timing simulation" with actual LUTs and routing delays. But as said, the first thing to do is check whether the outputs are registered, without any combinational logic on the output side.

1 hour ago, xc6lx45 said:

It is a fairly common misunderstanding where reality differs from simulation (*): synchronous design methodology guarantees correct signal levels at clock edges ONLY. It gives NO GUARANTEES WHATSOEVER between clocks. For example, when you have a sequence of 0 bits, there may be spikes between clock edges and the circuit is still functioning correctly as designed.

I agree with this concept though I'd state it a bit differently.

Combinatorial logic can produce runt pulses and unwanted transitions due to delays in the various signal paths, even in LUT-based logic. A logic simulator like ModelSim or the Vivado simulator will show these. You can 'clean up' these artifacts by using clocked registers, either in the LUTs or the IOB. It's hard to tell from a scope picture if that is in play. Nevertheless, using registered logic is a good standard practice, and I join @xc6lx45 in recommending it. In fact, successful digital design is based on the concept of inserting registers between combinatorial elements whenever delays approach the clock period. The idea is that a register feeds some combinatorial cloud of logic that is guaranteed to reach a steady state in less than the clock period clocking the registers. The length of time for this to happen depends on a number of factors and can vary with synthesis and place-and-route behavior.
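A minimal sketch of that registered-output pattern (signal names are invented for illustration):

```verilog
// Register a combinatorial result before it leaves the chip:
// the pin sees only the clean, clocked version of the signal.
module registered_output (
    input  wire clk,
    input  wire a, b, c,
    output reg  y_out
);
    // Combinatorial cloud; may glitch between clock edges
    wire y_comb = (a & b) | c;

    always @(posedge clk)
        y_out <= y_comb;   // glitches settle before the edge samples
endmodule
```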


Thanks for the advice as always, zygot / xc6lx45.

So, should I be looking for full-range signals that are switching at the same point as these little spikes, rather than spikes of the same amplitude? I can see these same little spikes on other nearby signals at the same amplitude at around the same interval. I hadn't noticed they weren't always at that interval; that's a good spot and could be a clue. I can try disabling some of the signals as suggested.

I had configured the drive to 4 mA on all pins, but after examining the IBIS files for the Artix-7 LVCMOS33 signal slew rates, I could see the slew rate is quite a bit slower at 8 mA than it is at 4 mA (about 1 ns slower). I switched the signals to 8 mA instead and that has reduced the spikes slightly. Though most of these signals are routed via 74LVC245s to the original chip socket, so it probably won't help that much. The slew rate of the original chip is closer to 15 ns, so I may have to add some series resistors in the next version.
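For anyone following along, these are the kinds of XDC constraint lines involved (the port name mux is just the one from my code example; treat these as illustrative):

```tcl
# Illustrative XDC lines for the drive/slew settings discussed above
set_property IOSTANDARD LVCMOS33 [get_ports mux]
set_property SLEW SLOW           [get_ports mux]
set_property DRIVE 8             [get_ports mux]
```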

All outputs are registered following a standard pattern. Every signal is clocked in the circuit via a clock-sensitive always block. Then those registers are used to drive an assign statement to one of the output pins. For example:

always @(posedge clk33)
    if (rst) begin
        // reset to the idle pattern: 8 high, 12 low, 8 high, 12 low
        muxr <= 40'b1111111100000000000011111111000000000000;
    end
    else begin
        muxr <= {muxr[38:0], muxr[39]};  // rotate left by one bit
    end

assign mux = muxr[39];  // the registered bit drives the output pin

Also, the Vivado tool is generating either an OBUF or an IOBUF for every output. I was hoping to get this project to a working, if imperfect, state before open-sourcing it and then getting help to design a specific board with the FPGA directly on it rather than the Cmod A7. I'm currently reading my way through 'High-Speed Digital Design' by Howard W. Johnson, as recommended by another poster on this forum. It's an excellent book but some of it is going over my head at this stage.

I agree with those improvements to the CMOD A7 platform, especially adding LVDS signals.

5 hours ago, steddyman said:

Also, the Vivado tool is generating either an OBUF or an IOBUF for every output.

If you don't explicitly use a primitive to instantiate input and output buffers, the tools will infer them for you. Usually you can be lazy, but in general you should get in the habit of adding the lines to your HDL. I confess to being lazy for many projects where I can get away with it. All IO pins need input or output buffers.

5 hours ago, steddyman said:

'High Speed Digital Design' by Howard W Johnson

This is an excellent, older book, and certainly an easier read for someone already familiar with the basic concepts. The analysis for both digital and PCB design has undergone some conceptual evolution since it was written, but it is still valuable.

5 hours ago, steddyman said:

I was hoping to get this project to a working, but imperfect state before open sourcing it and then getting help to design a specific board with the FPGA directly

Yeah, this was a pretty optimistic (naive) objective. Even if you are an expert in digital design and PCB design and have good (expensive) tools to help with designing a good FPGA-based board, the task is complicated... lots of details that need to be taken care of. I suppose that there are PCB design shops that could allow me to spin my own FPGA board, but it would entail a larger investment than I'd want to risk. No one makes the FPGA boards that I want... but I think about it... Sometimes not being naive stops people from accomplishing objectives.

Edited by zygot


OK... you might still check whether the register survives retiming. Use the (* DONT_TOUCH = "true" *) attribute on muxr and similar registers if in doubt.
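In Verilog the attribute goes directly before the declaration, something like this (using the muxr register from your example):

```verilog
// Prevent synthesis/implementation from retiming or merging
// this register away
(* DONT_TOUCH = "true" *) reg [39:0] muxr;
```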

BTW, if you want to design a board and are not using much of the chip's floorspace, check other FPGA brands, e.g. Lattice. They seem more straightforward in the external circuitry (see the "tinyfpga" boards for a reference, or the iCE40 EVB. AFAIK it has on-chip non-volatile memory, so the flash should be optional; what's left on the board is mostly decoupling capacitors...). It's also somewhat slower, which should be more forgiving with regard to noise, signal integrity etc.

Edited by xc6lx45


Thanks, yes I have the DONT_TOUCH attribute on all my input/output pins. I added those some time ago because I noticed Vivado was sometimes optimising them out of the design. I've checked every external IO pin in the schematic of the implementation, and they all have the correct BUF primitive assigned to them.

With regards to the runt pulses I see on the logic analyser, I see none of these in simulation, so I am confident it doesn't relate to an issue with combinational logic. Also, their length is much shorter than my 40 MHz main clock period, and instead always matches the sampling error of my logic analyser (either 5 ns or 10 ns at 200 MHz).

Externally the FPGA is only driving a circuit at a few MHz, so I am confident that with the right amount of study I can get this working (with help). Thanks for the advice on the Lattice chip; once I have a workable design I'll evaluate whether it will fit on something simpler like the Lattice chip or even a MAX 10.

Edited by steddyman

On 1/1/2020 at 4:43 PM, xc6lx45 said:

In RTL, define them as outputs driving zero, set DRIVE to 24 (mA, the maximum value) and SLEW to FAST.

 

Can I ask what the reason for this particular recommendation was? Everything else I have read suggests my outputs should have the SLOW slew rate and minimal drive current, to prevent strong voltage swings.


The idea was to create additional GND pins by pulling outputs low with the smallest possible impedance. But I recall you didn't have spare pins to start with, so it won't help.
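As a sketch of what I meant (pin names are placeholders, not from the actual design):

```verilog
// "Virtual ground" pins: tie spare outputs low so they act as
// additional low-impedance return paths
assign spare_gnd0 = 1'b0;
assign spare_gnd1 = 1'b0;
```

```tcl
# Strongest, fastest driver = lowest impedance path to GND
set_property DRIVE 24  [get_ports {spare_gnd0 spare_gnd1}]
set_property SLEW FAST [get_ports {spare_gnd0 spare_gnd1}]
```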

For signal drivers, yes, try to run as slow as possible.


Ok, thanks for the quick response.

The only pins on the Cmod A7 currently not connected to anything are PIO15 and PIO16 (ADC) and PIO18. I don't currently have those set as outputs because I wasn't sure if the ADC pins could or should be used that way, and I only recently stopped using PIO18.

I am actually trying to recreate the C64 VIC-II chip using an FPGA. This is interfaced to the original socket on the C64 main board, and outputs are buffered by 74LVC245s. The complexity is that the VIC-II can become a bus master, so signals can either be flowing in from the C64 or out from the Cmod A7; normally the VIC-II controls the bus on the negative edge of the clock and the CPU on the positive edge. I'm really struggling with this and close to giving up. I am probably reading the ground noise all wrong, so maybe getting false positives. I was measuring the GND noise by connecting the GND lead of the probe to the GND near the chip on the C64 board, then putting the probe signal lead on a GND on my board at various points. I was seeing at worst around 1.2 V differential, but usually around 700 mV.

The actual issue I am facing is that when using a logic analyser I am seeing frequent transitions to HIGH where there should be none on the BA line (PIO21) being fed into the IC socket from my board. This output signal is fed by a clocked reg, but the transitions are much faster than even the internal clock pulses. The false signals are usually at the minimum duration you would expect based on the sampling frequency of my logic analyser (200 MHz). My internal clock that drives all the other logic is only 40 MHz, and the transitions are much shorter than this. This led me down the path of noise, but I've exhausted my knowledge of the cause now.

Edited by steddyman

On 1/12/2020 at 8:55 AM, steddyman said:

With regards to the runt pulses I see on the logic analyser, I see none of these in simulation, so I am confident it doesn't relate to an issue with combinational logic

Well, there's behavioral simulation and there's timing simulation. You get to have "confidence" about what your logic is doing in the real world after performing a timing simulation based on the post-route design. If your design depends on the behavior of external devices, then perhaps even that's not always enough.

7 hours ago, steddyman said:

the VIC II can become a bus master

I don't know anything about the VIC II. The general strategy for debugging is to break up a complex problem into a number of simple problems and establish some sort of baseline for a number of isolated, reduced-capability operations. So perhaps do read-only first, then write-only. If you have two bus masters, then only allow one, doing a limited set of operations.

A common problem with bi-directional data is bus contention. Drivers don't stop driving instantaneously. Are you sure that there are periods where neither side drives the bus between direction changes?

Understand that any advice you get from here is based on a limited understanding of what you are doing. Still, based on what's been revealed so far, you've gotten good leads on the usual suspects. What no one can do is debug it for you, except you.

Time for a quote from a childhood book on building your own telescope that was evidently read by the authors of OCTAVE.

I paraphrase: "It's easier to build a 5" (reflecting) telescope and then an 8" telescope than to build an 8" telescope." I'd add quicker, too. By way of a quick, and possibly inadequate, explanation: hand polishing a glass blank takes more than understanding the basic concepts. It requires one to develop a "feel" for what's actually happening as you work, and muscle memory, and ever-increasing fine motor skill. The skill and effort to do an adequate job increase more than linearly with increasing glass blank diameter. This simple concept is applicable to most endeavors. The intellect accounts for only a part of what we conceive of as "knowledge".

Edited by zygot


Thanks Zygot

No, there really is no time when the bus doesn't have the potential to be driven. Normally the CPU (6510, which is a 6502 variant) drives the address and data bus when the clock signal (also generated by the VIC-II) is high, and the VIC-II drives the address and data bus when the clock signal is low. However, there are some conditions that cause the VIC-II to hijack both phases of the clock. I'm testing in-circuit, so it is very difficult to simulate a small part of the functionality.

However, I think I might have an idea what is happening, which is unfortunately going to require a rework of the circuit. I've realised that the 74LVC245 is quoted as 5.0 V signal compatible, but on closer inspection that is with TTL levels only. The VIC-II and the 6510 are NMOS, which means they use CMOS signal levels. The high-level output of the 74LVC245 could be as low as 2.7 V, though I've observed it is closer to 3.3 V. However, NMOS inputs require a minimum of 3.5 V to be considered a high signal. So while it is fully compatible for inputs from the C64, it is borderline when sending signals to the C64. I'll have to swap to the dual-voltage-rail version of the chip.

What is very confusing for me when probing the original VIC-II chip in-circuit is that outputs tend to be high at around 3.5 V, and inputs tend to be at high signal levels of around 4.8 V. That's the opposite of everything I've read, which says Voh is 4.8 V and Vih is 3.5 V for CMOS. Based on the measured voltages of my FPGA emulation of the chip, my output signals are around 3.3 V, but with noise spikes of around 200 mV above or below.

Edited by steddyman

14 hours ago, steddyman said:

No, there really is no time when the bus doesn't have the potential to be driven. Normally the CPU (6510, which is a 6502 variant) drives the address and data bus when the clock signal (also generated by the VIC-II) is high, and the VIC-II drives the address and data bus when the clock signal is low. However, there are some conditions that cause the VIC-II to hijack both phases of the clock

That isn't really my point. Consider a signal or bus with 3 potential drivers: A, B and C. It could be uni-directional or bi-directional. Only one driver may be on at a time. If A is currently driving the signal(s), then it has to be disabled before B or C can be a driver. Disabling and enabling any of the drivers will take some ns to happen. There are a few ways to accomplish this, depending on what the signal does. You could use a mux to drive the bus (meaning that the bus only has one driver and direction) and select which of the sources has control. You could use tri-state buffers to turn drivers on or off. With tri-state buffers you need to allow enough time after disabling one driver before you can enable another to avoid contention. Then there are transmission line considerations involving length, impedance etc.
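A sketch of that tri-state turnaround idea (all names invented; the one-cycle dead time after a direction change is the point):

```verilog
// Bus turnaround sketch: after the direction input changes, the
// driver stays disabled for one full cycle before re-enabling,
// so both sides are never on at once.
module bus_turnaround (
    input  wire       clk,
    input  wire       dir,    // 1 = we drive, 0 = we listen
    input  wire [7:0] dout,
    inout  wire [7:0] bus
);
    reg dir_q    = 1'b0;
    reg drive_en = 1'b0;

    always @(posedge clk) begin
        dir_q    <= dir;
        // disable immediately on a change, enable only after dir
        // has been stable for a full cycle (the dead time)
        drive_en <= dir & dir_q;
    end

    assign bus = drive_en ? dout : 8'bz;
endmodule
```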

14 hours ago, steddyman said:

Based on the measured voltages of my FPGA emulation of the chip, my output signals are around 3.3 V, but with noise spikes of around 200 mV above or below.

Well, if your FPGA IO bank Vccio is 3.3 V, then your LVCMOS33 outputs (or inputs) can't be higher than that. It sounds as if you are on your way to becoming a digital designer, at least to the point where you've started reading data sheets. Keep reading and figuring this stuff out. Xilinx has a datasheet for your device. What you describe could be transmission line artifacts, as previously described, that need to be eliminated. That was still the case back in the day of 1 MHz clocked 6502 processors.

Edited by zygot


>> address and data bus when the clock signal (also generated by the VIC II) is high, and the VIC II drives the address and data bus when the clock signal is low.

>> Only one driver may be on at a time. If A is currently driving the signal(s) then it has to be disabled before B or C can be a driver. Disabling and enabling any of the drivers will take some ns to happen.

... and driving a bus from more than one source at any given "in-between" time will cause massive digital spikes, because it effectively short-circuits the power supply through the high-side driver, bus wire and low-side driver. Now I'm guessing, but it may well be that 1980-era logic chips were more forgiving in this respect because of their slow digital circuits. I could imagine that a vintage tri-state driver enabled by the positive (or negative) clock half-period has a conduction angle well below 180 degrees.

I could emulate this using a higher-frequency auxiliary clock (for example, 5x the frequency) that drives a signal bit as Z111Z (at the 5x frequency), with Z = tri-state. At megahertz-ish frequencies this should not be a design challenge; I could even use the SERDES IO blocks to get it done with minimum resources / timing hassle.
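A sketch of that Z111Z scheme (names invented; it assumes the 5x clock is phase-aligned so that the five slots span one bus half-period):

```verilog
// Guard-banded driver: a 5x clock walks a 5-slot counter; the
// output enable asserts only in the middle three slots, leaving
// tri-state "guard bands" at both edges of the half-period.
module guarded_driver (
    input  wire clk5x,   // 5x the bus clock, phase aligned
    input  wire d,       // value to drive
    inout  wire bus
);
    reg [2:0] slot = 3'd0;

    // slot sequence 0..4 maps to Z 1 1 1 Z
    wire oe = (slot != 3'd0) && (slot != 3'd4);

    always @(posedge clk5x)
        slot <= (slot == 3'd4) ? 3'd0 : slot + 3'd1;

    assign bus = oe ? d : 1'bz;
endmodule
```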

 

Edited by xc6lx45


Thanks. Yes, I am using the 74LVC245 to provide direction control and a tri-state bus via the OE pin. I'll take a look at the timing of switching between sending and receiving data.

Oddly the 74LVC245 data sheet shows that the maximum output high voltage is VCC + 0.5V which I would have expected to be VCC - 0.5V.

However, can you explain my observations of voltage on the original NMOS chip? The voltage range for outputs appears to be 0 -> 3.5 V, while the voltage range for inputs appears to be 0 -> 5 V. This is the reverse of this guide: https://www.allaboutcircuits.com/textbook/digital/chpt-3/logic-signal-voltage-levels/

32 minutes ago, steddyman said:

Oddly the 74LVC245 data sheet shows that the maximum output high voltage is VCC + 0.5V which I would have expected to be VCC - 0.5V.

There's maximum Voh, and then there's the maximum allowable voltage that an output pin or input pin can be exposed to without causing damage. Usually there are protection diodes in LSI/MSI packages complicating the analysis, but you don't want to exceed the maximum allowable specifications or rely on those diodes. You're probably thinking "how would an output pin get exposed to a voltage higher than Voh?". One way would be to drive a highly inductive load like a motor winding or relay. Another would be driving a poorly designed transmission line (network).

32 minutes ago, steddyman said:

Voltage range for inputs appears to be 0 -> 5V.

Not sure what that means. For devices that are 40 years old, using 40-year-old IC processes and lithography, you have to do some historical spelunking to understand what these devices were designed for. The VIC II is obviously designed with a fairly specific application in mind; 5V logic was the norm for the day. Instead of trying to connect to old hardware, the whole system should fit into an Artix 200T on the Nexys Video nicely... with a bit of work... though not 100% code compatible, which I suspect is one of your requirements. Pick your poison... or, er, passion.

Edited by zygot

1 minute ago, zygot said:

There's maximum Voh and then there's maximum allowable voltage that an output pin or input pin can be exposed to without causing damage. Usually there are protection diodes in LSI/MSI packages complicating the analysis, but you don't want to exceed the maximum allowable specifications or use those diodes.

Not sure what that means. For devices that are 40 years old, using 40-year-old IC processes and lithography, you have to do some historical spelunking to understand what these devices were designed for. The VIC II is obviously designed with a fairly specific application in mind; 5V logic was the norm for the day. Instead of trying to connect to old hardware, the whole system should fit into an Artix 200T on the Nexys Video nicely... with a bit of work... though not 100% code compatible, which I suspect is one of your requirements. Pick your poison... or, er, passion.

What this means is that if I put my oscilloscope GND lead on the GND pin of the VIC-II and the probe tip on an input-only signal on the chip, I see signals that alternate between 0 V and 5 V, with low signals closer to 0 V. When I do the same test on an output-only signal, I see the high signal level at around 3.5 V. I've done quite a bit of research on NMOS; it is based on N-type MOSFETs, which are still popular today, with the same signal levels and noise margins as CMOS.

My purpose here is not to reproduce the C64; there are plenty of projects that do that on an FPGA. The purpose is to reproduce the VIC-II for original C64 owners, but with enhanced features and connectivity.

1 minute ago, steddyman said:

What this means is that if I put my oscilloscope GND lead on the GND pin of the VIC-II and the probe tip on an input-only signal on the chip, I see signals that alternate between 0 V and 5 V, with low signals closer to 0 V.

What this means is that if you are connecting your FPGA to a device that wants to pull its inputs to some level that exceeds the specification for your FPGA IO, then your interface design needs to be more complicated. So far we've been treating the VIC II as a black box. That's fine, but it has risks when making physical connections for empirical observations.

2 minutes ago, zygot said:

What this means is that if you are connecting your FPGA to a device that wants to pull its inputs to some level that exceeds the specification for your FPGA IO, then your interface design needs to be more complicated. So far we've been treating the VIC II as a black box. That's fine, but it has risks when making physical connections for empirical observations.

Apologies, I think I'm not describing it well here. There is no FPGA involved in this scenario. This is an original working C64 with an original working VIC-II chip showing those voltage characteristics.

My FPGA circuit will replace this chip, but everything is buffered to the FPGA using 5.0 V tolerant buffer ICs.

2 hours ago, steddyman said:

My FPGA circuit will replace this chip, but everything is buffered to the FPGA using 5.0 V tolerant buffer ICs.

I got that.

The datasheet for the TI version of your buffer states:

"VO Voltage range applied to any output in the high or low state (2) (3) –0.5 (min)   VCC + 0.5 (max)" I've italicized my own commentary to make the copy/paste operation readable.

So, if your buffer is trying to drive the VIC II, is powered by 3.3 V, and the VIC II is pulling the outputs on that side to 5 V, you are exceeding the Absolute Maximum Ratings of your buffer. "5 V tolerant" is a term that generally applies to inputs.

Do read up on interfacing to CMOS devices. It's possible to encounter high current levels by driving CMOS inputs to levels well below Vih.

[edit] I should add that whenever you have mixed technologies in a design, i.e. CMOS, NMOS, PMOS, TTL etc., you have to meet all requirements and limitations for the devices involved. When dealing with an application-specific device like the VIC II you could encounter some 'funky', clever, unusual design flourishes. Also be aware that while datasheets are technical and often written with input from engineers, the 'deciders' of final content are from the sales/marketing departments, who are not above suppressing, or even failing to mention, design "features" and specifications that might lead potential customers away from designing in a particular device. Developing a feeling for this is one of the intangibles associated with my telescope commentary that everyone has kindly avoided responding to. This stuff wouldn't be any fun if it were easy; and no one would pay for my consulting services. BTW, my bill is in the mail :) .

** I'm fond of the term 'decider' if only because the person who created it left that responsibility to a subordinate.... ah, the good old days.

Edited by zygot


>> Oddly the 74LVC245 data sheet shows that the maximum output high voltage is VCC + 0.5V which I would have expected to be VCC - 0.5V.

That's probably for tri-state operation. Go more than 0.5 V above the high rail and the chip-internal protection diode to Vdd will conduct. Makes sense to me, at least.

2 hours ago, zygot said:

So, if your buffer is trying to drive the VIC II, is powered by 3.3 V, and the VIC II is pulling the outputs on that side to 5 V, you are exceeding the Absolute Maximum Ratings of your buffer. "5 V tolerant" is a term that generally applies to inputs.

Thank you both again for your incredible patience in still answering my questions.

The TI spec sheet also states the following: 'Supports Mixed-Mode Signal Operation on All Ports (5-V Input/Output Voltage With 3.3-V VCC)'

My circuit isn't interacting with the VIC-II; it is replacing it. But I understand your point: it is interfacing to the 6510 and the logic chips on the C64 motherboard, which would be the equivalent in your statement. What I don't understand is your statement 'pulling the outputs on that side to 5 V'. Do you mean that when my circuit is driving a logic level 0 to an input on the 6510, the 6510 will be pulling that line to 5 V?

TI does a dual-voltage-rail version of the 74LVC245, the SN74LVC4245, which has a 5 V side and a 3 V side designed for driving CMOS. But my observations of the voltages on the original VIC-II don't show it is incompatible with the output ranges of the standard version.

Edited by steddyman

41 minutes ago, steddyman said:

What I don't understand is your statement 'pulling the outputs on that side to 5 V'.

I'm simply referencing your own observations. I don't have a datasheet for the VIC II and don't know its requirements, IO structures, or how it works. I can only surmise behavior based on your observations, as I don't have the hardware. I just don't have the information to answer all of your questions definitively. It's possible that useful information about the system is buried online or in print somewhere. But specifications have to be met whether you know why a device seems to behave in a particular manner or not.

The practice of reverse engineering has a long and storied history, and is practiced by many more people and companies than care to admit it. The 6502 came about by one curious guy using a microscope to painstakingly examine and understand how the MC6800 was fabricated by looking at the die. A few simple changes to the organization and machine code and voila, a new company is formed that survives the lawyers and competes with far larger vendors. Not my style, but then again I'm content to live a rather modest life of obscurity based on a more modest net worth.

Edited by zygot


I want to take this project on for two main reasons.

1. I loved the C64 back in the day and still enjoy using it today.

2. I want a much better quality video signal from the C64 on modern hardware.

I expect those two reasons to drive me through any frustrations of learning electronics and VHDL, whereas something I don't have an emotional investment in wouldn't. The VIC-II has never been reverse engineered, and any knowledge of it is based on observations of software running on the C64. I don't expect to ever make any money out of this and will likely open-source the project once I have something to share. But I've already spent months on it, and I haven't given up yet :)
