
The Processor Thread

steve_bank

An ARM 64-bit instruction set. All processors have the same basic kinds of instructions:

Logic
Comparison
Bit operations
Arithmetic
Branching
Memory access


What separated the Intel x86 and Motorola 68k was memory architecture.

The 68k was flat; any user had full access to all memory.

x86 used segmented memory. I don't know about current processors, but originally you had 64k segments. In the early days 64k was a large program. It was intended to support a multi-user OS where users or apps could be quickly switched in and out. It was called context switching: the entire state of an app was stored and another switched in.
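For anyone who never wrote for it, here is a minimal C sketch of how real-mode 8086 segmentation formed an address (the address arithmetic is the documented 8086 behavior; the program itself is just an illustration):

#include <stdint.h>
#include <stdio.h>

/* Real-mode 8086 addressing: a 16-bit segment register is shifted
   left 4 bits and added to a 16-bit offset, giving a 20-bit (1 MB)
   physical address. Each segment therefore spans at most 64 KB. */
static uint32_t real_mode_address(uint16_t segment, uint16_t offset)
{
    return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
}

int main(void)
{
    /* Different segment:offset pairs can name the same physical byte. */
    printf("%05X\n", (unsigned)real_mode_address(0xB800, 0x0000)); /* B8000 */
    printf("%05X\n", (unsigned)real_mode_address(0xB000, 0x8000)); /* also B8000 */
    return 0;
}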

With the 386, the complexity of the instructions to manage apps became too complicated to handle by direct coding and required tools.

Real/Protected Mode. Early 286 and 386 users often ran them in real mode as fast 8086 processors; protected mode was complicated.

 
An interesting feature of computer architectures is that pre-chip computers had much more variation in data sizes than chip ones. A chip computer is one with a Turing-complete CPU on a single computer chip.

 Word (computer architecture) - "In computing, a word is the natural unit of data used by a particular processor design."

Some early ones used decimal data, but most pre-chip computers used binary data or both binary and decimal data, and all chip computers use binary data.

I'll do fixed integer sizes - some architectures support varying numbers of decimal digits.
  • Decimal ones: 1, 2, 6, 8, 10, 12, 15, 16, 23, 24, (50) - the last decimal one was in 1960
  • Binary ones: 4, 6, 8, 9, 11, 12, 15, 16, 18, 20, 24, 25, 26, 27, 30, 31, 32, 33, 34, 36, 39, 40, 48, 50, 60, 64, 65, 72, 75, 79, 100
All the chip computers have power-of-2 sizes: 4, 8, 16, 32, 64

Signed number representations - Pre-chip computers used all three main possibilities:
  • Sign-magnitude: one bit the sign and the rest the magnitude
  • Ones complement: for negative, flip all the bits
  • Twos complement: for negative, flip all the bits and add 1 in unsigned fashion
Chip computers use only twos complement.
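A small C sketch of the three encodings for an 8-bit word, just to make the list concrete (illustrative only; real machines wired this into the adder):

#include <stdint.h>
#include <stdio.h>

/* Encode a small negative value -m (0 < m < 128) in each of the
   three classic 8-bit signed representations. */
static uint8_t sign_magnitude(uint8_t m)  { return (uint8_t)(0x80 | m); }  /* sign bit plus magnitude */
static uint8_t ones_complement(uint8_t m) { return (uint8_t)~m; }          /* flip all the bits */
static uint8_t twos_complement(uint8_t m) { return (uint8_t)(~m + 1); }    /* flip all the bits, add 1 */

int main(void)
{
    uint8_t m = 5;  /* encode -5 */
    printf("sign-magnitude:  0x%02X\n", (unsigned)sign_magnitude(m));   /* 0x85 */
    printf("ones complement: 0x%02X\n", (unsigned)ones_complement(m));  /* 0xFA */
    printf("twos complement: 0x%02X\n", (unsigned)twos_complement(m));  /* 0xFB */
    return 0;
}

Note that the first two representations have both a +0 and a -0, while twos complement has a single zero, one reason it won out.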

 Floating-point arithmetic - all the more recent chip computers use the IEEE 754 standard, which was established in 1985. I won't discuss it in detail here, but it specifies various conventions for the sizes of the exponent part and the fractional part, combined with various conventions for behavior under arithmetic.
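To give a flavor of the layout, the common 32-bit ("single precision") format packs a sign bit, an 8-bit biased exponent, and a 23-bit fraction into one word. A quick C sketch that pulls those fields apart (the field widths and bias are from the standard; the program is just an illustration):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Split an IEEE 754 single-precision float into its three fields:
   1 sign bit, 8 exponent bits (bias 127), 23 fraction bits. */
static void dump_float(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the bit pattern */
    unsigned sign     = bits >> 31;
    unsigned exponent = (bits >> 23) & 0xFF;
    unsigned fraction = bits & 0x7FFFFF;
    printf("%g: sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           (double)f, sign, exponent, (int)exponent - 127, fraction);
}

int main(void)
{
    dump_float(1.0f);    /* sign=0 exponent=127 fraction=0 */
    dump_float(-0.75f);  /* sign=1 exponent=126 fraction=0x400000 */
    return 0;
}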
 
Early computers were binary and decimal, and there was one ternary computer: Setun (1959–1965). Yes, base 3. But after 1960, they were all binary, with some of them optionally supporting decimal numbers.

As I'd posted earlier, pre-chip computers had a variety of integer sizes and the three main signed-integer formats, but chip computers all use power-of-2 integer sizes and twos-complement signed integers.

Early computers were one-off designs, and the first computers to share instruction-set compatibility were IBM's System/360 ones, introduced in 1964. Its successors are the System/370 series, introduced in 1970, the System/390 series, in 1990, and now the Z series, in 2000.

There were some other pre-chip architecture families, like the VAX.

Early chip computers were also one-offs, though later ones formed families like the Motorola 68K, the Intel x86/x64, the SPARC, the DEC Alpha, the POWER/PowerPC, and the ARM.
 
Don't forget the Z80, 8051 and 6502. They were important.

The Commodore computers were 6502-based.

The DEC VAX was an important computer. Also the PDP line.

The first company I worked at used a VAX for software development; it had an open bus architecture and 3rd-party plug-in boards. You might call it the PC of the day.


 
Miscellaneous musings about processors in the 1960's.

The IBM 1620 -- the first machine I ever programmed -- was a decimal machine. It did multiplications and even additions via lookup tables in low memory! Those tables were loaded by the run deck for every job in case the previous job inadvertently (or maliciously!) over-wrote those tables.

The CDC 6600 was the premier supercomputer of that era. (The CDC 6400 was a slower-speed variant.) It had a CPU to do the number-crunching and ten PPUs to handle I/O and job control. The CPU used 60-bit words; the PPU used 12-bit words; so ten or two 6-bit characters could be fit into the respective words. There was only one physical PPU, slow and primitive: It rotated through ten register sets to provide ten virtual PPUs. The CPU had 18-bit address registers and up to 2^17 60-bit words of main memory could be attached: that's almost 1 megabyte, very large for that era. Even more memory could be attached via an ECS (Extended Core Storage) option, but its contents had to be transferred to the main memory before use.

The CDC machines used one's-complement arithmetic, so there was both a positive zero and a negative zero. The only way to get negative zero as an integer arithmetic result was to add negative zero to negative zero (or equivalently to subtract positive zero from negative zero.)
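Here's an 8-bit textbook ones'-complement adder in C, with the end-around carry, just to show how negative zero behaves. This is a generic sketch, not the CDC's actual circuit, which (as noted above) only produced negative zero from -0 + -0:

#include <stdint.h>
#include <stdio.h>

/* Ones'-complement addition, 8 bits: add as unsigned, then fold any
   carry out of the top bit back into the low bit (end-around carry). */
static uint8_t oc_add(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + b;
    return (uint8_t)((sum & 0xFF) + (sum >> 8));
}

int main(void)
{
    uint8_t neg_zero = 0xFF;  /* all ones = -0 in ones' complement */
    printf(" 5 +  3 = 0x%02X\n", (unsigned)oc_add(0x05, 0x03));         /* 0x08 */
    printf(" 5 + -3 = 0x%02X\n", (unsigned)oc_add(0x05, 0xFC));         /* 0x02, carry folded back in */
    printf("-0 + -0 = 0x%02X\n", (unsigned)oc_add(neg_zero, neg_zero)); /* 0xFF, still negative zero */
    return 0;
}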

The format of floating point numbers had nice properties. If the numbers were normalized they could be compared as integers and give the correct floating-point compare. The results of operating on normalized numbers were always normalized. The floating-point multiply could also be used for integer multiply. Like IEEE-754 numbers, CDC could represent and generate ±Infinity and ±Indefinite.
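IEEE 754 has the analogous compare-as-integers property for non-negative values, since the exponent field sits above the fraction. A quick C check of that analogue (this demonstrates the IEEE case, not the CDC format itself):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* For non-negative IEEE 754 floats, the raw bit patterns sort in the
   same order as the values, so an unsigned integer compare suffices. */
static uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    float a = 0.5f, b = 2.75f;
    printf("float compare:   %d\n", a < b);                          /* 1 */
    printf("integer compare: %d\n", float_bits(a) < float_bits(b));  /* 1 */
    return 0;
}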

During the mid 1970's I became very familiar with IBM mainframes from the 370/135 all the way up to the 370/3033. If there's interest I may summarize those models.
 
The 68k and x86 had the same basic instructions. If you learned one processor you learned them all.

Intel won the battle with Motorola for PC processor dominance because they maintained software compatibility.

Those using the 6800-09 had to rewrite code from scratch for the 68k.
 
The 68k and x86 had the same basic instructions. If you learned one processor you learned them all.
Only if one doesn't look very closely at them. How is what they have in common much different from other CISC instruction-set architectures? Like PDP ones and the VAX one and the IBM 360/370/390/Z one.

The Intel-x86 series started off with 16-bit registers and segmented addressing: a 32-bit far pointer divided into a 16-bit segment half and a 16-bit offset half, mapping onto a 20-bit (1 MB) physical space. Later ones got a flat 32-bit address space, however.

The Motorola-68k series had 32-bit registers and a flat 32-bit address space from its beginning.
 
IBM is International Business Machines, a maker of Intimidating Big Machines.

The company has made numerous computer models with several instruction-set architecture (ISA) families over the decades.

Around 2000, it consolidated its nomenclature for its mainframe and server lines:  IBM eServer -- p, i, x, z

The pSeries was for POWER and PowerPC ISA's:

RT PC (1986; ROMP architecture) - RS/6000 (1990) - pSeries / System p (2000)

The iSeries line eventually also moved to the POWER/PowerPC family of ISA's:

IBM System/3 (1969) - System/32 (1975) - System/34 (1977) - System/36 (1983) - System/38 (1978) - AS/400 (1988) - iSeries / System i (2000)

The two lines were merged as Power Systems (2008) with POWER/PowerPC.

The third one is various Intel-x86 servers, renamed xSeries / System x (2000). It was sold off to Lenovo in 2014.

The fourth one is the zSeries:

System/360 (1964) - System/370 (1970) - 3081 (1980) - 3090 (1985) - System/390 (1990) - zSeries / System z (2000)

So IBM now only has its p and z series.
 
The 68k and x86 had the same basic instructions. If you learned one processor you learned them all.
Only if one doesn't look very closely at them. How is what they have in common much different from other CISC instruction-set architectures? Like PDP ones and the VAX one and the IBM 360/370/390/Z one.

The Intel-x86 series started off with 16-bit registers and segmented addressing: a 32-bit far pointer divided into a 16-bit segment half and a 16-bit offset half, mapping onto a 20-bit (1 MB) physical space. Later ones got a flat 32-bit address space, however.

The Motorola-68k series had 32-bit registers and a flat 32-bit address space from its beginning.

At my first job circa 1980 we made the equivalent of a PC dedicated to telecommunications.

I was intimately familiar with x86, 6800/09, and 68k. The main difference between x86 and 68k was the memory model/architecture.

My second job was at Intel in Hillsboro, OR, back when they made computers and boards for the Multibus architecture. It was widely used in industry.



As to IBM.


As sales of Apple PCs rose and they were being used in business, IBM became alarmed. I remember the Apple, the sound of the floppy drives.

IBM had a policy of keeping everything within IBM. A decision was made to task somebody to go off and independently develop a PC to compete with Apple. The IBM PC.

It was a backplane bus architecture. Initially IBM did not release electrical specs for the bus or the BIOS code, so third-party plug-in board developers reverse engineered the requirements.

Without a specification some boards would work on one IBM machine but not another.

As clones were developed, IBM eventually released the specs, and it later sold off the PC business.

What made the IBM PC was the open architecture, which Apple did not have.

Apple sold its system as turnkey; you did not have to figure anything out, and there were no third-party items.

With the IBM and the clones you had to figure out something about how it worked.
 
What a cool thread. I have had or used a Commodore 64 (1987 to 1990), [got to see my neighbor's Amiga], an IBM... was it a 286, in 1990? I remember upgrading to a PC that I think was an IBM 486. LP, can you confirm the names of 1989 to 1993 IBM PCs?

I never learned a lot about the hardware, but I learned the lingo enough to be able to impress people by saying that my then-bf and I ran a small BBS on the Commodore 64 "with two 1541 hard drives," and... sigh, I've forgotten the modem. Remember Radio Shack? ohh, I miss it.

We used the C-64 for the BBS, for gaming (I got hooked on "Sid Meier's Pirates!"), and for online chatting on Q-Link People Connection, which became AOL. Online chat billed to one's phone line at $0.06 per minute. Our monthly phone bill was $250 to $300. I just got better work to afford it.

I don't know what sort of processors were used by those late-80s, early-90s machines. The Amiga was more powerful than the C-64, our neighbor insisted. My first PC was junk. My 2nd was the 486 (?), upon which I installed and ran fractal programs for a perpetually moving screensaver.

Intel Inside for me since 2001, or so. I've been through a few Windows machines and now am on a shitass Chromebook that sucks.
 
During the mid 1970's I became very familiar with IBM mainframes from the 370/135 all the way up to the 370/3033. If there's interest I may summarize those models.

There WAS slight interest, so I'll hint at some interesting(?) tidbits about the Clock on some 370's. I'll post details if there's real interest. But for now, I'll just post the briefest of summaries, phrased as puzzles.

370/145 CPU
Almost all digital computers have a single crystal oscillator which develops "Clock" signals whose variations are used to inform circuitry of the timing when logic signals are valid.

On the Model 145 CPU the oscillator output -- a 22.22 MHz square wave with 45 nanosecond period (+22.5/-22.5) -- is immediately fed into one input to an Exclusive-Or gate. Why?

370/158 MP
The Model 158MP is two Model 158 CPUs bolted together (via an extra cabinet primarily for cable routing), with each CPU able to access the other CPU's memory. Each processor has its own 8.696 MHz (115 nsec) clock. The two clocks are INDEPENDENT and thus not synchronized. This causes difficulty when one CPU reads control signals from the other CPU. How was this problem solved?

370/168 MP
The Model 168 is a large expensive machine with logic fitting into 4 cabinets. The 168MP has 9 cabinets; the extra cabinet is injected into the memory paths to allow either processor to access the other's memory. The Model 168MP has one 12.5 MHz (80 nsec) clock per CPU, but when the two processors are configured into MP mode, one clock is ignored and the other used to drive all clocks in the system.

The clock in the memory cabinets is reversed compared with a 4-cabinet uniprocessor: (-40,+40) instead of (+40,-40). Why?
 
Too bad. Maybe they got lazy on PC processor sales. ARM probably has something to do with it.

Yes, Intel is facing a decline in its market position and financial performance, particularly compared to competitors like Nvidia and AMD. Intel has struggled to keep pace with advancements in areas like AI and has experienced significant losses in revenue and market share.

Two things made Intel.

Intel had good low-cost development tools and hardware debuggers, especially compared to Motorola. A hardware debugger-emulator plugs into the processor socket and emulates the processor for hardware and code debugging.

Intel maintained backwards code compatibility as it created the 286 and on.
 
Don't forget the Z80, 8051 and 6502. They were important.

The Commodore computers were 6502-based.

The DEC VAX was an important computer. Also the PDP line.

The first company I worked at used a VAX for software development; it had an open bus architecture and 3rd-party plug-in boards. You might call it the PC of the day.


My first computer with a Z80B was the Excalibur64
 
My first computer was the Sym 1. About $200 in 1979.

6502 processor with a keypad to enter machine code into memory in hexadecimal.

You could turn LEDs on and off, read switches, write to a display, and there was digital I/O. There were good books on the 6502, and it was how I learned processors.


 
Memories, memories. My first computer was an IBM 370/168 at a university. I didn't own it, of course. I did time-sharing, getting a bit of its time for me to do research work on it. That meant logging in and running command-line software, including command-line text editors. One can still do that with ssh, "Secure Shell", and a server that accepts ssh connections. It was only gradually that I moved away from such Intimidating Big Machines, because desktop computers were little better than toys by comparison. But indeed I did, as desktop computers gradually caught up.

IBM still makes successors of the 370/168, as the z-series. These are designed as super reliable super database servers.
 
What I remember is keypunching a deck of cards, handing it in to the computer center, and coming back the next day for a printout.
 
What I remember is keypunching a deck of cards, handing it in to the computer center, and coming back the next day for a printout.
I remember my first computer classes, and they involved punching a deck of cards with a keypunch machine. On this machine, I'd type in what each card was to have. I'd then put this deck into a card reader. But I'd usually get turnaround rather quickly, when a computer-center staffer gave me my printout within a few minutes.

Then I moved to a hardcopy terminal, essentially a remote-controlled typewriter that would alternate between me and the computer doing the typing.

Then to screen terminals, and I've been using them ever since. My first connections ran in command-line mode, even the text editors. I'd get the editor to write out lines so I could see what I was doing. Then I got onto editors that were more-or-less ASCII-art GUI editors, with me using the arrow keys to move the cursor to wherever I wanted to insert or delete text. Then keyboard-and-mouse systems, where I could use the mouse to move around, and where I could click and drag to select text.

This was all over the late 1970's to early 1990's, and not much has changed since then.
 