Returning to Virtual PC, I'd seen some reports that Virtual PC for the Mac doesn't run on the new G5 machines. Omar Shahine explains why. It's all to do with endianness. Earlier PowerPC processors were bi-endian, as this pretty ancient Byte article explains:
The PowerPC is a bi-endian processor; that is, it supports both big- and little-endian addressing modes. This bi-endian architecture enables software developers to choose either mode when migrating OSes and applications from other machines. The OS establishes the endian mode in which processes execute; the default mode is big-endian. Once a mode is selected, all subsequent memory loads and stores are determined by the memory-addressing model of that mode.
To support this hardware feature, 2 bits in the MSR (machine state register) are maintained by the OS as part of the process state. One bit (ILE) specifies the endian mode in which the kernel runs when processing an interrupt; the other (LE) specifies the processor's current operating mode. Thus, the mode can be changed on a per-process basis, which is critically important for foreign OS emulation.
When an interrupt occurs, the processor saves the current MSR and loads an MSR for the interrupt-processing routine. The value of the ILE bit in the old MSR is copied into the LE bit in the new MSR. When execution resumes in the interrupted process, its MSR is reloaded with its LE and ILE bits intact.
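The LE/ILE juggling described above can be sketched as a toy model in Python. The bit positions and function names here are mine, chosen for illustration; the real MSR layout is defined in the PowerPC architecture books:

```python
# Toy model of the LE/ILE handling described above. Bit positions
# are made up for illustration, not the real PowerPC MSR layout.
LE = 0b01   # endian mode the processor is currently running in
ILE = 0b10  # endian mode interrupt handlers should run in

def take_interrupt(msr):
    """On an interrupt, save the old MSR and build the handler's MSR:
    the old ILE bit is copied into the new LE bit."""
    saved = msr
    new = msr & ~LE          # start from the old MSR, clear LE
    if msr & ILE:
        new |= LE            # copy ILE into LE
    return saved, new

def return_from_interrupt(saved):
    """Resuming the interrupted process restores its saved MSR,
    with its own LE and ILE bits intact."""
    return saved
```

So a big-endian kernel can field interrupts from a little-endian process (or vice versa), which is what makes per-process mode switching workable for emulation.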
It seems that the G5/970 chip does not have this feature, though I couldn't find anything explaining why it has been dropped.
Why don't all machines use the same endianness? Probably for the same reason that every computer manufacturer had to have its own version of Unix. There were even middle-endian machines, where the number 0xDEADBEEF was stored as BE EF DE AD. William Verts suggests:
You may see a lot of discussion about the relative merits of the two formats, mostly religious arguments based on the relative merits of the PC versus the Mac. Both formats have their advantages and disadvantages.
In "Little Endian" form, assembly language instructions for picking up a 1, 2, 4, or longer byte number proceed in exactly the same way for all formats: first pick up the lowest order byte at offset 0. Also, because of the 1:1 relationship between address offset and byte number (offset 0 is byte 0), multiple precision math routines are correspondingly easy to write.
In "Big Endian" form, by having the high-order byte come first, you can always test whether the number is positive or negative by looking at the byte at offset zero. You don't have to know how long the number is, nor do you have to skip over any bytes to find the byte containing the sign information. The numbers are also stored in the order in which they are printed out, so binary to decimal routines are particularly efficient.
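The byte layouts and the trade-offs Verts describes can be sketched in Python with the standard struct module (the middle-endian layout is the word-swapped one mentioned earlier; the function names are mine):

```python
import struct

value = 0xDEADBEEF

# The same 32-bit number in the three layouts discussed above.
big = struct.pack(">I", value)     # DE AD BE EF
little = struct.pack("<I", value)  # EF BE AD DE
middle = struct.pack(">HH", value & 0xFFFF,  # BE EF DE AD:
                     value >> 16)            # word-swapped big-endian

def add_le(a, b):
    """Multi-precision add of two little-endian byte strings of equal
    length. The loop starts at offset 0 whatever the width, which is
    the convenience of the little-endian form described above."""
    out, carry = bytearray(len(a)), 0
    for i in range(len(a)):
        s = a[i] + b[i] + carry
        out[i], carry = s & 0xFF, s >> 8
    return bytes(out)

def is_negative(buf):
    """For a signed big-endian integer of any width, the sign bit is
    always in the top bit of the byte at offset 0."""
    return bool(buf[0] & 0x80)
```

For example, `add_le(struct.pack("<I", 300), struct.pack("<I", 500))` produces the little-endian encoding of 800 without ever needing to know it is working on 4-byte numbers, while `is_negative` reads one byte no matter how wide the big-endian value is.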
It's one of those irritating, seemingly unnecessary issues that have caused me extra work over the years, much like the choice of the backslash as the path separator in DOS and Windows. I often wondered whether the reason for this was sheer bloody-mindedness to be different, but according to the Myths About Microsoft page:
UNIX's chdir/cd, mkdir (shortened to md), and directory tree notions were grafted onto DOS's drive letters, and the slash (/) converted to a backwards slash (\) in a move that has driven "bilingual" people crazy ever since (it wasn't for that purpose; MS had already used "/" as an option delimiter where UNIX used the "-", and felt they couldn't change that for fear of breaking backwards compatibility).