
A super minor nitpick: it’s jarring to see the Amiga referred to as 16 bit. It wasn’t described that way at the time: it was universally (that I saw anyway) called a 32 bit machine, and reasonably. It had a flat 32 bit address space (although the 68000 itself didn’t support all those address lines because what kind of supercomputer would need 4GB of RAM?). All the registers and operations were 32 bit. Some of the internal operations were implemented in 16 bits, but that was invisible to programmers. Newer models with definitively 32 bit CPUs like the 68060 were nearly 100% backward compatible with older models at the CPU instruction level, even if newer OSes weren’t backward compatible at the API level. In fact, the only program not forward compatible at the instruction level that I remember offhand was Microsoft’s AmigaBASIC. It used the top bits of pointers to store data because the 68000 would ignore them when accessing RAM due to that lack of address lines.
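To make that pointer trick concrete, here's a minimal Python sketch of the general technique (the addresses and tag value are hypothetical; AmigaBASIC's actual encoding isn't documented here). The 68000 simply didn't drive the top 8 address lines, so tag bits stashed there were invisible on a 24-bit bus but fatal on a 32-bit-clean machine:

  # Sketch of the "tag bits in the unused top byte" trick
  # (hypothetical addresses, not AmigaBASIC's real layout).

  ADDR_MASK_68000 = 0x00FFFFFF  # 68000: only 24 address lines driven
  ADDR_MASK_32BIT = 0xFFFFFFFF  # 68020+: all 32 lines decoded

  def tag_pointer(addr: int, tag: int) -> int:
      """Stash an 8-bit tag in the top byte of a 32-bit pointer."""
      return (tag << 24) | (addr & ADDR_MASK_68000)

  ptr = tag_pointer(0x00042000, tag=0x5A)

  # On a 68000, the hardware masks the top byte away: still 0x42000.
  assert ptr & ADDR_MASK_68000 == 0x00042000

  # On a 32-bit-clean machine the full value is decoded: 0x5A042000 is
  # a completely different (likely unmapped) address, hence the crashes.
  assert ptr & ADDR_MASK_32BIT != 0x00042000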

I just don’t see a way to justifiably call the Amiga a 16 bit machine. Although the A1000 had some 16 bit hardware paths, a maxed out A3000 definitely wasn’t 16 bit, and they were nearly completely compatible with each other minus newer features.

The Amiga was a full-on 32 bit machine. It’s weird to hear it called anything else.


While the 68000's registers are 32-bit, the data bus is 16-bit; the A1000, A2000, and A500 that defined the range had 16-bit-fetching chipsets; and they literally had 24-bit address buses. None of this says "32-bit". It can't be overlooked.

Many games crashed on the 32-bit-clean A3000, A1200, A600, and A4000 because programmers used the upper byte of addresses for their IQ or whatever. (Similar issues hit the ARM2-to-ARM3 transition in Acorn machines; even RISC OS itself can be categorized into '26-bit' and '32-bit clean' varieties, because Acorn assumed the address space ignored the upper 6 bits and stored what it liked there.)

The competition before the Amiga's launch solidly called itself "8-bit". The next generation called itself "16-bit" to hype itself. Later machines touted their "32-bit"ness, and then came the Nintendo 64 and PSX on MIPS processors...

All the hedges you made, "don't look here, look there" can be reversed to emphasize the 16-bitness!

Does this say something about you? Did you come to the Amiga later in its life, e.g. 1991-1993, when 68020s/030s/040s were an option? Or were you there in 1985 when it debuted?


The Opteron had a 32 bit HyperTransport bus. Modern CPUs only implement 48 address lines. And yet we’d call all of those 64 bit systems. We wouldn’t call them 32 bit systems, and surely not 48 bit.

The 68k’s ISA is 32 bit through and through, however the underlying implementation looks. It has been since I bought my A1000, marketed as a 32 bit system, in 1985.


> marketed as a 32 bit system, in 1985.

I'm sure there must have been some, but most of Commodore's early Amiga ads didn't mention the number of bits at all, and from looking through old magazines it doesn't seem most vendors did either.


I can see it both ways.

I remember the Amiga always being compared to other "16-bit" machines, like the Apple IIgs, Atari ST, and early Macs.

I also remember the 68000 being referred to as 16/32-bit. Still, from a programmer perspective, the 68000 looked like a 32-bit machine, similar to what Intel did with the 386DX and SX.


This is a classic dispute when it comes to the 68000. I'm inclined to agree with your perspective, actually, but my impression is that it's highly contested.

Commodore and Atari marketed their 68K machines as 16/32-bit, which is I guess technically the most correct. And other 68000-based machines, like the Sega Mega Drive/Genesis, were marketed as 16-bit - it even says it right on top of the unit!


Yeah, I second that 16 bit or 16/32 was far more commonly used than 32, due to the 16 bit bus.

The bus always seemed like the oddest part to zero in on. By analogy, an Opteron in 2003 was a 64 bit CPU with a 32 bit HyperTransport bus, but no one called an Opteron system 32 bit. Judging a CPU by the width of a particular internal implementation detail seems a strange choice IMO.

I think part of it was that, to hardware companies, the bus width is actually extremely important - the whole system is built around it - while the programming model the software guys work with matters less.

And then the other part of it is the marketing angle: everyone knew full 32-bit inside and out chips were just on the horizon. Downplaying the 68k’s 32-bitness would give them a selling point for the 68020.


All ALU operations are also more expensive with 32 bit operands. So: 16 bit data bus, 24 bit address bus, slower arithmetic with 32 bit operands. I never thought of it as a 32 bit CPU.

As someone who was there: it was certainly referred to as a 16 bit machine, both in my high school and in my tiny local demoscene group.

It followed up on our ZX Spectrum and Commodore 64, both 8 bit home computers.


I recall the A500 series as being thought of as 16-bit in the UK -- the 32-bit marketing started with the A1200, and devices based on it, like the CD32 (hence the name).

> the 32-bit marketing started with the A1200

That was because the A1200 was the first Amiga to have a 68020 as the native CPU on the motherboard. The 68020 had 32-bit data registers and 32-bit address registers. Earlier Amigas were designed around the 68000, which was instruction-set compatible with later 680x0 CPUs (which featured backward-compatible supersets). The 68000's data registers only had 16 data lines connected externally, requiring two cycles to read or write 32 bits, and its 32-bit address registers didn't have their upper 8 bits connected to external pins, limiting the directly addressable RAM to 16MB (24 bits). These compromises allowed the CPU to fit in a 64 pin DIP package, while the standard 68020 came in a 114 pin PGA package and was fully 32-bit internally and externally.

However, it's confusing because the A1200 had a lower cost version of the 68020, the 68EC020, which also didn't have the top 8 bits connected and came in a smaller 100 pin QFP package. So technically, it had the same addressable RAM limit as the 68000 (although it had other instruction set and clock speed improvements).

Prior to the A1200 (1992), there was an earlier Amiga model, the A2500 (1989), which came with a full 68020 CPU - but it was really a 68000-based A2000 with Commodore's A2620 add-on accelerator card pre-installed, so it had both CPUs (although the 68000 was unused once the accelerator was added).


For me at least I always remember it being referred to as 16-bit, in all the gaming and computer magazines etc. Part of the 16-bit home computers; I remember the Atari ST being referred to that way as well.

I don’t remember seeing references to 32-bit until the 386/486 days on the home computer side and Sega 32X on the console side.


In the 80s it was fairly common to consider C64, Amstrad 464 and ZX Spectrum 8 bit, while Amiga and Atari ST 16 bit. In Italy we even had two separate video game magazines: Zzap! for 8 bit and The Games Machine for 16 bit.

To someone who was around at the time, this sounds silly. Is the Commodore 64 then a 16-bit machine, because its address pointers are 16 bits? No, the Amiga and related 68000-based machines were generally considered to be 16-bit machines, and their predecessors were all considered to be 8-bit machines.

While I too am a huge fan of the legendary 68000, as well as the proud owner of many Amigas from 1985 onward, the marketing and media reports sometimes glossed over important technical details. The 68000 CPU, which all Amigas from 1985 to 1990 were designed around, does have 32-bit data and address registers but that doesn't mean it was purely a 32-bit architecture - even internally. Some important internal components like the ALU were only 16-bit. Additionally, the external data width was 16-bit, requiring two accesses to read or write a 32-bit register to RAM, which did have a meaningful performance impact since memory access is a critical bottleneck, especially in a CPU with no cache. As you note, at least this 'double pumping' was automatic and mostly hidden from programmers.

The 68000's address registers didn't have their upper 8 bits connected to external pins, limiting the directly addressable RAM to 16MB (24-bits). These external width compromises allowed the 68000 to fit in a 64 pin DIP package while the standard 68020, which did connect all 32 data and address lines, came in a 114 pin PGA package. Large packages with more pins were a significant cost while double-pumping data accesses and a 16MB limit on addressable RAM weren't significant issues for most 1980s desktop computers - especially since the 68000's elegantly orthogonal instruction set was so performant in other ways.

Thus, many of us more technically literate fans broadly thought of the 68000 as having 32-bits internally but 16-bit data / 24-bit address width externally. However, that was incorrect because the arithmetic logic unit (ALU) and two arithmetic units were also 16-bit only, generally requiring at least twice as many cycles even for purely internal 32-bit math operations, whereas the 68020 and later CPUs didn't. That's why the 68000 is probably best described as "a hybrid 16/32 bit internal architecture with 16-bit external data width and 24-bit addressing."
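As a rough illustration of why the 16-bit ALU matters (this shows the principle only, not the 68000's actual microarchitecture; the function name is made up), a 16-bit ALU has to split a 32-bit add into two passes with a carry between them:

  # Sketch: how a 16-bit ALU performs a 32-bit add in two passes.
  # The principle only, not the 68000's actual microcode.

  def add32_via_16bit_alu(a: int, b: int) -> int:
      lo = (a & 0xFFFF) + (b & 0xFFFF)               # pass 1: low halves
      carry = lo >> 16                                # carry out of low word
      hi = ((a >> 16) + (b >> 16) + carry) & 0xFFFF   # pass 2: high halves
      return (hi << 16) | (lo & 0xFFFF)

  assert add32_via_16bit_alu(0x0001FFFF, 0x00000001) == 0x00020000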

It gets even more confusing because some later Amiga models like the A1200 (1992) didn't have a standard 68020 but instead a lower cost version, the 68EC020, which also didn't have the top 8 address lines connected and came in a smaller 100 pin QFP package. So technically it had the same addressable RAM limit as the 68000, although it had full 32-bit internal and external data widths, a 32-bit ALU, a 256 byte instruction cache, and many other instruction set and clock speed improvements common to later 680x0 CPUs. The way a lot of us thought of the 68000's 16/32 architecture as being limited just in memory addressing was really a more appropriate description of the difference between a full 68020 and a 68EC020. The 68000's ALU being 16-bit is the inarguable smoking gun that makes it incorrect to think "it's really a 32-bit CPU internally", as I used to.

However, that should take nothing away from just how incredible the 68000 was. My first computer had the 68000's little brother, the 6809, which was generally the fastest 8-bit CPU clock-for-clock due to being an 8/16 bit design in much the same way the 68000 was 16/32 bit. While the 6809 was incredibly fast, when I got a 68000-based A1000 in 1985 and programmed it in assembly language, it blew my mind how fast it was. Then in 1988 when I added an A2620 accelerator card to my A2000, its full 68020 with 32-bit internals and direct 32-bit read/write to 4MB of RAM was like going from a Ferrari to a Lear Jet. Despite how the 68000 was confusingly marketed and inaccurately described by some media, it was truly a leap forward, but the reality is the 68020 was the first true 32-bit CPU in the line.


Strunk & White have been telling us not to use the passive voice since, what, the 1920s? “We will write a program”, or “one of us will write a program” works without passivizing it.

What if the program is to be subbed out?

Just because two guys got together a hundred years ago and wrote some stuff doesn't mean it's worth dedicating a life of writing to.

Set aside the decades of arguments that their style guide is at best only useful for a small subset of writing; the two themselves admit, in a variety of ways, that there can be no one universal style guide. You can see many examples in the text itself in which the authors seem to forget their own advice.

Consider:

Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that he make every word tell.

Which could be communicated much more succinctly, containing exactly the same information without any extra exposition:

Vigor is concision. A piece should contain nothing unnecessary, just as a drawing has no unnecessary lines [needless repetition](and a machine no unnecessary parts). This requires not that all sentences be short, but that every word tell.

Or even more succinctly with only the actual message:

Vigorous writing is concise. Concise writing is vigorous (if you're willing to be charitable enough to allow a second example).

That this does not require unnecessary brevity is easily inferred, given that the word is "concise", meaning "free from all elaboration and superfluous detail", not "brief", meaning "short". That the writer should follow the advice is made plain by its being presented in a book of advice. The first two sentences alone (if you grant that the second sentence is necessary) contain four repetitions of the same information. If "vigorous writing is concise", then why have we said the same thing five times?


Style guides always implicitly carry context for what they are the style guides for. Most of them are for journalism in one way or another. Passive voice is clearly wrong in journalism. All actions were taken by someone. All results stem from someone's actions.

It is an error to apply those style guides blindly to mismatched contexts. Other than as an exercise in following a style guide, it is not great to teach students that they should always write in a journalistic style, because the premise is simply untrue. There is nothing wrong with writing "A program will be written" when it is unknown who will write the program, and it is an error to avoid the passive voice by adding incorrect details.


I don't really even subscribe to the notion that things like passive voice can be bad, but if suddenly everything we read started to be written in passive voice, I'd decry it as obnoxious.

If you’re working with a use case where that’s even possible, you need to URL-encode it like

  woman/%2Fameeni
Consider what would happen if the language allowed trailing slashes: what would this path mean if ameeni/ happened to be a valid word?

  ameeni//ameeni
One of those would get the slash but it’s not clear which.

W3C says:

> The slash ("/", ASCII 2F hex) character is reserved for the delimiting of substrings whose relationship is hierarchical.
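A quick sketch of the encoding side, using Python's standard urllib (the path segments here are just the examples from above):

  from urllib.parse import quote, unquote

  # Percent-encode a path segment so a literal "/" inside it survives
  # routing. quote() leaves "/" alone by default, so pass safe="".
  segment = "/ameeni"
  encoded = quote(segment, safe="")
  print(encoded)             # %2Fameeni
  print("woman/" + encoded)  # woman/%2Fameeni

  # Decoding recovers the original segment.
  assert unquote(encoded) == segment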


But you named it as though it's an AI detector.

Speaking only for myself, I'd love to have a modern netbook with great battery life and a decent keyboard. I'd carry that around with me all the time to hack on random bits of code or whatever when the mood strikes.

If I weren't completely tired of waiting for iPadOS to grow a Terminal.app, an iPad mini with a keyboard folio case would be nearly my ideal portable computer. For functionality, I'd vastly prefer something in that form factor that only supported text mode over something that had a beautiful GUI but no terminal. At least I could run emacs and fish shell there, and that'd cover 98% of my on-the-go needs.

Super bonus points if you can make the thing look cool at the same time, but that's just icing on the cake.


LOL. I copied and pasted an 87-word blog post I wrote yesterday, on my phone, via my own thumbs. It detected 4 likely AI patterns, or once every 22 words.

I'm so over this idiocy. It's gotten to the point that the "haha, gotcha!" AI claims are more annoying than AI slop itself. God forbid you use a semicolon or an em dash or an interesting sentence structure to break things up, because someone will be quick to point out the "proof" that it's machine generated.


It isn’t an AI detector. It flags valid language patterns that have become LLM-output clichés through overuse. False positives are a given.

and I'll never give up on em dashes


I've taken to telling people, that if they see me write a long piece, that lacks em-dashes, then they should assume that I am under duress, and send help.

My Samsung and LG TVs have never touched the LAN, nor will they. They have one job in life: being the HDMI display for our game consoles and Apple TVs. That's it. I'm sure they'd both like to serve me ads and report my viewing back to their servers, but they're living the life of dumb panels.

> In my opinion all clouds should only have a gateway that routes via host header for millions of customers.

This is incompatible with TCP/IP networking. In TCP connections, (sender_address, sender_port, receiver_address, receiver_port) is a unique combination. Those numbers together uniquely identify the sender talking to the receiver. For a public webserver:

* sender_address is the client machine's IP address

* sender_port is a random number from 0..65535 (not quite, but let's pretend)

* receiver_address is the webserver's IP address

* receiver_port is 443

That means it'd be impossible for one client IP to be connected to one server IP more than 65535 times. Sounds like a lot, right?

* sender_address is the outbound NAT at an office with 10,000 employees

Now each user can have at most 6.5 connections on average to the same webserver. That's probably not an issue, as long as the site isn't a major news org and nothing critical is happening. Now given your scheme:

* receiver_address is the gateway shared by 10000 websites

Now each user can have at most 6.5 connections to all of those 10000 websites combined, at once, total, period. Or put another way, 100,000,000 client/website combos would have to fit into the same 65535 possible sender_ports. Hope you don't plan on checking your webmail and buying airline tickets at the same time.
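A back-of-the-envelope sketch of that arithmetic (numbers rounded the same way as above):

  # A TCP connection is identified by the 4-tuple
  # (sender_addr, sender_port, receiver_addr, receiver_port).

  ports = 65536        # possible sender ports per (src IP, dst IP, dst port)
  employees = 10_000   # users sharing one NAT'd outbound address

  # One shared client IP talking to one server IP on port 443:
  print(ports / employees)   # ~6.5 connections per user

  # Same NAT, but the server IP now fronts 10,000 websites:
  websites = 10_000
  print(employees * websites)  # 100,000,000 combos into 65,536 ports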


This is actually a good point. I guess 20 IPs per cloud infra company is probably too few. But maybe these cloud companies can have 20k IPs instead of 2 million?

If you multiply by 20 shared addresses, it would be 130 connections to 200000 websites.

I love and hate that movie.

> You're uniquely tagged and identifiable on every website you visit.

Almost every modern OS enables IPv6 privacy extensions, i.e. address randomization, by default.


On the last 64 bits, yes. On mobile phones, the first 64 bits may be fixed. This was something I argued against when I was at Vodafone Group, but didn't get any traction. That was a while back, but I'd assume that this is still the case, and that mobile phone addresses can be used for tracking.
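To illustrate the split (the address below is made up, from the documentation range), Python's ipaddress module makes it easy to see which half privacy extensions actually randomize:

  import ipaddress

  # Hypothetical mobile-client address: privacy extensions randomize
  # the interface identifier (last 64 bits), but the /64 prefix comes
  # from the network and can still identify the subscriber.
  addr = ipaddress.IPv6Address("2001:db8:abcd:12:a1b2:c3d4:e5f6:789a")
  net = ipaddress.IPv6Network(str(addr) + "/64", strict=False)

  print(net.network_address)      # 2001:db8:abcd:12:: <- stable prefix
  print(int(addr) & (2**64 - 1))  # low 64 bits <- the randomized part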

The prefix is your globally unique identifier. "Privacy addresses" provide zero privacy.

Just like IPv4, then. Ultimately some part of the address has to identify the physical router.

No, not just like IPv4. My IP address is 192.168.0.23 right now, as are millions of others. Add in CG-NAT from my ISP and I do not have a globally unique identifier.
