
Many years ago, maybe 2005 to 2015, I had a friend who used cpanel to run a web hosting company. He made quite a bit of money doing that. He was not a programmer, but he could set up wordpress and install plugins. I remember asking him once if he was worried he would get hacked and then lose control of his servers? Lose his customers?

He said he was worried but he had backups upon backups. I saw him restore a bunch of websites once, using cpanel, and I thought it was an amazing little bit of software, with all of the click-a-button setup for many different things (like a WAF). A real time saver, and it provides some guidance if you are not a unix-internet guru.


I always liked looking at the maps different people have made over the years. It is interesting how they can come out looking different but represent the same latent space.

https://www.vaultofculture.com/vault/nst/2023/02/13/zorkmaps

Something about it reminds me of force directed graphs, where you only care about the nodes and edges (the rooms and connections).


Same. I'm a big fan of the wild poster [1], too.

[1] https://eblong.com/infocom/maps/zork-1-poster.jpg (big)


Opening a dual-stack ipv4 and ipv6 socket does allow the service to accept both ipv4 and ipv6 connections (there is a small sketch of that below). But I do not think that is what zadikian is getting at?

It does not address the network level identity and reachability. There is no default, globally routable mapping where owning an ipv4 address automatically gives you an equivalent identity in ipv6 that others can reach without translation infrastructure. The transition mechanisms are not uniform or canonical, and that increases complexity.

6to4 was an attempt at that kind of embedding and I do not think it succeeded?

The original specification of ipv6 did not directly address a translation mechanism? It seemed to rely on, well, everyone will go dual stack and we will shut down the old ipv4 stack. I think it should have addressed that in the beginning and provided the one canonical way of doing it, perhaps with guides on timelines to get the ISP and backbone providers to get on board.
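
(For concreteness, a rough Python sketch of what "dual stack" buys you at the socket level, assuming a host where IPV6_V6ONLY can be cleared. It only covers accepting connections on one socket, not the identity/reachability point above.)

  import socket

  # One IPv6 listening socket that also accepts IPv4 clients; their
  # addresses show up as IPv4-mapped addresses (::ffff:a.b.c.d).
  s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
  s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  # Clear IPV6_V6ONLY so the socket handles both address families.
  s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
  s.bind(("::", 8080))
  s.listen(5)

  conn, addr = s.accept()
  print("client:", addr)  # e.g. ('::ffff:203.0.113.7', 54321, 0, 0)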


I’m not clear on who is supposed to do the translating that isn’t doing it today, or why the mapped IPv4 addresses don’t qualify. Virtually all ISPs either give you an IPv4 address or do the translation for you, and the software you write doesn’t have to care exactly how it’s set up for the most part (there’s some subtlety about stuff like MTUs, but if you’re just doing unencapsulated TCP it usually doesn’t matter). It can just use whatever comes back from DNS (including using the official mapped addresses if only an IPv4 record exists) and expect the network stack to figure it out.

None of this stuff works perfectly, but it’s powering almost every mobile internet connection in the world so I don’t understand what is missing.
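
(As a hedged illustration of "use whatever comes back from DNS": a small Python sketch using getaddrinfo with AI_V4MAPPED, assuming a platform that exposes that flag. The host name is made up.)

  import socket

  def resolve(host, port):
      # Ask for IPv6 results; if only an A record exists the resolver
      # returns IPv4-mapped addresses (::ffff:a.b.c.d) instead.
      return socket.getaddrinfo(
          host, port,
          family=socket.AF_INET6,
          type=socket.SOCK_STREAM,
          flags=socket.AI_V4MAPPED,
      )

  # Hypothetical v4-only host: the application does not care which
  # family it actually got; a dual-stack socket can connect either way.
  for family, _, _, _, sockaddr in resolve("v4only.example.com", 443):
      print(family, sockaddr)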


You wrote "I don't understand what is missing"

ipv6 was standardized in 1995.

6to4 was standardized in 2001.

6to4 is not used in any meaningful way today.

What was missing was that ipv6 should have had 6to4 (but better) built in, in 1995.

Now, I could go on about what is wrong with 6to4, but every new topic is just another surface area for ipv6 proponents to launch another question (I sometimes suspect in bad faith).


If you look at the reasons GitHub for instance doesn't support ipv6, it's a different set of problems from what cell carriers had to deal with.

Yeah we're talking about the same thing. So my v4 is 70.94.201.31. My ISP didn't also give me 2002:70.94.201.31 or whatever.

> My ISP didn't also give me 2002:70.94.201.31 or whatever.

Traffic to 2002:70.94.201.31/48 is tunnelled via 70.94.201.31 within an IPv4 packet (encapsulation):

* https://en.wikipedia.org/wiki/6to4


I mentioned 6to4 in the original comment. It doesn't fulfill this use case, it still relies on v4 packets.

They're allowed to do that if they want to. Most find there's no practical reason to ensure the addresses are related. It wouldn't help them support v6 faster.

Do they own such an address, as in, a packet sent by someone on another ISP to that address will actually reach my ISP's router over v6? Not talking about translations that use v4. If they do, I've never seen them actually give it to me.

Yes, they can just put your 32-bit IPv4 address after their 32-bit prefix, and that gives you a /64 as is standard. If they want to. But what's the advantage?

I think this is the kind of topic that can be endlessly debated because you cannot easily go back in time and test out alternate hypotheses. I will say that I do not like ipv6 because it tried to fix multiple accumulated problems. I know! How contrarian! How can you be against trying to fix things. But all of those issues made ipv6 a dual stack solution that replaced ipv4.

Address exhaustion, routing table scalability, restoring end-to-end routability, autoconfiguration, header simplification, multicast + anycast, security standardization.

Whereas, I think a lot of those things could have been solved in other ways, or more slowly. I would have preferred an ipv4.2-style 64-bit scheme because it would have prioritized

Address exhaustion, keeping backward operational compatibility, fewer changes to institutional knowledge, and still allowed incremental rollout (which I think would have occurred much more quickly than ipv6).


> keeping backward operational compatibility

It is not possible to be backwards compatible with a larger address space


You are right that a 32 bit ipv4 stack can not understand a 64 bit packet format. The thing I am trying to get at is not native compatibility, it is operational compatibility via translation. I know, I know, you will probably say that is what ipv6 bridges do.

But in an ipv42 type setup, you would have deterministic embedding so that every ipv4 address is represented inside the larger address space. This would allow translation at network boundaries and let old systems continue to operate unchanged. Then the routers and systems would be upgraded incrementally. I think that is why it would have been upgraded more quickly.


> But in an ipv42 type setup, you would have deterministic embedding so that every ipv4 address is represented inside the larger address space

IPv6 supports that, but it ended up not getting used very much.

See https://en.wikipedia.org/wiki/List_of_IPv6_transition_mechan...


I remember reading about that a long time ago. I wonder why it never really caught on?

I think part of the problem is not so much a technical one as a coordination issue. Who are you more likely to get on board? ISPs and backbone providers. What is the path forward? "Here is the recommended path forward", kind of thing.


I don't see how it matters that we forced people into ipv6 as well. Who cares? It's more about the difference in mental models that prevented adoption, especially among those who run the services that are on the internet.

Your proposal (translation) is addressed as point 3B in the article.

I went and re-read point 3B. I agree that some hypothetical ipv42 faces a translation problem.

But it does not follow that address design is irrelevant. The structure of the address space directly determines whether translation can be stateless and algorithmic.

In a hypothetical ipv42 design that preserves a deterministic embedding relationship between old and new addresses, translation at the edges could be largely stateless and mechanically reversible, reducing coordination overhead between operators and making reachability more predictable (there is a toy sketch of this at the end of this comment).

In our world with ipv6, the transition seems to require a mix of dual stack, nat64, dns64, and tunneling approaches. The mapping between ipv4 and ipv6 is not uniformly deterministic across all deployment contexts.

Also, there is just a human factor. The mental gymnastics that go on. The perception of what is the way forward? With ipv6, it feels like everyone has to go get their ipv6 stack in order. With a hypothetical ipv42, where the ISPs and backbone providers can throw in the translation layers, it feels like, to me, they would have gotten on board much more quickly. Yeah, I know, it is just a feeling.
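
(To make "stateless and mechanically reversible" concrete, a toy Python sketch of the kind of deterministic embedding I mean. The 4::/96 prefix is made up purely for illustration, it is not a real allocation; the example address is the one from earlier in the thread.)

  import ipaddress

  # Hypothetical prefix, illustration only: every ipv4 address gets a fixed
  # slot in the larger space, so edge translators keep no per-flow state.
  EMBED_PREFIX = int(ipaddress.ip_address("4::"))

  def v4_to_v42(v4):
      """Deterministic embedding: old address -> new address."""
      return ipaddress.IPv6Address(EMBED_PREFIX | int(ipaddress.ip_address(v4)))

  def v42_to_v4(v42):
      """Mechanically reversible: mask off the prefix to recover the v4 address."""
      return ipaddress.IPv4Address(int(ipaddress.ip_address(v42)) & 0xFFFFFFFF)

  assert str(v4_to_v42("70.94.201.31")) == "4::465e:c91f"
  assert str(v42_to_v4("4::465e:c91f")) == "70.94.201.31"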


I agree with you about the embedded addresses, and I don't understand why the mapping was moved from the all-zeros space to a bunch of other mappings.

but the utility of this isn't that high. we already know how to handle 4-4 and 6-6 traffic just fine. but if a 4 host wants to talk to a 6 host, it just doesn't have the extra bits in order to describe it, so this just doesn't facilitate 4-6 endpoint communication at all. this is true even if you substitute v6 with any other layer 3 with a larger address space.

where it does help is in a unified routing backbone that would allow v4 prefixes to be announced in the v6 routing system, which is arguably useful.


We have that; it's called ipv6. A section of the v6 address space is set aside to hold all v4 addresses.

The embedding I believe you are referring to is not a part of the global routing model. (maybe I am wrong?) What I am describing is making that kind of declaration central to the system in a deterministic, network wide mapping of ipv4 to the larger ipv6 space. The translation in ipv6 ended up being handled by a mix of mechanisms after the fact, rather than a single, uniform mapping model that tied directly to the address structure. I think part of the problem is they did not put that front and center, at the beginning, when doing the initial specification.

How would an embedding handle the other 99.999999999999% of addresses not embedded?

At least at first, you wouldn't, you'd embed all of them. Cloudflare has 1.1.1.1, so they get 1.1.1.1:: too.

Not doing that was one of the key points of starting fresh with IPv6. Doing that would mean that you could end up with billions of routes to consider.

One reason for the large address space is that those with networks could be placed sparsely and left room to grow, thus allowing fewer routes in general.


Indeed doing it this way would keep the fragmentation, or at least delay fixing it. That's what these articles always overlook: the goal of ipv6 wasn't just to add more bits, it was also to defrag the routes.

I think instead of 1.1.1.1::, you could do 4:1.1.1.1::, wait for v4 to be gone, then start building new topologies in the other /8s. Not sure how hard that is, but it seems easier than what they're trying to do now.


Would it help at all? You can't just send IPv6 packets down the equivalent IPv4 path because that next-hop router probably doesn't understand IPv6 packets. In fact there could be no IPv6 path at all between you and the destination, so knowing where they are still wouldn't help you forward packets. If it understood them, it would have given you an IPv6 route anyway. Updating BGP to support IPv6 routes wasn't an actual problem.

There are lots of services I can't send v6 to, not because some router in the middle only understands v4 but because the service operator decided not to deal with v6.

So the idea is to surreptitiously install software on the service operator's machines that they can't disable?

It's already a bit like that, but they can and do disable it. You can see the other comments in this thread: many people disable IPv6 upon any sign of a networking problem.


No, the idea is you can turn v6 on/off, but doing so only changes the packet format and nothing else at first. There's no separate place to configure v6-specific settings because there are none. You use the same address, routes, DHCP, NAT, DNS, etc as v4, but you're limited to 32-bit addrs at first. The point is to just get people off v4.

Once v6 has reached enough adoption, you can turn off v4. Those who want to keep the addrs from v4 can, except now they get way more addresses under those too. Others can start building a clean new topology under the other prefixes without worrying about compatibility.


I don't see why anyone would change all the bits you actually need to change for some nebulous future gains. You still have to deal with new sockets and new routing decisions at least, while not really gaining much from new features.

To me it looks like something that would have gained nearly no actual adoption outside some toy examples. Later you will need to get new DNS, DHCP (or an alternative) and so on anyway.


That's a legit concern. If that's not interesting enough to the kind of user that wants all-new v6, instead start from today where some users are on the new v6 network, and say they added the 4:: prefix as a way to pick up the kind of user that doesn't want to change much. They'd still be compatible eventually. Though the reason I was thinking 4:: from the start would've been attractive enough is, a lot of people did use 6to4 and other halfway measures despite having no immediate gain.

Today's DNS6, DHCP6, etc. are totally incompatible with v4. 4:: buys backwards compatibility. Each can be updated to support longer addrs without caring whether you use it with v4 or v6.


> At least at first, you wouldn't, you'd embed all of them. Cloudflare has 1.1.1.1, so they get 1.1.1.1:: too.

Everyone with an IPv4 address automatically got an IPv6 allocation:

> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.

> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.

* https://en.wikipedia.org/wiki/6to4

What does it mean to have a /48? Well, an IPv6 subnet is a /64, so that's 16 bits for subnets. In IPv4 land, if you take a subnet to be a /24, an allocation with 16 bits' worth of subnets would be a /8.

So basically, with 6to4, every person with an IPv4 address got the equivalent of a Class A in IPv6.
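
(For anyone who wants to see the construction in code, a small Python sketch of the 2002::/16 mapping quoted above, using the standard ipaddress module. The first address is the worked example from the quote, the second is the one from earlier in the thread.)

  import ipaddress

  def sixto4_prefix(v4):
      """Append a global IPv4 address to 2002::/16 to get its 6to4 /48."""
      v4_int = int(ipaddress.IPv4Address(v4))
      return ipaddress.IPv6Network(((0x2002 << 112) | (v4_int << 80), 48))

  print(sixto4_prefix("192.0.2.4"))     # 2002:c000:204::/48
  print(sixto4_prefix("70.94.201.31"))  # 2002:465e:c91f::/48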


This is a fake argument. No one is arguing for backwards compatibility.

But there was also no necessity to demand reshaping networks and changing address assignment in a way that made migration extremely work intensive and hard to deploy in parallel.


How would you do it?

I wouldn't have tried to reinvent DHCP, would have kept NAT, and generally would have attempted to keep the overall shape of a v6 network the same as v4 networks to ease the transition of large deployments.

Ipv6 now has most of that - after years of resistance - which results in a mixed mess of "several ways to do it" approaches spiced with clients and equipment supporting a random set of them.


And yet 50% of the internet is using CGNAT just fine. The extra bits are just in a different place.

Yes, but CGNAT is an inherently stateful system and as a result will always be more expensive to operate per packet than a stateless router. The reason we are seeing steady (if slow) growth in native IPv6 is because the workarounds for IPv4 exhaustion cost money, and eventually upgrading equipment and putting pressure on website operators to support IPv6 becomes cheaper than growing CGNAT capacity.

How would you have implemented backward compatibility? I am interested to hear the general technical details of how this could have been possible.

I am mostly interested in two basic scenarios, with the expectation that changes are made on only one side: a host on the new addressing scheme connecting to one on the old scheme and receiving data back, and a host on the old addressing scheme connecting to one on the new scheme and receiving data back.


There was a proposal called SIP that mostly focused on increasing address length (it got published as a historic RFC eventually): https://www.rfc-editor.org/rfc/rfc8507

It still had the problem that it made it harder for middleboxen (compared to IPv4) to look at port numbers.


Yeah, but most* companies hire for just whatever X programming language they use, and do not care if you know how to program and do not care that you could pick up whatever X is in a couple of weeks. (Anecdotally for "most", I am sure there are exceptions)

Not FAANG. They just need to know you can leetcode extremely fast in arbitrary languages.

It is my understanding that the US Government set up a system, long, long ago, where the British would spy on Americans and then supply the information to the NSA, so that the NSA is not technically spying on American citizens.

Words mean nothing. They can be interpreted how ever they need to be interpreted by those in power.

https://en.wikipedia.org/wiki/ECHELON


Australia and America have the same agreement. These countries may be dragons, but they live in fear of losing their hoard (borrowing that analogy from https://news.ycombinator.com/item?id=47963204)

> australia and america have the same agreement

This has no basis whatsoever in Australian law.

Procuring someone else to do it on your behalf is still an offence under s 7(1) of the Telecommunications (Interception and Access) Act 1979 (Cth).

TELECOMMUNICATIONS (INTERCEPTION AND ACCESS) ACT 1979 - SECT 7

Telecommunications not to be intercepted (1) A person shall not:

  (a)   intercept;

  (b)   authorize, suffer or permit another person to intercept; or

  (c)   do any act or thing that will enable him or her or another person to intercept;
a communication passing over a telecommunications system.


How about we refer to a primary source instead.

https://www.nsa.gov/Helpful-Links/NSA-FOIA/Declassification-...

Which provision of the treaty purports to enable this conduct?


Well you have reality and "laws".

In this case we have law, which gives effect to treaties and binds the employees of the intelligence agencies, and then we have the unsupported conspiracy theories that you’re mindlessly parroting here.

Yeah, the Australian government always ensures its agencies follow the law and punish only wrongdoers. David McBride made up the stuff he reported, right?

> unsupported conspiracy theories that you’re mindlessly parroting here.

Imagine my non-surprise at stumbling upon more evidence that you (bigfatkitten) lack essential information from the Snowden disclosures, and from the colloquial explanations of operations given during interviews and talks.

Contested. Very supported theories.


Heard of that. If you have to do some spying, that indirect method might be preferable if the partner country’s spies are a little nonpartisan

I, perhaps, owe my career to DOS. As a kid, everyone relied on me to get their games, soundcard, and disk drives to work. Juggling IRQs, HIMEM, and CHKDSK. Soundblaster 16 forever!

Kinda the same, but for me, it was more of a gaming-driven motive. I learnt the basic DOS commands by observing my cousin, so I could run Prince of Persia, GORILLA.BAS, Dangerous Dave etc, even when he wasn't around (it was his 286).

Later on, when I got my own PC (a 486) I got into scripting by customising my AUTOEXEC.BAT to display a menu so I could jump into my favourite game immediately after the PC booted. Of course, I also learnt about TSRs, conventional memory, tuning CONFIG.SYS etc just so that I could run some tricky games like BioMenace and OMF2097.

I even learnt basic networking and made my own null-modem cable, because I wanted to play OMF2097 with my friends without sharing the same keyboard (we would always fight over who gets to use the right side of the keyboard, which was obviously the best side).

The first time I dealt with a virus was when I tried installing Prince of Persia 2 from a set of floppies I got from my friend. Dealing with the virus (it was one that "melted" the screen) unlocked a whole new world of malware research for me - and collecting malware became one of my hobbies. I also learnt hex editing and some assembly language because I wanted to cheat in Prince of Persia 2, and unlock shareware programs like Cheat Machine - and what I saw within the hex code of Cheat Machine blew me away, it opened another new world for me.

I built my first PC (a PIII 450, along with my first GPU - an nVidia RIVA TNT) - all parts carefully selected, so that I could play games with the best performance for the price.

In the Windows world, I was endlessly tuning my PC, diving into the registry, switching kernels (yes, there were third-party kernels you could install), even optimising file layout on the disk - all so that I could get the best gaming performance. I dived deep into scripting with AutoHotkey and Perl to make macros, bots and other random utilities for the games I played. After that I... well, I could go on, but you get the picture.

So while DOS was my starting point (and a most fond memory), it was ultimately gaming that I owe my career to.


I started with a C64 but was stuck there with BASIC (the extension port was broken, unfortunately, so I could not dive into assembly there).

Then I got an old 286 from someone (the HDD was not working) and spent most of my time in the "debug" command. Then I got a book on x86 assembly and DOS (interrupts etc.), which was kind of hard in a non-English speaking country. I still somehow recall some pages from memory :)

Dived into cracking/cheats, and even made money on password recovery. How far that x86 knowledge carried me is unbelievable when I look back.


BioMenace and OMF worked out of the box for me but BioMenace took 8 minutes (!) to load. It would sit at the start screen (a load screen) hanging the entire time and then load really fast after the delay. Is this related to the configuration stuff you're mentioning?

This didn't occur with any other game for me, and I recall having over 50.


Yeah, my BioMenace used to hang at the loading screen as well, and as weird as it sounds, the fix was to move your mouse. I guess it was getting stuck waiting for a hardware interrupt or something. You could also start the game with your mouse unplugged of course.

As for OMF, it used to give me a "not enough memory" error until I got rid of all my TSRs (basically a "clean boot"), as it had a very high conventional memory requirement, almost close to 600k if memory serves me right. Actually even BioMenace had a high conventional memory requirement. It wasn't until a few years later that I learnt that there was no need to get rid of all the TSRs; you could just tell DOS to load everything in the High/Upper memory areas by tweaking your CONFIG.SYS and AUTOEXEC.BAT. Not sure if this was something that was introduced in later versions of DOS or if it could do it all along. But it wasn't in any of the official manuals at the time; I found these tweaks in a game guide on some random BBS.


Yep, I once made a good living getting drivers into high memory and making those menus in config.sys. Installing network drivers and Netware client. People were quite happy to see me when I arrived.

For some reason I thought DOS had already been open sourced.


They have open sourced a few versions of DOS, including 4.0 I believe. This is just the latest.

For me, DOS 5.0 was the best. Would love to see that.

And of course we have both FreeDOS and SvarDOS now.


Totally, my first revision control system was unwittingly developed by me as a kid to keep progressive backups of my simcity cities.

Everything I ever needed to know about IT, I learned installing Warcraft 2 on DOS

That is sort of the inferred conclusion of the podcast: it is a bit of a racket by the big players.


I wrote a very short ebook on the subject some 10 years ago, 15 Questions About Online Advertising.

It should be available for free on Apple Books, Google Books, Kobo etc, or for 0.99 on Amazon.


I am not sure I agree with everything stated in the post. Although it is plausible.

From the places I have worked, I would say the reason there is so much bad code at big companies is nobody really cares. It is just a job. You could bust your butt and clean up a code base, or fix a 10-year-old bug that has been costing the company half a million dollars a year, and it will not increase your salary in the slightest (well, maybe a kudos and a pat on the back, but that is not guaranteed).

Office Space had it right in the scene where Peter is being interviewed by the two bobs.


My memory is a bit hazy, but I thought what you are describing is very common with people who flatline and come back? I have vague memories that a new anesthetic drug was developed and used on soldiers undergoing surgery in the Vietnam war, and there was something about it that caused the same kind of reaction in those who were put under. Again, my memory is very hazy on the subject. I should go do some research and update this comment (and I just might).

EDIT I did a little searching. I think it might have been an old report about Ketamine before it became more widely known. Apparently it was used during the Vietnam War.

https://en.wikipedia.org/wiki/Ketamine#Near-death_experience


I prefer Mullah Nasruddin's experience, which was that death is perfectly OK unless you disturb the camels, at which point they beat you. https://ia800908.us.archive.org/28/items/idries-shah-the-exp...


Amazing recommendation! I was hooked by that most powerful New York Times Bestseller-style endorsement, but from the 1600s. "Many say: I wanted to learn, but only found madness. But those who seek wisdom will not find it elsewhere."


I was going to mention ketamine. Famous for this type of effect. I don't want to belittle the meaningful experience, but the mind is a really powerful organ and it's a safer bet to treat these experiences as arising from mind rather than beyond it. Shrug.


>>it's a safer bet to treat these experiences as arising from mind rather than beyond it

Your brain has to be alive and exist normally for it to have these experiences. So it's quite obvious nothing is coming from outside of it.

I do feel like it's the brain rebooting itself, or something like that.

It's sad babies can't tell us if they experience the same during childbirth, but I have a guess that they experience something similar as well.

It's just that the brain is starting up and checking whether there is a 0xDEADBEEF or a fresh boot, and giving you the primal experience of the brain not initialising any other interface (like eyes, ears, limbs etc). You experience what life would be like if only the brain existed on its own, without everything else apart from it.


Safer why?


Lots to say there. The last few centuries have shown that many things which previously seemed inexplicable have been convincingly explained without resort to the supernatural. So a material basis of conscious experience seems a good bet.

Related, and hinted at by my original comment: the brain is capable of generating truly profound experiences. There is a tendency to ascribe them to something 'beyond ourselves' but again, advances in medicine and neuroscience have shown that these are explicable, subject to manipulation by chemical and electrical signals, which again suggests a material basis for conscious experience.


It's true that many things have yielded to science. And yet, what we discuss (the "hard problem of consciousness") hasn't. In my opinion, the burden is on you to prove that progress in other questions implies inevitable progress on an unrelated question that hasn't budged at all.

I said this in my other comment but, when you say the brain generates truly profound experiences, you beg the question (in the philosophical sense of the phrase). It's all in the word "experience." For in order for an experience to happen, some entity has to be experiencing. For there to be an illusion, there has to be an entity being deceived. And then how do you explain that entity? It can't be illusory experiences all the way down..

Any honest person has to see the connection between experience and the material brain. But I don't think it's honest to say it's obvious that experience is entirely material. The connection is deeply mysterious and may never be understood. I personally would rather accept that than claim that I don't really exist just so that everything can be explained.


The evidence is abundant and continues to chip away at the "hard problem". For example, we can through anesthesia turn on and off conscious experience. Through various drugs we can manipulate the character of conscious awareness, inducing ecstasy, visions, abiding serenity, terror, pain, grief... all states that were previously described as ineffable.

To say we haven't made progress on understanding consciousness is to move the goalposts; we continue narrowing the 'hard problem' and eventually it seems like there will be nothing left other than a misunderstanding, something like the resolution of Zeno's paradox.


I don't mean to be insulting but, you don't seem to understand what the hard problem is. It is not "is the brain intimately linked with conscious experience?" I would agree we've made progress on that question. It is the harder question of "why is there conscious experience at all? Why does it feel the way it does?" I would argue no progress has been made on this whatsoever, and possibly can't be done.

You can try to claim that this question is meaningless, but that doesn't seem principled to me, not to mention that it completely ignores the fact that gestures broadly all this is happening.


In light of the fact that the entire universe is perceptible only through conscious awareness, the 'hard' question is equivalent to the question "why is there anything instead of nothing?" When asked this way, it's clearly not answerable. Everything short of that seems to have a material answer.

Edit: happy to chat more about this, as it's deeply interesting to me and I do want to understand your perspective. It may need a longer form than this thread allows. I've added a link to get in contact with me on my about page.


Not answerable != immaterial/nonmeaningful.

I'd be happy to talk more as I am passionate about this. I think the idea that there is no soul is actually extremely dehumanizing, and involves someone essentially saying "I don't really exist" (even if they redefine "I exist" to mean something more Materialist, it is, in my view, still saying that). I'll ping you on bluesky.


Jacob's Ladder is a great movie based on this theme.


