
This seems to show the railroads peaking around 9% of GDP. While that's lower than some of the other unsourced numbers I've seen, it's much higher than the numbers I was able to find support for myself at

https://news.ycombinator.com/item?id=44805979

The modern concept of GDP didn't exist back then, so all these numbers are calculated in retrospect with a lot of wiggle room. It feels like there's incentive now to report the highest possible number for the railroads, since that's the only thing that makes the datacenter investment look precedented by comparison.


It would almost always be much, much worse. Practical numerical libraries (whether implemented in hardware or software) contain lots of redundancy, because their goal is to give you an optimized primitive as close as possible to the operation you actually want. For example, the library provides an optimized tan(x) to save you from calling sin(x)/cos(x), because one nasty function evaluation (as a power series, lookup table, CORDIC, etc.) is faster than two nasty function evaluations and a divide.
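As a quick sanity check of that identity (illustrative, using Python's standard math module):

```python
import math

# tan(x) equals sin(x)/cos(x) mathematically, but libm provides tan as
# a single optimized primitive: one nasty evaluation instead of two
# plus a divide.
x = 0.7
direct = math.tan(x)
composed = math.sin(x) / math.cos(x)
assert abs(direct - composed) < 1e-12
```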

Of course the redundant primitives aren't free, since they add code size or die area. In choosing how many primitives to provide, the designer of a numerical library aims to make a reasonable tradeoff between that size cost and the speed benefit.

This paper takes that tradeoff to the least redundant extreme because that's an interesting theoretical question, at the cost of transforming commonly-used operations with simple hardware implementations (e.g. addition, multiplication) into computational nightmares. I don't think anyone has found a practical application for their result yet, but that's not the point of the work.


I actually don't think this is true -

Traditional processors, even highly dedicated ones like TMUs in GPUs, still require substantial preconfiguration to switch between sin/cos/exp2/log2 function calls, whereas a silicon implementation of an 8-layer EML machine could do that by passing a single config byte along with the inputs. If you had a 512-wide pipeline of EML logic blocks in modern silicon (say 5nm), you could get around 1 trillion elementary function evaluations per second on a 2.5 GHz chip. Compare this with a 96-core Zen 5 server CPU with AVX-512, which can do about 50-100 billion scalar-equivalent evaluations per second across all cores, and only for one specific unchanging function.

Take the fastest current math processors, the TMUs on a modern GPU: they can calculate sin OR cos OR exp2 OR log2 in 1 cycle per shader unit... but that is ONLY for those elementary functions and ONLY if they don't change. Changing the function being called incurs a huge cycle hit, and chaining the calculations also incurs latency hits. An EML coprocessor could do arcsinh(x² + ln(y)) in the same hardware block, with the same latency as a modern CPU can do a single FMA instruction.


So to clarify, you think that replacing every multiplication with 24 transcendental function evaluations (12 eml(x, y), each of which evaluates exp(x) and ln(y) plus the subtraction; see the paper's Fig 2) is somehow a win?
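To make the overhead concrete, here's a toy Python sketch (my own illustration, not the paper's exact eml() construction) of replacing one cheap multiply with transcendental evaluations:

```python
import math

def mul_via_transcendentals(x, y):
    """Toy sketch: multiply two positive reals using only exp, ln,
    and addition. Two transcendental evaluations replace one cheap
    multiply, and it only works for x, y > 0. The paper's single-
    primitive construction needs far more evaluations than this."""
    return math.exp(math.log(x) + math.log(y))
```

Even this simplified version trades one of the cheapest hardware operations for two of the most expensive ones.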

The fact that addition, subtraction, and multiplication run quickly on typical processors isn't arbitrary--those operations map well onto hardware, for roughly the same reasons that elementary school students can easily hand-calculate them. General transcendental functions are fundamentally more expensive in time, die area, and/or power, for the same reasons that elementary school students can't easily hand-calculate them. A primitive where all arithmetic (including addition, subtraction, or negation) involves multiple transcendental function evaluations is not computationally faster, lower-power, lower-area, or otherwise better in any other practical way.

The comments here are filled with people who seem to be unaware of this, and it's pretty weird. Do CS programs not teach computer arithmetic anymore?


For basic arithmetic, this is not required, nor would it be faster; it's not likely advantageous for bulk static transcendental functions either. Where this becomes interesting is when combining them OR when chaining them, where today they must come back out to the main processor for reconfiguration and then be re-issued.

In practical terms, take the Jacobian (heavily used in weather and combustion simulation): the transcendental calls, mostly exp(-E_a/RT), are the actual clock-cycle bottleneck. The GPU's SFU computes one exp2 at a time per SM. The ALU then has to convert it (exp(x) = exp2(x × log2(e))), multiply by the pre-exponential factor, and accumulate partial derivatives. It's a long serial chain for each reaction rate.

The core of this is the Arrhenius rate, (A × T^n × exp(-E_a/(R×T))), which involves an exponentiation, a division, a multiplication, and an exponential. On a GPU, that's multiple SFU calls chained with ALU ops. In an EML tree, the whole expression compiles to a single tree that flows through the pipeline in one pass.
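For reference, the modified Arrhenius expression in plain Python (the arguments here are generic placeholders, not values from any real reaction mechanism):

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate: k = A * T**n * exp(-Ea / (R*T)).
    Per evaluation: one pow, one divide, one exp, two multiplies."""
    return A * T**n * math.exp(-Ea / (R * T))
```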

GPU (PreJacGPU) is currently the state of the art for speed on these simulations - a moderate-width, 8-deep EML machine could process a very complex Jacobian as fast as the GPU can evaluate one exp(). Even on a sub-optimal 250 MHz FPGA, an entire 50x50 Jacobian would take about 3.5 microseconds vs 50 microseconds PER Jacobian on an A100.

If you put that same logic path into an ASIC, you'd be about 20x the FPGA's speed - in the nanoseconds per round. And it's not like you're building one function into an ASIC; it's general-purpose. You just feed it a compiled tree configuration and run your data through it.

For anything like linear algebra, which is also used here, you'd delegate to the dedicated math functions on the processor - it wouldn't make sense to do those here.


> The core of this is the Arrhenius rate, (A × T^n × exp(-E_a/(R×T))), which involves an exponentiation, a division, a multiplication, and an exponential. On a GPU, that's multiple SFU calls chained with ALU ops. In an EML tree, the whole expression compiles to a single tree that flows through the pipeline in one pass.

I think you're missing the reason why the GPU kicks you out of the fast path when you need that special function. The special function evaluation is fundamentally more expensive in energy, whether that cost is paid in area or time. Evaluation of the special functions with throughput similar to the arithmetic throughput would require much more area for the special functions, which for most computation isn't a good tradeoff. That's why the GPU's designers chose to make your exp2 slow.

Replacing everything with dozens of cascaded special functions makes everything uniform, but it's uniformly much worse. I feel like you're assuming that by parallelizing your "EML tree" in dedicated hardware that problem goes away; but area isn't free in either dollars or power, so it doesn't.


There is a huge market for "it's faster" at the cost of efficiency, but I don't buy your claim that an EML hardware block would be inherently less efficient than the same workload running on a GPU. If you think it would be, back it up with some numbers.

A 10-stage EML pipeline would be about the size of an AVX-512 instruction block on a modern CPU, in the realm of ~0.1 mm² on a 5nm process node (including the FMA units behind it), in its entirety about 1% of the CPU die. None of this suggests that even a ~500-wide 10-stage EML pipeline would consume anywhere near the power of a modern datacenter GPU (which wastes a lot of its energy moving things from memory to ALU to shader core...).

Not sure if you're arguing from a hypothetical position or a practical one, but you seem to be narrowing your argument to "well, for simple math it's less efficient" - and that's not the argument being made at all.


> you seem to be narrowing your argument to "well for simple math it's less efficient" but that's not the argument being made at all.

What? Unless the thing you want to compute happens to be exactly that eml() function (no multiplication, no addition, no subtraction unless it's an exponential minus a log, etc.) or almost so, it is unquestionably less efficient. If you believe otherwise, then please provide the eml() implementation of a practically useful function of your choice (e.g. that Arrhenius rate). Then we can count the superfluous transcendental function evaluations vs. a conventional implementation, and try to understand what benefit could outweigh them.

> A 10-stage EML pipeline would be about the size of an avx-512 instruction block on a modern CPU

Can you explain where you got that conclusion? And what do you think a "10-stage EML pipeline" would be useful for? Remember that the multiply embedded in your Arrhenius rate is already 8 layers and 12 operations.

Also, can you confirm whether you're working with an LLM here? You're making a lot of unsupported and oddly specific claims that don't make sense to me, and I'm trying to understand where they're coming from.


I'm not sure what you mean by this? It's true that any Boolean operation can be expressed in terms of two-input NAND gates, but that's almost never how real IC designers work. A typical standard cell library has lots of primitives, including all common gates and up to entire flip-flops and RAMs, each individually optimized at a transistor level. Realization with NAND2 and nothing else would be possible, but much less efficient.

Efficient numerical libraries likewise contain lots of redundancy. For example, sqrt(x) is mathematically equivalent to pow(x, 0.5), but sqrt(x) is still typically provided separately and faster. Anyone who thinks that eml() function is supposed to lead directly to more efficient computation has missed the point of this (interesting) work.
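For instance, in Python (how closely the two agree depends on the platform's libm, so the tolerance below is deliberately loose):

```python
import math

x = 2.0
# Mathematically identical, but sqrt is a dedicated primitive that is
# typically faster and at least as accurate as the general pow path.
a = math.sqrt(x)
b = math.pow(x, 0.5)
assert abs(a - b) < 1e-12
```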


Yeah, what you're going to get is more efficient proofs: you can do induction on one case to get results about elementary functions. Not sure where anyone's getting computational efficiency thoughts from this.

If you already own highly appreciated QQQ in a taxable account then your options are limited, since moving to a different ETF would realize the capital gain. It may be preferable to hold even if you think you're losing money buying SpaceX at an inflated price, if selling would lose even more in taxes.

If you own an ETF that buys SpaceX but without overweighting vs. float, then you're not contributing to the inflated price in that sense. You're still buying at the inflated price though, so the NASDAQ rule change still affects you indirectly.

I guess the point of the "wealth tax" comment is that any higher taxation of the wealthiest individuals would reduce their power to shape the rules to their favor, and a wealth tax is potentially harder to avoid than income taxes. I think most prior attempts just made them emigrate, though.


We'll happily take the exit tax.


This is correct, but for the author's example of randomizing turkeys I wouldn't bother. A modern CSPRNG is fast enough that it's usually easier just to generate lots of excess randomness (so that the remainder is nonzero but tiny compared to the quotient and thus negligible) than to reject for exactly zero remainder.

For example, the turkeys could be randomized by generating 256 bits of randomness per turkey, then sorting by that and taking the first half of the list. By a counting argument this must be biased (since the number of assignments isn't usually a power of two), but the bias is negligible.
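A sketch of that procedure in Python, using the standard secrets module as the CSPRNG (the function name and shape are mine, not the author's):

```python
import secrets

def random_half(items):
    """Tag each item with 256 bits of fresh CSPRNG output, sort by
    the tag, and keep the first half. By the counting argument the
    result is biased, but only on the order of 2**-256."""
    tagged = sorted(items, key=lambda _: secrets.randbits(256))
    return tagged[:len(items) // 2]
```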

The rejection methods may be faster, and thus beneficial in something like a Monte Carlo simulation that executes many times. Rejection methods are also often the simplest way to get distributions other than uniform. The additional complexity doesn't seem worthwhile to me otherwise though, more effort and risk of a coding mistake for no meaningful gain.


> If, say, you do the assignment from a 256 bit random number such that 4 of the possible assignments are twice as likely as the others under your randomization procedure

Your numbers don't make sense. Your number of assignments is way fewer than 2^256, so the problem the author is (mistakenly) concerned about doesn't arise--no sane method would result in any measurable deviation from equiprobable, certainly not "twice as likely".

With a larger number of turkeys and thus assignments, the author is correct that some assignments must be impossible by a counting argument. They are incorrect that it matters--as long as the process of winnowing our set to 2^256 candidates isn't measurably biased (i.e., correlated with turkey weight ex television effects), it changes nothing. There is no difference between discarding a possible assignment because the CSPRNG algorithm choice excludes it (as we do for all but 2^256) and discarding it because the seed excludes it (as we do for all but one), as long as both processes are unbiased.


typo -- meant to say 8-bit random number, i.e. having 256 possibilities, convenient just because the number of assignments was close to a power of 2. If instead you use a 248-sided die and have equal probabilities for all but 4 of the assignments, the result is similar but in the other direction. Of course there are many other more subtle ways that your distribution over assignments could go wrong; I was just picking one that was easy to analyze.


Ah, then I see where you got 4 assignments and 2x probability. Then I think that is the problem the author was worried about and that it would be a real concern with those numbers, but that the much smaller number of possibilities in your example causes incorrect intuition for the 2^256-possibility case.


I think the intuition that everything will be fine in the 256-bit vs 300-bit case depends on the intuition that the assignments you're missing will be (~close to) randomly distributed, but it's far from clear to me that you can depend on that being true in general without carefully analyzing your procedure and how it interacts with the PRNG.


If you can find a case where this matters, then you've found a practical way to distinguish a CSPRNG seeded with true randomness from a stream of all true randomness. The cryptographers would consider that a weakness in the CSPRNG algorithm, which for the usual choices would be headline news. I don't think it's possible to prove that no such structure exists, but the world's top (unclassified) cryptographers have tried and failed to find it.


And worth noting that the "even when properly seeded with 256 bits of entropy" example in the article was intended as an extreme case, i.e. that many researchers in fact use seeds that are much less random than that.


Everyone agrees that most of the possible shuffles become impossible when a CSPRNG with 256 bits of state is used. The question is just whether that matters practically. The original author seems to imply it does, but I believe they're mistaken.

Perhaps it would help to think of the randomization in two stages. In the first, we select 2^256 members from the set of all possible permutations. (This happens when we select our CSPRNG algorithm.) In the second, we select a single member from the new set of 2^256. (This happens when we select our seed and run the CSPRNG.) I believe that measurable structure in either selection would imply a practical attack on the cryptographic algorithm used in the CSPRNG, which isn't known to exist for any common such algorithm.
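The counting argument behind the first stage is easy to check directly; even a modest number of turkeys gives far more orderings than a 256-bit seed can reach:

```python
import math

# A CSPRNG with a 256-bit seed can select at most 2**256 distinct
# shuffles, but even 100 items already have 100! ~ 2**525 orderings,
# so almost all permutations are excluded by the algorithm choice.
assert math.factorial(100) > 2**256
assert math.log2(math.factorial(100)) > 500
```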


Yeah, you're discarding almost all permutations, but in an unbiased manner. It seems to imply not only an attack, but that your experimental results rely strongly and precisely on some extremely esoteric (otherwise it would've been found already) property of the randomization algorithm. If you can only detect the effect of television on turkeys when using a PRNG whose output is appropriately likely to have a high-dimensional vector space when formatted as a binary square matrix, then I think you should probably go back to the drawing board.


Are they claiming that ChaCha20 deviates measurably from equally distributed in k dimensions in tests, or just that it hasn't been proven to be equally distributed? I can't find any reference for the former, and I'd find that surprising. The latter is not surprising or meaningful, since the same structure that makes cryptanalysis difficult also makes that hard to prove or disprove.

For emphasis, an empirically measurable deviation from k-equidistribution would be a cryptographic weakness (since it means that knowing some members of the k-tuple helps you guess the others). So that would be a strong claim requiring specific support.


By O’Neill’s definition (§2.5.3 in the report) it’s definitely not equidistributed in higher dimensions (does not eventually go through every possible k-tuple for large but still reasonable k) simply because its state is too small for that. Yet this seems completely irrelevant, because you’d need utterly impossible amounts of compute to actually reject the hypothesis that a black-box bitstream generator (that is actually ChaCha20) has this property. (Or I assume you would, as such a test would be an immediate high-profile cryptography paper.)

Contrary to GP’s statement, I can’t find any claims of an actual test anywhere in the PCG materials, just “k-dimensional equidistribution: no”, which I’m guessing means what I’ve just said. This is, at worst, correct but a bit terse and very slightly misleading on O’Neill’s part; how GP could derive any practical consequences from it, however, I haven’t been able to understand.


As you note, a 256-bit CSPRNG is trivially not equidistributed for a tuple of k n-bit integers when k*n > 256. For a block cipher I think it trivially is equidistributed in some cases, like AES-CTR when k*n is an integer submultiple of 256 (since the counter enumerates all the states and AES is a bijection). Maybe more cases could be proven if someone cared, but I don't think anyone does.

Computational feasibility is what matters. That's roughly what I meant by "measurable", though it's better to say it explicitly as you did. I'm also unaware of any computationally feasible way to distinguish a CSPRNG seeded once with true randomness from a stream of all true randomness, and I think that if one existed then the PRNG would no longer be considered CS.


You care when you're trying to generate random vectors which may be of a different size, and if you are biasing your sample.

Is it enough to truly matter? Maybe not, but does it also matter if 80-bit SHA-1 only has 61 bits?


Nobody cares even then, because any bias due to theoretical deviation from k-equidistribution is negligible compared to the desired random variance, even if we average trials until the Sun burns out. By analogy, if we're generating an integer between 1 and 3 with an 8-bit PRNG without rejection, then we should worry about bias because 2^8 isn't a multiple of 3; but if we're using a 256-bit PRNG then we should not, even though 2^256 also isn't a multiple.
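The 8-bit case is small enough to enumerate exhaustively in Python:

```python
from collections import Counter

# 256 = 3*85 + 1, so reducing an 8-bit value mod 3 makes one outcome
# appear 86/256 of the time instead of 85/256: a measurable bias.
counts = Counter(v % 3 for v in range(256))
assert counts[0] == 86 and counts[1] == 85 and counts[2] == 85
```

With a 256-bit value the same imbalance still exists, but it's 1 part in ~2**256, far below any measurable variance.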

If you think there's any practical difference between a stream of true randomness and a modern CSPRNG seeded once with 256 bits of true randomness, then you should be able to provide a numerical simulation that detects it. If you (and, again, the world's leading cryptographers) are unable to adversarially create such a situation, then why are you worried that it will happen by accident?

SHA-1 is practically broken, in the sense that a practically relevant chosen-prefix attack can be performed for <$100k. This has no analogy with anything we're discussing here, so I'm not sure why you mentioned it.

You wrote:

> There are concepts like "k-dimensional equidistribution" etc. etc... where in some ways the requirements of a PRNG are far, far, higher than a cryptographically sound PRNG

I believe this claim is unequivocally false. A non-CS PRNG may be better because it's faster or otherwise easier to implement, but it's not better because it's less predictable. You've provided no reference for this claim except that PCG comparison table that I believe you've misunderstood per mananaysiempre's comments. It would be nice if you could either post something to support your claim or correct it.


He published a serosurvey that claimed to have found a signal in a positivity rate that was within the 95% CI of the false-positive rate of the test (and thus indistinguishable from zero to within the usual p < 5%). He wasn't necessarily wrong in all his conclusions, but neither were the other researchers that he rightly criticized for their own statistical gymnastics earlier.

https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaw...

That said, I'd put both his serosurvey and the conduct he criticized in "Most Published Research Findings Are False" in a different category from the management science paper discussed here. Those seem mostly explainable by good-faith wishful thinking and motivated reasoning to me, while that paper seems hard to explain except as a knowing fraud.


> He wasn't necessarily wrong in all his conclusions, but neither were the other researchers that he rightly criticized for their own statistical gymnastics earlier.

In hindsight, I can't see any plausible argument for an IFR actually anywhere near 1%. So how were the other researchers "not necessarily wrong"? Perhaps their results were justified by the evidence available at the time, but that still doesn't validate the conclusion.


I mean that in the context of "Most Published Research Findings Are False", he criticized work (unrelated to COVID, since that didn't exist yet) that used incorrect statistical methods even if its final conclusions happened to be correct. He was right to do so, just as Gelman was right to criticize his serosurvey--it's nice when you get the right answer by luck, but that doesn't help you or anyone else get the right answer next time.

It's also hard to determine whether that serosurvey (or any other study) got the right answer. The IFR is typically observed to decrease over the course of a pandemic. For example, the IFR for COVID is much lower now than in 2020 even among unvaccinated patients, since they almost certainly acquired natural immunity in prior infections. So high-quality later surveys showing lower IFR don't say much about the IFR back in 2020.


There were people saying right at the time in 2020 that the 1% IFR was nonsense and far too high. It wasn't something that only became visible in hindsight.

Epidemiology tends to conflate IFR and CFR, that's one of the issues Ioannidis was highlighting in his work. IFR estimates do decline over time but they decline even in the absence of natural immunity buildup, because doctors start becoming aware of more mild cases where the patient recovered without being detected. That leads to a higher number of infections with the same number of fatalities, hence lower IFR computed even retroactively, but there's no biological change happening. It's just a case of data collection limits.

That problem is what motivated the serosurvey. A theoretically perfect serosurvey doesn't have such issues. So, one would expect it to calculate a lower IFR and be a valuable type of study to do well. Part of the background of that work and why it was controversial is large parts of the public health community didn't actually want to know the true IFR because they knew it would be much lower than their initial back-of-the-envelope calculations based on e.g. news reports from China. Surveys like that should have been commissioned by governments at scale, with enough data to resolve any possible complaint, but weren't because public health bodies are just not incentivized that way. Ioannidis didn't play ball and the pro lockdown camp gave him a public beating. I think he was much closer to reality than they were, though. The whole saga spoke to the very warped incentives that come into play the moment you put the word "public" in front of something.


From what I can gather, the best estimates for pre-vaccine, 2020 Wuhan/Alpha strain IFR are about 0.5% to 0.8%, approaching 1%, depending very much on the age structure (age 75+ had an IFR of 5-15%).

The current effective IFR (very often post-vaccination or post-exposure, and with weaker strains) is much lower. But a 1% IFR estimate in early 2020 was entirely justified and fairly accurate.

For what it's worth, epidemiologists are well aware of the distinction between IFR, CFR, and CMR (crude mortality rate = deaths/total population), and it is well known that CFR and CMR bracket IFR.


Yeah, I remember reading that article at the time. Agree they're in different categories. I think Gelman's summary wasn't really supportable. It's far too harsh - he's demanding an apology because the data set used for measuring test accuracy wasn't large enough to rule out the possibility that there were no COVID cases in the entire sample, and he doesn't personally think some explanations were clear enough. But this argument relies heavily on a worst-case assumption about the FP rate of the test, one which is ruled out by prior evidence (we know there were indeed people infected with SARS-CoV-2 in that region at that time).

There's the other angle of selective outrage. The case for lockdowns was being promoted based on, amongst other things, the idea that PCR tests have a false positive rate of exactly zero, always, under all conditions. This belief is nonsense although I've encountered wet lab researchers who believe it - apparently this is how they are trained. In one case I argued with the researcher for a bit and discovered he didn't know what Ct threshold COVID labs were using; after I told him he went white and admitted that it was far too high, and that he hadn't known they were doing that.

Gelman's demands for an apology seem very different in this light. Ioannidis et al. not only took test FP rates into account in their calculations but directly measured them to cross-check the manufacturer's claims. Nearly every other COVID paper I read simply assumed FPs don't exist at all, or used bizarre circular reasoning like "we know this test has an FP rate of zero because it detects every case perfectly when we define a case as a positive test result". I wrote about it at the time because this problem was so prevalent:

https://medium.com/mike-hearn/pseudo-epidemics-part-ii-61cb0...

I think Gelman realized after the fact that he was being over the top in his assessment, because the article has since been amended with numerous "P.S." paragraphs which walk back some of his own rhetoric. He's not a bad writer, but in this case I think the overwhelming peer pressure inside academia to conform to the public health narratives got to even him. If the cost of pointing out problems in your field is that every paper you write has to be considered perfect by every possible critic from that point on, it's just another way to stop people flagging problems.


Ioannidis corrected for false positives with a point estimate rather than the confidence interval. That's better than not correcting, but not defensible when that's the biggest source of statistical uncertainty in the whole calculation. Obviously true zero can be excluded by other information (people had already tested positive by PCR), but if we want p < 5% in any meaningful sense then his serosurvey provided no new information. I think it was still an interesting and publishable result, but the correct interpretation is something like Figure 1 from Gelman's

https://sites.stat.columbia.edu/gelman/research/unpublished/...

I don't think Gelman walked anything back in his P.S. paragraphs. The only part I see that could be mistaken for that is his statement that "'not statistically significant' is not the same thing as 'no effect'", but that's trivially obvious to anyone with training in statistics. I read that as a clarification for people without that background.

We'd already discussed PCR specificity ad nauseam, at

https://news.ycombinator.com/item?id=36714034

These test accuracies mattered a lot while trying to forecast the pandemic, but in retrospect one can simply look at the excess mortality, no tests required. So it's odd to still be arguing about that after all the overrun hospitals, morgues, etc.


By walked back, what I meant is his conclusion starts by demanding an apology, saying reading the paper was a waste of time and that Ioannidis "screwed up", that he didn't "look too carefully", that Stanford has "paid a price" for being associated with him, etc.

But then in the P.P.P.S sections he's saying things like "I’m not saying that the claims in the above-linked paper are wrong." (then he has to repeat that twice because in fact that's exactly what it sounds like he's saying), and "When I wrote that the authors of the article owe us all an apology, I didn’t mean they owed us an apology for doing the study" but given he wrote extensively about how he would not have published the study, I think he did mean that.

Also bear in mind there was a followup where Ioannidis's team went the extra mile to satisfy people like Gelman, and:

> They added more tests of known samples. Before, their reported specificity was 399/401; now it’s 3308/3324. If you’re willing to treat these as independent samples with a common probability, then this is good evidence that the specificity is more than 99.2%. I can do the full Bayesian analysis to be sure, but, roughly, under the assumption of independent sampling, we can now say with confidence that the true infection rate was more than 0.5%.

After taking into account the revised paper, which raised the standard from high to very high, there's not much of Gelman's critique left, tbh. I would respect this kind of critique more if he had mentioned the garbage-tier quality of the rest of the literature. Ioannidis's standards were still much higher than everyone else's at that time.


It's good that Ioannidis improved the analysis in response to criticism, but that doesn't mean the criticism was invalid; if anything, that's typically evidence of the opposite. As I read Gelman's complaint of wasted time and demand for an apology, it seems entirely focused on the incorrect analysis. He writes:

> The point is, if you’re gonna go to all this trouble collecting your data, be a bit more careful in the analysis!

I read that as a complaint about the analysis, not a claim that the study shouldn't have been conducted (and analyzed correctly).

Gelman's blog has exposed bad statistical research from many authors, including the management scientists under discussion here. I don't see any evidence that they applied a harsher standard to Ioannidis.


Bank of America Travel Rewards is effectively 2.625% on everything if you put >$100k in a Merrill account and redeem the rewards against "travel" (including restaurants in any location) expenses charged to the card. There's no foreign transaction fee.

The biggest downside is all the dark patterns at Merrill trying to sell you advisory services. That seems to be only upon account opening, though.


Bank of America Preferred rewards card is even better with one caveat that there is a $95 annual fee. You get the 2.625 but you also get 3.5 on travel and dining, and you also get Global Entry paid once every 4 years, and $100 in travel incidentals (bag fees, wifi on plane) every year. And you do not need to redeem against travel, you can get cash back if that's what you want.


Yes, that’s a great cashback setup, but definitely not a “no catch” one. Really not a fan of the brokerage UX and generally the BofA web/app experience.


If all you’re doing is buying and holding VOO or TTTXX or similar, I don’t see what difference any of the major brokerage websites makes. If anything, I prefer BoA as they allow you to configure at least 2 phone numbers for SMS 2FA, so I don’t have to hunt down my wife to get the 2FA code.

