Selective training data, LoRA fine-tuning, or MoE are other solutions. Sure, creating a model with 100 billion parameters will yield good results, but it's sort of like employing a million random people to play darts. Or shooting sparrows with a nuclear bomb.
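For context, LoRA's core trick is to freeze the big pretrained weight matrix and train only a tiny low-rank update on top of it. A rough numpy sketch of the idea (all names, sizes, and the scaling choice here are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4   # rank-4 adapter on a 64x64 layer
alpha = 8                    # common LoRA-style scaling hyperparameter

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero init

def lora_forward(x):
    # Base output plus the low-rank update B @ A, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)

# With B initialized to zero, the adapter starts as a no-op,
# so training begins from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)

# The point: you train r*(d_in + d_out) numbers instead of d_in*d_out.
print(f"trainable: {r * (d_in + d_out)} vs full fine-tune: {d_in * d_out}")
```

That parameter ratio is why it beats the "million dart players" approach: most of the giant matrix stays untouched.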
There have been two attacks on the home of Sam Altman in recent days, which he for some odd reason finds perplexing. One could hope the attacks would have a similar effect as Sarah Connor's visit to the lead engineer at Skynet, but it seems Altman will just double down. Anyways, I wanted to do a little deep dive into OpenAI's new policy document, which mostly feels like fluff, but it had a few things about «responsibility» that caught my eye.