> As Philippe Schnoebelen discovered in 2002 [1], languages cannot reduce the difficulty of program construction or comprehension.
That result is from a model-checking point of view. This is about taking a proof-theoretic approach...
Your last paragraph is also quite wrong: a machine-learning system could very well learn to easily solve an NP-complete problem, because NP-completeness says nothing about average-case complexity (and we should consider probabilistic complexity classes, so the picture is even more "complex").
> That result is from a model-checking point of view. This is about taking a proof-theoretic approach...
No. In complexity theory we deal with problems, and the model-checking problem is that of determining whether a program satisfies some property or not. If your logic is sound, you can certainly use an algorithm based on the logic's deductive theory (which could be type theory, but that's an unimportant detail) to decide the problem, but that can have no impact whatsoever on the complexity of the problem. The result applies to all decision procedures, be they model-theoretic or deductive (logic-theoretic).
> Your last paragraph is also quite wrong: a machine-learning system could very well learn to easily solve an NP-complete problem, because NP-completeness says nothing about average-case complexity
No. First, it's unclear what "average complexity" means here, but for any reasonable definition, the "average complexity" of NP-hard problems is not known to be tractable. Second, complexity theory approaches this issue (of "some instances may be easier") using parameterised complexity [1], and I'm afraid that the results for the model-checking problem - which, again, is the inherent difficulty of knowing what a program does regardless of how you do it - are not very good. I mentioned such a result in an old blog post of mine here [2]. (Parameterised complexity is more applicable than probabilistic complexity here because even if there were some reasonable distribution of random instances, it's probably not the distribution we'd care about.)
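To give a concrete taste of what such a parameterised-hardness result looks like (a standard textbook example, not necessarily the one from [2]):

```latex
% Parameterised model checking of first-order logic (Flum & Grohe):
% input a finite structure A and an FO sentence \varphi, parameter k = |\varphi|.
% The problem is AW[*]-complete, so no algorithm running in time
% f(|\varphi|) \cdot |A|^{O(1)} is believed to exist: the blow-up cannot be
% confined to the (small) formula while staying polynomial in the (large)
% structure, i.e. the program being analysed.
\[
\textsc{p-MC(FO)} \text{ is } \mathrm{AW}[*]\text{-complete}
\;\Longrightarrow\;
\textsc{p-MC(FO)} \notin \mathrm{FPT} \text{ unless } \mathrm{FPT} = \mathrm{AW}[*].
\]
```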
There is no escape from complexity limits, and the best we can hope for is to find out that problems we're interested in have actually been easier than we thought all along. Of course, some people believe that the programs people actually write are somehow in a tractable complexity class that we've not been able to define - and maybe one day we'll discover that that's the case - but what we've seen so far suggests it isn't: if programs that people write were somehow easier to analyse, then we'd expect the size of programs we can soundly analyse to grow at the same pace as the size of programs people write, and nothing could be further from what we've observed. The size of programs that can be "proven correct" (especially using deductive methods!) has remained largely the same for decades, while the size of programs people write has grown considerably over that period.
I'm not sure what to make of TFA (I don't have time right now to investigate in detail, but the subject is interesting). It starts by saying you can stop generation as soon as you have an output that can't be completed -- and there are already more advanced techniques that do that. If your language is typed, then you can use a "proof tree with a hole" and check whether there's a possible completion of that tree.
References are "Type-Constrained Code Generation with Language Models" and "Statically Contextualizing Large Language Models with Typed Holes".
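To illustrate what I mean by a "proof tree with a hole" (my own toy sketch for a simply-typed language, not taken from either paper): generation can be rejected as soon as no filling of the hole could possibly typecheck.

```haskell
-- Toy "partial proof tree" check: is there ANY completion of the holes
-- in a partial term that could have the requested type?
module Holes where

data Ty = TInt | Ty :-> Ty deriving (Eq, Show)
infixr 5 :->

data Term
  = Var String
  | Lam String Ty Term
  | App Term Term
  | Lit Int
  | Hole                -- a subterm the generator hasn't produced yet
  deriving Show

type Ctx = [(String, Ty)]

-- Holes are treated optimistically (assumed fillable); a real system
-- would run proof search / type inhabitation here instead.
check :: Ctx -> Term -> Ty -> Bool
check _   Hole        _  = True
check _   (Lit _)     ty = ty == TInt
check ctx (Var x)     ty = lookup x ctx == Just ty
check ctx (Lam x a b) ty = case ty of
  a' :-> r -> a == a' && check ((x, a) : ctx) b r
  _        -> False
check ctx (App f a)   ty =
  -- We have to guess the argument type; enumerating candidates is where
  -- this gets hard (and undecidable in richer systems). Here we only
  -- try TInt and the types already in scope.
  any (\arg -> check ctx f (arg :-> ty) && check ctx a arg)
      (TInt : map snd ctx)

-- (\x:Int. _) can still complete at Int -> Int ...
demoOk :: Bool
demoOk = check [] (Lam "x" TInt Hole) (TInt :-> TInt)   -- True

-- ... but a literal can never complete to a function type: prune now.
demoBad :: Bool
demoBad = check [] (Lit 3) (TInt :-> TInt)              -- False
```

Real systems do much better than the optimistic hole rule here (e.g. running an inhabitation search), but the shape of the check is the same.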
Then it switches to using an encoding that would be more semantic, but I think the argument is a bit flimsy: it compares chess to the plethora of languages that LLMs can spout somewhat correct code for (which is behind the success of this generally incorrect approach).
What I found more dubious is that it brushed off syntactical differences with "yeah, but they're all semantically equivalent". Which, it seems to me, is kind of the main problem here: basically any proof is an equivalence of two things, but it can be arbitrarily complicated to see it. If we consider this problem solved, then sure, we can get better things...
I think without some e.g. Haskell PoC showing great results these methods will have a hard time getting traction.
Please correct any inaccuracies or incomprehension in this comment!
On existing techniques - the Type-Constrained Code Generation paper is discussed in the blog post (under Constrained Decoding), and I'd group typed holes in the same bucket.
The problem with those methods is that they're inference-time: they don't update the weights. Constrained decoding prevents the model from saying certain things without changing what the model wants to say. This gets more problematic the more complex your type system is, and that's before taking into account that type inference is undecidable for many of these systems.
Meaning, if I give you a starting string, then in the presence of polymorphism and lambdas you might not always be able to tell whether it completes to a term of a particular type.
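For context, the shape of such an inference-time method is roughly this (my simplification; scoreTokens and stillCompletable are hypothetical stand-ins for the frozen model and the typechecker):

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

type Token = String

-- Greedy type-constrained decoding: mask out tokens that cannot lead
-- to a well-typed program. The model's scores themselves never change.
decode :: (String -> [(Token, Double)])  -- scoreTokens: the frozen weights
       -> (String -> Bool)               -- stillCompletable: type oracle
       -> String                         -- prefix generated so far
       -> Int                            -- step budget
       -> String
decode scoreTokens stillCompletable = go
  where
    go prefix 0 = prefix
    go prefix n =
      case filter (stillCompletable . (prefix ++) . fst) (scoreTokens prefix) of
        []    -> prefix                           -- dead end: no legal token
        legal -> let (tok, _) = maximumBy (comparing snd) legal
                 in go (prefix ++ tok) (n - 1)
```

The critique is visible in the types: scoreTokens is a constant, so masking changes what the model may say, never what it has learned to prefer; and stillCompletable has to be a decidable approximation, which is exactly where polymorphism and lambdas bite.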
On the syntactic difference: I'd gently reframe. The question isn't whether syntactically different programs are semantically equivalent, it's that regardless of which form you pick, the existing methods don't let the model learn the constructor choice.
Thank you for your reply. FTR, I find the subject very interesting and I hope there will be more work on this line of approach.
> The problem with those methods is that they're inference-time
I agree, I just thought it was missing some prior art (not affiliated with these papers :-P)
What is not clear to me at all: is this the draft of a research idea?
Or is there already some implementation coming in a later post?
It seems to me that such an idea would be workable on a given language with a given type system, but it seems to me there would be a black magic step to train a model that would work in a language-agnostic manner. Could you clarify?
There is an existing implementation validating this idea, and the plan is to make it publicly available at some point.
> It seems to me that such an idea would be workable on a given language with a given type system, but it seems to me there would be a black magic step to train a model that would work in a language-agnostic manner.
That's correct. The blog post alludes to infrastructure building as a necessary component of making that happen for that exact reason. I.e. while it's "easy" to generate a dependent pair in this way, generating an entire dependently typed AST is much more difficult. On the positive side, this is more of a software engineering effort rather than a research one.
I've had my digital art flagged a few times for various reasons (automatic copyright-infringement and NSFW filters) -- so this is nothing new (in particular, the artwork blocked the upload of some artists' songs). The only thing needed is a reasonable appeal process. In all cases we got an automated approval after appeal, but it can introduce an untimely delay.
Honestly I hope that the AI filter will be much better in terms of false positives than the aforementioned ones, if only because it should be easier via statistical methods.
I have been thinking a lot about a notion of self-paradoxical knowledge, meaning knowledge that actively makes your reasoning worse.
For example, knowledge of extremely rare diseases causes the mind to overestimate their importance by many orders of magnitude (there are many variants of this effect).
Or: attempts to explain certain concepts of the object/subject construction tend to use language grounded in the notion of a shared objective reality, which takes you further from a true understanding of the concept -- in other words, "the tao that can be named is not the tao".
I didn't think "There Is No Antimemetics Division" did very well with its premise, but the premise is quite fascinating, and it's the closest I've seen to this concept. Are there other explorations of similar ideas?
> I have been thinking a lot about a notion of self-paradoxical knowledge, meaning knowledge that actively makes your reasoning worse.
There could be a hypothetical class of ideas where just knowing about them is actively harmful. For a fictional example, imagine learning how to detect a hostile alien race that has been living with us on Earth all this time. Or if one day we invent a thought experiment that induces psychosis in anyone who tries to unravel it.
I had seen this, but all the examples correspond to having an actual, external threat as a result of this knowledge.
I was thinking more of the Buddhist parable that men don't know when they'll die because only buddhas are able to live with that information.
I guess it's very close to 'malinformation', but that is still about an external actor manipulating what you know toward an external goal, rather than something intrinsic to the information.
The opposite of this is also fascinating to me. There are false beliefs that make people who hold them better by some metrics. Like the idea that hard work leads to success. We all know there is some element of luck, but even so, people who discount luck and believe only in hard work tend to do better.
That's a good point. I think this one can easily be resolved on a factual level, since hard work is one of the few variables you can actually control. But it is more interesting from an emotional point of view, since in many cases it would be an article of faith held with the implicit fear that it might not be true.
There are variations of this, such as composition theory in art getting good results based on completely false assumptions, but these tend to fall under epistemic underdetermination.
There is a similar story in Chinese painting. The language of painting was refined over millennia, but the last two centuries saw an extremely rapid integration of Western influences.
This article is interesting because language is like water to a fish, the invisible medium humans live through. Since art is more 'foreign' and 'superfluous', the changes were more obvious, and there was much more debate about this evolution than there was in linguistics.
I once talked with a painter in the artistic lineage of Shi Guoliang, and he told me he was well aware how much that style could be seen as "Western art painted with a Chinese brush".
I think the criticism was directed more at such painters than at, say, the Lingnan school, which explicitly sought to revitalize Chinese painting through foreign influences, because the change is really in the foundations of the painting -- how perspective and light are handled through the 'scientific' system rather than the elaborate symbolic system of classical painting.
> We assert that artificial intelligence is a natural evolution of human tools developed throughout history to facilitate the creation, organization, and dissemination of ideas, and argue that it is paramount that the development and application of AI remain fundamentally human-centered.
While this is a noble goal, it seems obvious that this isn't how it usually goes.
For instance, "free market" is often used as a dogma to defend companies that are actively harmful to society, as "globalization" might be.
An unstoppable force, so any form of opposition is "luddite behavior".
Another one is easier transport and remote communication, which have generally broken down the social fabric.
Or social media wreaking havoc on teens' minds.
From there, it's easy to see why the technological system might be seen as an inherent evil.
In 1872's Erewhon, Butler already described the technological system as a force that human society could no longer contain once it tolerated it.
There are already many companies persecuting their employees for not using AI enough, even when the employee's response is that the quality of the AI's output is not good enough for the work at hand, rather than any ideological objection.
I'm neither optimistic nor pessimistic about the changes that AI might bring, but hoping it to become "human-centered" seems almost as optimistic as hoping for "humane wars".
> "free market" is often used as a dogma against companies that are actively harmful to society
This is a predominantly America-specific piece of propaganda, and it's pretty recent.
Adam Smith's ideas are primarily arguments against mercantilism (e.g. things like using tariffs to wield self-interested state power), something he showed to be against the common good. The "invisible hand" concept is used to show how self-interested action can, under conditions of *competitive markets*, lead to unintentional alignment with the common good.
Obviously that's a significant departure from the way it's commonly used today, where Thiel's book has influenced so many entrepreneurs into believing Monopolies are Good.
But the history of this is very Cold War-influenced, where "free markets" were politically positioned as alternatives to the USSR's "planned economy", and slowly pushed to depart further and further from Adam Smith's original argument about moral philosophy.
The government is a rare example of an extremely strong monopoly and not just a duopoly or a company holding significant marketshare. And yet people never seem to criticise it on those grounds despite it suffering from all of the same problems that corporate monopolies are accused of.
The amount of political control individuals have in a democracy is much less than people think. I get to vote every few years for a small preselected handful of representatives, none of whom represent me or my views, and that vote is usually meaningless anyways. In my political life none of the parties I have voted for have won an election. So tell me how much control of the government do I have? How much do any of us have? Especially when the government itself often goes against the wishes of their voters anyways. The reason democracy is preferable to a dictatorship is because it prevents the government from doing things which are widely unpopular, like committing acts of genocide and political violence. But that's it really.
Remember, governments have a monopoly in areas of life that impact everybody constantly. Private monopolies do not; in every case you can simply ignore them and not purchase their products, and by doing so you exert more control over them than you do by casting a meaningless vote every few years.
The idea of democracy being will of the people is pure fantasy. It's useful to prevent governments from doing really bad things, that's it. It doesn't even matter who you vote for, broadly speaking. Just that you are able to exert that pressure.
It has been funny to watch the rise of "China is beating us" rhetoric against the steady backdrop of "mercantilism is obsolete/bad" dogma, because the elephant in the room is that China has been running a textbook mercantilist playbook.
Looks to my uneducated eye like China has been enforcing competitive markets internally, with consistent economic policies. Meanwhile the US has stopped enforcing antitrust altogether and keeps violently changing its mind about economic policy every four years.
And then externally Chinese policy is oriented around suppressing the value of its currency, which is basically a monopolist's tactic--artificially lowered prices in order to crowd out competition.
I think that's mercantilist-ish, but kind of a modern version?
It's definitely the opposite of what the US does: the dollar is the world's reserve currency, and that demand drives its price above what trade alone would justify, which I guess makes exporting from the US much more difficult?
Anyone who is an expert in global economics please correct me here :)
I mean they did execute a wealthy banker a couple years ago. So I think the mercantile class occupies a different place in society there than in America
> Thiel's book has influenced so many entrepreneurs into believing Monopolies are Good.
Haven't read his book, but the idea that monopolies are good isn't typically made in a vacuum, it's made relative to alternatives, most often "ham-fisted government intervention". It's easier to take down a badly behaving monopoly than to change government, so believing monopolies are better than the alternatives seems like a decent heuristic.
> Haven't read his book, but the idea that monopolies are good isn't typically made in a vacuum, it's made relative to alternatives, most often "ham-fisted government intervention". It's easier to take down a badly behaving monopoly than to change government, so believing monopolies are better than the alternatives seems like a decent heuristic.
What? How is the first alternative poor government instead of multipolar competing companies? When was the last time a monopoly was actually broken up in the US? AT&T/Bell, 40 years ago? lol
A monopoly implies an organization powerful enough to stop competition, so a solution that relies on competitors seems fatally flawed: if there are enough competitors to meaningfully compete, then there isn't a monopoly.
This seems like a classic straw man argument. Plutocratic oligarchs have been making the argument that private monopolies are better than representative democracy at basically any societal function for decades without any actual data.
Economic behavior is inherently game theoretic - agents take various actions and get some positive/negative reward as a result. Whether an agent's reward is positive or negative and of what magnitude, depends on the strategies employed by all agents. If some agents adopt new strategies, the reward calculus for everyone involved can completely change [1].
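A stock two-player example makes the point (my illustration, not from [1]): in the stag hunt, the value of the very same action depends entirely on the other agent's strategy.

```latex
% Stag-hunt payoffs (row player, column player).
% Hunting stag is the best reply only if the other player hunts stag;
% if they switch strategies, the reward for the same action flips.
\[
\begin{array}{c|cc}
            & \text{Stag} & \text{Hare} \\ \hline
\text{Stag} & (4,4)       & (0,3)       \\
\text{Hare} & (3,0)       & (3,3)
\end{array}
\]
```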
Over the past few centuries, countless new economic structures and strategies have been discovered and practiced. The rewards for the same action today and in the past can be completely different due to this.
So to me, if someone claimed more than a few decades ago that certain economic strategies and structures are good or bad, it's simply not worth listening to them unless someone reconfirms that the old finding still holds against the latest range of strategies. In that case, the credit and citation go to that new someone, not to the ghosts of the past.
Game theory is just math. As with any math, the calculations can all work out, but that says nothing of how well it reflects nature. All you can say is that if the axioms are all true, then this is the necessary outcome. Look to string theory for a cautionary tale here.
Game theory assumes rational agents. But we have over six decades of behavioral science contradicting that fundamental assumption. Economic behavior is not necessarily rational, and consequently it is not inherently game-theoretic. You will find plenty of dogmatic, idealistic, superstitious, counterproductive, etc. behaviors in an economy. You need psychology, and not just math, to describe the economics that happens in the real world.
Game theory definitely does not require rational agents. Game theory says there are agents with certain specified strategies. Whether a strategy is rational under the underlying theory of value the agent adopts is a completely separate matter. For example, it's very common to study agents who always take one action no matter what others do or what the reward function is. It's hard to call such actors rational, but that does not stop us from studying them.
When I said rational I meant it in this way: that rational agents will act so as to maximize their utility (or reward function). In psychology we call this the theory of homo economicus [1]. Regardless of theories of value, perceived rewards, and utilities, human beings have biases, prejudices, superstitions, dogmas, etc. In real economies these non-strategic behaviors are consistently observed. This is why I say that economics is not just game theory; it is psychology, sociology, and religion as much as it is a zero-sum game between actors.
> It's hard to call such actors rational, but that does not stop us from studying them.
This is precisely why I object to your first post. "Shut up and do the math" has not worked the wonders string theorists hoped for. Game theory is a perfectly fine way to do mathematics, and to study certain strategies, but it tells us nothing about the nature of economies in the real world.
> if someone claimed more than a few decades ago that certain economic strategies and structures are good or bad
As you point out, it is all game theory.
But things that arrange for the game to be more beneficial to everyone, that align our interests more, deserve to be called "good", regardless of their inability to universally do so.
The latter would be an impossible bar for anything.
Where I find things frustrating, is when someone thinks because something is "good", it somehow becomes "enough". (Think, capitalized versions of different economic schools of thought.)
Not even sure AI was ever "human-centered". DARPA funded a large amount of AI research from the 1960s on. The DART tool used in the Gulf War made back all of their previous investments. [1]
To restate that: DARPA funded research programs that produced the algorithms behind a logistics tool, which let the Department of War (aka Department of Defense at the time) run the first Gulf War efficiently enough that DARPA more than broke even on the grants it had given out.
I just re-read my comment (too late for edits) and there are a number of typos (including missing "not") that significantly degraded the syntax, but the point kind of came across anyway.
No it wasn't. Look at Joseph Stiglitz (Globalization and Its Discontents) and Ha-Joon Chang (Bad Samaritans, Kicking Away the Ladder) for counter-examples.
Why not, because they're too old to learn, or because the support infrastructure is not there? I believe most people are capable of continued learning, but they might need help (financial etc.) to make the transition.
This isn't correct. The deal is that the poor countries get development and increased employment, and the rich countries get lower prices. Generally speaking both types of countries get richer.
That some workers lost their jobs is a symptom of any change. I don't know why people always get upset about people losing their jobs. It's like death: if no one died, relatively few people would be born. If you resist job losses, you reduce overall employment and economic development.
Are you serious? People get upset about losing jobs because they need jobs to pay their bills. Further, we often build our life identities around work; if you're a good car mechanic or a successful restaurant owner, you're proud of that. It's a part of you.
Having to repeatedly restart your career is risky, painful, and demoralizing. I have no problem seeing why people don't like that and why it can lead to populist backlash or even violent revolutions (as it did in the past).
By the way, to address your closing comment: people don't like dying either and tend to get upset when others die?
I don't think the point is that the transition isn't difficult. It is that there is an overall benefit that outweighs the challenges of the transition.
The sad part is that industrializing societies have not been very good at reconciling the benefits with the costs. The benefits go first to a select few and have seeped out to the masses slowly. Railroads in the US are a good example. The wealth accumulated by the Vanderbilts, Hills, and Harrimans did not get redistributed in any kind of equitable manner. However, everyday people did eventually gain a lot of benefit from those railroads through economic expansion. (None of which addresses the loss of the Native Americans, whose losses should also be part of the equation.)
My impression is that the transition is such an open-ended process that you can’t really call it that. It’s unclear if and when the challenges will be overcome.
> I don't think the point is that the transition isn't difficult. It is that there is an overall benefit that outweighs the challenges of the transition.
In the abstract, sure. But not when you're on the receiving end. It's like with NIMBYism: we all roll our eyes at NIMBYs until it's actually our own backyard. You're not going to convince a coal miner that they're better off learning to code. You're not going to convince a software engineer that they're better off in the mines.
You're missing my point. Job losses are a fact of life, just like death. Why should people get upset about the fact that someone might lose their job, or die? It's not immoral. These things happen constantly to millions of people; if we got upset about every one of them, we'd be worn out. We happily send young healthy people to their death fighting in wars so we don't have to. I don't see people weeping because the armed forces exist.
Or is this just some sort of PC bullshit, that we can't talk about this sort of progress without carefully lamenting job losses? If you're not useful doing a job, why should you be employed in it? That's the bottom line.
Society is better off if we sacrifice one horse-and-buggy driver job for two engineering jobs. The drivers suffer from that, but the net win for society is so plainly obvious that it's a better investment to retrain the drivers or just pay them off rather than support a job that's dying anyway.
> Society is better off if we sacrifice one horse-and-buggy driver job for two engineering jobs.
That's a "statistic" you're pulling out of your butt, and it's doing a lot of work. No one ever knows if something like that will actually happen.
It could actually turn out that AI sacrifices 100 engineering jobs for 10 low-level service or prostitution jobs and a crap-ton of wealth to those already rich.
> The drivers suffer from that, but the net win for society is so plainly obvious that it's a better investment to retrain the drivers or just pay them off rather than support a job that's dying anyway.
But what actually happens is our free-market society doesn't give a shit. No meaningful retraining happens, no meaningful effort goes into cushioning the blow for the "horse and buggy driver." Our society (or more accurately, the elites in charge) go tell those harmed to fuck off and deal with it.
> It could actually turn out that AI sacrifices 100 engineering jobs for 10 low-level service or prostitution jobs and a crap-ton of wealth to those already rich.
That's where wealth redistribution (Taxation) comes in. The USA is not good at progressive taxation, but everyone could be better off if it were implemented properly.
First, you are thinking of people whose main source of wealth are assets, not income.
Also, I don't think inequality is a problem; better that than everyone being equal, because as history shows, even then some are more equal than others, and others are just poor.
It would be OK if people lost jobs and the products and services became correspondingly cheap.
But it would be a problem if people lost jobs and all the products and services kept costing the same as before, and got costlier over time as before...
The ELIZA program, released in 1966, one of the first chatbots, led to the "ELIZA effect", where normal people would project human qualities upon simple programs. It prompted Joseph Weizenbaum, its author, to write "Computer Power and Human Reason" to try to dispel such errors. I bought a copy for my personal library as a kind of reassuring sanity check.
Flying back from Beijing, I had bought a lot of books. I filled my bags with them, so they were very heavy. When the agent came to check my backpack, he casually grabbed it and fell onto the conveyor belt trying to lift it. He looked at me in shock. "I'm done," I thought. He opened the bag and saw, on top of the books, a box of zongzi the university had given me. He instantly became radiant, gave me a pat on the back, and just indicated the way.
I know it's a joke, and they probably catch only a tiny minority of cases... but the Chinese government makes a huge theater of executing people who do stuff like this.
Also not sure about the usage of theater there. I'd probably swap it out for "show". Never heard theater used like that although it is pretty close to a standard idiom, "to make a show of something".
Let's take the definition of "video-game art" as the art of defining interactive experiences that open themselves up upon mastery.
This is the original definition of what video-games were at the start (Pac-Man, Space Invaders).
The mastery takes effort to learn; the game, to incentivize this effort, rewards the player when they do well and punishes them when they don't.
The almost immediate nature of the feedback loop makes learning faster than almost any other human activity.
Given the nature of the medium, you can tackle a theme (space invaders), and even a story on top of it.
This is good for critics; they know stories, they know that books are the highest form of art for intellectuals.
The currency of critics in the system (the media/advertisement/entertainment industry loop) is credentialism -- except for purely independent critics, who have their own platform and exist through a complex bidirectional relationship with their audience.
However, the story is almost always at odds with gameplay.
A story limits the freedom with which the gameplay system can respond to the player, by railroading certain outcomes.
Often, adapting a story implies scenes that cannot all fit into one game genre, so it's better suited to a collection of mini-games than to what people generally consider a game.
Video-game stories tend towards tropes that avoid causing such problems, such as the 'big tournament' arc.
Of course, certain genres have much more freedom (RPGs), but even there a definite story means certain characters can't die, or have to, etc., which removes the meaning of player choices.
The mastery approach hasn't gone away. But critics hate it; the general philosophy of the industry is inclusivity, which is directly at odds with a competitive ranking of players according to skill. Mastery requires effort and rewards innate ability -- reflexes, memory, the ability to do mental computations, etc. are all advantages that generally translate directly into in-game advantages. So the critics' industry has been relentless in disparaging games that directly emphasized mastery (arcade designs, the infamous 'God Hand' review) and in elevating what are generally called 'movie-games' that have worked at eliminating these aspects ('The Last of Us', later 'God of War') to let all players experience the story fully without interacting with the gameplay in any meaningful manner.
They had to compromise because of the success of Dark Souls, which brought mastery back to the forefront, but this is where the total incompetence of mainstream critics is truly glaring (see the infamous 'Cuphead' journalist moment). As a result, their critiques are rarely anything more than press releases with a final score based on production value rather than on any insight into the depth of game mechanics and systems.
I'm surprised not to see Chris Crawford mentioned, as The Art of Computer Game Design (1984) makes the central point of this article at the very beginning, and is a primary source of video-game critique.
It seems the transparent shader version is badly aliased? The effect is less noticeable in Chrome than in Firefox, but it is still quite visible. This defeats the purpose of vector graphics...
It's a nice trick to play around with, but that limits its usefulness.
It is indeed badly aliased. The technique demonstrated does not take into account antialiasing in the initial render, which causes this issue. There are ways to improve it, but I would advise against this approach in general since it doesn't handle these edge cases well.
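For what it's worth, one common improvement (sketched here as math; a generic fix, not something from the article): if the shape is available as a signed distance d(p) to the edge, coverage-style antialiasing ramps the pixel alpha over roughly one pixel of screen space:

```latex
% Coverage-style antialiasing from a signed distance d(p) (negative inside).
% \lVert \nabla d(p) \rVert is the per-pixel change of d (fwidth in shaders),
% so alpha transitions over about one pixel around the edge.
\[
\alpha(p) = \operatorname{clamp}\!\left(\frac{1}{2} - \frac{d(p)}{\lVert \nabla d(p) \rVert},\; 0,\; 1\right)
\]
```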