CUDA variants extend several programming languages, including C, C++ and Fortran.
None of the extended languages is the same as its base language, just as OpenMP C++ is not the same as C++, OpenMP Fortran is not the same as Fortran, and SYCL is not the same as C++.
The extended languages include both extensions and restrictions of the base language. In the part of a program that will run on a GPU you can do things that cannot be done in the base language, but there are also parts of the base language, e.g. of C++, that are forbidden.
All these extended languages have the advantage that you can write in a single source file a complete multithreaded program, with parts running concurrently on a CPU and parts running concurrently on a GPU, but for the best results you must know the rules that apply to the language accepted by each of them. It is possible to write programs that run without modification on either a CPU or a GPU, but this comes at the cost of lower performance on both, because such a program uses only generic language features that work everywhere, instead of taking advantage of device-specific features.
"Hot" companies with stupid managers often have such workdays.
In the case of engineers and programmers, the amount of useful completed "work" has only a very weak correlation with the length of the workdays.
Good engineers or programmers will think most of the time about the problems that they must currently solve anyway, regardless of whether they are in the office, at home or in any other place, and regardless of whether to an external observer they appear to be "working" or appear to do nothing.
Programmers who spend all day typing lines of code into a computer are more likely not to be competent programmers, because otherwise they would have found ways to automate such activities that require continuous physical involvement, which make it impossible to allocate enough time for thinking about the right solution.
If whatever they do does not require true thinking, then that is the kind of job that can be done by AI agents.
The claim that it was designed for Ada was just marketing hype, like today's attempts to sell processors "designed for AI".
The concept of iAPX 432 had been finalized before Ada won the Department of Defense competition.
iAPX 432 was designed based on the idea that such an architecture would be more suitable for high-level languages, without having Ada or any other specific language in mind at that time.
The iAPX designers thought that the most important feature that would make the processor better suited for high-level languages was to disallow the direct addressing of memory and to control memory accesses in such a way as to prevent any access outside the intended memory object.
The designers made many other mistakes, but an important one was that the object-based memory-access control they implemented was far too complex in comparison with what could be implemented efficiently in the available technology. Thus they could not implement everything in one chip and had to split the CPU across multiple chips, which created additional challenges.
Eventually, the "32-bit" iAPX 432 was much slower than the 16-bit 80286, even though the 80286 had also been contaminated by the ideas of the 432: it had a much too complicated memory-protection mechanism, which was never fully used in any relevant commercial product and was replaced by the much simpler paged memory of the 80386.
The failure of 432 and the partial failure of 286 (a very large part of the chip implemented features that have never been used in IBM PC/AT and compatibles) are not failures of Ada, but failures of a plan to provide complex memory access protections in hardware, instead of simpler methods based on page access rights and/or comparisons with access limits under software control.
Now there are attempts to move some parts of memory-access control back into hardware, like CHERI (implemented, for example, in Arm's Morello), but I do not like them. I prefer simpler methods, like the conditional traps of IBM POWER, which allow cheaper checking of out-of-bounds accesses without any of the disadvantages of approaches like CHERI, which require special pointers that consume resources permanently, not only where they are needed.
I do not know much about the architecture of Rational/R1000s400, but despite that I am pretty certain that the claims that it was particularly good for implementing Ada were not true.
Ada can be implemented on any processor with no particular difficulties. There are perceived difficulties, but those are not difficulties specific to Ada.
Ada is a language that demands correct behavior from the processor, e.g. the detection of various error conditions. The same demands should be made for any program written in any language, but the users of other computing environments have been brainwashed by vendors into not demanding correct behavior from their computers, so that the vendors could increase their profits by omitting the circuits needed to enforce correctness.
Thus Ada may be slower than it should be on processors that do not provide appropriate means for error detection, like RISC-V.
However, that does not have anything to do with the language. The same problems will affect C, if you demand that so-called undefined behavior be implemented as raising exceptions that signal errors when they happen. If you implement Ada in YOLO mode, like C is normally implemented, Ada will be as fast as C on any processor. If you compile C with the sanitizer options enabled, it will have the same speed as normal Ada on the same CPU.
In the case of Rational/R1000s400, besides the fact that it must have had features that would be equally useful for implementing any programming language, it is said that it also had an Ada-specific instruction, for implementing task rendez-vous.
This must have been indeed helpful for Ada implementers, but it really is not a big deal.
The text says that "the notoriously difficult to implement Ada Rendez-Vous mechanism executes in a single instruction"; I do not agree with "notoriously difficult".
It is true that on a CPU without appropriate atomic instructions and memory barriers, any kind of inter-thread communication becomes exceedingly difficult to implement. But with the right instructions, implementing the Ada rendez-vous mechanism is simple. Even an Intel 8088 would have had no difficulty implementing it, while on 80486 and later CPUs such implementations can reach maximum efficiency.
While in Ada the so-called rendez-vous is the primitive used for inter-thread communication, it is a rather high-level mechanism, so it can be implemented with a lower-level primitive: the sending of a one-way message from one thread to another. One rendez-vous between two threads is equivalent to two one-way messages (i.e. one from the first thread to the second, then one in the reverse direction). So a correct implementation of the simpler mechanism of one-way inter-thread messages makes the implementation of rendez-vous trivial.
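To make the reduction concrete, here is a minimal sketch in Python (the names `request` and `reply` are mine, not Ada's): one synchronous rendez-vous built from two one-way message sends over blocking queues. The client blocks until the reply arrives, which gives the synchronous semantics of the rendez-vous.

```python
import queue
import threading

request = queue.Queue()   # first one-way message: client -> server
reply = queue.Queue()     # second one-way message: server -> client

def server():
    x = request.get()     # accept the "entry call"
    reply.put(x * 2)      # complete the rendez-vous with a result

t = threading.Thread(target=server)
t.start()

request.put(21)           # client makes the entry call
result = reply.get()      # client blocks until the rendez-vous completes
t.join()
print(result)             # 42
```

Any primitive that provides a blocking one-way message (a mailbox, a pipe, a channel) suffices for the same construction.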
The rendez-vous mechanism was put into the language specification, although its proper place would have been a standard library, because this was mandated by the STEELMAN requirements published in 1978-06, one year before the closing of the DoD language contest.
So this feature was one of the last added to the language, because the Department of Defense requested it only in the last revision of the requirements.
An equivalent mechanism was described by Hoare in the famous CSP paper. However, CSP was published a couple of months after the STEELMAN requirements.
I wonder whether the STEELMAN authors arrived at this concept independently, or whether they had read a preprint of Hoare's paper.
It is also possible that both the STEELMAN authors and Hoare were independently inspired by the Interprocess Calls of Multics (1967), which were equivalent to the rendez-vous of Ada. However, the very close coincidence in time between the CSP publication and the STEELMAN revision of the requirements makes it plausible that a preprint of Hoare's paper prompted this revision.
The 286 worked perfectly fine. If you take a 16-bit Unix and run it on a 286 with enough memory, it runs fine.
Where it went wrong is in two areas: 1) as far as I know, the 286 does not correctly restart all instructions if they reference a segment that is not present, so swapping doesn't really work as well as people would like.
2) The big problem, however, was that in the PC market, 808[68] applications had access to all memory (at most 640 KB). Compilers (including C compilers) had "far" pointers etc. that allowed programs to use more than 64 KB of memory. There was no easy way to do this in 286 protected mode, also because a lot of programs were essentially written for CP/M. Microsoft and IBM started working on OS/2, but progress was slow enough that soon the 386 became available.
The 386 of course had the complete 286 architecture, extended to 32 bits. Even when flat memory is used through paging, segments still have to be configured.
The 286 worked perfectly fine as an improved 8086, for running MS-DOS, an OS designed for 8088/8086, not for 286.
Nobody has ever used the 286 "protected mode" in the way intended by its designers.
The managers of "extended memory", like HIMEM.SYS, used briefly the "protected mode", but only to be able to access memory above 1 MB.
There were operating systems intended for 286, like XENIX and OS/2 1.x, but even those used only a small subset of the features of the 286 "protected mode". Moreover, only a negligible fraction of the 286 computers have been used with OS/2 1.x or XENIX, in comparison with those using MS-DOS/DR-DOS.
True, but at that time it was already too late. C/C++ had won.
Moreover, for a very long time GNAT had been quite difficult to build, to configure and to make coexist with other gcc-based compilers, far more difficult than building and configuring the toolchain for any other programming language (i.e. you could fail to get a working environment, without any easy way to discover what went wrong, which never happened with any other programming language supported by gcc).
I have no idea what the reason for this was, but whatever it was, it had nothing to do with any intrinsic property of the language.
I do not remember when it finally became easy to use Ada with gcc, but this might have happened only a decade ago, or even more recently.
In the past you could easily use Ada or anything else from Linux under Cygwin.
Nowadays, you should be able to use anything from Linux under WSL.
In the past using Ada was more painful, because you had to use some old version of gcc, which could clash with the modern gcc used for C/C++/Fortran etc.
However, during the last few years these problems have disappeared. If you build any current gcc version, you just choose the option of having Ada among the enabled languages and everything works smoothly.
There are features common to Ada and Modula, but those have been taken by both languages from Xerox Mesa.
The first version of Modula was designed with the explicit goal of making a simple small language that provided a part of the features of Xerox Mesa (including modules), after Wirth had spent a sabbatical year at Xerox.
Nowadays Modula and its descendants are better known than Mesa, because Wirth and others have written some good books about them and because Modula-2 was briefly widely available for some microcomputers. Many decades ago, I had a pair of UV-EPROM chips (i.e. for a 16-bit data bus) that contained a Modula-2 compiler for Motorola MC68000 CPUs, so I could use a computer with such a CPU for programming in Modula-2, in the same manner as many early PCs could be used with their built-in BASIC interpreter. However, after switching to an IBM PC/AT-compatible PC, I never used the language again.
However, Xerox Mesa was a much superior language and its importance in the history of programming languages is much greater than that of Modula and its derivatives.
Ada has taken a few features from Pascal, but while those features were first implemented in Pascal, they had been proposed much earlier by others; e.g. the enumerated types of Pascal and Ada were first proposed by Hoare in 1965.
When CLU is mentioned, Alphard must usually also be mentioned, as these were two quasi-simultaneous projects at different universities with the purpose of developing programming languages with abstract data types. Many features appeared first in one of these languages and were then introduced into the other after a short delay. Among the features of modern programming languages that come from CLU and Alphard are for-each loops and iterators.
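As a small illustration of that lineage (a sketch in Python, whose generators descend from CLU's iterators; the function name is mine): a generator plays the role of a CLU iterator, and the for-each loop consumes it.

```python
def evens(limit):
    """Yield the even numbers below limit, one at a time, CLU-iterator style."""
    n = 0
    while n < limit:
        yield n      # suspend, hand one element to the consuming loop
        n += 2

# The for-each loop drives the iterator without exposing its internal state.
collected = [x for x in evens(10)]
print(collected)     # [0, 2, 4, 6, 8]
```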
Mesa was the first language I used out of college, for the seven years that I worked on the Xerox Star document editor. It was the job where I learned more in 6 months than I did in 4 years of college or my entire working career afterwards.
It was by far the best language I used in my entire working career, during which I had to endure such languages as PL/1 (and PL/S), C, C++, Java, JavaScript and PHP. While Java as a language was not too bad, it still paled in features and usability compared to MESA, and it too was influenced by MESA.
But as was true at Xerox, it was the complete network that was revolutionary at the time, in the early 80's. The fact that I could source-debug any machine remotely on the corporate worldwide network of over 5000 machines, and that the source code would be automatically downloaded to my machine (meaning I could easily debug from any nearby random machine), was just something I could never "easily" do elsewhere.
MESA was missing a few things, such as garbage collection (which Cedar solved, though Cedar was used only within Xerox PARC, partly because at the time it really only ran on Dorado-class machines). In the case of Star, it would also have been much better if the language had supported OOP. For Star we had a system called Traits to support objects, but it had some serious issues IMHO (which would be fodder for a separate post).
When talking about Mesa you also need to talk about Tajo, its development environment, built on top of the OS Pilot (Star also used Pilot). Both systems supported a mouse and a large bitmapped monitor and had overlapping windows (although most of Star had automatic non-overlapping windows; that was a UI usability decision).
There is also more, because the network was very important: print servers, file servers, mail servers, a cloud-like store for all of Star's user files/desktop. All this in the very early 80's was unheard of elsewhere. It's very similar to what Steve Jobs missed when he saw Smalltalk, where he only really saw a new UI and missed much more that was demoed.
It was a magic place to work at the time. I left in the very late 80s for Apple, and it was a huge step backwards at the time (we did amazing stuff with their limited tools, but it made working not fun).
Some information may also exist in other subdirectories of "pdf/xerox".
There have been many references to Mesa in the research articles and the books published towards the end of the seventies and during the eighties, but those are hard to find today, as many of them may not have been digitized. Even if they were digitized, it is hard to search through them to find the relevant documents, because you would not know from the title whether Mesa is discussed along with other programming languages.
In general, bitsavers.org is probably the most useful Internet resource about old computing hardware and software, because no secondary literature matches the original manuals of the computer vendors, which in the distant past had an excellent quality, unlike today.
Ada provides many features of Mesa, but not all of them and I regret that some Mesa features are missing from the languages that are popular today.
Python's loops with two exits (i.e. with "else") were inspired by Mesa, but they provide only a small subset of the features available in Mesa loops.
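For readers who have not used the feature, here is a sketch (the function name is illustrative): the "else" branch is the second exit, taken only when the loop completes without "break".

```python
def classify(n):
    for d in range(2, n):
        if n % d == 0:
            result = f"{n} has divisor {d}"
            break                    # early exit: skips the else branch
    else:
        result = f"{n} is prime"     # runs only if no break occurred
    return result

print(classify(9))   # 9 has divisor 3
print(classify(7))   # 7 is prime
```

Mesa loops generalized this further, which is the part missing from today's languages.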
JOVIAL had been in use within the US Air Force for more than a decade before the first initiative to design a single common military programming language, which resulted in Ada.
JOVIAL had been derived from IAL (December 1958), the predecessor of ALGOL 60. However JOVIAL was defined before the final version of ALGOL 60 (May 1960), so it did not incorporate a part of the changes that had occurred between IAL and ALGOL 60.
The timeline of Ada development has been marked by increasingly specific documents elaborated by anonymous employees of the Department of Defense, containing requirements that had to be satisfied by the competing programming language designs:
1975-04: the STRAWMAN requirements
1975-08: the WOODENMAN requirements
1976-01: the TINMAN requirements
1977-01: the IRONMAN requirements
1977-07: the IRONMAN requirements (revised)
1978-06: the STEELMAN requirements
1979-06: "Preliminary Ada Reference Manual" (after winning the competition)
The STRAWMAN requirements from 1975 already contained some features taken from JOVIAL, which the US Air Force used and liked, so they wanted the replacement language to keep them.
However, starting with the IRONMAN requirements, some features originally taken as such from JOVIAL were replaced by greatly improved original features. For example, function parameters specified as in JOVIAL were replaced by the requirement to specify the behavior of the parameters regardless of their implementation by the compiler: the programmer specifies behaviors like "in", "out" and "in out", and the compiler freely chooses how to pass the parameters, e.g. by value or by reference, depending on which method is more efficient.
This is a huge improvement over how parameters are specified in languages like C or C++ and all their descendants. The most important defects of C++, which have caused low performance for several decades and are responsible for much of the current complexity of C++, stem from the inability of C++ to distinguish between "out" parameters and "in/out" parameters. This misfeature is the reason for many unnecessary things in C++: constructors as something different from normal functions, which cannot signal errors except through exceptions; copy constructors distinct from assignment; the "move" semantics introduced in C++ 2011 to solve the performance problems that previously plagued C++; etc.
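A sketch of the idea in Python (which has no parameter modes, so this only emulates the behavioral contract; all names are illustrative): "in" parameters are plain arguments, "out" results are returned values, and "in out" is modeled by returning an updated value. The point is that the signature states the direction of data flow, while the passing mechanism remains the implementation's choice.

```python
def divide(dividend, divisor):
    # dividend, divisor: "in" (read only)
    # quotient, remainder: "out" (pure outputs, no input value needed)
    return dividend // divisor, dividend % divisor

def append_entry(log, entry):
    # log: "in out" (read and updated); entry: "in"
    return log + [entry]

q, r = divide(17, 5)
log = append_entry([], "start")
print(q, r, log)   # 3 2 ['start']
```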
The hardware description languages, even if they have a single language specification, are divided into 2 distinct subsets, one used for synthesis, i.e. for hardware design, and one used for simulation, i.e. for hardware verification.
The subset required for hardware synthesis/design, cannot be unified completely with a programming language, because it needs a different semantics, though the syntax can be made somewhat similar, as with VHDL that was derived from Ada, while Verilog was derived from C. However, the subset used for simulation/verification, outside the proper hardware blocks, can be pretty much identical with a programming language.
So in principle one could have a pair of harmonized languages, one a more or less typical programming language used for verification and a dedicated hardware description language used only for synthesis.
The current state is not too far from this, because many simulators have interfaces between HDLs and some programming languages, so you can do much verification work in something like C++, instead of SystemVerilog or VHDL. For instance, using C++ for all verification tasks is possible when using Verilator to simulate the hardware blocks.
I am not aware of any simulator that would allow synthesis in VHDL coupled with test benches written in Ada, which would be a better fit for VHDL than C++ is, but it could be done.
The origin of all sum types is in "Definition of new data types in ALGOL x", published by John McCarthy in October 1964, which introduced the keyword UNION for such types (he proposed "union" for sum types, "cartesian" for product types, and also operator overloading for custom types).
John McCarthy, the creator of LISP, also made many major contributions to ALGOL 60 and to its successors (e.g. he introduced recursive functions into ALGOL 60, which was a major difference between ALGOL 60 and most languages existing at that time, requiring the use of a stack for local variables, while most previous languages used only statically-allocated variables).
The "union" of McCarthy and of the languages derived from his proposal is not the "union" of the C language, which reused McCarthy's keyword but with the behavior of FORTRAN "EQUIVALENCE".
The concept of "union" as proposed by McCarthy was first implemented in ALGOL 68; then, as you mention, some functional languages, like Hope and Miranda, used it extensively, with different syntactic variations.
Definitely, if you don't have the C "union" user-defined type, you should use this keyword for your sum types. Many languages don't have this feature - which is an extremely sharp blade intended only for experts - and that's fine. You don't need an Abrams tank to take the kids to school, beginners should not learn to fly in the F-35A, and the language for writing your CRUD app does not need C-style unions.
If Rust didn't have (C-style) unions, then its enum should have been named union instead. But it does, so they needed a different name. As we work our way through the rough edges of Rust, maybe this will stick out more and annoy me, but given that Rust 1.95 just finally stabilized core::range::RangeInclusive, the fix for the wonky wheel that is core::ops::RangeInclusive, we're not going to get there any time soon.
Ada is a language that had a lot of useful features much earlier than any of the languages that are popular today, and some of those features are still missing from the languages easily available today.
In the beginning, Ada was criticized mainly for two reasons: it was claimed to be too complex, and it was criticized for being too verbose.
Today, the criticism about complexity seems naive, because many later languages have become much more complex than Ada, in many cases because they started as simpler languages to which extra features were added later; since the need for such features had not been anticipated during the initial language design, adding them later was difficult and increased the complexity of the updated language.
The criticism about verbosity is correct, but it could easily be addressed by preserving the abstract Ada syntax and just replacing many tokens with less verbose symbols. This can easily be done with a source preprocessor, but it is avoided in most places, because then the source programs have a non-standard appearance.
It would have been good if the Ada standard had been updated to specify a standardized abbreviated syntax besides the classic syntax. This would not have been unusual, because several old languages specified abbreviated and non-abbreviated syntactic alternatives, including languages like IBM PL/I or ALGOL 68. Even the language C had a more verbose syntactic alternative (with trigraphs), which was almost never used, but nonetheless all C compilers had to support both the standard syntax and its trigraph alternative.
However, the real defect of Ada has been neither complexity nor verbosity, but expensive compilers and software tools, which have ensured its replacement by the free C/C++.
The so-called complexity of Ada has always been mitigated by the fact that, besides the reference specification, Ada has always had an accompanying design-rationale document, which explains the reasons for the choices made when designing the language.
Such a rationale document would have been extremely useful for many other programming languages, which frequently include some obscure features whose purpose is not obvious, or which look like mistakes, even if sometimes there are good reasons for their existence.
When Ada was introduced, it was marketed as a language similar to Pascal. The reason is that at that time Pascal had become the language most frequently used for teaching programming in universities.
Fortunately the resemblances between Ada and Pascal are only superficial. In reality the Ada syntax and semantics are much more similar to earlier languages like ALGOL 68 and Xerox Mesa, which were languages far superior to Pascal.
The parent article mentions that Ada includes in the language specification the handling of concurrent tasks, instead of delegating such things to a system library (task = term used by IBM since 1964 for what now is normally called "thread", a term first used in 1966 in some Multics documents and popularized much later by the Mach operating system).
However, I do not believe that this is a valuable feature of Ada. You can indeed build any concurrent application around the Ada mechanism of task "rendez-vous", but I think that this concept is a little too high-level.
It incorporates two lower-level actions, and for the highest efficiency it may sometimes be necessary to have access to those lower-level actions. This means that sometimes using a system library for the communication between concurrent threads may provide higher performance than the built-in Ada concurrency primitives.
Verbosity is a feature, not a bug. Programming is a human activity and thus should use human language and avoid encoded forms that require decoding to understand. The use of abbreviations should be avoided, as it obfuscates the meaning and purpose of the code for a reader.
The programming community is strongly divided between those who believe that verbosity is a feature and not a bug and those who believe that verbosity is a bug and not a feature.
A reconciliation between these two camps appears impossible. Therefore I think that the ideal programming language should admit two equivalent representations, to satisfy both kinds of people.
The pro-verbose camp argues that they cannot remember many different symbols, so they prefer long texts using keywords resembling a natural language.
The anti-verbose camp, to which I belong, argues that they can remember mathematical and other such symbols, and that for them it is much more important to see as much of the program as possible on one screen, to avoid moving back and forth through the source text.
Both camps claim that what they support is the way to make the easiest to read source programs, and this must indeed be true for themselves.
So it seems that it is impossible to choose rules that can ensure the best readability for all program readers or maintainers.
My opinion is that source programs must not be stored and edited as text, but as abstract syntax trees. The program source editors and viewers should implement multiple kinds of views for the same source program, according to the taste of the user.
It is not that I cannot remember the symbols - I don't want to; I want the language to plainly explain itself to me. Furthermore, every language has its own set of unique symbols, so as a new reader of a language you first have to familiarize yourself with the new symbols. I remember my first few times reading Rust... It still makes my head spin; I had to keep looking up what everything did. If a plain keyword doesn't directly tell you what it's doing, at least it hints at it.
To be clear, Ada specifically talks about all this in the introduction to the Ada Reference Manual. It was specifically designed for readers as opposed to writers, for very good reasons, and it explains why. It's exactly one of the features other languages will eventually learn they need and will independently "discover" some number of years in the future.
Rust has a complex semantics, not a complicated syntax. The syntax was explicitly chosen to be quite C/C++ like while streamlining some aspects of it (e.g. the terrible type-ascription syntax, replaced with `let name: type`).
I agree that the use of symbols becomes a problem when you use many programming languages and each of them uses different symbols.
This has never been solved, but it could have been solved if there had been a standard about the use of symbols in programming languages and all languages had followed it.
Nevertheless, for some symbols this problem does not arise, e.g. when traditional mathematical symbols are used, which are now available in Unicode.
Many such symbols have been used for centuries and I hate their replacements that had to be chosen due to the constraints of the ASCII character set.
Some of the APL symbols are straightforward extensions of the traditional mathematical notation, so their use also makes sense.
Besides the use of mathematical symbols in expressions, classic or Iverson, the part where I most intensely want symbols, not keywords, is for the various kind of statement brackets.
I consider the use of a single kind of statement bracket as very wrong for program readability. This was introduced in ALGOL 58 (December 1958) as the pair "begin" and "end", and other languages followed it. CPL replaced the statement brackets with paragraph symbols (August 1963), and then the language B (the predecessor of C), having transitioned to ASCII, replaced the CPL symbols with curly braces, sometime around 1970.
A better syntax was introduced by ALGOL 68, which is frequently referred to as "fully bracketed syntax".
In such a syntax different kinds of brackets are used for distinct kinds of program structures, e.g. for blocks, for loops and for conditional structures. This kind of syntax can avoid any ambiguities and it also leads to a total number of separators, parentheses, brackets and braces that is lower than in C and similar languages, despite being "fully bracketed". (For instance in C you must write "while (condition) {statements;}" with 6 syntactic tokens, while in a fully bracketed language you would write "while condition do statements done", with only 3 syntactic tokens)
If you use a fully bracketed syntax, the number of syntactic tokens is actually the smallest that ensures a non-ambiguous grammar, but if the tokens are keywords the language can still appear as too verbose.
The verbosity can be reduced a lot if you use different kinds of brackets provided by Unicode, instead of using bracket pairs like "if"/"end if", "loop"/"end loop" or the like.
For instance, one can use curly braces for blocks, angle brackets for conditional expressions or statements, double angle brackets for switch/case, bag delimiters for loops, and so on. One could choose to use different kinds of brackets for inner blocks and for function bodies, and also different kinds of brackets for type definitions.
In my opinion, the use of many different kinds of brackets is the main feature that can reduce program verbosity in comparison with something like Ada.
Moreover, the use of many kinds of brackets is pretty much self-describing, as in HTML or XML. When you see the opening bracket, you can usually recognize what kind of structure starts, e.g. a function body, a loop, a block, a conditional structure etc., and you also know how the corresponding closing bracket will look. Thus, when you see a closing bracket of the correct shape, you can know what it ends, even if you did not previously know the assignment between kinds of brackets and kinds of program structures.
In languages like C, it is frequently annoying to see many closing braces without knowing what they terminate. Your editor can find the matching brace, but that wastes precious time. You can annotate the closing braces with comments, but that becomes even more verbose than Ada.
So for me the better solution is to use graphically distinct brackets. Unicode provides many suitable bracket pairs, and there are programming fonts, like JetBrains Mono, which provide many Unicode mathematical symbols and bracket pairs.
When I program for myself, I use such symbols and I use a text preprocessor before passing the program to a compiler.
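A minimal sketch of what such a preprocessor can look like, assuming Python. The bracket-to-keyword table and the `expand` helper are purely hypothetical illustrations (not any real tool, and not the commenter's actual assignment): each distinct Unicode bracket character is expanded back into the Ada-style keyword(s) it stands for before the source is handed to the compiler.

```python
# Hypothetical assignment of Unicode brackets to Ada keyword pairs.
# The specific characters chosen here are illustrative only.
ADA_BRACKETS = {
    "⟨": "if ",        # conditional: ⟨C¦S⟩  expands to  if C then S end if;
    "¦": " then ",
    "⟩": " end if;",
    "⟬": " loop ",     # loop body:   ⟬S⟭    expands to  loop S end loop;
    "⟭": " end loop;",
    "⦃": " begin ",    # block:       ⦃S⦄    expands to  begin S end;
    "⦄": " end;",
}

def expand(source: str) -> str:
    """Expand single-character brackets back to Ada keyword pairs."""
    for bracket, keywords in ADA_BRACKETS.items():
        source = source.replace(bracket, keywords)
    return source
```

Because every bracket kind maps one-to-one to exactly one keyword pair, the expansion is a trivial textual substitution and the result is ordinary Ada as far as the compiler is concerned.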
I agree. I've never understood or accepted the claim that Ada is verbose. It's simply clear and expressive. If there were some alternative concise syntax for "Ada" then I would not want to use it (because it would not be Ada).
Because that is a joke: it proposes replacements for only a small set of Ada tokens, and it is not clear how the proposal could be cleanly extended to the full set of Ada tokens.
Nevertheless it is possible to define a complete one-to-one mapping of all Ada syntactic tokens to a different set of tokens.
The resulting language will have exactly the same abstract syntax as Ada, so it is definitely exactly the same language, only with a different appearance.
For a seasoned Ada programmer, changing the appearance of the language may be repugnant, but for a newbie there may be no difference between two alternative sets of tokens. This is especially true when the programmers are not native English speakers: they feel no particular loyalty to words like "begin" and "loop", so they may see no advantage in using them instead of some kind of brackets that would replace them.
I think there is a significant difference between choosing to use words (from some language) versus using brackets like {}, () and []. With nested brackets there are often debates over placement and it is usually less clear what scope is being ended by the closing bracket.
Indeed, the fact that it is not clear which scope is being ended by a closing bracket is a very serious problem.
This is why many more bracket pairs are needed in a programming language than the 3 pairs provided by ASCII.
Ada uses many pairs of brackets, but most of them are implemented with keywords, for instance "if"/"end if", "loop"/"end loop", and so on.
These long keyword-based brackets can be replaced with various Unicode bracket pairs that are graphically distinct.
Such brackets, for instance angle brackets instead of "if"/"end if", take much less space, and they are also much more salient than keywords, which for me greatly improves the readability of the text.
Even if you do not know beforehand how the brackets are assigned, by reading the text you can discover the correspondence very quickly, because you can recognize what kind of structure is started by a certain kind of bracket, and then you know that a closing bracket of the same shape marks the end of that structure.
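The "discover the correspondence while reading" point can even be checked mechanically: because the pairs nest, a trivial scanner that knows only which characters open and which close (a hypothetical assignment below, with no meanings attached) can recover which opener every closer terminates, without any prior legend. A toy sketch:

```python
# Hypothetical bracket pairs; the scanner knows the pairing but NOT what
# kind of structure (block, loop, conditional) each pair denotes.
PAIRS = {"⟨": "⟩", "⟬": "⟭", "⦃": "⦄"}
CLOSERS = {close: open_ for open_, close in PAIRS.items()}

def closers_to_openers(text: str) -> list[tuple[str, str]]:
    """Return (closer, matching opener) pairs in the order closers appear,
    recovered purely from nesting with a stack."""
    stack, matches = [], []
    for ch in text:
        if ch in PAIRS:
            stack.append(ch)
        elif ch in CLOSERS:
            matches.append((ch, stack.pop()))
    return matches
```

Once the scanner (or the reader) has seen one opener of each shape in context, every later closer of that shape identifies its structure immediately, which is exactly the self-describing property claimed above.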
Verbosity is a feature for small self-contained programs, and a bug for everything else. As long as you're using recognizable mnemonics and not just ASCII line noise or weird unreadable runes (as with APL), terseness is no obstacle at all for a good programmer.
> Today, the criticism about complexity seems naive, because many later languages have become much more complex than Ada
I don’t think you really understand what you’re saying here. I have worked on an Ada compiler for the best part of a decade. It’s one of the most complex languages there is, up there with C++ and C#, and probably Rust.
Mind you, that suggests that the sentence is at least half-true even if "much more complex" is a big overstatement, since Rust, "modern" C++ and the later evolutions of C# are all relatively recent. (What would have compared to Ada in complexity back in the day? Common Lisp, Algol 68?)
As a matter of general interest, what features or elements of Ada make it particularly hard to compile, or compile well? (And are there parts which look like they might be difficult to manage but aren't?)
You're right in your first part. Ada 83 is less complex than modern C++ or Rust. However Ada kept evolving, and a lot of complexity was added in later revisions, such as Ada 95, which added a kind of bastardized and very complex Java style object model layer.
Ada features that are hard to compile are very common in the language. It is generally a language that is hard to compile to efficient code, because its rules were conceived around an abstract notion of what safety is. In general, Ada is an extremely over-specified language, which leaves very little room for interpretation. You can check the Ada Reference Manual if you want, which is a huge book of roughly 1000 pages (http://www.ada-auth.org/arm.html)
* Array types are very powerful and very complicated
* Tasking & threading are specified in the language, which seems good on paper, but the abstractions are not very efficient and are tremendously complex to implement.
* Ada's generic model is very hard to compile efficiently. It was designed so that it could be compiled down either to a "shared implementation" approach or to a monomorphized approach. Mistakes were made along the way in the specification of generics which made compiling them as shared generics almost impossible, which is why some compiler vendors didn't support some features of the language at all.
* Ada's scoping & module system is of immense complexity
* The type system is vast. Ada's name & type resolution algorithm is extremely complex to implement. Functions can be overloaded on both parameters & return types, and there is an enclosing context that determines which overloads will be used in the end. On top of that you have preference rules for some functions & types, subtyping, derived types, etc ...
This is just what comes to mind on a late Friday evening :) I would say that the language is so complex that writing a new compiler is one of those herculean efforts that reach similar heights as writing a new C++ compiler.
That's just a few.
What do you mean by Ada's complexity?
E.g. C++ is really complex because it has a lot of features that interoperate badly with each other.
Is this true for the Ada language/compiler? Or do you mean the overall complexity of the ideas included in Ada - complex like the proof of the Poincaré conjecture is for an unprepared person?
Yes, Ada has a lot of the same kind of fractal complexity that C++ has, which derives from unforeseen interactions of some features with others.
On top of that, as I said in another comment, features are extremely overspecified. The standard specifies what has to be done in every edge case, often with a specification that is not very practical to implement efficiently