Criticism of C++
Although C++ is one of the most widespread programming languages,[1] many prominent software engineers criticize C++ (the language and its compilers), arguing that it is overly complex[2] and fundamentally flawed.[3] Among the critics have been Rob Pike,[2] Joshua Bloch, Linus Torvalds,[3] Donald Knuth, Richard Stallman, and Ken Thompson. C++ has been widely adopted and implemented as a systems language through most of its existence, and it has been used to build many important pieces of software such as operating systems, runtime systems, programming language interpreters, parsers, lexers, and compilers.

Complexity

One of the most frequently criticized aspects of C++ is its perceived complexity as a language: its large number of non-orthogonal features in practice forces projects to restrict their code to a subset of C++, thus eschewing the readability benefits of common style and idioms, a criticism expressed by Joshua Bloch, among others.[4]
Donald Knuth (commenting in 1993 on pre-standardized C++) said of Edsger Dijkstra that "to think of programming in C++" "would make him physically ill".[5][6]
Ken Thompson, who was a colleague of Bjarne Stroustrup at Bell Labs, gave a similarly critical assessment.[7][4]
Slow compile times

The natural interface between source files in C and C++ is the header file. Each time a header file is modified, all source files that include it must be recompiled. Header files are slow to process because they are textual and context-dependent, a consequence of the preprocessor.[8] C keeps only a limited amount of information in header files, most importantly struct declarations and function prototypes. C++ classes are declared in header files, which expose not only their public variables and public functions (like C structs and function prototypes) but also their private functions; editing these private functions forces unnecessary recompilation of every source file that includes the header. The problem is magnified when classes are written as templates, which forces all of their code into the slow header files, as is the case with much of the C++ standard library. Large C++ projects can therefore be relatively slow to compile.[9] The problem is largely mitigated by precompiled headers in modern compilers or by the module system added in C++20; C++23 additionally exposes the functionality of the standard library through standard library modules.[10]
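As a minimal sketch of the module-based approach (assuming a C++23 toolchain with standard library modules enabled, which is not yet universal), a translation unit can import the standard library instead of textually including headers:

// Assumes a C++23 compiler and a standard library built with module support.
import std;  // imported once; no textual re-processing of headers per translation unit

int main() {
    std::println("Built against the standard library module");
}

Because the compiler consumes a precompiled representation of the module rather than re-parsing header text in every translation unit, the textual include overhead described above is avoided.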
Global format state of <iostream>

C++'s <iostream> relies on a global format state: manipulators such as std::hex change how all subsequent output on the stream is formatted until the state is explicitly changed back. This interacts poorly with exceptions, which can interrupt a function after it has changed the format state but before it has restored it. Here follows an example where an exception interrupts the function before std::cout can be restored to decimal:

#include <iostream>
#include <vector>

int main() {
    try {
        std::cout << std::hex
                  << 0xFFFFFFFF << '\n';
        // an exception (std::length_error or std::bad_alloc) will be thrown here:
        std::vector<int> vector(0xFFFFFFFFFFFFFFFFull);
        std::cout << std::dec; // Never reached
        // (using scope guards would have fixed that issue
        // and made the code more expressive)
    }
    catch (const std::exception& e) {
        std::cout << "Error number: " << 10 << '\n'; // Not in decimal
    }
}
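As the comment in the example suggests, a small RAII scope guard that saves the stream's format flags and restores them on destruction avoids the problem. The FlagsGuard class below is a hypothetical illustration, not a standard facility (Boost provides a similar saver, ios_flags_saver):

#include <iostream>
#include <stdexcept>

// Hypothetical RAII guard: saves the stream's format flags on construction
// and restores them in the destructor, even when an exception unwinds the stack.
class FlagsGuard {
public:
    explicit FlagsGuard(std::ios_base& stream)
        : stream_(stream), flags_(stream.flags()) {}
    ~FlagsGuard() { stream_.flags(flags_); }
private:
    std::ios_base& stream_;
    std::ios_base::fmtflags flags_;
};

int main() {
    try {
        FlagsGuard guard(std::cout); // flags restored when leaving the try block
        std::cout << std::hex << 0xFFFFFFFF << '\n';
        throw std::runtime_error("simulated failure");
    }
    catch (const std::exception&) {
        std::cout << "Error number: " << 10 << '\n'; // In decimal again
    }
}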
The problem has even been acknowledged by some members of the C++ standards body.[13] C++20 added std::format, which is not affected by the stream state:

std::cout << std::format("Error number: {}\n", 10);

C++23 goes further with std::print, which bypasses the stream's format state entirely.
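A minimal sketch of the C++23 facility (assuming a standard library that already ships the <print> header):

#include <print>

int main() {
    // std::print formats via std::format and keeps no global format state.
    std::print("Error number: {}\n", 10);
}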
Iterators

The philosophy of the Standard Template Library (STL) embedded in the C++ Standard Library is to use generic algorithms in the form of templates operating on iterators. Early compilers optimized small objects such as iterators poorly, which Alexander Stepanov characterized as the "abstraction penalty", although modern compilers optimize away such small abstractions well.[15] The interface of denoting a range of elements by a pair of iterators has also been criticized;[16][17] the ranges introduced in the C++20 standard library address this problem.[18] A bigger problem is that iterators into C++ containers often refer to heap-allocated data and become invalid if the container independently moves that data. Functions that change the size of a container often invalidate all iterators pointing into it, creating dangerous cases of undefined behavior.[19][20] Here is an example where the iterators in the for loop are invalidated because the insertion changes the size of the container:

#include <iostream>
#include <string>

int main() {
    std::string text = "One\nTwo\nThree\nFour\n";
    // Let's add an '!' where we find newlines
    for (auto it = text.begin(); it != text.end(); ++it) {
        if (*it == '\n') {
            // Assigning the result back, as in "it = text.insert(it, '!') + 1;",
            // would keep the iterator valid.
            text.insert(it, '!');
            // Without updating the iterator this program has
            // undefined behavior and will likely crash
        }
    }
    std::cout << text;
}
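For comparison, here is a minimal sketch of the C++20 ranges interface mentioned above, which lets an algorithm take a whole container instead of a pair of iterators:

#include <algorithm>
#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> values{4, 1, 3, 2};

    // Classic interface: the caller must pass two iterators that belong
    // to the same container.
    std::sort(values.begin(), values.end());

    // C++20 ranges interface: the container is passed as a single argument.
    std::ranges::sort(values);

    // Range adaptors work on the same container-level view of the data.
    for (int v : values | std::views::reverse) {
        std::cout << v << ' ';
    }
    std::cout << '\n';
}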
Uniform initialization syntax

The C++11 uniform initialization syntax and std::initializer_list share the same brace syntax, but they are resolved differently depending on the internal workings of the class being constructed: if the class has a std::initializer_list constructor, that constructor is called; otherwise the normal constructors are selected. This can be confusing for beginners and experts alike.[21][12]

#include <iostream>
#include <vector>
int main() {
    int integer1{10}; // int
    int integer2(10); // int
    std::vector<int> vector1{10, 0}; // std::initializer_list
    std::vector<int> vector2(10, 0); // std::size_t, int

    std::cout << "Will print 10\n" << integer1 << '\n';
    std::cout << "Will print 10\n" << integer2 << '\n';
    std::cout << "Will print 10,0,\n";
    for (const auto& item : vector1) {
        std::cout << item << ',';
    }
    std::cout << "\nWill print 0,0,0,0,0,0,0,0,0,0,\n";
    for (const auto& item : vector2) {
        std::cout << item << ',';
    }
}
Exceptions

There have been concerns that the zero-overhead principle[22] is incompatible with exceptions.[12] Most modern implementations have zero performance overhead when exceptions are enabled but not used, but they do incur an overhead during exception handling and in binary size, due to the need to unwind the call stack. Many compilers allow exceptions to be disabled in order to save this binary overhead. Exceptions have also been criticized as unsafe for state handling; this safety issue led to the invention of the RAII idiom,[23] which has proven useful beyond making C++ exceptions safe.

Encoding of string literals in source code

C++ string literals, like those of C, do not consider the character encoding of the text within them: they are merely a sequence of bytes, and the C++ string class follows the same principle. The example program below illustrates the phenomenon.

#include <iostream>
#include <string>

// note that this code is no longer valid in C++20
int main() {
    // all strings are declared with the UTF-8 prefix
    // file encoding determines the encoding of å and Ö
    std::string auto_enc = u8"Vår gård på Öland!";
    // this text is well-formed in both ISO-8859-1 and UTF-8
    std::string ascii = u8"Var gard pa Oland!";
    // explicitly use the ISO-8859-1 byte values for å and Ö
    // this is invalid UTF-8
    std::string iso8859_1 = u8"V\xE5r g\xE5rd p\xE5 \xD6land!";
    // explicitly use the UTF-8 byte sequences for å and Ö
    // this will display incorrectly in ISO-8859-1
    std::string utf8 = u8"V\xC3\xA5r g\xC3\xA5rd p\xC3\xA5 \xC3\x96land!";

    std::cout << "byte-count of automatically-chosen, [" << auto_enc
              << "] = " << auto_enc.length() << '\n';
    std::cout << "byte-count of ASCII-only [" << ascii << "] = " << ascii.length()
              << '\n';
    std::cout << "byte-count of explicit ISO-8859-1 bytes [" << iso8859_1
              << "] = " << iso8859_1.length() << '\n';
    std::cout << "byte-count of explicit UTF-8 bytes [" << utf8
              << "] = " << utf8.length() << '\n';
}
Despite the presence of the C++11 u8 prefix, meaning "Unicode UTF-8 string literal", the output of this program actually depends on the source file's text encoding (or on the compiler's settings; most compilers can be told to convert source files to a specific encoding before compiling them). When the source file is encoded in UTF-8 and the output is run on a terminal configured to treat its input as UTF-8, the following output is obtained:

byte-count of automatically-chosen, [Vår gård på Öland!] = 22
byte-count of ASCII-only [Var gard pa Oland!] = 18
byte-count of explicit ISO-8859-1 bytes [Vr grd p land!] = 18
byte-count of explicit UTF-8 bytes [Vår gård på Öland!] = 22

The output terminal has stripped the invalid UTF-8 bytes from display in the ISO-8859-1 example string. Passing the program's output through a hex dump utility reveals that they are still present in the program output; it is the terminal application that removed them. However, when the same source file is instead saved in ISO-8859-1 and recompiled, the output of the program on the same terminal becomes:

byte-count of automatically-chosen, [Vr grd p land!] = 18
byte-count of ASCII-only [Var gard pa Oland!] = 18
byte-count of explicit ISO-8859-1 bytes [Vr grd p land!] = 18
byte-count of explicit UTF-8 bytes [Vår gård på Öland!] = 22

One proposed solution is to make the source encoding reliable across all compilers.
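As a closing note on the u8 prefix: since C++20 it produces an array of char8_t rather than char, which is why the program above is marked as no longer valid in C++20. A minimal sketch of the newer behaviour:

#include <iostream>
#include <string>

int main() {
    // In C++20 a u8"" literal has the distinct character type char8_t,
    // so it must be stored in std::u8string rather than std::string.
    std::u8string s = u8"Vår gård på Öland!";
    // std::string t = u8"Vår gård på Öland!"; // ill-formed since C++20

    // The size is still a count of code units (bytes), not of displayed characters.
    std::cout << "byte-count = " << s.size() << '\n';
}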