Do not attempt to define, undefine, or modify any of the library configuration macros. The library headers must match the way the libraries were built. Otherwise, your program might not compile, might not link, and probably will not run correctly.
When should I use the -I and -L options?
Specify the -I option to point to directories that contain your project header files when these header files are not in the same directory as the files that include them, or to point to directories that contain header files for third-party libraries that you acquire. Specify the -L option to point to directories that contain libraries that you build, or to third-party libraries that you acquire.
Never use -I to point into /usr/include or into the compiler installation area. Never use -L to point into /lib, /usr/lib, or into the compiler installation area. The CC compiler driver knows the location of the system headers and libraries and follows the correct search order. You can cause the compiler to find the wrong headers or libraries by using -I or -L options that point into system directories.
Should I install the latest patches for libC.so.5 and libCrun.so.1?
The Oracle Solaris operating system ships with the version of these libraries that was current as of the Solaris release. However, due to bug fixes and some performance improvements, there are often patches to these libraries. These patches are always cumulative and always backward compatible.
Starting with C++ 5.5, the compiler does not use a template cache by default. Therefore, when you upgrade to C++ 5.5 or newer, we recommend that you remove all -instances=extern options and recompile your code so that it uses the default template compilation model, -instances=global.
The template cache maintains a list of dependencies between the object files that the compiler generates and the template instances in the cache. Note, however, that the compiler now uses the template cache only when you specify -instances=extern. If you move or rename object files, or combine object files into a library, you lose the connection to the cache. Here are two alternatives:
Use the -o option so that the compiler writes each object file directly to its final location when you compile with -instances=extern.
Do not do this:
example% CC -c -instances=extern f1.cc
example% mv f1.o /new/location/for/files
Do this instead:
example% CC -c -instances=extern f1.cc -o /new/location/for/files/f1.o
You can encapsulate the process in makefile macros.
As soon as you create object files, combine them into an archive library by using CC -xar. Each archive then contains all the template instances used by the objects in the archive. You then link those archives into the final program. Some template instances are duplicated in different archives, but the linker keeps only one copy of each.
example% CC -c -instances=extern f1.cc f2.cc f3.cc
example% CC -xar f1.o f2.o f3.o -o temp1.a
example% CC -c -instances=extern f4.cc f5.cc f6.cc
example% CC -xar f4.o f5.o f6.o -o temp2.a
example% CC -c -instances=extern main.cc
example% CC main.o temp1.a temp2.a -o main
Every compiler predefines some macros that identify it. Compiler vendors tend to keep these predefined macros stable from release to release, and we in particular document them as a stable public interface.
A good way to find out what compiler you have is to write a small program that tests for predefined macros and outputs a string suitable for your intended use; an example appears after the list of macros below. You can also write a pseudo-program and preprocess it with -E (or the equivalent option for other compilers) to see which macros are defined.
See 'macros' in the index of the C++ User's Guide for a list of predefined C++ compiler macros. In particular, note the value of __SUNPRO_CC, which encodes the compiler version.
Starting with C++ 5.10, the value is a 4-digit hex number. The first digit is the major release, the next two digits are the minor release, and the fourth digit is the micro release. For example, C++ 5.12 is 0x5120.
For earlier releases 5.0 through 5.9, __SUNPRO_CC is a three-digit hex number. The first digit is the major release. The second digit is the minor release. The third digit is the micro release. For example, C++ 5.9 is 0x590.
For any two releases, the value of __SUNPRO_CC is greater in the later release. To determine the exact release, first test whether the value is less than 0x5100 (a release prior to 5.10), and then unpack the 3- or 4-digit value accordingly.
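For illustration, here is a minimal sketch that unpacks the value, assuming only the encodings described above:
#include <stdio.h>
int main()
{
#ifdef __SUNPRO_CC
#if __SUNPRO_CC < 0x5100
    /* three hex digits, e.g. 0x590 is C++ 5.9.0 */
    printf("C++ %d.%d.%d\n",
           (__SUNPRO_CC >> 8) & 0xf,   /* major */
           (__SUNPRO_CC >> 4) & 0xf,   /* minor */
            __SUNPRO_CC       & 0xf);  /* micro */
#else
    /* four hex digits, e.g. 0x5120 is C++ 5.12.0 */
    printf("C++ %d.%d.%d\n",
           (__SUNPRO_CC >> 12) & 0xf,                                    /* major */
           ((__SUNPRO_CC >> 8) & 0xf) * 10 + ((__SUNPRO_CC >> 4) & 0xf), /* minor */
            __SUNPRO_CC        & 0xf);                                   /* micro */
#endif
#else
    printf("not the Oracle Solaris Studio C++ compiler\n");
#endif
    return 0;
}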
Here are predefined macros of interest:
#ifdef __sun
Oracle Solaris or other SunOS derived operating system
#endif
#ifdef __linux
Linux operating system
#endif
#ifdef __SUNPRO_C
Oracle Solaris Studio C compiler /* __SUNPRO_C value is the version
number */
#endif
#ifdef __SUNPRO_CC
Oracle Solaris Studio C++ compiler /* __SUNPRO_CC value is the version
number */
#endif
#ifdef __sparc
generate code for SPARC (R) architecture (32-bit or 64-bit)
#endif
#ifdef __sparcv9
generate code for 64-bit SPARC architecture
#endif
#ifdef __i386
generate code for 32-bit x86 architecture
#endif
#ifdef __amd64
generate code for 64-bit x64 architecture
#endif
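Putting these together, a small identification program might look like the following sketch; the output strings are placeholders to adapt for your own use:
#include <stdio.h>
int main()
{
#if defined(__SUNPRO_CC)
    printf("Oracle Solaris Studio C++ compiler, version 0x%x\n", (unsigned)__SUNPRO_CC);
#elif defined(__SUNPRO_C)
    printf("Oracle Solaris Studio C compiler, version 0x%x\n", (unsigned)__SUNPRO_C);
#else
    printf("some other compiler\n");
#endif
#if defined(__sun)
    printf("Oracle Solaris or other SunOS-derived operating system\n");
#elif defined(__linux)
    printf("Linux operating system\n");
#endif
#if defined(__sparcv9)
    printf("64-bit SPARC architecture\n");
#elif defined(__sparc)
    printf("32-bit SPARC architecture\n");
#elif defined(__amd64)
    printf("64-bit x86 architecture\n");
#elif defined(__i386)
    printf("32-bit x86 architecture\n");
#endif
    return 0;
}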
First, a definition: "Upward compatible" means that object code compiled with an older compiler can be linked with code from a later compiler, as long as the compiler that is used in the final link is the latest compiler in the mix.
The C++ 4.0, 4.1, and 4.2 compilers are upward compatible. (There are some "name mangling" issues among the compiler versions that are documented in the C++ 4.2 manuals.)
The 5.0 through 5.11 compilers are upward compatible with the 4.2 compiler in -compat=4 mode. The actual object code from the 4.2 compiler is fully compatible with the object code from compilers 5.0 through the current version, but debugging information (stabs) emitted by later compilers is not compatible with earlier debuggers.
Beginning with C++ 5.12, -compat=4 mode is no longer available.
Code compiled in -compat=5 mode by compilers 5.0 through the current version is upward compatible. The actual object code is fully compatible, but debugging information (stabs) emitted by later compilers is not compatible with earlier debuggers.
Code compiled by C++ 5.12 in -compat=g mode might not be binary compatible with code compiled by C++ 5.13 or 5.14 in -compat=g or -std=c++03 mode.
Code compiled by C++ 5.13 in -std=c++03 or -std=c++11 mode is binary compatible with code compiled by C++ 5.14 in -std=c++03 or -std=c++11 mode, and with code compiled by C++ 5.14 in -std=c++14 mode. (Note: Code is binary compatible by default when using the gcc 4.x library ABI. If you select the gcc 5.x library ABI via the command-line option -D_GLIBCXX_USE_CXX11_ABI=1, code compiled by C++ 5.13 or 5.14 will not be binary compatible.)
Should I install the latest patches for runtime libraries such as libCrun.so.1?
The Oracle Solaris operating system ships with the version of these libraries that was current as of the Solaris release. However, due to bug fixes and some performance improvements, there are often patches to these libraries. These patches are always cumulative and always backward compatible.
New math functions available in recent Solaris versions can cause previously valid code to become invalid.
Prior to Oracle Solaris 10, the functions in <math.h> had declarations only for type double. Oracle Solaris 10 and updates to Solaris 8 and 9 headers and library have overloads for types float and long double as well. To avoid an ambiguous call you might need to add explicit casts when calling these functions with integer arguments. For example:
#include <math.h>
extern int x;
double z1 = sin(x); // now ambiguous
double z2 = sin( (double)x ); // OK
float z3 = sin( (float)x ); // OK
long double z4 = sin( (long double)x ); // OK
Solaris patches provide full ANSI C++ <cmath> and <math.h> library support, as implemented in the libm patches for Solaris 8 and 9.
You can use the -xlang={f90|f95|f77} option. This option tells the driver to figure out exactly which libraries need to be on the link line and the order in which they need to appear.
The -xlang option is not available for the C compiler. To mix C and Fortran routines, you must compile them with cc and link them using the Fortran compiler driver.
Why does the compiler complain about foo.cc when I'm not compiling or including foo.cc in my program?
When the option -template=extdef is in effect and a header file foo.h has template declarations, the compiler searches for a file named foo with a C++ file extension (foo.c, foo.cc, foo.C, foo.cpp, foo.c++) and includes it automatically if it is found. Refer to the C++ User's Guide section titled "Template Definitions Searching" for details.
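For example, under the definitions-separate model the declaration and definition might be split as in this sketch (the twice function is purely illustrative):
// foo.h -- declaration only
template <class T> T twice(T t);
// foo.cc -- definition, found by the automatic search described above
template <class T> T twice(T t) { return t + t; }
// main.cc
#include "foo.h"
int main() { return twice(0); }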
If you have a file foo.cc that you don't intend to be treated this way, you have two options:
Rename either the header file or the definition file so that their base names no longer match.
Compile with the -template=no%extdef option. However, that option disables all searches for separate template definitions. You must then include all template definitions explicitly in your code, so you cannot use the definitions-separate model.
Refer to the C++ User's Guide sections 5.2.1 and 5.2.2 for further discussion of the template definitions model or refer to the index of the C++ User's Guide for pointers to descriptions of the definitions separate and definitions included models.
What about the foo.i file that is generated by the -P preprocessing option?
See the previous answer.
For the case where you want to compile a .i file, just rename the file to give it a unique name. Then you don't have to disable separate template compilation. For example:
CC -P foo.cc
mv foo.i foo_prep.i
CC -c foo_prep.i
The default address code model for V9 (64-bit SPARC) is -xcode=abs44, which improves performance over the previous default, -xcode=abs64. However, this code model is not usable within dynamic libraries. There are two solutions to the problem.
Build the library with -xcode=pic13 or -xcode=pic32. This method is preferred, and is nearly always the right thing to do.
Build the library with -xcode=abs64. This method results in dynamic libraries that are not sharable. Each process must rewrite the library as it is copied into separate areas of memory. The method is useful only for applications that run for a very long time under tight performance constraints and with low system sharing.
When the option -instances=extern is in effect, there are two main causes for the "lock attempt failed" error message about the template cache:
Access permissions do not allow writing to the cache directory. See the umask(1) man page for more information. In particular, you must be sure that the umask of a process that creates the cache or files in it allows writing by other processes that need to access the same cache.
The cache directory is on an NFS-mounted file system that is not writable. If the directory is mounted on an NFS file system, the file system must be mounted for read/write.
Unless you have a specific reason for wanting to use the -instances=extern model, we recommend using the default -instances=global model. The template cache is a frequent source of hard-to-find build problems. Delete all template caches, remove -instances=extern from all command lines, and rebuild the project. Building should be faster, with fewer problems.
The linker warns about the pair of weak symbols that have different types when you link libm.so.2 and the classic iostream library in the same program. You can ignore the warning.
Beginning with Solaris 10, the default math library is libm.so.2, and it contains the complex log function clog in the global namespace, as required by the C99 standard. If you use C++ classic iostreams by specifying -library=iostream, you get the buffered standard error stream 'clog' in the global namespace. (Standard iostreams does not have this conflicting symbol.)
We have adjusted headers and libraries to silently rename each of these 'clog' symbols so that you can use both in one program. However, we must retain the original symbol spellings as weak symbols in each of the libraries, so that old binaries looking for the original symbols can continue to link.
Be sure to get iostream and math declarations by including the appropriate system headers rather than declaring any of these entities yourself.
Why does the compiler say that a call to abs() is ambiguous?
The C++ Standard in section 26.5 requires the following overloads of the abs function:
In <stdlib.h> and <cstdlib>:
int abs(int);
long abs(long);
In <math.h> and <cmath>:
float abs(float);
double abs(double);
long double abs(long double);
Until some updates of Solaris 8, the only version of abs available on Solaris was the traditional int version. If you invoked abs with any numeric type, the value was implicitly converted to type int, and the int version of abs was called, assuming that you included <stdlib.h> or <cstdlib>.
Solaris headers and libraries now comply with the C++ standard regarding math functions.
If you include, for example, <math.h> but not <stdlib.h>, and invoke abs with an integer argument, the compiler must choose among the three floating-point versions of the function. An integer value can be converted to any of the floating-point types, and no one conversion is preferred over the others (reference: C++ standard section 13.3.3). The function call is therefore ambiguous, and you will get an ambiguity error from any compiler that conforms to the C++ Standard.
If you invoke the abs function with integer arguments, you should include standard header <stdlib.h> or <cstdlib> to be sure you get the correct declarations for it. If you invoke abs with floating-point values, you should also include <math.h> or <cmath>.
Here's a simple recommended programming practice: if you include <math.h> or <cmath>, also include <stdlib.h> or <cstdlib>.
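A minimal sketch of that practice (the variable names are illustrative):
#include <stdlib.h>   // int and long overloads of abs
#include <math.h>     // float, double, and long double overloads
int main()
{
    int i = -3;
    double d = -2.5;
    int ai = abs(i);     // unambiguous: the int version from <stdlib.h>
    double ad = abs(d);  // the double version from <math.h>
    return 0;
}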
Similar considerations apply to other math functions, such as cos or sqrt. Solaris headers and libraries now comply with the C++ Standard, supplying float, double, and long double overloaded versions of the functions. If you invoke, for example, sqrt with an integer value, the code formerly compiled because only one version of sqrt was available. With three floating-point versions available, you must cast the integer value to the floating-point type that you want.
double root_2 = sqrt(2); // error
double root_2 = sqrt(2.0); // OK
double x = sqrt(int_value); // error
double x = sqrt(double(int_value)); // OK
The compiler creates a temporary object sometimes for convenience, and sometimes because the language rules require it. For example, a value returned by a function is a temporary object, and the result of a type conversion is a temporary object.
The original C++ rule was that the temporary object ("temp") could be destroyed at any time up until the end of the block in which it was created. Sun C++ compilers prior to C++ 5.9 destroyed temps at the end of the block (closing right brace).
The rule in the C++ standard is that temps are destroyed at the end of the complete expression in which the temp is created. Usually that is at the end of the statement in which the expression appears. Beginning with C++ 5.9, the compiler follows this rule by default.
If you find that your program depends (perhaps unintentionally) on temps living until the end of the block, you can use the option -features=no%tmplife to restore the old compiler behavior. Portable code should not depend on the old rule for lifetimes of temporaries, since few other compilers have the old behavior, even as an option.
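For illustration, the following sketch (the make_name helper is hypothetical) shows code that depends on the old rule:
#include <string>
#include <cstdio>
std::string make_name() { return "example"; }
int main()
{
    const char* p = make_name().c_str();  // a temporary std::string is created here
    // Under the standard rule (the default since C++ 5.9), the temporary is
    // destroyed at the end of the full expression above, so p is now dangling.
    // Under the old end-of-block rule (-features=no%tmplife), the temporary
    // survived until the closing brace, so code like this appeared to work.
    std::printf("%s\n", p);  // undefined behavior under the standard rule
    return 0;
}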
On Solaris, standard header <math.h> has a declaration for a struct "exception", as required by standard Unix. If you bring the C++ standard exception class into global scope with a using-declaration or using-directive, it creates a conflict.
// Example 1
#include <math.h>
#include <exception>
using namespace std; // using-directive
exception E; // error, exception is ambiguous
// Example 2:
#include <math.h>
#include <exception>
using std::exception; // using-declaration
exception E; // error, multiple declaration for exception
Name resolution is slightly different for using-declarations compared to using-directives, so the error messages are not quite the same.
Workarounds:
Avoid the using-declaration using std::exception; when you also use <math.h>. Write std::exception explicitly, or use a typedef, to access the standard exception class, as in this example:
#include <math.h>
#include <exception>
std::exception E; // OK
typedef std::exception stdException; // OK
stdException F; // OK
More generally, avoid the using-directive using namespace std;.
The C++ namespace std contains so many names that you are likely to have conflicts with application code or third-party libraries when you use this directive in real-world code. (Books and articles about C++ programming sometimes have this using-directive to reduce the size of small examples.) Use individual using-declarations or explicitly qualify names.
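For example, a minimal sketch of the preferred style:
#include <vector>
#include <iostream>
using std::vector;  // individual using-declaration for just the name you need
int main()
{
    vector<int> v(3, 1);
    std::cout << v.size() << std::endl;  // or qualify names explicitly
    return 0;
}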
A C++ rule enforced by the C++ compiler since version 5.3 is that a virtual function in a derived class can allow only the exceptions that are allowed by the function it overrides. The overriding function can be more restrictive, but not less restrictive. Consider the following example:
class Base {
public:
// might throw an int exception, but no others
virtual void f() throw(int);
};
class Der1 : public Base {
public:
virtual void f() throw(int); // ok, same specification
};
class Der2 : public Base {
public:
virtual void f() throw(); // ok, more restrictive
};
class Der3 : public Base {
public:
virtual void f() throw(int, long); // error, can't allow long
};
class Der4 : public Base {
public:
virtual void f() throw(char*); // error, can't allow char*
};
class Der5 : public Base {
public:
virtual void f(); // error, allows any exception
};
This code shows the reason for the C++ rule:
#include "base.h" // declares class Base
void foo(Base* bp) throw()
{
try {
bp->f();
}
catch(int) {
}
}
Since Base::f() is declared to throw only an int exception, function foo can catch int exceptions, and declare that it allows no exceptions to escape. Suppose someone later declared class Der5, where the overriding function could throw any exception, and passed a Der5 pointer to foo. Function foo would become invalid, even though nothing is wrong with the code visible when function foo is compiled.
Why do I get a warning that a function could not be inlined when I compile with +w2, but not when I compile with +w2 +d?
The C++ compiler has two kinds of inlining: C++ inline function inlining, which is done by the parser, and optimization inlining, which is done by the code generator. The C and Fortran compilers have only optimization inlining. (The same code generator is used for all compilers on a platform.)
The C++ compiler's parser attempts to expand inline any function that is declared implicitly or explicitly as inline. If the function is too large, the parser emits a warning, but only when you use the +w2 option. The +d option prevents the parser from attempting to inline any function, which is why the warning disappears when you use +d. (The -g option also turns off the inlining of C++ inline functions.) The -xO options do not affect this type of inlining.
The optimization inlining does not depend on the programming language. When you select an optimization level of -xO4 or higher, the code generator examines all functions, independent of how they were declared in source code, and replaces function calls with inline code wherever it thinks the replacement will be beneficial. No messages are emitted about optimization inlining (or its failure to inline functions). The +d option does not affect optimization inlining.
Why does fprintf("%s", NULL) cause a segmentation fault?
Some applications erroneously assume that a null character pointer should be treated the same as a pointer to a null string. A segmentation violation occurs in these applications when a null character pointer is accessed.
There are several reasons for not having the *printf() family of functions check for null pointers. These include, but are not limited to, the following:
The contract of the interface: printf("%s", pointer) needs to have pointer point to a null-terminated array of characters.
Checking would hide bugs: if a bad pointer is passed to printf() and the program drops core, it is easy to use a debugger to find which printf() call was given the bad pointer. However, if printf() hid the bug by printing "(null pointer)", then other programs in a pipeline are likely to try interpreting "(null pointer)" when they are expecting some real data. At that point it may be impossible to determine where the real problem is hidden.
If you have an application that passes null pointers to *printf, you can use a special shared object on Solaris, /usr/lib/0@0.so.1, that provides a mechanism for establishing a value of 0 at location 0. Because this library masks all errors involving the dereference of a null pointer of any type, you should use this library only as a temporary workaround until you can correct the code.
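While you track down the source of the null pointer, a simple defensive pattern such as the following sketch (the safe_str helper is hypothetical) avoids the crash at the call site:
#include <stdio.h>
// Substitute a visible marker for a null string argument.
static const char* safe_str(const char* s)
{
    return s ? s : "(null)";
}
int main()
{
    const char* name = NULL;         // imagine this arrived from elsewhere
    printf("%s\n", safe_str(name));  // prints "(null)" instead of crashing
    return 0;
}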
When I use sqrt(), I get different signs for the imaginary part of the square root of a complex number. What's the reason for this?
The implementation of this function is aligned with the C99 csqrt Annex G specification. For example, here is the output from the code examples that follow:
complex sqrt (3.87267e-17, 0.632456)
float sqrt(3.87267e-17, -0.632456)
#include <iostream.h>
#include <math.h>
#include <complex.h>
int main ()
{
complex ctemp(-0.4,0.0);
complex c1(1.0,0.0);
double dtemp(-0.4);
cout<< "complex sqrt "<< sqrt(ctemp)<<endl;
cout<< "float sqrt "<< sqrt(c1*dtemp)<<endl;
}
#include <iostream>
#include <math.h>
#include <complex>
using namespace std;
int main ()
{
complex<double> ctemp(-0.4,0.0);
complex<double> c1(1.0,0.0);
double dtemp(-0.4);
cout<< "complex sqrt "<< sqrt(ctemp)<<endl;
cout<< "float sqrt "<< sqrt(c1*dtemp)<<endl;
}
The sqrt function for complex is implemented using atan2. The following example illustrates the problem by using atan2. The output of this program is:
c=-0.000000 b=-0.400000 atan2(c, b)=-3.141593
a=0.000000 b=-0.400000 atan2(a, b)=3.141593
In one case, the output of atan2 is negative and in the other case it's positive. It depends on whether -0.0 or 0.0 gets passed as the first argument.
#include <stdio.h>
#include <math.h>
int main()
{
double a = 0.0;
double b = -0.4;
double c = a*b;
double d = atan2(c, b);
double e = atan2(a, b);
printf("c=%f b=%f atan2(c, b)=%f\n", c, b, d);
printf("a=%f b=%f atan2(a, b)=%f\n", a, b, e);
}
A "pure virtual function called" message always arises because of an error in the program. The error occurs in either of the following two ways:
class Abstract;
void f(Abstract*);
class Abstract {
public:
virtual void m() = 0; // pure virtual function
Abstract() { f(this); } // constructor passes "this"
};
void f(Abstract* p)
{
p->m();
}
When f is called from the Abstract constructor, "this" has the type "Abstract*", and function f attempts to call the pure virtual function m.
class Abstract {
public:
virtual void m() = 0; // body provided later
void g();
};
void Abstract::m() { ... } // definition of m
void Abstract::g()
{
m(); // error, tries to call pure virtual m
Abstract::m(); // OK, call is fully qualified
}
The C++ rule is that overloading occurs only within one scope, never across scopes. A base class is considered to be in a scope that surrounds the scope of a derived class. Any name declared in a derived class therefore hides, and cannot overload, any function in a base class. This fundamental C++ rule predates the ARM.
If another compiler does not complain, it is doing you a disservice, because the code will not behave as you probably expect. Our compiler issues a warning while accepting the code. (The code is legal, but probably does not do what you want.)
If you wish to include base-class functions in an overloaded set, you must do something to bring the base-class functions into the current scope. You can add a using-declaration:
class Base {
public:
virtual int foo(int);
virtual double foo(double);
};
class Derived : public Base {
public:
using Base::foo; // add base-class functions to overload set
virtual double foo(double); // override base-class version
};
Is there a version of the C++ standard library (stdlib) that is fully compliant? What functionality does the current libCstd not support?
In -compat=5 mode, the default library, libCstd, is forward and backward compatible with all of the C++ compilers from 5.0 through 5.12. It is not very standard-conforming, however (see the following questions). If you need better conformance to the C++ standard but do not need compatibility with libCstd, you have these options:
1. Use the supplied STLport library. This library has good conformance to the C++ standard, except that it lacks support for locales (for use in I18N and L10N programming). To use this library, recompile the entire program, including any C++ libraries that are linked to it, using the option -library=stlport4 on every CC command line. Be sure to link the program with CC, not directly with ld.
2. On Oracle Solaris 10 update 10 or Solaris 11, you can use the Apache stdcxx library which is very standard-conforming, including support for locales. To use this library, recompile the entire program, including any C++ libraries that are linked to it, using the option -library=stdcxx4 on every CC command line. Be sure to link the program with CC, not directly with ld.
3. You can compile in g++ compatibility mode, using the option -compat=g or -std=c++03, and use the g++ runtime library libstdc++. Recompile the entire program, including any C++ libraries that are linked to it, using one of these options on every CC command line. Be sure to link the program with CC, not directly with ld.
What functionality is missing from libCstd?
The standard library was originally (in C++ 5.0) built without support for features that required member templates and partial specialization in the compiler. Although these features have been available since C++ 5.1, they cannot be turned on in the standard library because they would compromise backward compatibility. The following is a list of missing functionality for each disabled feature.
In <algorithm>, the following template functions (non-member) are not supported:
count(), count_if()
In <iterator>, the following templates are not supported:
template <class Iterator> struct iterator_traits {}
template <class T> struct iterator_traits<T*> {}
template <class T> struct iterator_traits<const T*>{}
template <class InputIterator> typename iterator_traits<InputIterator>::difference_type distance(InputIterator first, InputIterator last);
In class complex in <complex>, the following member templates are not supported:
template <class X> complex<T>& operator= (const complex<X>& rhs)
template <class X> complex<T>& operator+= (const complex<X>& rhs)
template <class X> complex<T>& operator-= (const complex<X>& rhs)
template <class X> complex<T>& operator*= (const complex<X>& rhs)
template <class X> complex<T>& operator/= (const complex<X>&)
In class pair in <utility>, the following member template is not supported:
template<class U, class V> pair(const pair<U, V> &p);
In class locale in <locale>, the following member template is not supported:
template <class Facet> locale combine(const locale& other);
In class auto_ptr in <memory>, the following member templates are not supported:
auto_ptr(auto_ptr<Y>&);
auto_ptr<Y>& operator =(auto_ptr<Y>&);
template <class Y> operator auto_ptr_ref<Y>();
template <class Y> operator auto_ptr<Y>();
In class list in <list>, the following are not supported:
Member template sort.
Template constructors.
Also in class auto_ptr in <memory>:
template <class Y> class auto_ptr_ref{};
auto_ptr(auto_ptr_ref<X>&);
In <deque>, <map>, <set>, <string>, <vector>, and <iterator>, the following template functions (non-member) are not supported:
For map, multimap, set, multiset, basic_string, vector, reverse_iterator, and istream_iterator:
bool operator!= ()
For map, multimap, set, multiset, basic_string, vector, and reverse_iterator:
bool operator> ()
bool operator>= ()
bool operator<= ()
For map, multimap, set, multiset, basic_string, and vector:
void swap()
Some code that is valid according to the C++ standard will not compile.
The most common example is creating maps where the first element of the pair could be const but isn't declared that way. The member constructor template would convert pair<T, U> to pair<const T, U> implicitly when needed. Because that constructor is missing, you get compilation errors instead.
Since you are not allowed to change the first member of a pair in a map anyway, the simplest fix is to use an explicit const when creating the pair type. For example, instead of pair<int, T> use pair<const int, T>; instead of map<int, T> use map<const int, T>.
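For example, a minimal sketch of the workaround when inserting into a map under libCstd:
#include <map>
#include <string>
#include <utility>
int main()
{
    std::map<int, std::string> m;
    // Without the pair member-template constructor, construct the pair with the
    // const key type explicitly rather than relying on an implicit conversion
    // from pair<int, std::string> to pair<const int, std::string>.
    m.insert(std::pair<const int, std::string>(1, "one"));
    return 0;
}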
Using precompiled headers does not guarantee faster compile times. Precompiled headers impose some overhead that is not present when you compile files directly. To gain a performance advantage, the precompiled headers must have some redundancy that precompilation can eliminate.
For example, a program that is highly likely to benefit from precompilation is one that includes many system headers, iostreams, STL headers, and project headers. Those files contain conditionally-compiled code. Some headers are included multiple times, and the compiler must scan over the entire file if only to discover there is nothing to do in the redundant includes. System headers typically have hundreds of macros to expand.
Using a precompiled header means opening one file instead of dozens. The multiple includes that do nothing are eliminated, as are comments and extra white space. The macros in the headers are pre-expanded. Typically, these savings add up to a significant reduction in compile time.
The size of the file is probably not the issue, so here are three likely causes for the delay.
Large functions at high optimizations take a long time to process, and can require lots of memory. If the code uses large macros extensively, a function that looks small might become very large after macro expansion.
Try compiling without any optimization (no -xO? or -O? option). If the compilation completes quickly, the problem is probably one or more very large functions in the file, and the time and memory necessary to optimize it.
In addition, make sure the computer used for compilation has plenty of physical memory for the compilation run. If you don't have enough memory, the optimization phase can cause thrashing.
Inline functions (in C and C++) act like macros where compilation time is concerned. When a function call is expanded inline, it can turn into a lot of code. The compiler then is dealing with one large function instead of 2 or more small functions.
Compilations often proceed more quickly when you disable function inlining. Of course, the resulting code will probably run more slowly.
See the description of -xinline and "Using Inline Functions" in the C++ User's Guide for more information.
C++ templates cause the compiler to generate code based on the templates invoked. One line of source code can require the compiler to generate one or more template functions. It's not that templates themselves slow down compilation significantly, but that the compiler has more code to process than is apparent by looking at the original source code.
For example, if it were not for the standard library already having the functions, this line of code
cout << "value = " << x << endl;
would cause the compiler to generate 241 functions.
Prior to C++ 5.10, the compiler emitted debug data in "stabs" format by default. Stabs are kept in the individual .o files, and are not copied into the executable program. The program has a small set of "index stabs" that point to the .o files containing the debug data. To debug a program, the .o files must be available, and in the same location as when the program was built.
Beginning with C++ 5.10, debug data is emitted in the industry-standard "dwarf" format. Dwarf format does not allow the separation of debug data from the executable program, so all the debug data is copied into the executable. As a result, you don't need to have .o files to debug the program.
You can see a similar effect with stabs by building with the -xs option. All stabs data is copied into the executable program. You should find that the executable is similar in size when using dwarf as when using stabs with -xs. Beginning with C++ 5.11, stabs data is not fully supported.
The compiler itself is not multithreaded. You can expect better performance with MP systems, because the computer always has many other processes running at the same time as any one compilation.
If you use dmake (one of the tools that ships with the compiler), you can run multiple compilations simultaneously.
See also the -xipo option in the C++ User's Guide. When optimizing across multiple object files, multiple copies of the code generator can run at the same time, if the computer has sufficient resources.
You can either specify the C++ compiler option -sync_stdio=no at link time to fix this problem, or add a call to the sync_with_stdio(false) function and recompile.
The major performance problem with stdlib 2.1.1 is that it synchronizes C stdio with C++ streams by default. Each output to cout is flushed immediately. If your program does a lot of output to cout but not to stdout, the excess buffer flushing can add significantly to the run time of the program. The C++ standard requires this behavior, but not all implementations meet the standard. The following program demonstrates the synchronization problem. It must print "Hello beautiful world" followed by a newline:
#include <iostream>
#include <stdio.h>
int main()
{
std::cout << "Hello ";
printf("beautiful ");
std::cout << "world";
printf("\n");
}
If cout and stdout are independently buffered, the output could be scrambled.
If you cannot recompile the executable, specify the C++ compiler option -sync_stdio=no at link time. This option causes the equivalent of sync_with_stdio(false) to take effect at program initialization, before any program output can occur.
If you can recompile, add a call to the sync_with_stdio(false) function before any program output, thereby specifying that the output does not need to be synchronized. Here is a sample call:
#include <iostream>
int main(int argc, char** argv)
{
std::ios::sync_with_stdio(false);
}
The call to sync_with_stdio should be the first one in your program.
See the C++ User's Guide or the CC(1) man page for more information on -sync_stdio.
Does the compiler honor the inline keyword? Why didn't I see functions inlined even though I wrote them that way?
Fundamentally, the compiler treats the inline declaration as guidance and attempts to inline the function. However, there are still cases where it will not succeed. The restrictions are:
Calls in rarely executed code might not be expanded. For example, expressions used in static variable initialization are executed only once, so function calls in those expressions are not expanded. An inline function func might not be expanded when it is called in the initialization expression of a static variable, yet it could still be inlined in other places. Similarly, function calls in exception handlers might not be expanded, because that code is rarely executed.
Deeply nested call chains might not be fully expanded: func1 calls func2, func2 calls func3, and so forth. Even if each of these functions is small and there are no recursive calls, the combined expanded size could be too large for the compiler to expand all of them.
Many standard template functions are small, but have deep call chains. In those cases, only a few levels of calls are expanded.
Functions containing goto statements, loops, or try/catch statements are not inlined by the compiler. However, they might be inlined by the optimizer at the -xO4 level.
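For instance, a minimal sketch (the sum10 function is purely illustrative) of a function that the parser will not expand because of its loop, although the optimizer may still inline it at -xO4:
// sum10.h -- declared inline, but the loop prevents front-end expansion
inline int sum10(const int* a)
{
    int total = 0;
    for (int i = 0; i < 10; ++i)  // loop: the parser declines to inline this function
        total += a[i];
    return total;
}
// caller.cc
#include "sum10.h"
int use(const int* a) { return sum10(a); }  // may still be inlined by the optimizer at -xO4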
Note that in some previous versions, functions with complicated if-statements and return-statements could not be inlined. This limitation has been removed. Also, the default limitation on inline function size has been raised. With some programs, these changes will cause more functions to be inlined and can result in slower compilations and more code generation.
To completely eliminate the inlining of C++ inline functions, use the +d option.
Separately, the optimizer inlines functions at higher optimization levels (-xO4 and above) based on control-flow analysis and other factors. This inlining is automatic and is done regardless of whether you declare a function inline.
April 2011 (updated by Stephen Clamage, June 2016)