The -fp-model (Linux* and Mac OS* X) or /fp (Windows*) option allows you to control the optimizations on floating-point data. You can use this option to tune the performance, level of accuracy, or result consistency for floating-point applications across platforms and optimization levels.
For applications that do not require support for denormalized numbers, the -fp-model or /fp option can be combined with the -ftz (Linux* and Mac OS* X) or /Qftz (Windows*) option to flush denormalized results to zero, which can improve runtime performance on processors based on all Intel architectures (IA-32, Intel® 64, and IA-64).
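The following sketch illustrates what flushing denormals to zero means. It is only an illustration with values chosen for this example: when the program is built with, for example, -fp-model precise -ftz (or /fp:precise /Qftz) on an SSE-capable target, the denormalized product may be flushed to zero, whereas with gradual underflow preserved a tiny denormalized value is printed.

```c
#include <stdio.h>

int main(void) {
    /* volatile keeps the compiler from folding the product at compile time */
    volatile float a = 1.0e-20f;
    volatile float b = 1.0e-20f;

    /* a * b is about 1.0e-40, which is below FLT_MIN (~1.18e-38) and is
       therefore a denormalized result. With flush-to-zero enabled (-ftz or
       /Qftz), this result may be replaced by 0. */
    volatile float product = a * b;

    printf("product = %g\n", (double)product);
    return 0;
}
```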
You can use keywords to specify the semantics to be used. Possible values of the keywords are as follows:
| Keyword | Description |
|---|---|
| precise | Enables value-safe optimizations on floating-point data. |
| fast[=1\|2] | Enables more aggressive optimizations on floating-point data. |
| strict | Enables precise and except, disables contractions, and enables the pragma stdc fenv_access. |
| source | Rounds intermediate results to source-defined precision and enables value-safe optimizations. |
| double | Rounds intermediate results to 53-bit (double) precision and enables value-safe optimizations. |
| extended | Rounds intermediate results to 64-bit (extended) precision and enables value-safe optimizations. |
| [no-]except (Linux* and Mac OS* X) or except[-] (Windows*) | Determines whether strict floating-point exception semantics are used. |
The default value of the option is -fp-model fast=1 or /fp:fast=1, which means that the compiler uses more aggressive optimizations on floating-point calculations.
With the default keyword, -fp-model fast or /fp:fast, your results may differ significantly depending on whether the compiler uses x87 or SSE2 instructions to implement floating-point operations. Results are more consistent when the other keywords are used.
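The sketch below illustrates why the choice of instruction set can matter. It is not produced by any particular compiler setting; the casts simply force the same expression to be evaluated once with single-precision intermediates (as SSE2 code typically does) and once with double-precision intermediates (similar to the extra precision that x87 registers carry). The input values are chosen for illustration.

```c
#include <stdio.h>

int main(void) {
    float t1 = 1.0e-7f, t2 = -4.1f;   /* illustrative values */

    /* Single-precision intermediates, as on an SSE2 target where float
       arithmetic is carried out entirely in single precision. */
    float sse_like = ((4.0f + 0.1f) + t1) + t2;

    /* Double-precision intermediates, similar to x87 evaluation, which keeps
       extra precision in registers before the final store to float. */
    float x87_like = (float)((((double)4.0f + (double)0.1f) + t1) + t2);

    printf("single-precision intermediates: %.9g\n", sse_like);
    printf("double-precision intermediates: %.9g\n", x87_like);
    return 0;
}
```

On a typical build, the first value prints as 0 while the second prints a small nonzero number, because the double-precision intermediate preserves the contribution of t1 that single precision rounds away.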
Several examples are provided to illustrate the usage of the keywords. Each example shows:
A small example of source code (the same source code is used in all of the examples)
The semantics that are used to interpret floating-point calculations in the source code
One or more possible ways the compiler may interpret the source code (there are several ways the compiler may interpret the code; only some of these possibilities are shown)
Example source code:
float t0, t1, t2;
...
t0 = 4.0f + 0.1f + t1 + t2;
When the fast keyword is specified, the compiler applies the following semantics:
Additions may be performed in any order
Intermediate expressions may use single, double, or extended double precision
The constant addition may be pre-computed, assuming the default rounding mode
Using these semantics, some possible ways the compiler may interpret the original code are given below:
float t0, t1, t2;
...
t0 = (float)((double)t1 + (double)t2) + 4.1f;
float t0, t1, t2;
...
t0 = (t1 + t2) + 4.1f;
float t0, t1, t2;
...
t0 = (t1 + 4.1f) + t2;
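The reorderings above are not value-safe: for some inputs they produce visibly different answers. The following sketch (with illustrative values, not taken from the documentation) compares two of the orderings shown above; assume the program itself is built with a value-safe setting such as -fp-model precise or /fp:precise, so that the written evaluation order is preserved.

```c
#include <stdio.h>

int main(void) {
    float t1 = 1.0e10f, t2 = -1.0e10f;   /* illustrative values */

    /* The large terms cancel before the constant is added: prints about 4.1 */
    float a = (t1 + t2) + 4.1f;

    /* 4.1f is lost when it is added to 1.0e10f first: prints 0 */
    float b = (t1 + 4.1f) + t2;

    printf("(t1 + t2) + 4.1f = %g\n", a);
    printf("(t1 + 4.1f) + t2 = %g\n", b);
    return 0;
}
```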
The extended setting is equivalent to -fp-model precise on Linux* operating systems based on the IA-32 architecture and to -fp-model precise or /fp:precise on systems based on the IA-64 architecture.
Example source code:
float t0, t1, t2;
...
t0 = 4.0f + 0.1f + t1 + t2;
When the extended keyword is specified, the compiler applies the following semantics:
Additions are performed in program order
Intermediate expressions use extended double precision
The constant addition may be pre-computed, assuming the default rounding mode
Using these semantics, a possible way the compiler may interpret the original code is shown below:
float t0, t1, t2;
...
t0 = (float)(((long double)4.1 + (long double)t1) + (long double)t2);
The source setting is equivalent to -fp-model precise or /fp:precise on systems based on the Intel® 64 architecture.
Example source code:
float t0, t1, t2;
...
t0 = 4.0f + 0.1f + t1 + t2;
When the source keyword is specified, the compiler applies the following semantics:
Additions are performed in program order
Intermediate expressions use the precision specified in the source code, that is, single-precision
The constant addition may be pre-computed, assuming the default rounding mode
Using these semantics, a possible way the compiler may interpret the original code is shown below:
float t0, t1, t2;
...
t0 = ((4.1f + t1) + t2);
The double setting is equivalent to -fp-model precise or /fp:precise on Windows* systems based on the IA-32 architecture.
Example source code:
float t0, t1, t2;
...
t0 = 4.0f + 0.1f + t1 + t2;
When the double keyword is specified, the compiler applies the following semantics:
Additions are performed in program order
Intermediate expressions use double precision
The constant addition may be pre-computed, assuming the default rounding mode
Using these semantics, a possible way the compiler may interpret the original code is shown below:
float t0, t1, t2;
...
t0 = (float)(((double)4.1 + (double)t1) + (double)t2);
Example source code:
float t0, t1, t2;
...
t0 = 4.0f + 0.1f + t1 + t2;
When the strict keyword is specified, the compiler applies the following semantics:
Additions are performed in program order
Expression evaluation matches the expression evaluation used with the precise keyword.
The constant addition is not pre-computed because there is no way to tell what rounding mode will be active when the program runs.
Using these semantics, a possible way the compiler may interpret the original code is shown below:
float t0, t1, t2;
...
t0 = (float)((((long double)4.0f + (long double)0.1f) + (long double)t1) + (long double)t2);
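The point about the rounding mode can be made concrete with a small sketch. It assumes the program is built with -fp-model strict (or /fp:strict), which enables the fenv_access pragma semantics, so that the additions are evaluated at run time under whatever rounding mode is then active; the values and the use of fesetround are illustrative.

```c
#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

int main(void) {
    float t1 = 0.0f, t2 = 0.0f;   /* illustrative values */

    /* Default rounding mode: round-to-nearest. */
    float nearest = 4.0f + 0.1f + t1 + t2;

    /* The program can change the rounding mode at run time, so a compiler
       honoring strict semantics cannot fold 4.0f + 0.1f at compile time
       using the default rounding mode. */
    fesetround(FE_UPWARD);
    float upward = 4.0f + 0.1f + t1 + t2;
    fesetround(FE_TONEAREST);

    printf("round-to-nearest: %.9g\n", nearest);
    printf("round-upward:     %.9g\n", upward);
    return 0;
}
```

If the constant addition had been pre-computed with the default rounding mode, both lines would print the same value; under strict semantics the two results may differ in the last bit.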