Operating System - HP-UX

round float to N decimal places

 
Mario Bergotto
Occasional Contributor

round float to N decimal places

Has anyone found out the way to truncate a float (single or double, no matter) to a fixed number of decimal places??
For example... this doesn't work:

float x=3.1234;
float y;
int aux;
y=x*100.0;
aux=(int) y;
y= ((float)aux)/100.0;
printf("%f\n",y);

After this, printf shows 3.12, but if you use any debugger, it'll show 3.119999... and so on...
any ideas??
Regards,
Mario
5 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: round float to N decimal places

The key to what you are trying to do is the floor() function. Typically you add a very small bias value (e.g., 1.0e-5) if rounding to zero places, and then divide that bias by 10 for each additional decimal place. You then pass the sum of your original value and the bias to floor().
If it ain't broke, I can fix that.
A. Clay Stephenson
Acclaimed Contributor

Re: round float to N decimal places

Here is another method that leverages sprintf and atof to do the rounding, but essentially uses the same method that I outlined before.

#include <stdio.h>
#include <stdlib.h>

#ifndef FALSE
#define FALSE (0)
#endif

#define X_BIAS0 1.0e-5


double dblround(double x, int places)
{
    if (places >= 0 && x != 0.0)
    {
        int i = 0, is_neg = FALSE;
        double x_bias = X_BIAS0;
        char sx[320];

        for (i = 1; i <= places; ++i) x_bias /= 10.0;
        is_neg = (x < 0.0);
        if (is_neg) x = -(x);
        x += x_bias;
        (void) sprintf(sx, "%.*f", places, x);
        x = atof(sx);
        if (is_neg) x = -(x);
        if (x == 0.0) x = 0.0; /* not as stupid as it seems; when x == 0.0,
                                  x = -x causes printf to print -0.0 */
    }
    return(x);
} /* dblround */

int main()
{
    double x1 = 5.0499999, y1 = 0.0;

    y1 = dblround(x1, 2);
    (void) printf("%.4lf\n", y1);
    return 0;
}


If it ain't broke, I can fix that.
Mario Bergotto
Occasional Contributor

Re: round float to N decimal places

Didn't work... it works just fine if you do printf, but on gdb you get something like my original post...
I've heard some folks talking about using BCD here... Personally I'd hate to use it...
A. Clay Stephenson
Acclaimed Contributor

Re: round float to N decimal places

floor() is as close as you are going to get. Also note that gdb or any other debugger is essentially no more valid for displaying floating point than printf(); it is simply using higher precision. The fundamental problem is that some decimal values cannot be represented exactly in base 2. I suspect that you are running into problems when comparing for equality. Never do this with floating point. The best you can do is compare the absolute value of the difference between two values to a very small number.
e.g.

if (fabs(x - y) <= 1.0e-6)
{
    printf("X and Y are equal\n");
}

If you need exact representation then by definition, you can't use floating point.

One option to explore is long long (64-bit integers), which will probably give you sufficient range, exact representation, and the standard operators *, /, %, and == without having to do everything as functions --- which a BCD approach would require.
If it ain't broke, I can fix that.
Mike Stroyan
Honored Contributor

Re: round float to N decimal places

The float and double representations cannot have exact values equal to most decimal values. They represent numbers as sums of powers of two such as 2^2, 2^1, 2^0, 2^-1, 2^-2, and so on. Dividing by 100, or any other power of 10, generally requires an infinite series of smaller and smaller powers of two to represent the result exactly. There are several possible workarounds.

Typically folks just use float values that are close enough, then format final results as strings with the desired precision. That usually works fine.

As you noted, BCD, or Binary Coded Decimal, represents decimal fractions exactly. It uses four bits for each decimal digit. All math on BCD numbers must be done with slow software algorithms.

If you are only representing fairly small numbers and doing simple math on them, then you could use a variant of fixed-point arithmetic. Just keep all your numbers scaled by 100 and use a shifted decimal point when you format them as strings. Of course, operations like multiplies and divides would require rescaling by 100 afterward. I can't think of a sound reason to insist on an exact number of decimal places once you get into any computation that involves loss of precision. If you do anything like a divide by three, then more precision is a good thing compared to 'exact' truncation at a short number of decimal places.