Does anybody know what cout.precision(number) does in C++?

amazingtrade

Sorry to keep posting questions here. I have a really annoying problem where my lecture notes say:

cout.precision(2)

will output the result to two decimal places, i.e. if the user types in 55.439 it will round it to 55.44.

However, when I do this it just displays the two most significant figures and not the decimal places. Does anybody know how you get it to display two decimal places?

The notes were written for C++ .NET and I am using Bloodshed C++, but they are both ANSI compliant so the language should be the same.

I am so confused and frustrated, I've tried looking in books and on the net but have found nothing.

Thanks for any help.
 
You also have to tell the stream you're using a fixed precision; otherwise setprecision only determines how many significant digits to show. The following worked for me:

Code:
#include <iostream.h>
#include <iomanip.h>

int main(int argc, char* argv[])
{
    cout << setiosflags(ios::fixed) << setprecision(2) << 55.439 << endl;
    return 0;
}

All of this is available on msdn.microsoft.com btw ;)


Michael.
 
Thanks, I will use the MSDN site in future. I shall be having words with my lecturer tomorrow; I am not amused after spending a few hours trying to figure that one out. It works now though, thanks. :)
 
Originally posted by Isaac Sibson
Learn a proper language!

Like Assembler
6502 assembler rules! Haven't done assembler for ages though. I have enough x86 assembler to help me debug tricky issues, but I hardly ever need to go there anymore.

Still, I reckon that all people learning to program should learn some kind of assembler just because of the understanding of "how it works" that it gives you.

Michael.
 
This is potentially confusing; the best thing to do is experiment.

On a minor point, michaelab was using the old-style iostreams, not the standard ones, so the behaviour is almost inevitably different.

This code,
Code:
#include <iostream>
using std::cout ;

int main()
{
   cout << 57.98765432 << std::endl ;
   cout << 0.5798765432 << std::endl ;
   cout.precision ( 2 ) ;
   cout << 57.98765432 << std::endl ;
   cout << 0.5798765432 << std::endl ;
   cout.precision ( 4 ) ;
   cout << 57.98765432 << std::endl ;
   cout << 0.579876543 << std::endl ;

   return 0;
}
produces this output,
Code:
57.9877
0.579877
58
0.58
57.99
0.5799

You should investigate the iomanip manipulators std::scientific, std::showpoint and std::fixed. Ask if you want an example.

Paul
 
Paul - your results show that setprecision alone merely sets the number of significant digits, not the number of decimal places. However, if you're in "fixed" mode (a fixed number of decimal places) then it determines the number of decimal places.

Calling
Code:
cout.setf(ios::fixed)
before using the cout stream would have the same effect as using
Code:
setiosflags(ios::fixed)
in the stream like I did in my example.

AT - the iostream library (at least the original one that I know) is pretty impenetrable (a case of theory winning out over practice and common sense IMO) and usually pretty poorly documented as well.

See http://www.cplusplus.com/ref/iostream/index.html for extra docs on the classes.

Paul - It's been so long since I've had to code C++ in anger on a new project that I haven't learned the latest standard library stuff so apologies for not being up there on the bleeding edge :)

Michael.
 
Originally posted by michaelab
Still, I reckon that all people learning to program should learn some kind of assembler just because of the understanding of "how it works" that it gives you.

Absolutely. My comment was based on the fact that my programming interests lie more in programming simple embedded microprocessors (e.g. PIC), which you will program in the appropriate assembler (well, if you want any semblance of efficiency and reliability), or programming PLAs, PLDs, FPGAs or laying out ASICs, which is obviously done in hardware description languages (VHDL, Verilog, ABEL, etc)... And, of course, in no small part facetiousness... :p

But the point you make is very valid. I also think that a good understanding of microprocessor architectures (and architectural philosophies) is very important too: understanding how the architecture of the processor you are writing for affects code execution, and therefore how best to write it, allows you to produce the most efficient code (although this is obviously more relevant to someone writing a compiler than someone using that compiler).

As it is, I will freely admit that I just don't really get on with high level languages, but I am very happy indeed to generate some really rather complex code structures in assembler or machine code, simply because that is the architectural level I think at.... which is why I'm an electronic engineer rather than a computer scientist.
 
OK, I'm a little confused now. Michael's code seems to work fine; I am going to spend a while double-checking, but I can't see any problem with it in my code. The brief was to create a calculator with error checking and loops which displays to 2 decimal places. It does that, so I guess for this level it's fine.

As for assembler, well, I don't know about computer science courses, but as I am doing Multimedia and Internet Technology I think the main reason we are doing C++ is to learn the theory behind the softer languages such as VB etc., which are more likely to be used for multimedia applications.

Added:

I must say I have really enjoyed this programming module though. I never used to have much confidence in programming, but since doing this C stuff it's really increased my confidence.

I think I am one of these people that need to be taught rather than being able to teach myself, as I tried learning C before and never even got as far as simple if statements etc.

My next assignment is a 0's and X's game with AI logic, so that will be pretty hard to code. It's a challenge though and I'm up for it.

I never thought I would hear myself say I enjoy programming. I've written loads of programs in my spare time in the past, but it's the final product I enjoyed making, not the code.

I would have loved to have been young enough to get into 6502 assembler with my C64, but I was 11 when I upgraded to a PC and was too young to learn it. I started on GWBASIC as soon as I got my PC. I wrote my first simple programs when I was 9 years old, but then when I was around 15/16 I lost interest completely until now.
 
Originally posted by michaelab
your results show that setprecision alone merely sets the number of significant digits
A float is naturally something like 0.5798765432e2; precision sets the precision of the mantissa. When you display it in 'decimal' format you get the results shown, which makes some sort of sense.

You can then force different presentations with the fixed and scientific flags.

Doing a calculator properly is quite complicated and involves parsing expressions. It's the 'Hello World' of parser generators...

I'm probably the only one who remembers the fun to be had finding rounding bugs in the Windows calculator....

Paul
 
I know next year we have to produce a calculator in C++ using the Windows API. We will have to do all the precision stuff and operator precedence.
 
Originally posted by Paul Ranson
I'm probably the only one who remembers the fun to be had finding rounding bugs in the Windows calculator....
You're not :D I agree about writing a calculator - much trickier than meets the eye.

Michael.
 