C code noob's corner
I am (trying???) to learn C as part of my studies. Things are going fairly OK so far, but extending one task is giving me some grey hairs.
This code will output the size in bytes of various types of variables, as well as their value ranges. However, things get tricky when it comes to float and double, and to the signed/unsigned and long/short variants: whenever a variant exists, I want to include it in the program. Code:
#include <stdio.h>
The float header was included because I messed around with different float constants without getting anywhere. Any input, so to speak, would be greatly appreciated!
The short answer is that if you haven't found those constants in the same place as the rest, they probably don't exist; although I'm not sure, and there's always a way to find those values out. Such constants aren't required by the C specification; whether they're defined or not depends on the implementation (compiler) you're using.
In C, not even the bit sizes and ranges of the data types are uniquely defined. For example, an int is commonly 32 bits in a 32-bit environment, but its width is implementation-defined and can differ elsewhere. Most newer languages do specify fixed sizes and ranges for all data types, but C is different in this regard. PS: there's a sub board for this: http://www.abandonia.com/vbullet/forumdisplay.php?f=25
Actually, float.h contains the constants FLT_MAX, FLT_MIN, DBL_MAX and DBL_MIN. FLT_MAX and DBL_MAX are the largest values you can represent in those types; FLT_MIN and DBL_MIN are the smallest positive normalized values (the most negative values are -FLT_MAX and -DBL_MAX). These constants are ANSI, and must exist in any standard C compiler. It seems, though, that these are just the minimum magnitudes the implementation must support (i.e. a float could hold numbers larger than FLT_MAX, but it must be able to represent at least numbers up to FLT_MAX).
Thanks John, and sorry MM if I was misleading. I'm not familiar with the different C standards, but I know that data types in C depend on the platform (bitness), and the compiler takes care of picking up the correct #defines for the environment.
Look what I found: http://en.wikipedia.org/wiki/C_data_...he_basic_types
Hello Japo. You are correct: C data types may have different sizes in different implementations, which can be annoying, but the ANSI standard defines a minimum range that the types must represent. Take a look here:
http://www.acm.uiuc.edu/webmonkeys/b...guide/2.4.html That was actually a good idea from the guys who created the standard; otherwise portability would be a pain.
Thanks, I didn't know there was a minimum irrespective of platform. By "the standard" I suppose you mean C89?
Anyway MM, keep the questions coming; I hope you stick with learning! BTW, to be strictly standard C you should define "int main(void)", not "int main()".
That's a good question. I originally learned about those type restrictions as being ANSI, but I don't know which of the standards introduced them.
Thanks tons to both of you! And don't worry, Japo, I will keep asking :D I will need all the input I can get. This school isn't wasting time or waiting for stragglers!
As per the IEEE-754 specification, all floating point numbers are signed. See http://en.wikipedia.org/wiki/IEEE_floating_point for more details. :) And thank your deity of choice that you don't have to learn that in order to implement floating point operations in assembly... completely on CPU. :p Also, you might be missing wchar_t, but I don't remember for sure whether it's a C or a C++ thing.
http://msdn.microsoft.com/en-us/library/aa383751.aspx I think wchar_t is C++ (I never used C++ much), but I don't think it's C, unless it was introduced in C99 or C11 (many popular compilers don't even fully follow C99).
Powered by vBulletin® Version 3.7.1
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.