
Describe the difference between #include <file> and #include "file" regarding file inclusion in the C programming language.


These are the two ways of including a file into a program while programming in the C
programming language. The first form, the one with the angle brackets around the file name,
is used to include a file which exists in some predefined default location, so the
preprocessor searches for it there. Look at the following example of an include path:
INCLUDE = C:\EDWINWILL\INCLUDE; H:\HEADERSFORTHESOURCE;
Given such a setting, and with the use of #include <file>, the preprocessor will first
check the directory C:\EDWINWILL\INCLUDE for the specified file. In the event that the
specified file is not found there, the preprocessor automatically moves on to the directory
H:\HEADERSFORTHESOURCE, and if it still fails to find the stipulated file, it finally
checks the current directory.
When the programmer wants the preprocessor to check the current directory first, before
proceeding to the predefined locations he has set up, he may use the form that encloses
the file name in double quotation marks.
With the use of #include "file" and the same example, the following takes place: the
preprocessor first checks the current directory for the requested file; in the event that
it fails to find the requested file there, it then looks into the directory
C:\EDWINWILL\INCLUDE, and if it still fails to find the stipulated file, it looks into the
directory H:\HEADERSFORTHESOURCE.
The standard headers such as stdio.h or stdlib.h, which are rarely, if ever, modified, are
included using the #include <file> form, and it is advisable to let them be read from the
standard include directory of the given compiler.
The #include "file" form is used to include nonstandard, also called user-defined, files,
as these files are frequently modified and the programmer would prefer that the
preprocessor pick up the latest modification, which usually exists in the current
directory, as opposed to an unmodified copy in the standard include directory.
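A minimal sketch of the two forms follows; myconfig.h is a hypothetical project header used only for illustration (it is shown as a comment so the file compiles on its own), and the helper function merely restates the two search orders described above.

```c
#include <stdio.h>    /* angle brackets: searched in the standard include directories */
#include <string.h>

/* #include "myconfig.h" */
/* The quoted form above would be searched for in the current
 * directory first; "myconfig.h" is hypothetical, so it stays a comment. */

/* Summarizes the two search orders described in the text. */
const char *include_search_order(int quoted_form)
{
    return quoted_form ? "current directory, then include path"
                       : "include path, then current directory";
}
```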
What is the difference between an array and a pointer?
A pointer is a data type whose variables hold addresses; referencing and dereferencing
operations are done with the operators & and * respectively. An array, by contrast, is a
contiguous memory region holding data of the same type, and subscripted variables are
used to access and manipulate data in the array. An array can also be accessed using
pointer expressions, since the array name decays to a pointer to its first element.
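A short sketch of the decay relationship described above: the function and variable names are illustrative only.

```c
#include <assert.h>

/* a[i] and *(a + i) denote the same element: in an expression the
 * array name decays to a pointer to its first element, and the same
 * decay happens when an array is passed to a function. */
int third_element(const int *p)   /* receives the decayed pointer */
{
    return *(p + 2);              /* equivalent to p[2] */
}
```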
What is the advantage of using an enum datatype as compared to the #define constant?
There are many advantages of using an enum (enumeration) constant, including, but not
limited to, ease of maintenance and improved program readability.
Some of the advantages are:

1) The compiler automatically assigns values to enumerated constants (enum constants),
whereas the programmer must assign values to symbolic constants manually. Take a look at
the following example: one may have an enumeration of the error codes for errors which
may occur in a given program. The enum definition will look like the following:
enum ErrorCode
{
OutOfMemory,
InsufficientDiskSpace,
LogicError,
FileNotFound
};
Here the constant OutOfMemory is automatically assigned the value zero (0) by the
compiler because it is the first enumerator in the definition. The automatic assignment
continues in incremental order for the enumerators that follow, so in this case
InsufficientDiskSpace is assigned the value 1, LogicError the value 2, and FileNotFound
the value 3.
If one had to go for the #define constant, the following would be the scenario:
#define OutOfMemory 0
#define InsufficientDiskSpace 1
#define LogicError 2
#define FileNotFound 3
Here the values are assigned by the programmer. Either way we achieve the same purpose,
but consider the effort that goes into maintaining the second approach. What if there is
a need to add two more constants to represent the error codes DriveNotReady and
CorruptFile? With the enumeration method, one only needs to put these constants
somewhere in the enum definition and leave it to the compiler to generate two unique
values for them. With the symbolic-constant method, one has to manually assign two new
numbers to these constants and also make sure the assigned values are unique, which can
be extremely tedious, inefficient, and error-prone.
2) With enumeration constants, the program becomes more readable and is thus easier to
maintain and to share with the other developers on the team.
3) The other advantage is that some symbolic debuggers can print the values of
enumeration constants, whereas most of them cannot print the values of symbolic
constants. This means that with enumeration constants the debugging task is much easier:
one simply inspects a constant and knows its value instantly. For #define values, which
most debuggers cannot print, one has little alternative but to look the value up
manually, one file at a time, down to the header files. Imagine how tedious that task can
be, to say nothing of the time involved.
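The maintenance point from item 1 can be sketched as follows: the two new constants are simply inserted into the definition, and the compiler renumbers everything after them automatically.

```c
/* The error-code enum from the text, extended with the two new
 * constants; the compiler assigns every value automatically. */
enum ErrorCode {
    OutOfMemory,            /* 0 */
    InsufficientDiskSpace,  /* 1 */
    LogicError,             /* 2 */
    DriveNotReady,          /* 3, newly inserted */
    CorruptFile,            /* 4, newly inserted */
    FileNotFound            /* 5, renumbered by the compiler */
};
```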
Differentiate between the terms far and near as far as the C programming language is concerned.
These terms refer to pointers. Some PC compilers use more than one type of pointer, two
to be specific. Near pointers are 16 bits long and can address a 64KB range, whereas far
pointers are 32 bits long and can address up to a 1MB range.
Near pointers operate only within a 64KB segment; there is a single segment for function
addresses and another for data. Far pointers have a 16-bit base, which is the segment
address, as well as a 16-bit offset; the base is multiplied by 16, so the pointer is
effectively 20 bits long. Before one compiles the code, it is necessary to tell the
compiler which memory model to use. With a small-code memory model, near pointers are
used by default for function addresses.
This implies that all the functions must fit in a single 64KB segment. With a large-code
model, far function addresses are used by default. Similarly, small-data models default
to near data pointers and large-data models to far data pointers. These are only the
defaults, and one is free to declare variables and functions explicitly as either near or
far.
It is worth noting that even though far pointers have a longer range, they are relatively
slow compared to near pointers: every time a far pointer is used, the code or data
segment register must be swapped out. Far pointers also have odd semantics for arithmetic
and comparison. For instance, two far pointers may point to the same address yet compare
as unequal. So if your program fits in a small-data, small-code model, you have an
advantage.
Name the quickest sorting method one can use.
Usually the need for speed takes no priority here, as there are many factors to consider
and thus no one-size-fits-all answer. These factors include the likely order of the
input, the size of the data, and the nature of the data, so no single algorithm is the
best across all situations. Moreover, conventional sorting usually takes so little time
that one should have no need to worry about it, and since sorting tends to happen
seldom, it helps nobody to keep worrying about how long it would take.
All said and done, we will talk about three sorting mechanisms, all of which are very
fast and each of which is relevant to particular situations: quick sort, merge sort, and
radix sort.
1) Quick sort
This takes the divide-and-conquer approach: it reduces the given sorting problem into
several simpler sorting problems and solves each of them independently. The value to be
used as the dividing value is usually chosen from the input data, and the data is then
divided into three segments: the elements which come before the dividing value, the
dividing value itself, and the elements that come after the dividing value. The
partitioning is done by exchanging elements, so that elements in the first segment which
belong to the third are moved there and those in the third which belong to the first are
moved back. Elements equal to the dividing element may be put in either segment and the
algorithm still runs properly.
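A minimal array-based sketch of the scheme described above, assuming the last element of each range serves as the dividing value after swapping the first element there; production code would normally just call the library's qsort().

```c
#include <stddef.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* In-place quick sort: partition around a dividing value chosen
 * from the input, then solve the two smaller problems independently. */
void quick_sort(int *a, size_t n)
{
    size_t i, store = 0;
    if (n < 2)
        return;
    swap(&a[0], &a[n - 1]);           /* move the dividing value aside */
    for (i = 0; i < n - 1; i++)       /* exchange misplaced elements */
        if (a[i] < a[n - 1])
            swap(&a[i], &a[store++]);
    swap(&a[store], &a[n - 1]);       /* dividing value into its final slot */
    quick_sort(a, store);             /* elements before the divider */
    quick_sort(a + store + 1, n - store - 1);  /* elements after it */
}
```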
2) Merge sort
Merge sort is also a divide-and-conquer sort. It works by treating the data to be sorted
as a sequence of already-sorted lists, where in the worst case each list is one element
long. Adjacent sorted lists are merged into larger sorted lists, and this is performed
recursively until a single list containing all the elements is achieved. This type of
sorting is good for sorting linked lists and other data structures which are not in
array form, and it can be used to sort items which do not fit into memory as well. It can
also be implemented as a stable sort.
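An array-based sketch of the merge step described above (a true linked-list version would relink nodes instead of copying); the temporary buffer and the assumption that the allocation succeeds are simplifications of this sketch.

```c
#include <stdlib.h>
#include <string.h>

/* Top-down merge sort: sort each half recursively, then merge the
 * two adjacent already-sorted runs into one larger sorted run. */
void merge_sort(int *a, size_t n)
{
    size_t mid = n / 2, i = 0, j, k = 0;
    int *tmp;
    if (n < 2)
        return;
    merge_sort(a, mid);
    merge_sort(a + mid, n - mid);
    tmp = malloc(n * sizeof *tmp);
    if (!tmp)
        return;                       /* sketch: assume allocation succeeds */
    j = mid;
    while (i < mid && j < n)          /* merge the two sorted runs */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* <= keeps it stable */
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof *a);
    free(tmp);
}
```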
3) Radix sort
This sort takes a list of integers and puts each element into a smaller list based on the
value of its least significant byte (LSB). The small lists are thereafter concatenated,
and the process is repeated for each successively more significant byte until the entire
list is sorted. Radix sort is simplest to implement on fixed-length data such as
integers.
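A sketch of the byte-at-a-time process described above for 32-bit unsigned integers, using a counting pass per byte instead of literal per-byte lists (the counting array plays the role of the concatenated small lists); it assumes unsigned is 32 bits and that the scratch allocation succeeds.

```c
#include <stdlib.h>
#include <string.h>

/* LSD radix sort: four stable counting passes, one per byte,
 * starting from the least significant byte. */
void radix_sort(unsigned *a, size_t n)
{
    unsigned *buf = malloc(n * sizeof *buf);
    int shift;
    if (!buf)
        return;                                /* sketch: assume success */
    for (shift = 0; shift < 32; shift += 8) {
        size_t count[257] = {0}, i;
        for (i = 0; i < n; i++)                /* histogram of this byte */
            count[((a[i] >> shift) & 0xFF) + 1]++;
        for (i = 1; i < 257; i++)              /* prefix sums = output slots */
            count[i] += count[i - 1];
        for (i = 0; i < n; i++)                /* stable scatter into buf */
            buf[count[(a[i] >> shift) & 0xFF]++] = a[i];
        memcpy(a, buf, n * sizeof *a);
    }
    free(buf);
}
```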
State the circumstances under which a far pointer should be used.
One can get away with using a small memory model in most programs, though it may happen
that a few things just do not fit into the small data and code segments. In such a
situation, it is recommended that one resort to explicit far pointers and far function
declarations so as to be able to reach the rest of the memory. A far function can be
placed outside the 64KB segment into which most functions are shoehorned under the
small-code model. Many library functions are declared explicitly far, which means they
will work regardless of the code model used in the program.
A far pointer can refer to information outside the 64KB data segment. Under normal
conditions, such pointers work with farmalloc() and similar routines to manage a heap
which is separate from where all the rest of the data lives. When using the small-data,
large-code model, it is advisable to make the function pointers far.
Describe the meaning of the term hashing in C programming language.
Hashing literally means grinding up, and that is essentially what the technique does. The
core of a hash algorithm is the hash function; this function is the one responsible for
taking one's nice and neat data and grinding it into some integer which looks very
random.
This assists in working with data which has no inherent ordering, such as images, or
data which is relatively expensive to compare, a category which images also fall into.
Under normal circumstances, one cannot perform comparison searches on data which has no
inherent ordering.
For data which is expensive to compare, the number of comparisons needed will be large
even when binary search is employed. Owing to these facts, it is recommended that rather
than searching for the data itself, one simply condenses (hashes) the data into an
integer, known as its hash value, and keeps all the data having the same hash value in
the same place. This task is usually performed by using the hash value as the index into
an array.
Searching for a particular item is done by hashing it and then looking only at the data
whose hash value corresponds to it; this procedure reduces the number of items that need
to be examined. The number of comparisons can be made close to one if the parameters are
set properly and there is sufficient storage space.
Efficiency in hashing is affected by several aspects, including, but not limited to, the
hash function itself. This function is charged with the responsibility of distributing
the data randomly throughout the entire hash table in order to reduce the likelihood
that a collision may occur. A collision, under normal circumstances, occurs when two or
more different keys have the same hash value. Such a problem can be solved in two ways.
In the open addressing scenario, another position in the hash table is selected to hold
the newly inserted element; in the event that one searches the hash table and fails to
find the entry of interest at its stipulated location, the search automatically carries
on until either the entry or an empty location is found in the table. The other way to
solve the problem is the method referred to as chaining: here, a linked list, or bucket,
is used to hold all the elements hashing to the same hash value.
Under this scheme, when one searches the hash table, the linked list has to be searched
linearly.
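A minimal sketch of the chaining scheme just described, mapping strings to ints; the table size, the djb2-style hash function, and all names here are illustrative choices, not part of any standard API, and the sketch assumes key strings outlive the table and that allocations succeed.

```c
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

/* Colliding keys share a linked list (a bucket). */
struct entry { const char *key; int value; struct entry *next; };
static struct entry *buckets[NBUCKETS];

/* "Grinds" the key into a random-looking integer, then maps it
 * onto a bucket index. */
static unsigned long hash(const char *s)
{
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

void ht_put(const char *key, int value)
{
    struct entry *e = malloc(sizeof *e);
    unsigned long h = hash(key);
    e->key = key;
    e->value = value;
    e->next = buckets[h];             /* chain onto the bucket's list */
    buckets[h] = e;
}

int ht_get(const char *key, int *out) /* returns 1 if found, 0 if not */
{
    struct entry *e;
    for (e = buckets[hash(key)]; e; e = e->next)  /* linear chain search */
        if (strcmp(e->key, key) == 0) {
            *out = e->value;
            return 1;
        }
    return 0;
}
```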
Describe how one can determine the size of a portion of memory which he has allocated in
C programming.
The malloc/free implementation remembers the size of each block at the moment it is
allocated, so it is not necessary to remind it of the size when freeing. Under normal
conditions, this size is stored adjacent to the allocated block, which is why things
break badly in the event that one oversteps the bounds of an allocated block even
slightly.
Which one of the following functions should one employ, calloc() or malloc()?
Both of these functions are used in the allocation of dynamic memory, but each operates
differently from the other. The malloc() function takes a size and returns a pointer to
a chunk of memory of at least that size. The declaration is as follows:
void *malloc( size_t size );
calloc() works by taking a number of elements and the size of each element, and then
returns a pointer to a chunk of memory big enough to hold all of them. The following is
the declaration:
void *calloc( size_t numElements, size_t sizeOfElement );
There is one major and one minor difference between these two functions. The major
difference is that malloc() does not initialize the allocated memory. The first time,
malloc() may give you a particular chunk of memory full of zeros, but in the event that
the memory had earlier been freed and reallocated, there will probably be whatever junk
was left in it. This can cause a program to run correctly in a simple case, where no
memory is reallocated, and then break when it is expected to do more, or is used longer,
and memory gets reused.
The calloc() function, on the other hand, fills the allocated memory with all-zero bits.
This implies that anything used there as a char or an int of any length, whether signed
or unsigned, is guaranteed to be zero. Anything one expects to use as a pointer is set to
all-zero bits, which is usually, but not always guaranteed to be, a null pointer.
Likewise, anything one expects to use as a float or a double is set to all-zero bits,
which is a floating-point zero on some types of machines, but not on all.
The minor difference is that whereas malloc() conceptually returns a single object,
calloc() returns an array of objects; thus it is common to find people using calloc() to
make it clear that they want an array.
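The major difference can be sketched as follows: the function names are illustrative, and the second helper shows that a malloc() plus memset() pair gives the same zeroed result that calloc() guarantees.

```c
#include <stdlib.h>
#include <string.h>

/* calloc(): n elements, each guaranteed to be all-zero bits. */
int *zeroed_ints(size_t n)
{
    return calloc(n, sizeof(int));
}

/* malloc(): contents are indeterminate until cleared by hand. */
int *manual_zeroed_ints(size_t n)
{
    int *p = malloc(n * sizeof *p);  /* may contain leftover junk here */
    if (p)
        memset(p, 0, n * sizeof *p); /* now equivalent to the calloc() call */
    return p;
}
```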
State whether or not there exists a possibility to execute code after the program has
exited the main() function.
With the atexit() function available in the standard C library, one can perform cleanup
operations even after the program has terminated. One sets up the function one wants to
execute automatically after the program exits by passing a pointer to it to the atexit()
function.
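A minimal sketch of the mechanism: the handler name is illustrative, and handlers registered this way run after main() returns (or exit() is called), in reverse order of registration.

```c
#include <stdio.h>
#include <stdlib.h>

/* Runs automatically once main() has finished. */
static void cleanup(void)
{
    puts("cleanup: running after main() has exited");
}

/* atexit() returns 0 when the handler is registered successfully. */
int register_cleanup(void)
{
    return atexit(cleanup);
}
```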
Is it possible for one to declare static variables in the header file?
Yes, it is possible to declare static variables in header files, as long as their
definitions are provided in the same header. However, each source file that includes the
header then gets its own private copy of the stipulated variable, meaning it is no longer
global and cannot be shared anywhere else. Under normal circumstances, this is never the
intention of a given header file, thus the use of static variables in headers is not a
recommended idea.
What do you understand by the word heap in C programming language?
This is the place which provides the memory for the calloc(), malloc(), and realloc()
functions. It is worth noting that getting memory from the stack is usually much faster
than getting memory from the heap, yet the heap is more flexible compared to the stack:
one can allocate memory at any time and deallocate the memory in any order. This memory
is not automatically deallocated under normal circumstances; one has to call the free()
function.
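A tiny sketch of that flexibility: the function name is illustrative, and the point is that the heap block remains valid after the allocating function returns, unlike a local (stack) variable.

```c
#include <stdlib.h>

/* The returned block lives until the caller passes it to free();
 * a local array declared here would vanish when the function returns. */
double *make_heap_buffer(size_t n)
{
    return malloc(n * sizeof(double));   /* caller must free() this */
}
```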

It is recognizable that almost all recursive data structures are implemented using
memory from the heap, and this includes even strings, in particular strings whose length
is determined at runtime. It is usually advisable to keep data in local variables
allocated from the stack, as this makes the code execute faster than using the heap, but
there exists a tradeoff: a heap-based algorithm might be more robust and flexible, so one
has to look into the prevailing situation to come up with an optimal decision.
Memory that has been allocated on the heap is usually available until the program ends,
and this is fine as long as one deallocates it once done with it. Failing to remember
this results in the so-called memory leak: allocated memory which is no longer needed but
is yet to be deallocated. If such a phenomenon happens inside a loop, one will find
oneself using up all the memory on the heap, after which the allocation functions return
a null pointer. Under some circumstances, when a program fails to deallocate all that has
been allocated, the memory will not be available even after the end of the program.
Show me how you would open a file so that it can be updated by other programs too at the
same time.
The C compiler library has a low-level file function referred to as sopen() which assists
in opening a file in shared mode. Starting from DOS 3.0, one could open a file in shared
mode by loading a certain program known as SHARE.EXE. The term shared mode implies that
the particular file is being shared, in this case not by users but by other programs
apart from your own.
With the help of this function, one allows other running programs to update the same file
that one is updating. The function takes four parameters: a pointer to the filename one
intends to open, the mode of operation in which to open the given file, the file-sharing
mode to be adopted, and, if one is creating a file, the file-creation mode.
The second parameter is the operation flag parameter, and the following values can be
assigned to it:
O_APPEND: all writes are appended to the end of the file.
O_BINARY: the file is opened in untranslated, also known as binary, mode.
O_CREAT: if the file does not exist, it is created.
O_EXCL: in instances where the O_CREAT flag has been used and the file is in existence,
an error is returned.
O_RDONLY: the file is opened in read-only mode.
O_RDWR: the file is opened for both reading and writing operations.
O_TEXT: the file is opened in translated, also known as text, mode.
O_TRUNC: the existing file is opened and its contents are overwritten.
O_WRONLY: the file is opened in write-only mode.
The third parameter of the sopen() function is the sharing flag, and the following values
can be assigned to it:
SH_COMPAT: no other program may access the file.
SH_DENYRW: no other program may read from or write to the stipulated file.
SH_DENYWR: no other program may write to the file.
SH_DENYRD: no other program may read from the file.
SH_DENYNO: any program may read from and write to the file.
In case the sopen() function succeeds, the file handle, which is a non-negative number,
is returned; in the event of an error, the value -1 is returned and the global variable
errno is set to one of the values below:
ENOENT: the file or the path was not found.
EMFILE: there are no more available file handles.
EACCES: permission to access the stipulated file has been denied.
EINVACC: the access code is invalid.
Give the difference between the terms NULL and NUL.
The term NULL refers to a macro which is usually defined in <stddef.h> for the null
pointer, whereas the term NUL is the name of the first character in the American Standard
Code for Information Interchange, the ASCII; it is equivalent to the value zero (0). We
do not have a standard macro NUL in C, though some people have a tendency to define it.
The digit 0 corresponds to a value of 48 in decimal, and it is advised that you take
sufficient care not to confuse this digit zero with the NUL character at any cost.
NULL can be defined as ((void*)0), and NUL as '\0'.

Is it possible for one to execute the printf statement without the use of a semicolon (;)?
No, this is not possible, as printf() is one of the built-in library functions, and these
are invoked using call statements; there is no way to leave a calling statement without a
semicolon and expect the code to compile and execute normally.
State the situation that calls for the use of a volatile modifier.
This is a directive meant for the compiler's optimizer which limits how operations
involving particular variables can or cannot be optimized. Two cases arise where one is
expected to use the volatile modifier. The first involves memory-mapped hardware: devices
such as graphics adaptors which appear to the computer system's hardware as if they were
part of the stipulated computer system's memory. The second is shared memory: memory that
is used by two or more programs at the same time in a given computer system. It is
observable that in most computer systems there exists a set of registers which are
accessible faster than the main memory of the given computer system, i.e. the RAM, or
Random Access Memory.
Usually, good compilers perform a kind of optimization referred to as redundant load and
store removal. Here the compiler looks through the code to find places where it can
remove an instruction to load data from memory, because the value is in a register
already, or remove an instruction to store data to memory, because the value can stay in
the register until it is changed again. However, in a situation where a variable is a
pointer pointing to anything other than the computer's normal memory, for instance
memory-mapped ports situated on peripherals such as network adaptor cards, redundant
load and store removal might fail to do anything constructive and instead be
detrimental. The following piece of code (code segment or extract) can assist in
illustrating what we are discussing here:
time_t time_addition(volatile const struct timer *t, int a)
{
    int m;
    int y;
    time_t then;

    y = 0;
    then = t->value;
    for (m = 0; m < 1000; m++)
    {
        y = y + a;
    }
    return t->value - then;
}
In the code above, t->value is a hardware counter which increments as time passes. The
function is responsible for adding a to y one thousand times and then returns the amount
by which the timer was incremented during the performance of the one thousand additions.
In the absence of the volatile modifier, it might be assumed by a clever optimizer that
the value of t->value remains constant while the function is under execution, as there is
no observable statement that causes it to change, and thus that there arises no need to
read it from memory a second time and subtract it, as the answer will always be zero (0).
The compiler might thus decide to optimize the function so that it always returns zero.
In the event that the variable points to data in a particular shared memory, it is still
not advisable for the compiler to perform these redundant load and store optimizations.
Shared memory is meant to allow two programs to communicate with one another: one program
writes or stores data in a particular portion of the shared memory, and the other
programs which the said program is sharing the memory with can then read the information
from that memory. In the event that the compiler optimizes away a load or store of the
shared memory, the stipulated communication will be interfered with.
What do you understand by the term static function?
This is a type of function having its scope limited only to the current source file,
whereby the word scope is used to mean visibility, whether of a function or of any given
variable. Global functions or variables are the functions or variables visible outside
the current source file, and the reverse is local, be it a function or a variable. Thus
the term static function implies the same meaning as the term local function in this
context.
Give the reason why it is important that a programmer prototypes a function.
It is the function prototype that tells the compiler the data types of the arguments the
given function is anticipating and also the data type the given function should return.
This assists the compiler in detecting errors in the event that functions have not been
properly called and in preventing erroneous type conversions from taking place.
What is the equivalent of the following expression x%8?
The equivalent of the above expression is x & 7, provided x is unsigned or non-negative.
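A sketch of the equivalence: for a power of two 2^k, the remainder of dividing an unsigned value by 2^k is exactly its low k bits, so x % 8 and x & 7 agree; the helper names are illustrative.

```c
/* For unsigned (or non-negative) x, x % 8 keeps the low three
 * bits of x, and so does x & 7. */
unsigned mod8(unsigned x) { return x % 8; }
unsigned and7(unsigned x) { return x & 7; }
```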
Can we have a variable which is both const as well as volatile?
It is very possible. Usually the const modifier protects the code from changing the value
of the given variable, but it has nothing to do with whether the value can be changed by
other means outside the code. Look at the following example for clarification: consider a
situation whereby a timer structure is accessed via a volatile const pointer. Here the
function itself is not responsible for changing the timer value, and as a result the
pointer is declared const, but the hardware in the computer system changes the value,
thus making it necessary to declare it volatile as well. In a situation whereby a
variable is both volatile and const, the two modifiers can appear in any order.
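A minimal sketch of the timer scenario just described; the struct layout and names are purely illustrative, since a real timer register would live at a hardware-defined address.

```c
/* A timer register the program may read but must not write (const),
 * whose value the hardware changes underneath (volatile). */
struct timer { unsigned value; };

unsigned read_timer(volatile const struct timer *t)
{
    return t->value;   /* volatile: every call really reads memory */
}
```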
Explain how you would override a defined function.
This would be done by using the preprocessor directive #undef to undefine the macro that
was defined previously, so that the name can be overridden with a new definition.
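A short sketch of the override: BUFFER_SIZE is a hypothetical macro name used only for illustration.

```c
/* #undef removes the current definition of a macro so that the name
 * can be redefined without a redefinition warning. */
#define BUFFER_SIZE 256
#undef BUFFER_SIZE            /* discard the previous definition */
#define BUFFER_SIZE 512       /* override it with a new one */

int buffer_size(void)
{
    return BUFFER_SIZE;       /* expands to 512 after the override */
}
```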
Show how you would print an address.
An address can be printed in several ways, the safest being the one involving the use of
printf(), sprintf(), or fprintf() together with the %p specification.
Here we get a void pointer (void*) printed. It is important to note that different
compilers print pointers in different formats, and your compiler chooses the format that
best suits your platform. You can improve safety, if you have pointers of kinds other
than void*, by casting the pointers to void*. Check the statement below for reference:
printf( "%p\n", (void*) buffer );

Describe the following:


1) Null pointer assignment error
2) Bus errors
3) Core dumps
The above are serious errors, or symptoms which indicate that there is a wild pointer or
subscript around.
The Null pointer assignment error message appears after an MS-DOS program has finished
executing. Some of these programs arrange for a small amount of memory to be available
for null pointers to point to; in the event that the program makes an attempt to write to
that area, it overwrites the data which the compiler has put there. Once the program is
through with its activity, the area is examined by compiler-generated code, and in the
event that the data is missing, the code prints the null pointer assignment message. This
message carries no information other than something to get the programmer worried: from
the null pointer assignment message alone, one cannot tell which part of the program is
responsible for the error. It is up to the programmer to turn to debuggers or compiler
options in order to get more information on how to treat the prevailing error.
Bus error: core dumped and Memory fault: core dumped are messages that one can get while
running programs on a UNIX system. These messages are generally more programmer-friendly:
they both tell you that a certain pointer or array subscript was wildly out of bounds.
These messages can occur on either a read or a write of memory, and they are not
restricted to null pointer problems.
The core dumped part of the message tells you about a file called core which has been
written to the programmer's current directory. Under normal circumstances, this file
will be found to be a dump of everything on the stack as well as in the heap at the time
the given program was executing. Debuggers can be relied upon to offer help in locating
the place where the bad pointer was used. This will still not give you the reason why the
given pointer is considered bad, but at least it is a step in the right direction.
Furthermore, the core file is written only if you have write permission in the current
directory, else not at all.
Explain the purpose of the main() function in C Programming.
The purpose of the main function in C programming is to invoke the other functions within
it. Usually, the main function is the first function to be called the moment the program
begins to execute, and it is thus the function which starts every given C program.
Since this is not a void function and is usually declared int, i.e. int main, this
function returns an integer value to the environment that called it; to return control to
the operating system, the main() function is usually made to return 0.
The main function allows recursive calls, as long as the programmer is careful with the
basic C rules, is willing to pay attention to details, and refers to the C programming
manuals when the details are not at his fingertips.
The main function is a user-defined function, and the given program ends the moment the
closing brace of the main() function is reached. This function takes two arguments,
namely the argument count as well as the argument vector, which represents the strings
that have been passed. The main() function also allows any user-defined names to be used
for these parameters rather than argv and argc.
What do you understand by the word pragma?
This is a preprocessor directive that allows each compiler to implement its own
compiler-specific features, which one can turn on or off with the help of the #pragma
statement. Look into the following example, as it assists in illustrating what we are
trying to discuss here: one may have a compiler which supports a feature known as loop
optimization. This is a feature which can be invoked as a command-line option or as a
#pragma directive. The following is the line which can assist the programmer in
implementing this feature with the #pragma directive, simply by inserting it
appropriately into the code:
#pragma loop_opt(on)
In order to turn off loop optimization, one inserts the following line into the code:
#pragma loop_opt(off)
