Contents
Topic:.applComplAdapt.
Topic:.complAdapt.
Topic:.applstdef.
The applstdef_emC.h is the header file which should be included first in any source. It includes compl_adaption.h and contains all settings which determine the common behavior of the application. Therewith the sources of an application can be written once and reused for several applications and several systems.
Topic:.applComplAdapt.necessity.
C sources are used, and should be used, in several projects and environments in unchanged form. But often there are incompatibilities, especially where user-defined types for fixed-width integer types (for example INT32) or other language-specific (not application-specific) details are used. Another problem are some incompatibilities between the C++ and C languages. Often sources should be deployed in C environments but should be reusable in C++ too.
Prevent "#ifdef MyPlatform" in applications
Conditional compilation is an often-used construct to avoid incompatibilities. For example a sequence of inline assembly for the target platform is fenced off and replaced by a proper expression for a simulation environment. But in re-used sources such project- and platform-specific conditionals cause a distension of the code for all possibilities. Such source code is hardly readable anymore. The source code has to be changed, and a new revision has to be created, only because the next platform or project condition is incompatible with the current conditionals.
The better way is: use a macro in the common sources, define the macro in a small platform- and project-specific header, and include that header.
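A minimal sketch of this principle (the names here are hypothetical, not taken from the emC sources):

  //in the re-used common source only the macro is used:
  void waitShort(void) {
    WAIT_SOME_CYCLES();  //platform-specific behavior hidden behind a macro
  }

  //in the platform-specific header for the target (hypothetical content):
  #define WAIT_SOME_CYCLES() __asm volatile("nop\n nop\n nop")

  //in the platform-specific header for the PC simulation: expands to nothing
  #define WAIT_SOME_CYCLES() /*empty, waiting is not necessary on PC*/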
Usage of platform- or application-specific headers in several appearances in several directories
The principle is: an unchanged re-used header or C file includes a header by name. The content of the included header should depend on the target platform etc. It is possible to have more than one header with the same file name, but located in several directories. The platform is associated in any case either with the specific compiler or with specific make files. In the make file, or as command-line options for the compiler, the include path is specified. The include path should refer to the proper directory where the correct platform-depending header is located. In this way commonly written sources are compiled with platform-depending properties. That is the philosopher's stone.
The behaviour of deep inner levels of code may be different depending on application decisions.
For example the kind of error handling is a decision of the application:
Using the try-catch approach (C++ compiler necessary)?
C-like try-catch with longjmp?
Check error codes and error return values in the application?
Abort of the application on error?
Possibilities to debug in error situations?
The application cannot use the C++ try-catch principle if the target compiler does not support it. On the other hand, for the target it may be sufficient to show a failure and stop execution. But for some error situations while developing the software on the PC, the try-catch concept is nice to have. Yet the sources should remain unchanged and should not contain #ifdef __PLATFORM conditionals.
Defining the approach in the applstdef_emC.h and including that header helps; see the example for exception handling in TODO.
There are two headers to define application behaviour and platform specifics:
applstdef_emC.h
: It is for the user/application to decide which general behavior should be used, for example the kind of exception handling or the behavior on failure. This header can be adapted by the application and is then part of the application's sources (from the template). For some standard behaviors, precast headers are contained in the incApplSpecific/* directories. One of them should then be on the compiler include path.
compl_adaption.h
: Adaption of some definitions to the target compiler, for example fixed-size integers. This header should be adapted to the compiler and maybe to the situation regarding other system headers. Especially if Simulink-generated code is used: Simulink offers its own family of headers which should be regarded. For some compilers and for the Simulink situation, precast headers are contained in the incComplSpecific/* directories. One of them should be on the compiler include path too.
Topic:.applComplAdapt.variants.
The #include <applstdef_emC.h> should be the first include line of any header file and therewith the first include of any source file too. Therewith the behavior of the application is determined with platform-independent sources.
The applstdef_emC.h includes the compl_adaption.h in its first lines. Both files exist more than once, in different directories for the different target systems and applications.
The compl_adaption.h should exist exactly once for any platform (target and PC), or more than once for the same platform under different conditions. Especially using Simulink-generated sources requires another definition of the basic types, therewith a Simulink-specific compl_adaption.h for the platform is necessary.
The applstdef_emC.h should exist either once inside the application's files, or more than once if the application should be compiled under different conditions on PC, on one or several targets, or with several specifications on the same target. Especially the error handling can differ between tests on the PC (for example with exception handling for error debugging, with C++ compilation) and the target (for example run as well as possible despite errors, or abort on any error, with less footprint).
For some situations pre-built variants of applstdef_emC.h exist in the pool of emC sources in the directory emC/incApplSpecific, which can be used immediately or as templates:
FwConvC32
: using 32 bit compilation on PC or target, using a C++ compiler but with C style, but using the try-catch capability of C++
FwConvC32_NoExc
: using 32 bit compilation on PC or target, maybe using a C++ or C compiler, without exception handling but with ThreadContext capability. The software should already be tested well. This is to check software on the PC with the non-exception-handling setting.
FwConvCpp32, FwConvCpp64
: using C++ classes in some sources which offer a C and a C++ interface, 32 and 64 bit platform
TagetNumericSimple
: the sources use neither the Reflection mechanism (see TODO) nor dynamic operation calls nor exceptions nor ThreadContext capabilities. This is the setting for well-tested software which should run on a small-footprint target processor or a DSP processor. Note that the sources can be tested on the PC with all the capabilities. Only the target compilation reduces the overhead, from the view of the cheap and tested target.
stdef_SmlkAccelerator
: setting for sources from Simulink code generation which are compiled for the accelerator mode.
stdef_SmlkSfunc
: settings for sources for Simulink S-Functions without exception handling (ordinary S-Functions)
stdef_SmlkSfuncExcH
: settings for sources for Simulink S-Functions with exception handling. That is for S-function functionality which can fail. The exceptions can be caught. C++ compilation is necessary for this.
Topic:.complAdapt.int32.
In the distant past C did not define integer types with a defined bit width. The thinking and approach of that former time was:
The register width of the processor is important. That is the int type.
Algorithms written for the highly developed and expensive 32 bit machines may not be properly usable on 16-bit machines.
What about 24-bit machines? For those, int has the register width of 24 bit.
It is better to have a short integer for less memory consumption (for arrays) and a long integer which is 32 bit for 16-bit machines.
That was adequate to the situation of non-microprocessor computers of the 1960s and 1970s. Because C came to be used for microprocessors with 16 and 32 bit register width and flexible registers, the decision that was made for C is no longer adequate. As a workaround all users have defined their own fixed-size int types, with slightly different notations in very different header files. For the own application it is a perfect world, without thinking outside the box.
But if sources are reused and applications are built from different sources, compatibility problems have to be managed, usually with hand-written, specifically adapted solutions.
The C99 standard defined these types (int32_t etc.) 10 years after they became necessary, and this standard was often not considered even 10 or 20 years after its definition. This is the situation. The latter is not a problem of disrespect, it is a problem of compatibility.
Let's show an example:
Simulink defines int32_T via typedef and uses it in its generated sources. This is the case with version 2016a (and later).
The user wants to use int32_t according to C99 in its environment.
A pointer should be delivered:
  //simulink generated code (shortened):
  void generatedFunction(int32_T* refOutput);

  //using source:
  int32_t myOutput = 0;
  generatedFunction(&myOutput);
This results in a pointer type error with a C++ compiler, or a warning. The types are identical to the user's eye. The int32_T is defined via typedef in the generated Simulink header rtwtypes.h, and the int32_t is defined via typedef in the stdint.h header of the C99 environment. Both are independent and not compatible.
What should be done?
An unchecked cast of all usages on user level is not recommended!
Using the Simulink types in the environment application is the option recommended by the MathWorks company. But this means that all sources, re-used, from another team or supplier, would have to be subordinated to the Simulink style. Subordinating to C99 may be acceptable, but subordinating to all regulations of all parts of a software is not possible.
The possible and convenient decision is:
All parts of the software use the notation which is given. Do not change it. It may be the Simulink convention, the C99 one or an own one.
The compl_adaption.h should define all known and used fixed-size integer types. In this situation (the application uses Simulink) the defined types should be based on the Simulink types with #include <rtwtypes.h> from the Simulink-generated code. That is necessary because the generated secondary sources from Simulink should not be changed.
The C99 types should be defined based on the Simulink types too:
#define int32_t int32_T
Therefore all fixed size types are compatible.
The inclusion of stdint.h, which defines the C99 types, should be prevented. This is possible by defining the guard of that header. Including stdint.h would result in compiler errors because the same types would be defined twice.
All other user-specific files which define adequate types should be prevented from inclusion too, for the same reason.
The definitions of all the fixed-size types should be done via #define, not via typedef. Then it is possible to do an #undef, which may be necessary in some source situations.
For the user's sources it means:
#include <compl_adaption.h> immediately as the first include, or better #include <applstdef_emC.h>, in any source. It may already be done indirectly via other emC headers.
Do not include stdint.h in the user's sources. The C99 types are already defined via the compl_adaption.h.
Do not include other definitions of fixed-size int types; trash them in favor of the compl_adaption.h. The simplest form is: #include <compl_adaption.h>, or better #include <applstdef_emC.h>, in the header instead of the own definitions.
The majority of the user's sources which use the specific fixed-size int types can stay unchanged, no extra work to do!
Topic:.complAdapt..
The X86 platform (Intel) and adequate ones offer byte-oriented access. Therefore an int8_t is possible, an int16_t is well defined, and struct types can be packed.
But as a general assumption this is an unrealistic world. What really happens:
Some processors, especially DSP processors from Analog Devices, have only 32 bit memory access.
Some other processors cannot access the memory in an atomic way for a 32 bit access on a non-4-aligned address or a 16 bit access on a non-2-aligned address. A 16-bit access may or may not be supported as atomic. It depends on the existence of two hardware write signals of the processor for the high and low memory bank and the usage of 16-bit-wide RAM circuits.
Therefore a packed struct with an arbitrary mix of int8_t, int32_t etc. is not possible. The compiler may produce the necessary fill bytes on machine level for packing, which may be unexpected by a user who has no information about the hardware.
Access to hardware ports, for example in an FPGA circuit, is done in the hardware-defined fashion (depending on the FPGA design or hardware layout).
The memory on a PC is usually accessed 32 bits wide, or maybe 64 bits wide. Some software forces an 8-byte alignment for struct definitions. A struct will be filled with dummy bytes if its length is not 8-byte-aligned. This cannot be changed for pre-compiled sources (in libraries or tools).
The atomic access to memory (see TODO) needs aligned data.
What should be done in the user software:
If the hardware has a fixed-size access channel, for example 32 bits wide, maybe a Dual-Port-RAM or an FPGA memory range, it should of course be regarded in the data interface definitions of the user software. This software can then also run on a more flexible platform (PC); it is compatible. The special requirements of memory access should have priority.
In struct definitions only fixed-size integer types should be used. Otherwise the memory layout tends to be undefined.
In a struct definition any element should be placed at a relative address which is divisible by its size. See the following example.
A struct definition should have a 4-byte- or better 8-byte-aligned length. This may be important if the struct data should be used compatibly on a 64 bit platform where an 8-byte alignment is necessary for optimized memory access.
The correct alignment should be done manually, using a proper order of data elements and using dedicated fill elements to keep the alignment rules. That isn't really a disadvantage.
For example a struct should hold different values. The following is proper:
  typedef struct MyStruct_t {
    float f;             //pos 0
    int16_t s1, s2;      //pos 4, 6
    double d;            //pos 8
    int8_t b1, b2, b3;   //pos 16..18
    int8_t __spare1__;   //fill pos 19
    int32_t __spare2__;  //fill pos 20 till length = 24 = 3*8
  } MyStruct;
The following order is not proper:
  typedef struct MyStruct_t {
    int8_t b1, b2, b3;   //pos 0..2
    int16_t s1;          //pos 3, may be aligned to 4
    float f;             //pos 5 or 6, maybe aligned to 8
    int16_t s2;          //pos maybe 12
    double d;            //pos maybe 14, maybe aligned to 16
  } MyStruct;
Because:
Without fill bytes the s1 would have the relative address 3 (and the following elements would be misaligned similarly). The struct instance itself is (presumably) allocated on a 4-aligned address. A simple processor cannot access 16 bits at such an odd location. Therefore the compiler will add an implicit spare byte, which is not visible in the source.
Even though a spare byte is inserted at position 3, the float f will start at position 6. Reading 32 bits at this location in a 32 bit memory layout requires two 16-bit accesses and a machine instruction to combine the two parts. The compiler may insert two spare bytes to optimize that.
If this data is only used internally and is never debugged on machine level, only some more memory is necessary. But if this data is sent via any communication as a memory image, its structure cannot be predicted, because it depends on the target hardware, the compiler and its options.
In contrast, the proper struct is laid out in exactly the written organization by all compilers on all hardware. Only one thing should be considered: use struct packing (set packed structures, with a pragma or with a compiler option). Otherwise fill bytes are arranged after any element whose size is not divisible by 4 for a 32 bit processor.
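For example with the widespread #pragma pack, supported by Visual Studio and GCC among others:

  #pragma pack(push, 1)  //no implicit fill bytes; the alignment rules are kept manually
  typedef struct MyTelegram_t {
    int32_t val;         //pos 0
    int16_t s1, s2;      //pos 4, 6: properly aligned by the written order
  } MyTelegram;
  #pragma pack(pop)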
If the memory is only organized with 32-bit access, especially for hardware or Dual-Port-Memory access, a struct should not use int16_t or int8_t at all.
Topic:.complAdapt..
Some DSPs (Digital Signal Processors) from Analog Devices have only 32 bit registers and count the memory address in 32 bit steps. One address step is 32 bits, sizeof(int)==1. This is considered more efficient for the hardware.
If this processor stores a value which needs less than 32 bits, 32 bits are used nevertheless. There are 3 problems which mess up the programming in C:
The sizeof(int) or sizeof(float) is == 1. If an algorithm uses the seemingly obvious fact that sizeof(int) > 1, the algorithm fails on this processor. Some compilers, for example the VisualDSP 5.0, have a compiler option -char-size-8 which results in sizeof(int) == 4. But this is not really helpful. It leads to some discrepancies, because the address step remains 1 for 32-bit int locations. It is determined by the hardware and cannot be changed by the compiler.
An int16_t or short type really uses 32 bits for storing (1 memory location) and also for calculation. 32767 + 3 results in 32770, in contrast to the usually expected -32766. If only the lower 16 bits of this location are used, the result is equal. But see the example below with the conversion to float.
Characters and strings (literals) are stored as 1 character per memory location, it means 32 bits per character.
It is possible to access the same memory from the 32 bit DSP side as well as from another processor which can access bytewise. That is a special construct in hardware. The simplest form is a Dual-Port-Memory chip such as the IDT70P269 from Integrated Device Technology. A Dual-Port-Memory area as part of an FPGA (Field Programmable Gate Array) is similar. Another possibility is hardware access to the RAM from another hardware bus than the DSP bus with hardware direct memory access (not to be confused with DMA on the DSP chip itself). This has to be regarded in the hardware layout of the board.
Topic:.complAdapt...
If the DSP stores character strings with 32 bits per char, the memory layout for
  char s[8] = "abcd";
looks like
  0x00000061 0x00000062 0x00000063 0x00000064 0x00000000 0x00000000 0x00000000 0x00000000
That memory layout is seen in the same way from the other side, from the byte-accessing processor, if 32 bit words are mapped to 32 bit words. If that memory content is read as char const* string information, only the "a" is seen, because 0-bytes follow after it. That is not proper to use.
On the DSP processor usually no complex string processing is done. But the DSP processor may report some errors in string form, readable in human language, which is more comprehensible than bare numbers for error reports. For example the message "faulty value read: -99.999" should be shown. The message may be sent via socket communication or presented on a display which is driven by the byte-accessing processor. On that side no special effort should be necessary; that would confuse the software.
On the DSP side, for this example, a constant text "faulty value read: " is to be combined with a conversion of a number to a string. It may be done with special programming, only for this case, only for the DSP, but testable in compatible form on the PC. The sprintf(...) may not be proper to use.
The challenge is: storing the string in a proper way. The only convenient way to do that compatibly for the 32-bit DSP and a normal test platform (PC) is:
  #define Char4 unsigned int
  #define CHAR_4(a,b,c,d) (a + (((Char4)b)<<8) + (((Char4)c)<<16) + (((Char4)d)<<24))
  Char4 msg_faultyRead[] =
  { CHAR_4('f','a','u','l'), CHAR_4('t','y',' ','v')
  , CHAR_4('a','l','u','e'), CHAR_4(' ','r','e','a')
  , CHAR_4('d',':',' ', 0 )
  };
This is reasonably read- and writable in the source, and it produces a constant string literal which is packed in memory and is seen as a normal string literal on the other side, by the byte-accessing processor, in little endian. To combine the message with a value, the string should be copied via memcpy and the value should be converted with a special routine such as:
  Char4 msgBuffer[20];
  memcpy(msgBuffer, msg_faultyRead, sizeof(msg_faultyRead));
  appendFormatedFloat_Char4_emC(msgBuffer, sizeof(msgBuffer), 3,3, val);
The last routine searches the first 0-byte part in the Char4 array and appends the digits of the given float val. This routine is not complex; it can be found in the emC sources.
There seem to be better solutions, but they have problems:
The compiler VisualDSP from Analog Devices, tested in version 5.0, has a compiler option -char-size-8. With this setting a string literal definition is done in the proper way:
  char msg_faultyRead[] = "faulty value to read:";
produces the expected memory layout: packed characters, 4 per memory location. It is proper for little endian. But if the byte-accessing processor works with big endian, it is not usable. In contrast, the macro CHAR_4 above can be adapted for that.
But the usage of -char-size-8 produces other disadvantages, for example sizeof(int)==4 although for address calculations a size of 1 for integers must still be used. It may be a problem of version 5.0; a later version may be better (?).
For that compiler, a mix of sources compiled with -char-size-8 (for the string constants) and other sources compiled with -char-size-32 would have to be used for proper work. That requires considerable knowledge of compiler options from the C programmer, and produces more complexity.
There is also a possibility in C and C++, so-called multicharacter constants, in the form
  int32 msg_faultyRead[] = { 'faul', 'ty v', 'alue', ' rea', 'd: \0' };
It looks better than the CHAR_4 macro. But nevertheless it runs only with -char-size-8 for the VisualDSP 5.0 compiler, whereas it would be needed especially for -char-size-32. Some research on the internet about that problem, for example https://en.cppreference.com/w/c/language/character_constant or https://zipcon.net/~swhite/docs/computers/languages/c_multi-char_const.html, suggests that it is part of the C standard but not compatible across all compilers. It depends on too many compiler specialities.
Topic:.complAdapt...
If a 16 bit value is used on an Analog Devices DSP, the value is stored in a 32 bit location. The higher 16 bits may be accessible, but they are not touched by a 16-bit operation which masks only the 16 bits. For example an angle from -180° to 179.99° (degrees) is stored in 16 bits (0x8000 is -180°). To convert it to float, only 16 bits should be used to get a circular angle in the range -180°..+179.99°.
Follow the algorithm with such a circular angle value. It is stored in an integer cell because the overflow produces the expected circular behavior: -179° - 3° should result in +178°.
  int16_t angle;       //an angle in range -180..179.99° in 16 bit
  angle += anglediff;  //because the angle is really 32 bit,
                       //the value of angle may be outside -180...179°
  float anglef = ((float)angle) * 180.0f / 32768.0f;
For a platform which has a real 16 bit integer type this result is correct. For the DSP the resulting float value shows the overflowed value too, in a range outside -180..180°. It is an unexpected result of these simple C source lines on a 32-bit DSP processor if that hardware-depending behavior is not known.
The result is proper with the following line:
  float anglef = ((float)(((int32_t)angle)<<16)) * (180.0f / 2147483648.0f);
The really-32-bit angle is converted explicitly to a 32 bit value with a left shift, to keep the same circular range. Then the result is correct. At this point of programming the hardware property of the processor must be well known. For this reason the C99 standard defines values like
#define INT16_MAX 2147483647
This value, defined for the target processor, documents that the type int16_t can hold a value up to 2147483647, which is 32 bits wide. But the number of bits for each type is not contained in the C99 standard. That value should be known. Therefore it is defined in the compl_adaption.h with
#define INT16_NROFBITS 32
With this information the conversion line can be written as
  float anglef = ((float)(angle << (INT16_NROFBITS - 16))) * (180.0f / -(float)INT16_MIN);
For the 32-bit DSP processor this results in the <<16, but for a 16 bit processor it results in <<0. The constant values are calculated by the compiler. A (value << 0) is optimized by the compiler: no shift is done because the <<0 is known at compile time. It means both compilation results are functionally correct and optimized at run time.
Note: The C99 standard does not define the number of bits for each type, only the range. The compl_adaption.h should define the number of bits too.
Topic:.complAdapt.compl_adaption.
The compl_adaption.h header file should contain all definitions needed to work with C at user level without knowledge and consideration of the target system and compiler specialities. Note: The application-specific commitments should not be part of that file. They are contained in the applstdef_emC.h instead, see TODO.
Topic:.complAdapt.compl_adaption..
The compl_adaption.h header should define the following things, proper for the compiler situation and the situation of other system includes.
int8_t int16_t int32_t int64_t uint8_t uint16_t uint32_t uint64_t
: These are the identifiers of the fixed-width integer types of the C99 standard. If the compiler supports C99 and the other system headers are accordingly, they need not be defined here. But regard the following situation:
int8 int16 int32 int64 uint8 uint16 uint32 uint64
: These are the adequate identifiers of the fixed-width integer types, better readable, usual in use, but not standardized.
UINT32
etc.: Often such adequate identifiers for the fixed-width integers are in usual use. It isn't an incompatibility to do so, because the compiler understands identically defined types as the same ones. If such user types are usually used, they should be defined in the <os_types_def.h> too. That allows combining user headers with all the usual fixed-width integer types without restrictions and incompatibilities, if a compilation unit includes <os_types_def.h>. If the compilation unit doesn't include <os_types_def.h> but instead another header in the user's space where those types are defined, it isn't a problem. We assume that UINT32 means a type which represents an integer value as 32 bit unsigned.
bool8 bool16
: Both type definitions can be used in the user's application, especially in struct definitions, to define fixed-width boolean variables (see the sketch after this list). Usually boolean values are stored as int internally. But the int type may have 16 or 32 bits. Therefore a bool type should not be used in struct definitions which are used for data interchange. Instead, either bool8 or bool16 should be used there. Remark: a boolean value can be stored in 1 bit of a bitfield. But the bitfield should be declared with int16, int32 or adequate too.
char8 char16
: A char type is often a byte, but that is not guaranteed. A char8 is guaranteed to be a byte. If data for interchange is declared in a struct, the char8 type should be used. The char16 type presents a character in 16 bits, usually UTF16 encoding. This problem is often solved with OS-specific types like WCHAR etc. But all of these definitions are specials of some compiler platforms. The platform-specific WCHAR etc. remain compatible with the proper definitions. In the user sources only the platform-independent types should be used.
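A sketch of a struct for data interchange using these fixed-width types (the members are hypothetical):

  typedef struct ExchangeData_t {
    bool8 isValid;       //exactly 8 bit, pos 0
    bool8 isNew;         //pos 1
    char8 name[6];       //guaranteed bytes, pos 2..7
    struct { int16 bError: 1; int16 bWarn: 1; int16 spare__: 14; } bits;  //pos 8, fixed-width base type
    int16 mode;          //pos 10, fills the length to 12 = 3*4
  } ExchangeData;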
All types should be defined using a #define statement, not using a typedef. The reason is: sometimes (especially for the operating-system adaption layer) other header files have to be included which define the same identifiers in an adequate way (compatible for usage in compiling) but incompatible when compiling the definitions themselves. If the first-included <os_types_def.h> defines the types with #define, an #undef statement can be written before including the other necessary header files. But if a typedef is used in <os_types_def.h>, the difference can only be resolved by changing the other header files (removing the unnecessary definitions). But the other included header files are originals, which should not be changed. Typically it may be necessary to write:
  #include <os_types_def.h>
  #include "someHeadersOfUser"        //using definitions of os_types_def.h
  #undef int32
  #undef uint32
  #undef int16
  #undef uint16
  #include <specialPlatformHeader.h>  //defines these types in another way but compatible
  ...implementation using the specialPlatformHeader.h
  ...and the someHeadersOfUser including os_types_def.h properties
This construct is not typical for the application part of the software. The application parts should not depend on special platform headers. But it is typical for the OS adaption layer and for drivers, which have to use the <specialPlatformHeader.h>.
Topic:.complAdapt.compl_adaption..
The C language standard doesn't define all necessities of types. Independent of the used compiler and options (C/C++), the following types should be present for usage:
bool true false
: The boolean type and its constants are well defined in C++. It is an internal representation of the values for true and false. In C applications this type and its values should be usable with the same meaning. The bool type is often represented as an int in C. The value for false is commonly 0. The true value has to be the same as the representation of a comparison result for the current compiler. Usually a !false presents the true value correctly.
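A sketch how this may be defined for C compilation (assuming neither C++ nor a C99 <stdbool.h> is used):

  #ifndef __cplusplus      //C++ has bool, true, false as keywords
    #define bool int       //internal representation as int
    #define false 0
    #define true (!false)  //same representation as a comparison result
  #endif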
Topic:.complAdapt.compl_adaption..
Two defined labels allow conditional compilation in user sources. Conditional compilation is not recommended. But if it is necessary or desired, it should be done in a unified schema. The defines are platform- and maybe project-depending. They should be queried only in a positive way (#ifdef), not negatively (#ifndef). For usage on Windows with Visual Studio 6, the labels are named:
  #define __OS_IS_WINDOWS__
  #define __COMPILER_IS_MSC6__
Using these two labels, a special user routine can query for example:
  #ifdef __OS_IS_WINDOWS__
    //some statements for simulation
    ....
  #endif
The distinction between the OS and the compiler label is: usually the OS platform should be queried. Only in special cases should the compiler be queried, maybe for specific examination of errors etc.
These labels should not be used to force conditional compilation for common problems, for example little/big endian, alignment requests etc.
Topic:.complAdapt.compl_adaption..
In general, any warning may be a hint to an error. But some warnings are ignorable. If such warnings are switched off, the critical warnings are better visible.
Warnings can be switched off individually by pragmas. The commonly valid pragmas to disable uncritical warnings should be included in the os_types_def.h. But only common and uncritical ones! The os_types_def.h can be adapted individually. In this case an individual setting of warning pragmas for a C compilation project is possible.
The following example shows some warnings which are switched off for the Microsoft Visual Studio compiler:
  #pragma warning(disable:4100) //unused argument
  #pragma warning(disable:4127) //conditional expression is constant
  #pragma warning(disable:4214) //nonstandard extension used: bit field types other than int
  #pragma warning(disable:4189) //local variable is initialized but not referenced
  #pragma warning(disable:4201) //nonstandard extension used: nameless struct/union
Topic:.complAdapt.compl_adaption..
OSAL_BIGENDIAN
: If this label is defined, the processor is a big-endian type. Some conditional compilation tests this label to produce the correct access sequence. At user (high) level the big/little endian property should not be regarded. Only system routines should distinguish.
OSAL_LITTLEENDIAN
: It is the opposite label, for little endian.
OSAL_MEMWORDBOUND
: This label should be defined if the processor can't expand a variable over memory word bounds. For example the processor is a 32-bit type, and a 32-bit value is addressed by an odd memory address. In this case the value is located in 2 memory words: one word contains some bits of the low value, the second word contains the high bits. For an X86 architecture this isn't a problem, because that processor architecture supports the composition of the variable from any bytes in memory. But some processors have a problem with such constellations. The compiler itself usually prevents a splitting of variables by inserting fill bytes (alignment). But if a data stream comes with values split over memory bounds, an address calculation followed by a pointer cast and access (*(int32*)(calculatedAddress)) causes errors. Then only more than one access to the memory and the composition of the value can help. This should be done by low-level routines, maybe in the user's space too. For these routines this label is used to control the code. The code of the routines can then be written platform-independently.
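A sketch of such a low-level routine controlled by these labels (assuming a byte-addressing platform where MemUnit is char, see below):

  static int32 readInt32Unaligned(MemUnit const* addr) {
  #ifdef OSAL_MEMWORDBOUND
    //compose the value from single bytes, the processor cannot read over word bounds:
    uint32 val;
    #ifdef OSAL_BIGENDIAN
    val = ((uint32)(uint8)addr[0] <<24) | ((uint32)(uint8)addr[1] <<16)
        | ((uint32)(uint8)addr[2] << 8) |  (uint32)(uint8)addr[3];
    #else //OSAL_LITTLEENDIAN
    val =  (uint32)(uint8)addr[0]        | ((uint32)(uint8)addr[1] << 8)
        | ((uint32)(uint8)addr[2] <<16)  | ((uint32)(uint8)addr[3] <<24);
    #endif
    return (int32)val;
  #else
    return *(int32 const*)addr;  //the processor itself supports the unaligned access
  #endif
  }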
MemUnit
: The MemUnit is a type which presents 1 word in memory. Most processors address the memory in bytes. Then this identifier should be defined as
  #define MemUnit char
That is usual, but not valid in any case. Some processor architectures are oriented to full integer and float numerical information, saving hardware effort for the memory access. Therefore they address the memory in 32-bit words, for example. In that case character values are not presented efficiently, but this may not be a problem. The MemUnit is an int then:
  #define MemUnit int
The user can use a MemUnit* pointer for address calculations. Mostly a char* is used instead in user sources, assuming that a memory word is a byte. But that is wrong in some cases.
BYTE_IN_MemUnit
: A constant which describes how many bytes (not address steps) are contained in a MemUnit (1 address step). Usually it is defined as
  #define BYTE_IN_MemUnit 1
But in the case of a wider MemUnit the number of bytes per MemUnit may be 2 or 4. A byte is always 8 bits. This constant is necessary to calculate space when interchanging data, for example via a Dual-Port-RAM, where a processor with another memory address mechanism is the partner.
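For example, calculating the number of address steps for a given number of bytes of an interchange telegram:

  int nrofBytes = 6;  //bytes as seen from the byte-addressing partner processor
  int nrofAddrSteps = (nrofBytes + BYTE_IN_MemUnit - 1) / BYTE_IN_MemUnit;  //MemUnit steps on this side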
intPTR
: Defines an integer type that can contain an address. It allows storing a memory address (a pointer to data) and handling (transferring) it as an integer. The address calculation inside the processor space is the same as calculation with a MemUnit*, but for calculation of addresses for another processor the usage of MemUnit* fails. The intPTR should present the usually used addressing mode. For a 16-bit processor with a wider address space (more than 64 k words) it may be a simple int, holding 16 bits. That is okay if the free memory space for data is only located in a 64-k range and all other memory spaces are for code, file system etc. But if the user-usable space is greater than 64k, the intPTR should be defined for example as int32.
For 32-bit architectures it may be possible that an address consists of a 32-bit address and additional segment information. In that case an intPTR may need to contain the segment too, it means it needs more than 32 bits. But in most cases the address can be stored in 32 bits. It may then be possible that an address is condensed to 32 bits by truncation of (unused) address bits. Special operations may exist to do that. Then the intPTR should present the condensed address for common usage.
Topic:.complAdapt.compl_adaption..
This structure is used to hold a pointer and an associated integer value, to return both per value. It should be organized in a way that forces the usage of registers for the returned values. Normally struct data which is returned per value is copied from the stack to another stack location while executing the return machine instructions; after that it may be copied a second time into its destination struct variable if the return value is assigned to one. The usage of registers is much more effective. Because the usage of registers may depend on some compiler specialities, the definition of this base struct is placed in this header. Frequently the definition of this struct is like shown in the example. But sometimes special constructs may be necessary.
The struct is defined as (pattern, frequent form):
  typedef struct OS_PtrValue_t {
    char* ptr__;
    int32 value__;
  } OS_PtrValue;
The pointer may be a void* in theory, but a char* allows visiting a referenced string while debugging. It may be opportune too to write
  typedef struct OS_PtrValue_t {
    union { char* c; int32Array* a; } ptr__;
    int32 value__;
  } OS_PtrValue;
to see an int array while debugging. It may be adjustable which int type is stored and in which form the pointer is stored (segmentation? see intPTR). Especially for simple 16-bit processors a proper definition should be figured out.
Some macros are defined to access the values and build constants:
CONST_OS_PtrValue(PTR, VAL)
: A macro to build a constant expression for initializing, usually defined as { (char*) PTR, (int32)VAL }.
value_OS_PtrValue(THIS)
: Gets the value, usually ((THIS).value__)
PTR_OS_PtrValue(THIS, TYPE)
: Gets the pointer in form of the given type. TYPE is the base type, not the pointer type (without *), usually ((TYPE*)(THIS).ptr__)
set_OS_PtrValue(THIS, PTR, INT)
: Sets all values, returns nothing (not usable in expressions), usually { (THIS).ptr__ = (char*)(PTR); (THIS).value__ = (INT); }
copy_OS_PtrValue(THIS, SRC)
: Copies another OS_PtrValue into it. It is simpler than getting the values from the source and calling the set macro. Hint: the SRC is accessed only one time, so a subroutine call can be written there; the macro uses a local variable to prevent a double evaluation if SRC is a complex expression. Usually: { OS_PtrValue const* src__ = &(SRC); (THIS).ptr__ = src__->ptr__; (THIS).value__ = src__->value__; }
setValue_OS_PtrValue(THIS, INT)
: Macro to set the value, returns nothing (not usable in expressions), usually { (THIS).value__ = (INT); }
setPtr_OS_PtrValue(THIS, PTR)
: Macro to set the pointer, returns nothing (not usable in expressions), usually { (THIS).ptr__ = (char*)(PTR); }.
Topic:.complAdapt.content.
The header file <os_types_def_common.h> should normally be included in <os_types_def.h>. It contains definitions which are valid and proper for all operating systems and compiler variants, but necessary or recommended for low-level programming in C and C++. The OSAL source package contains a version of this header file for common usage. But for special requirements it is possible to adjust some properties nevertheless, by including a changed variant of this file which is contained in the user's source space. As a rule, the original version should be used.
Topic:.complAdapt.content..
Generally, all sources should be usable both for C and C++ compiling. It is an advantage that these programming languages are largely compatible. The extern "C" expression allows the usage of C-compiled library parts in a C++ environment. But the extern "C" expression is understood only by C++ compiling. Usually headers of C-like functions are encapsulated in
  #ifdef __cplusplus
  extern "C" {
  #endif
  //...the definitions of this header
  #ifdef __cplusplus
  } //extern "C"
  #endif
This form allows the usage of the same header for C compiling, without activating this declaration, and for C++ compiling.
It is also proper to write an extern "C" at any extern declaration. In some cases an extern "C" declaration is helpful in C++ where in C there shouldn't be an extern instead. For example a typedef can be designated with extern "C" for the C++ compilation to designate the type as a C type. But it can't be replaced by an extern in C.
The effect of extern "C"
is: Labels for linking are build in the simply C-manner: Functions are designated with its simple name, usual with a prefix-_
. The label doesn't depend on the function-signature (argument types). The same effect is taken for forward-declared variables.
In opposite, in C++ the labels for linking are build implicitly with some additional type information, argument types for
functions respectively methods, const or volatile information for variables etc. It is a advantage in C++, that the labels
for linking contains some additional information about the element, not only the simple name. Therefore incompatibilities
are able to detect in link time. But this advantage prevents the compatibility to C, and it is more difficult to correct errors,
which are checked more strong than necessary in C++. Therfore a extern "C"
-declaration in C++ makes sense in some cases.
To support a simple usage of extern "C" in sources which are used both for C and C++, the following macros are declared:
extern_C
: This macro is defined for C++ compiling as extern "C" and for C compiling as extern. It designates extern declarations, especially for variables and static struct instances.
C_TYPE
: This macro is defined for C++ compiling as extern "C" but is left empty for C compilation. It should be used before a typedef definition, especially for function pointers, but also for the definition of struct types.
extern_C_BLOCK_ _END_extern_C_BLOCK
: These macros are defined as extern "C" { and } for C++ compilation and left empty for C compilation. They allow writing C-manner definitions of a whole header (block) in the form
  #include <dependHeaders.h>
  extern_C_BLOCK_
  some_definitions
  _END_extern_C_BLOCK
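A sketch how these macros may be defined (the concrete definitions are contained in the common header):

  #ifdef __cplusplus
    #define extern_C extern "C"
    #define C_TYPE extern "C"
    #define extern_C_BLOCK_ extern "C" {
    #define _END_extern_C_BLOCK }
  #else
    #define extern_C extern
    #define C_TYPE
    #define extern_C_BLOCK_
    #define _END_extern_C_BLOCK
  #endif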
Topic:.complAdapt.noContent.
Because the <os_types_def.h> is included in any C file, some definitions which are used as basics for an application tend to find their way into this file. But the effect, or disadvantage, is: the <os_types_def.h> is then no longer a file for the platform and compiler but for the application. It contains too many different things. Therefore re-using it for other applications on the adequate platform is aggravated. Therefore:
Application-specific things shouldn't be contained in this file. For example:
Number of available ports and IP addresses for Ethernet communication: this is a theme for the Ethernet driver!
Special hardware equipment properties: a theme for the special drivers!
Common definitions which have no reference to the platform shouldn't be contained either, for example:
The definition of Pi (3.1415926...): it should be contained in a <math..> header.
The CRuntimeJavalike platform sources contain some definitions which depend on the choice C/C++, the usage of 32-bit integers for string length, modes of memory allocation etc. These things are defined in another header, <fw_Platform_conventions.h> respectively <platformJc.h>. It is possible to build several appearances of CRuntimeJavalike machine code for the same base OS/hardware platform, therefore with the same <os_types_def.h>. On the other hand several OS/hardware platforms can use the same <platformJc.h> to define adequate properties for Jc appearances, but with different <os_types_def.h> basic definitions. Therefore both header files are kept separate.
Topic:.applstdef.content.
The #include <applstdef_emC.h> should be the first include line of any header file and therewith the first include of any source file too. Therewith the behavior of the application is determined with platform-independent sources.
The applstdef_emC.h includes the compl_adaption.h in its first lines. Both files exist more than once, in different directories for the different target systems and applications.
Topic:.applstdef.content..
Some compiler switches are defined here. They determine general behavior.
Usage of reflection:
  /**With this compiler switch the reflection should not ... */
  #define __DONOTUSE_REFLECTION__
The Reflection mechanism can be used in general. But for specific platforms reflection should not be used, because it needs some memory space and string processing. With this compiler switch, reflection can be deselected for compilation in the user sources.
Usage of the C++ parts of several sources:
  /**The compiler switch __CPLUSPLUSJcpp should set only ...*/
  //#define __CPLUSPLUSJcpp
This switch should be set (uncommented) only if the C++ parts of sources which offer both C and C++ are used in the application. The application is a C++ source then. It is especially intended for Java2C-generated sources, but useful for others too. If the switch is not set, the C++ parts of some sources which are guarded with this compiler switch are excluded.
Topic:.applstdef.content..
In this order now
  #include <compl_adaption.h>
  #include <OSAL/os_types_def_common.h>
are included. The switches above can be used in those files.
Topic:.applstdef.content..
The application will be simpler if the more complex string processing capabilities and the possible super class of all data are not used. This is for small-footprint applications. If the application sources need those capabilities, the following lines should be commented out.
  /**Including this file the ObjectJc.h is not included, */
  #include <source/FwConv_h/ObjectJc_simple.h>
If that header is included, the struct ObjectJc is defined as a simple struct with only one int32 element, the ident number of the data. Therewith Reflection cannot be used and virtual operations cannot be used. See the content of that file. The invocation of initReflection_ObjectJc(... reflection_...) is possible, but the forward-declared reflection instance is not used. Hence it need not exist at link time. It implies the setting of the compiler switches __DONOTUSE_REFLECTION__ and __NoCharSeqJcCapabilities__.
  /**Define __NoCharSeqJcCapabilities__ only for simple systems ... */
  #define __NoCharSeqJcCapabilities__
That compiler switch controls the compilation especially of parts of the file fw_String.c. A CharSeq can provide a string as a sequence of chars with overridden (dynamically linked) operations charAt(obj, pos) and length(obj). For that capability some more code has to be present at link time, especially the access to the virtual table of the instance. Setting this compiler switch, the parts of code using that are excluded from compilation, therefore they are not necessary at link time. For the application it means that the feature of virtual access to CharSeq operations cannot be used. It is proper for simple systems without complex string functionality. See TODO StringJc.
Topic:.applstdef.content..
In the C++ (and Java, C# etc.) languages the concept of try-catch-throw is established (exception handling). This is a much better concept than testing the return values of each called routine or testing errno like it is usual in C. The advantage of try-catch-throw is: in normal programming, regarding all possible error situations in all levels of operations is not needed. Only in that source where an error should be tested, because the algorithm demands it, it should be regarded anyway and a throw invoked. And, on the opposite side, in sources where any error in any deeper level may be expected and should be handled as the algorithm demands, a try and catch is proper to write.
The necessity of handling error return values in C is often deferred to a later time, because the core algorithm should be programmed and tested first. Then, later, the consideration of all possible error situations is too much effort to program; it won't be done considering the time line for development ...
Therefore the try-catch-throw concept is helpful.
The emC programming style knows three levels of using TRY-CATCH-THROW via macros. The user sources themselves do not need to be adapted for these levels. The macros are adapted. See exception TODO.
Full try-catch-throw needs C++ compilation, uses the try-catch-throw mechanism of C++ and can handle so-called asynchronous exceptions (faulty memory access) too. It is usable for testing the application on the PC.
The try-catch-throw concept is usable in C too, using longjmp. It is proper, and it could have been the default approach of error handling since 1970, but it was not published as such. Some comments and usage notes about the setjmp.h are confusing. A proper description may be found in http://pubs.opengroup.org/onlinepubs/009695399/functions/longjmp.html. The longjmp concept is proper for exception handling in C if the compiler supports it in the required kind. Some compilers for special processors do not do so. For those the longjmp concept unfortunately is not usable. In C++, the way from throw to catch invokes all destructors of the data of all calling levels. That's important. In C++ with MS Visual Studio, longjmp runs, and it is faster than throw, but it does not invoke the destructors. In C the destructor concept is not known. Therefore using longjmp is not a problem if no resources remain open. The resources of the using levels should be handled and closed in the CATCH level.
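Schematically, the calling level with the macros may look as follows (macro forms sketched here, see the exception handling documentation, TODO):

  TRY {
    executesub(...);  //any deeper level may invoke THROW
  }_TRY
  CATCH(IndexOutOfBoundsException, exc) {
    //handle or log the error, close the resources which were opened in this level
  }
  END_TRY;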
Even if a program is tested well, there is a residual risk that the program executes a THROW. For that case it is proper to prevent further execution of the called functionality, but the software should continue to run. Therefore the THROW macro causes a return statement, usable with a proper return value (such as an error code). For the calling level it means: it continues execution, but it has to test its own conditions to run well. Maybe it should throw too:
  //calling level
  int ret = executesub(...);
  if(ret < 0) THROW(IllegalArgumentException, "failure on executesub()", ret, -1);
The test ret < 0 is only effective in the non-try-catch-throw mode. It is additional for that mode but not essential. It may not be necessary if the conditions to run are detected in another way.
  //called level:
  int executesub(...) {
    if(arrayIndex >= sizeArray || arrayIndex < 0) {
      THROW(IndexOutOfBoundsException, "illegal index", arrayIndex, -1);
    }
    myArray[arrayIndex] = ... //access only with a correct index!
The THROW statement either invokes a throw of C++, a longjmp, or a log entry and return.
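A rough sketch of this mapping (strongly simplified; the switch and helper names are hypothetical, the real macros store more information, for example a stacktrace in the ThreadContext):

  #if defined(USE_CPP_EXCEPTION)        //hypothetical switch: full C++ exception handling
    #define THROW(EXC, MSG, VAL, RET) throwJc(ident_##EXC, MSG, VAL)
  #elif defined(USE_LONGJMP_EXCEPTION)  //hypothetical switch: longjmp back to the TRY level
    #define THROW(EXC, MSG, VAL, RET) longjmpThrowJc(ident_##EXC, MSG, VAL)
  #else                                 //no exception handling: log and return an error code
    #define THROW(EXC, MSG, VAL, RET) { logErrorJc(ident_##EXC, MSG, VAL, __FILE__, __LINE__); return RET; }
  #endif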
If try-catch-throw is not used and THROW causes a return, it can write the error message to a log area. That log area can be used to examine the error behavior in a service session. Usually __FILE__ and __LINE__ should be stored there too. The log entries can be tested at the end of execution of a TRY block. If there is a new log entry, a message to the operator can be given to disclose the given behavior.
The applstdef_emC.h controls for the application how the TRY-CATCH-THROW is used:
  #include <Fwc/fw_threadContext.h>
  //#include <Fwc/fw_Exception.h>
  #include <Fwc/fw_ExcStacktrcNo.h>
The ThreadContext is a concept for both protocolling the stack levels and providing a thread-local memory area. For the exception handling, fw_threadContext.h is necessary for the stacktrace. In case of
  #include <incApplSpecific/applConv/assert_simpleStop.h>