Search Results

Search found 16218 results on 649 pages for 'compiler errors'.


  • So can unique_ptr be used safely in STL collections?

    - by DanDan
    I am confused about unique_ptr and the rvalue move philosophy. Let's say we have two collections: std::vector<std::auto_ptr<int>> autoCollection; std::vector<std::unique_ptr<int>> uniqueCollection; Now I would expect the following to fail, as there is no telling what the algorithm is doing internally, maybe making internal pivot copies and the like, thus ripping away ownership from the auto_ptr: std::sort(autoCollection.begin(), autoCollection.end()); I get this. And the compiler rightly disallows it. But then I do this: std::sort(uniqueCollection.begin(), uniqueCollection.end()); And this compiles. And I do not understand why. I did not think unique_ptrs could be copied. Does this mean a pivot value cannot be taken, so the sort is less efficient? Or is this pivot actually a move, which in fact is as dangerous as the collection of auto_ptrs, and should be disallowed by the compiler? I think I am missing some crucial piece of information, so I eagerly await someone supplying me with the aha! moment.
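
    A minimal sketch of the resolution, assuming C++11: std::sort rearranges elements with moves and swaps, never copies, and a "pivot copy" is really a move into a local temporary followed by a move back, so ownership never escapes the algorithm. A vector of auto_ptr fails because auto_ptr "moves" through its copy constructor, which C++11 containers and algorithms reject.

        #include <algorithm>
        #include <cstdio>
        #include <memory>
        #include <vector>

        int main() {
            std::vector<std::unique_ptr<int>> v;
            v.push_back(std::unique_ptr<int>(new int(3)));
            v.push_back(std::unique_ptr<int>(new int(1)));
            v.push_back(std::unique_ptr<int>(new int(2)));

            // sort moves elements (including any internal "pivot" temporary);
            // no copy is ever made, so each slot ends up owning exactly one int.
            std::sort(v.begin(), v.end(),
                      [](const std::unique_ptr<int>& a,
                         const std::unique_ptr<int>& b) { return *a < *b; });

            for (const auto& p : v)
                std::printf("%d\n", *p);   // prints 1 2 3
            return 0;
        }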


  • How does virtual inheritance solve the diamond problem?

    - by cambr
    class A { public: void eat(){ cout<<"A";} }; class B: virtual public A { public: void eat(){ cout<<"B";} }; class C: virtual public A { public: void eat(){ cout<<"C";} }; class D: public B,C { public: void eat(){ cout<<"D";} }; int main(){ A *a = new D(); a->eat(); } I understand the diamond problem, and the above piece of code does not have that problem. How exactly does virtual inheritance solve the problem? What I understand: when I say A *a = new D();, the compiler wants to know if an object of type D can be assigned to a pointer of type A; it has two paths it could follow but cannot decide between them by itself. So, how does virtual inheritance resolve the issue (help the compiler make the decision)?
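
    A minimal sketch of what the keyword changes: without virtual, D contains two separate A subobjects (one through B, one through C), so converting D* to A* is ambiguous; with virtual, B and C share a single A subobject and the conversion has exactly one possible target.

        #include <iostream>

        struct A { };
        struct B : virtual A { };
        struct C : virtual A { };
        struct D : B, C { };   // one shared A subobject, thanks to 'virtual'

        int main() {
            D d;
            A* a = &d;   // unambiguous; without 'virtual' above this fails:
                         // "ambiguous conversion from derived class to base"
            B* b = &d;
            C* c = &d;
            std::cout << ((a == b) && (a == c)) << "\n";  // 1: both paths
            return 0;                                     // reach the same A
        }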


  • What are best practices for managing related Cabal packages?

    - by Norman Ramsey
    I'm working on a dataflow-based optimization library written in Haskell. It now seems likely that the library is going to have to be split into two pieces: A core piece with minimal build dependencies; call it hoopl-core. A full piece, call it hoopl, which may have extra dependencies on packages like a prettyprinter, QuickCheck, and so on. The idea is that the Glasgow Haskell Compiler will depend only on hoopl-core, so that it won't be too difficult to bootstrap the compiler. Other compilers will get the extra goodies in hoopl. Package hoopl will depend on hoopl-core. The Debian package tools can build multiple packages from a single source tree. Unfortunately Cabal has not yet reached that level of sophistication. But there must be other library or application designers out there who have similar issues (e.g., one package for a core library, another for a command-line interface, another for a GUI interface). What are current best practices for building and managing multiple related Haskell packages using Cabal?


  • Build failed question - maven - jre or jdk problem

    - by Gandalf StormCrow
    Hi all, I have my JAVA_HOME set to C:\Program Files (x86)\Java\jdk1.6.0_18. After I run maven install I get this message from Eclipse: Reason: Unable to locate the Javac Compiler in: C:\Program Files (x86)\Java\jre6\..\lib\tools.jar Please ensure you are using JDK 1.4 or above and not a JRE (the com.sun.tools.javac.Main class is required). In most cases you can change the location of your Java installation by setting the JAVA_HOME environment variable. I'm certain that this is the tricky part: "Please ensure you are using JDK 1.4 or above and not a JRE". When I run the configuration it's set to JRE6; how do I change it to JDK 1.6, which I have already installed? EDIT: I even tried to modify the plugin: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.0.2</version> <configuration> <source>1.6</source> <target>1.6</target> <executable>C:\Program Files (x86)\Java\jdk1.6.0_18\bin</executable> </configuration> </plugin> Still I get the same error. Maybe I forgot to say I use the Eclipse Maven plugin... how can I change from JRE to JDK in Eclipse?


  • Direct invocation vs indirect invocation in C

    - by Mohit Deshpande
    I am new to C and I was reading about how pointers "point" to the address of another variable. So I have tried indirect invocation and direct invocation and received the same results (as any C/C++ developer could have predicted). This is what I did: int cost; int *cost_ptr; int main() { cost_ptr = &cost; //assign pointer to cost cost = 100; //intialize cost with a value printf("\nDirect Access: %d", cost); cost = 0; //reset the value *cost_ptr = 100; printf("\nIndirect Access: %d", *cost_ptr); //some code here return 0; // [1] } So I am wondering if indirect invocation with pointers has any advantages over direct invocation or vice versa. Some advantages/disadvantages could include speed, amount of memory consumed performing the operation (most likely the same, but I just wanted to put that out there), safety (like dangling pointers), good programming practice, etc. [1] Funny thing: I am using the GNU C Compiler (gcc) and it still compiles without the return statement, and everything works as expected. Maybe because the compiler automatically inserts the return statement for main if you forget.
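
    A sketch of the practical difference, with the caveat that exact codegen varies by compiler and flags: unoptimized, the indirect form costs one extra memory read, because the pointer itself must be loaded before the value; optimized, a compiler that can prove where the pointer points will usually emit identical code for both.

        #include <cstdio>

        int cost;
        int *cost_ptr = &cost;

        int main(void) {
            cost = 100;          // direct: store straight to cost's address
            *cost_ptr = 200;     // indirect: first load cost_ptr, then store
                                 // through it -- one extra read, unoptimized
            std::printf("%d\n", cost);   // 200
            // Under gcc -O2 the pointer provably targets cost, so both
            // assignments typically compile to the same instruction.
            return 0;
        }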


  • Use of const double for intermediate results

    - by Arne
    Hi, I am writing a simulation program and wondering if the use of const double is of any use when storing intermediate results. Consider this snippet: double DoSomeCalculation(const AcModel &model) { (...) const double V = model.GetVelocity(); const double m = model.GetMass(); const double cos_gamma = cos(model.GetFlightPathAngleRad()); (...) return m*V*cos_gamma*Chi_dot; } Note that the sample is there only to illustrate -- it might not make too much sense from the engineering side of things. The motivation for storing, for example, cos_gamma in a variable is that this cosine is used many times in other expressions covered by (...), and I feel that the code gets more readable when using cos_gamma rather than cos(model.GetFlightPathAngleRad()) in various expressions. Now the actual question is this: since I expect the cosine to be the same throughout the code section, and I actually created the thing only as a placeholder and for convenience, I tend to declare it const. Is there an established opinion on whether this is good or bad practice, or whether it might bite me in the end? Does a compiler make any use of this additional information, or am I actually hindering the compiler from performing useful optimizations? Arne
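
    For illustration, a self-contained sketch (the model accessor is faked with a constant): the const buys readability and a compile-time guard against accidental reassignment; an optimizer can generally prove on its own that a local is never modified, so the keyword rarely changes codegen either way.

        #include <cmath>
        #include <cstdio>

        // Hypothetical stand-in for model.GetFlightPathAngleRad().
        static double GetFlightPathAngleRad() { return 0.5; }

        static double DoSomeCalculation(double V, double m, double Chi_dot) {
            // Computed once, readable at every use, immune to accidental
            // reassignment:
            const double cos_gamma = std::cos(GetFlightPathAngleRad());
            // cos_gamma = 0.0;   // would be a compile-time error
            return m * V * cos_gamma * Chi_dot;
        }

        int main() {
            std::printf("%f\n", DoSomeCalculation(100.0, 2.0, 0.1));
            return 0;
        }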


  • Problem: Vectorizing Code with Intel Visual FORTRAN for X64

    - by user313209
    I'm compiling my Fortran 90 code with Intel Visual Fortran on Windows Server 2003 Enterprise X64 Edition. When I compile the code for the 32-bit architecture, using the automatic and manual vectorization options, the code compiles and vectorizes. When I run it on an 8-core system, the compiled code uses 70% of the CPU, which shows me that vectorization is working. But when I compile the code with the 64-bit compiler, it says that the code is vectorized, yet when I run it, it only shows CPU usage of about 12%, which is full usage of one core out of 8; so while the compiler says the code is vectorized, vectorization is not working. And it's strange to me because it's on an X64 Edition of Windows, and I was expecting to see the reverse result. I thought it should be better to run code compiled for the 64-bit architecture on 64-bit Windows. Does anyone have any idea why the compiled code is not able to use the full power of multiple cores in the 64-bit compiled version? Thanks in advance for your responses.


  • Linking the Linker script file to source code

    - by user304097
    Hello, I am new to the GNU toolchain. I have a C source code file which contains some structures and variables, and I need to place certain variables at particular locations. So I have written a linker script file and used __attribute__((section("SECTION"))) at the variable declarations in the C source code. I am using the GNU compiler (Cygwin) to compile the source code, and I create a .hex file using objcopy, but I do not understand how to apply my linker script at compilation so that the variables are relocated accordingly. I am attaching the linker script file and the C source file for reference. Please help me link the linker script file to my source code while creating the .hex file with the GNU tools. /*linker script file*/ /*defining memory regions*/ MEMORY { base_table_ram : org = 0x00700000, len = 0x00000100 /*base table area for BASE table*/ mem2 : org =0x00800200, len = 0x00000300 /* other structure variables*/ } /*Sections directive definitions*/ SECTIONS { BASE_TABLE : { } > base_table_ram GROUP : { .text : { } { *(SEG_HEADER) } .data : { } { *(SEG_HEADER) } .bss : { } { *(SEG_HEADER) } } > mem2 } C source code: const UINT8 un8_Offset_1 __attribute__((section("BASE_TABLE"))) = 0x1A; const UINT8 un8_Offset_2 __attribute__((section("BASE_TABLE"))) = 0x2A; const UINT8 un8_Offset_3 __attribute__((section("BASE_TABLE"))) = 0x3A; const UINT8 un8_Offset_4 __attribute__((section("BASE_TABLE"))) = 0x4A; const UINT8 un8_Offset_5 __attribute__((section("BASE_TABLE"))) = 0x5A; const UINT8 un8_Offset_6 __attribute__((section("SEG_HEADER"))) = 0x6A; My intention is to place the variables of section "BASE_TABLE" at the address defined in the linker script file, and the remaining variables at the "SEG_HEADER" address defined above. But after compilation, when I look into the .hex file, the different section variables are located in different hex records at an address of 0x00, not the one given in the linker script file. Please help me link the linker script file to the source code. Are there any command-line options to link the linker script file? If so, please provide info on how to use them. Thanks in advance, SureshDN.
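
    For reference, a sketch of the usual command sequence with a plain GNU toolchain (file names invented; check your toolchain's docs): the script is handed to the linker via -T, either by invoking ld directly or through gcc with -Wl, and objcopy then emits the hex records.

        // vars.c -- variables pinned to the sections named in the script
        const unsigned char un8_Offset_1 __attribute__((section("BASE_TABLE"))) = 0x1A;
        const unsigned char un8_Offset_6 __attribute__((section("SEG_HEADER"))) = 0x6A;

        // Build steps (shell), assuming the script is saved as script.ld:
        //   gcc -c vars.c -o vars.o
        //   ld -T script.ld vars.o -o vars.elf   # -T hands the script to ld
        //   objcopy -O ihex vars.elf vars.hex    # hex records now carry the
        //                                        # addresses from MEMORY{}
        // Or in one gcc call: gcc -nostartfiles -Wl,-T,script.ld vars.c -o vars.elf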


  • C# - parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm As you can see from the URL, the file is organized into "chunks" of 130 bytes or less and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise. My first thoughts are to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...


  • GCC - How to realign stack?

    - by psihodelia
    I am trying to build an application which uses pthreads and the __m128 SSE type. According to the GCC manual, the default stack alignment is 16 bytes. In order to use __m128, 16-byte alignment is required. My target CPU supports SSE. I use a GCC version which doesn't support runtime stack realignment (e.g. -mstackrealign), and I cannot use any other GCC version. My test application looks like: #include <xmmintrin.h> #include <pthread.h> void *f(void *x){ __m128 y; ... } int main(void){ pthread_t p; pthread_create(&p, NULL, f, NULL); } The application generates an exception and exits. After some simple debugging (printf("%p", &y)), I found that the variable y is not 16-byte aligned. My question is: how can I realign the stack properly (to 16 bytes) without using any GCC flags or attributes (they don't help)? Should I use GCC inline assembler within the thread function f()?
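
    The question rules out attributes, but for completeness, a sketch of the two workarounds I believe apply on x86 GCC builds that support them (hedged: availability depends on the GCC version): realign on entry to the thread function with force_align_arg_pointer, or take the variable off the stack into aligned heap storage.

        #include <pthread.h>
        #include <xmmintrin.h>

        // Asks GCC to realign the stack to 16 bytes in this function's
        // prologue, whatever alignment the pthread stack happened to have.
        __attribute__((force_align_arg_pointer))
        static void *f(void *arg) {
            __m128 y = _mm_setzero_ps();   // now safe: &y is 16-byte aligned
            (void)y;
            (void)arg;
            return 0;
        }

        int main(void) {
            pthread_t p;
            pthread_create(&p, 0, f, 0);
            pthread_join(p, 0);
            return 0;
        }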


  • Is it valid to use unsafe struct * as an opaque type instead of IntPtr in .NET Platform Invoke?

    - by David Jeske
    .NET Platform Invoke advocates declaring pointer types as IntPtr. For example: [DllImport("user32.dll")] static extern IntPtr SendMessage(IntPtr hWnd, UInt32 Msg, Int32 wParam, Int32 lParam); However, I find that when interfacing with interesting native interfaces that have many pointer types, flattening everything into IntPtr makes the code very hard to read and removes the typical typechecking that a compiler can do. I've been using a pattern where I declare an unsafe struct to be an opaque pointer type. I can store this pointer type in a managed object, and the compiler can typecheck it for me. For example: class Foo { unsafe struct FOO {}; // opaque type unsafe FOO *my_foo; static class Native { // renamed: 'if' is a C# keyword [DllImport("mydll")] internal static extern unsafe FOO* get_foo(); [DllImport("mydll")] internal static extern unsafe void do_something_foo(FOO *foo); } public unsafe Foo() { this.my_foo = Native.get_foo(); } public unsafe void do_something_foo() { Native.do_something_foo(this.my_foo); } } While this example may not seem different from using IntPtr, when there are several pointer types moving between managed and native code, using these opaque pointer types for typechecking is a godsend. I have not run into any trouble using this technique in practice. However, I also have not seen examples of anyone else using this technique, and I wonder why. Is there any reason that the above code is invalid in the eyes of the .NET runtime? My main question is about how the .NET GC system treats "unsafe FOO *my_foo". Is this pointer something the GC system is going to try to trace, or is it simply going to ignore it? My hope is that because the underlying type is a struct, and it's declared unsafe, the GC will ignore it. However, I don't know for sure. Thoughts?


  • How to tell endianness from this output?

    - by Nick Rosencrantz
    I'm running this example program and I'm supposed to be able to tell from the output what machine type it is. I'm certain it's from inspecting one or two values, but how should I perform this inspection? /* pointers.c - Test pointers * Written 2012 by F Lundevall * Copyright abandoned. This file is in the public domain. * * To make this program work on as many systems as possible, * addresses are converted to unsigned long when printed. * The 'l' in formatting-codes %ld and %lx means a long operand. */ #include <stdio.h> #include <stdlib.h> int * ip; /* Declare a pointer to int, a.k.a. int pointer. */ char * cp; /* Pointer to char, a.k.a. char pointer. */ /* Declare fp as a pointer to function, where that function * has one parameter of type int and returns an int. * Use cdecl to get the syntax right, http://cdecl.org/ */ int ( *fp )( int ); int val1 = 111111; int val2 = 222222; int ia[ 17 ]; /* Declare an array of 17 ints, numbered 0 through 16. */ char ca[ 17 ]; /* Declare an array of 17 chars. */ int fun( int parm ) { printf( "Function fun called with parameter %d\n", parm ); return( parm + 1 ); } /* Main function. */ int main() { printf( "Message PT.01 from pointers.c: Hello, pointy World!\n" ); /* Do some assignments. */ ip = &val1; cp = &val2; /* The compiler should warn you about this. */ fp = fun; ia[ 0 ] = 11; /* First element. */ ia[ 1 ] = 17; ia[ 2 ] = 3; ia[ 16 ] = 58; /* Last element. */ ca[ 0 ] = 11; /* First element. */ ca[ 1 ] = 17; ca[ 2 ] = 3; ca[ 16 ] = 58; /* Last element. */ printf( "PT.02: val1: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val1, val1, val1 ); printf( "PT.03: val2: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val2, val2, val2 ); printf( "PT.04: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip ); printf( "PT.05: Dereference pointer ip and we find: %d \n", *ip ); printf( "PT.06: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp ); printf( "PT.07: Dereference pointer cp and we find: %d \n", *cp ); *ip = 1234; printf( "\nPT.08: Executed *ip = 1234; \n" ); printf( "PT.09: val1: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val1, val1, val1 ); printf( "PT.10: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip ); printf( "PT.11: Dereference pointer ip and we find: %d \n", *ip ); printf( "PT.12: val1: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val1, val1, val1 ); *cp = 1234; /* The compiler should warn you about this. */ printf( "\nPT.13: Executed *cp = 1234; \n" ); printf( "PT.14: val2: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val2, val2, val2 ); printf( "PT.15: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp ); printf( "PT.16: Dereference pointer cp and we find: %d \n", *cp ); printf( "PT.17: val2: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val2, val2, val2 ); ip = ia; printf( "\nPT.18: Executed ip = ia; \n" ); printf( "PT.19: ia[0]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ia[0], ia[0], ia[0] ); printf( "PT.20: ia[1]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ia[1], ia[1], ia[1] ); printf( "PT.21: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip ); printf( "PT.22: Dereference pointer ip and we find: %d \n", *ip ); ip = ip + 1; /* add 1 to pointer */ printf( "\nPT.23: Executed ip = ip + 1; \n" ); printf( "PT.24: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip ); printf( "PT.25: Dereference pointer ip and we find: %d \n", *ip ); cp = ca; printf( "\nPT.26: Executed cp = ca; \n" ); printf( "PT.27: ca[0]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ca[0], ca[0], ca[0] ); printf( "PT.28: ca[1]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ca[1], ca[1], ca[1] ); printf( "PT.29: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp ); printf( "PT.30: Dereference pointer cp and we find: %d \n", *cp ); cp = cp + 1; /* add 1 to pointer */ printf( "\nPT.31: Executed cp = cp + 1; \n" ); printf( "PT.32: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp ); printf( "PT.33: Dereference pointer cp and we find: %d \n", *cp ); ip = ca; /* The compiler should warn you about this. */ printf( "\nPT.34: Executed ip = ca; \n" ); printf( "PT.35: ca[0]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ca[0], ca[0], ca[0] ); printf( "PT.36: ca[1]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ca[1], ca[1], ca[1] ); printf( "PT.37: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip ); printf( "PT.38: Dereference pointer ip and we find: %d \n", *ip ); cp = ia; /* The compiler should warn you about this. */ printf( "\nPT.39: Executed cp = ia; \n" ); printf( "PT.40: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp ); printf( "PT.41: Dereference pointer cp and we find: %d \n", *cp ); printf( "\nPT.42: fp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &fp, (long) fp, (long) fp ); printf( "PT.43: Dereference fp and see what happens.\n" ); val1 = (*fp)(42); printf( "PT.44: Executed val1 = (*fp)(42); \n" ); printf( "PT.45: val1: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val1, val1, val1 ); return( 0 ); }

    Output:
    Message PT.01 from pointers.c: Hello, pointy World!
    PT.02: val1: stored at 21e50 (hex); value is 111111 (dec), 1b207 (hex)
    PT.03: val2: stored at 21e54 (hex); value is 222222 (dec), 3640e (hex)
    PT.04: ip: stored at 21eb8 (hex); value is 138832 (dec), 21e50 (hex)
    PT.05: Dereference pointer ip and we find: 111111
    PT.06: cp: stored at 21e6c (hex); value is 138836 (dec), 21e54 (hex)
    PT.07: Dereference pointer cp and we find: 0
    PT.08: Executed *ip = 1234;
    PT.09: val1: stored at 21e50 (hex); value is 1234 (dec), 4d2 (hex)
    PT.10: ip: stored at 21eb8 (hex); value is 138832 (dec), 21e50 (hex)
    PT.11: Dereference pointer ip and we find: 1234
    PT.12: val1: stored at 21e50 (hex); value is 1234 (dec), 4d2 (hex)
    PT.13: Executed *cp = 1234;
    PT.14: val2: stored at 21e54 (hex); value is -771529714 (dec), d203640e (hex)
    PT.15: cp: stored at 21e6c (hex); value is 138836 (dec), 21e54 (hex)
    PT.16: Dereference pointer cp and we find: -46
    PT.17: val2: stored at 21e54 (hex); value is -771529714 (dec), d203640e (hex)
    PT.18: Executed ip = ia;
    PT.19: ia[0]: stored at 21e74 (hex); value is 11 (dec), b (hex)
    PT.20: ia[1]: stored at 21e78 (hex); value is 17 (dec), 11 (hex)
    PT.21: ip: stored at 21eb8 (hex); value is 138868 (dec), 21e74 (hex)
    PT.22: Dereference pointer ip and we find: 11
    PT.23: Executed ip = ip + 1;
    PT.24: ip: stored at 21eb8 (hex); value is 138872 (dec), 21e78 (hex)
    PT.25: Dereference pointer ip and we find: 17
    PT.26: Executed cp = ca;
    PT.27: ca[0]: stored at 21e58 (hex); value is 11 (dec), b (hex)
    PT.28: ca[1]: stored at 21e59 (hex); value is 17 (dec), 11 (hex)
    PT.29: cp: stored at 21e6c (hex); value is 138840 (dec), 21e58 (hex)
    PT.30: Dereference pointer cp and we find: 11
    PT.31: Executed cp = cp + 1;
    PT.32: cp: stored at 21e6c (hex); value is 138841 (dec), 21e59 (hex)
    PT.33: Dereference pointer cp and we find: 17
    PT.34: Executed ip = ca;
    PT.35: ca[0]: stored at 21e58 (hex); value is 11 (dec), b (hex)
    PT.36: ca[1]: stored at 21e59 (hex); value is 17 (dec), 11 (hex)
    PT.37: ip: stored at 21eb8 (hex); value is 138840 (dec), 21e58 (hex)
    PT.38: Dereference pointer ip and we find: 185664256
    PT.39: Executed cp = ia;
    PT.40: cp: stored at 21e6c (hex); value is 138868 (dec), 21e74 (hex)
    PT.41: Dereference pointer cp and we find: 0
    PT.42: fp: stored at 21e70 (hex); value is 69288 (dec), 10ea8 (hex)
    PT.43: Dereference fp and see what happens.
    Function fun called with parameter 42
    PT.44: Executed val1 = (*fp)(42);
    PT.45: val1: stored at 21e50 (hex); value is 43 (dec), 2b (hex)
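
    The telltale records are PT.07 and PT.14. cp points at val2 (222222 = 0x0003640E), and *cp reads the byte stored at the lowest address of that int: it prints 0, and after *cp = 1234 (truncated to 0xD2) the word becomes 0xd203640e, i.e. the most significant byte lives at the lowest address. That is big-endian behaviour; a little-endian machine would have printed 14 (0x0E) at PT.07. The same inspection as a self-contained sketch:

        #include <cstdio>

        int main(void) {
            int v = 0x0003640E;   // same value as val2 (222222)
            unsigned char first = *(unsigned char *)&v;  // byte at lowest address
            // big-endian stores the most significant byte first (0x00),
            // little-endian the least significant byte first (0x0E).
            std::printf("%s-endian\n", first == 0x0E ? "little" : "big");
            return 0;
        }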


  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow): if (row["value"] != DBNull.Value) { someObject.Member = row["value"]; } My first question is which is more efficient (I've flipped the condition): row["value"] == DBNull.Value; // Or row["value"] is DBNull; // Or row["value"].GetType() == typeof(DBNull) // Or... any suggestions? This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question: is it worth caching the value of row["value"], or does the compiler optimize the indexer away anyway? E.g. object valueHolder; if (DBNull.Value == (valueHolder = row["value"])) {} Disclaimers: row["value"] exists. I don't know the column index of the column (hence the column name lookup). I'm asking specifically about checking for DBNull and then assignment (not about premature optimization etc). Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials):
    row["value"] == DBNull.Value: 00:00:01.5478995
    row["value"] is DBNull: 00:00:01.6306578
    row["value"].GetType() == typeof(DBNull): 00:00:02.0138757
    Object.ReferenceEquals has the same performance as "==". The most interesting result? If you mismatch the name of the column by case (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string):
    row["Value"] == DBNull.Value: 00:00:12.2792374
    The moral of the story seems to be that if you can't look up a column by its index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast:
    No Caching: 00:00:03.0996622
    With Caching: 00:00:01.5659920
    So the most efficient method seems to be: object temp; string variable; if (DBNull.Value != (temp = row["value"])) { variable = temp.ToString(); } This was a good learning experience.


  • My timer code is failing when IAR is configured to do max optimization

    - by Vishal
    Hi, I have used timer A on the MSP430 with high compiler optimization, and found that my timer code fails when high optimization is used. When no optimization is used, the code works fine. This code is used to achieve a 1 ms timer tick. timeOutCNT is incremented in the interrupt. Following is the code: [Code] //Disable interrupt and clear CCR0 TIMER_A_TACTL = TIMER_A_TASSEL | // set the clock source as SMCLK TIMER_A_ID | // set the divider to 8 TACLR | // clear the timer MC_1; // continuous mode TIMER_A_TACTL &= ~TIMER_A_TAIE; // timer interrupt disabled TIMER_A_TACTL &= 0; // timer interrupt flag disabled CCTL0 = CCIE; // CCR0 interrupt enabled CCR0 = 500; TIMER_A_TACTL &= TIMER_A_TAIE; //enable timer interrupt TIMER_A_TACTL &= TIMER_A_TAIFG; //enable timer interrupt TACTL = TIMER_A_TASSEL + MC_1 + ID_3; // SMCLK, upmode timeOutCNT = 0; //timeOutCNT is increased in timer interrupt while(timeOutCNT <= 1); //delay of 1 millisecond TIMER_A_TACTL = TIMER_A_TASSEL | // set the clock source as SMCLK TIMER_A_ID | // set the divider to 8 TACLR | // clear the timer MC_1; // continuous mode TIMER_A_TACTL &= ~TIMER_A_TAIE; // timer interrupt disabled TIMER_A_TACTL &= 0x00; // timer interrupt flag disabled [/code] Can anybody help me resolve this issue? Is there any other way to use timer A so that it works fine in optimized modes? Or have I used it wrongly to achieve a 1 ms interrupt? Thanks in advance. Vishal N
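
    A common culprit in exactly this pattern, offered as a guess since the declaration of timeOutCNT isn't shown: if it isn't volatile, the optimizer sees no write to it inside the while loop, caches it in a register, and the loop never observes the ISR's increments. A minimal sketch of the fix:

        // Shared between mainline code and the timer ISR: 'volatile' forces
        // a fresh memory read on every loop iteration, so increments made in
        // the interrupt handler stay visible even at high optimization.
        volatile unsigned int timeOutCNT;

        void wait_one_tick(void) {          // hypothetical helper name
            timeOutCNT = 0;
            while (timeOutCNT <= 1) {
                // busy-wait; the volatile load cannot be hoisted away
            }
        }

        // In the timer ISR:  timeOutCNT++;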


  • passing an array of structures (containing two mpz_t numbers) to a function

    - by jerome
    Hello, I'm working on a project where I use the mpz_t type from the GMP C library. I have a problem passing the address of an array of structures (containing mpz_ts) to a function. I will try to explain with some code. So here is the structure: struct mpz_t2{ mpz_t a; mpz_t b; }; typedef struct mpz_t2 *mpz_t2; void petit_test(mpz_t2 *test[]) { printf("entering petit test function\n"); for (int i=0; i < 4; i++) { gmp_printf("test[%d]->a = %Zd and test[%d]->b = %Zd\n", test[i]->a, test[i]->b); } } /* IN MAIN FUNCTION */ mpz_t2 *test = malloc(4 * sizeof(mpz_t2 *)); for (int i=0; i < 4; i++) { mpz_t2_init(&test[i]); // if I pass test[i] : compiler error mpz_set_ui(test[i].a, i); //if test[i]->a compiler error mpz_set_ui(test[i].b, i*10); //same problem gmp_printf("%Zd\n", test[i].b); //prints correct result } petit_test(test); The program prints the expected result (in main), but after entering the petit_test function it produces a segmentation fault. I need to be able to edit the mpz_t2 structure array in petit_test. I tried some other ways of allocating and passing the array to the function, but I didn't manage to get it right. If someone has a solution to this problem, I would be very thankful! Regards, jérôme.
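
    A sketch of one layout that works (names invented; the pointer typedef, which shadowed the struct tag, is dropped): use a plain array of structs, mpz_init every member before use, and let the array decay to a pointer at the call site.

        #include <gmp.h>
        #include <stdio.h>

        struct pair_t { mpz_t a; mpz_t b; };

        static void petit_test(struct pair_t *t, int n) {
            for (int i = 0; i < n; i++)
                gmp_printf("t[%d].a = %Zd and t[%d].b = %Zd\n",
                           i, t[i].a, i, t[i].b);
        }

        int main(void) {
            struct pair_t test[4];            // array of structs, not pointers
            for (int i = 0; i < 4; i++) {
                mpz_init(test[i].a);          // every mpz_t needs an init
                mpz_init(test[i].b);
                mpz_set_ui(test[i].a, i);
                mpz_set_ui(test[i].b, i * 10);
            }
            petit_test(test, 4);              // decays to struct pair_t *
            for (int i = 0; i < 4; i++) {
                mpz_clear(test[i].a);         // release GMP allocations
                mpz_clear(test[i].b);
            }
            return 0;
        }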


  • What is default javac source mode (assert as identifier compilation)?

    - by waste
    According to Oracle's Java 7 assert guide: source mode 1.3 (default) — the compiler accepts programs that use assert as an identifier, but issues warnings. In this mode, programs are not permitted to use the assert statement. source mode 1.4 — the compiler generates an error message if the program uses assert as an identifier. In this mode, programs are permitted to use the assert statement. I wrote this class: package mm; public class ClassTest { public static void main(String[] arg) { int assert = 1; System.out.println(assert); } } It should compile fine if Oracle's info is right (1.3 being the default source mode). But I got errors like this: $ javac -version javac 1.7.0_04 $ javac -d bin src/mm/* src\mm\ClassTest.java:5: error: as of release 1.4, 'assert' is a keyword, and may not be used as an identifier int assert = 1; ^ (use -source 1.3 or lower to use 'assert' as an identifier) src\mm\ClassTest.java:6: error: as of release 1.4, 'assert' is a keyword, and may not be used as an identifier System.out.println(assert); ^ (use -source 1.3 or lower to use 'assert' as an identifier) 2 errors I added -source 1.3 manually and it issued warnings but compiled fine. It seems that Oracle's information is wrong and 1.3 is not the default source mode. Which one is it then?


  • Is there a library that can decompile a method into an Expression tree, with support for CLR 4.0?

    - by Daniel Earwicker
    Previous questions have asked if it is possible to turn compiled delegates into expression trees, for example: http://stackoverflow.com/questions/767733/converting-a-net-funct-to-a-net-expressionfunct The sane answers at the time were: It's possible, but very hard and there's no standard library solution. Use Reflector! But fortunately there are some greatly-insane/insanely-great people out there who like reverse engineering things, and they make difficult things easy for the rest of us. Clearly it is possible to decompile IL to C#, as Reflector does it, and so you could in principle instead target CLR 4.0 expression trees with support for all statement types. This is interesting because it wouldn't matter if the compiler's built-in special support for Expression<> lambdas is never extended to support building statement expression trees in the compiler. A library solution could fill the gap. We would then have a high-level starting point for writing aspect-like manipulations of code without having to mess with raw IL. As noted in the answers to the above linked question, there are some promising signs but I haven't succeeded in finding if there's been much progress since by searching. So has anyone finished this job, or got very far with it? Note: CLR 4.0 is now released. Time for another look-see.


  • How do I make a portable isnan/isinf function?

    - by monkeyking
    I've been using the isinf and isnan functions on Linux platforms, where they worked perfectly. But they didn't work on OS X, so I decided to use std::isinf and std::isnan, which work on both Linux and OS X. But the Intel compiler doesn't recognize them, and I guess it's a bug in the Intel compiler according to http://software.intel.com/en-us/forums/showthread.php?t=64188 So now I just want to avoid the hassle and define my own isinf/isnan implementations. Does anyone know how this could be done? Thanks. Edit: I ended up doing this in my source code to make isinf/isnan work: #include <iostream> #include <cmath> #ifdef __INTEL_COMPILER #include <mathimf.h> #endif int isnan_local(double x) { #ifdef __INTEL_COMPILER return isnan(x); #else return std::isnan(x); #endif } int isinf_local(double x) { #ifdef __INTEL_COMPILER return isinf(x); #else return std::isinf(x); #endif } int myChk(double a){ std::cerr<<"val is: "<<a <<"\t"; if(isnan_local(a)) std::cerr<<"program says isnan"; if(isinf_local(a)) std::cerr<<"program says isinf"; std::cerr<<"\n"; return 0; } int main(){ double a = 0; myChk(a); myChk(log(a)); myChk(-log(a)); myChk(0/log(a)); myChk(log(a)/log(a)); return 0; }
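
    An alternative portable fallback, assuming IEEE 754 arithmetic (it breaks under -ffast-math and similar "finite math only" flags): NaN is the only value that compares unequal to itself, and an infinity is the only non-NaN whose self-subtraction yields NaN.

        // IEEE 754 only: NaN != NaN, and inf - inf produces NaN.
        static inline int my_isnan(double x) { return x != x; }
        static inline int my_isinf(double x) { return !my_isnan(x) && my_isnan(x - x); }

    For finite x, x - x is exactly 0, so my_isinf correctly stays false.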


  • How should I read from a buffered reader?

    - by Roman
    I have the following example of reading from a buffered reader: while ((inputLine = input.readLine()) != null) { System.out.println("I got a message from a client: " + inputLine); } The println in the loop will be executed whenever something appears in the buffered reader (input in this case). In my case, if a client application writes something to the socket, the code in the loop (in the server application) will be executed. But I do not understand how it works. inputLine = input.readLine() waits until something appears in the buffered reader, and when something appears there the loop condition is true and the code in the loop is executed. But when can null be returned? There is another question. The above code was taken from a method which throws Exception, and I use this code in the run method of a Thread. When I try to put throws Exception before run, the compiler complains: overridden method does not throw exception. Without the throws Exception I have another complaint from the compiler: unreported exception. So, what can I do?


  • iPhone static library Clang/LLVM error: non_lazy_symbol_pointers

    - by Bekenn
    After several hours of experimentation, I've managed to reduce the problem to the following example (C++): extern "C" void foo(); struct test { ~test() { } }; void doTest() { test t; // 1 foo(); // 2 } This is being compiled for iOS devices in Xcode 4.2, using the provided Clang compiler (Apple LLVM compiler 3.0) and the iOS 5.0 SDK. The project is configured as a Cocoa Touch Static Library, and "Enable Linking With Shared Libraries" is set to No because I'm building an AIR native extension. The function foo is defined in another external library. (In my actual project, this would be any of the C API functions defined by Adobe for use in AIR native extensions.) When attempting to compile this code, I get back the error: FATAL:incompatible feature used: section type non_lazy_symbol_pointers (must specify "-dynamic" to be used) clang: error: assembler command failed with exit code 1 (use -v to see invocation) The error goes away if I comment out either of the lines marked 1 or 2 above, or if I change the build setting "Enable Linking With Shared Libraries" to Yes. (However, if I change the build setting, then I get multiple ld warning: unexpected srelocation type 9 warnings when linking the library into the final project, and the application crashes when running on the device.) The build error also goes away if I remove the destructor from test. So: Is this a bug in Clang? Am I missing some all-important and undocumented build setting? The interaction between an externally-provided function and a struct with a destructor is very peculiar, to say the least.


  • int considered harmful?

    - by Chris Becke
    Working on code meant to be portable between Win32, Win64 and Cocoa, I am really struggling to get to grips with what the @#$% the various standards committees involved over the past decades were thinking when they first came up with, and then perpetuated, the crime against humanity that is the C native type set - char, short, int and long. On the one hand, as an old-school C++ programmer, there are few statements as elegant and/or as simple as for(int i=0; i<some_max; i++) but now, it seems that, in the general case, this code can never be correct. Oh sure, given a particular version of MSVC or GCC, with specific targets, the size of 'int' can be safely assumed. But, in the case of writing very generic C/C++ code that might one day be used on 16-bit hardware, or 128, or just be exposed to a particularly weirdly set up 32/64-bit compiler, how does one use int in C++ code in a way that the resulting program has predictable behavior in any and all possible C++ compilers that implement C++ according to spec? To resolve these unpredictabilities, C99 and C++98 introduced size_t, uintptr_t, ptrdiff_t, int8_t, int16_t, int32_t, int64_t and so on. Which leaves me thinking that a raw int, anywhere in pure C++ code, should really be considered harmful, as there is some (completely C++xx-conforming) compiler that's going to produce an unexpected or incorrect result with it (and probably be an attack vector as well).
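
    The escape hatch the last paragraph points to, sketched briefly: reach for the exact-width and size types, and turn silent assumptions into compile-time checks.

        #include <cstddef>
        #include <cstdint>

        static_assert(sizeof(std::int32_t) == 4, "exact-width by definition");

        std::int32_t checksum(const std::uint8_t *buf, std::size_t len) {
            std::int32_t sum = 0;                  // 32 bits on every
            for (std::size_t i = 0; i < len; i++)  // conforming implementation,
                sum += buf[i];                     // 16-bit or 64-bit alike
            return sum;
        }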


  • How to load JPG file into NSBitmapImageRep?

    - by Adam
    Objective-C / Cocoa: I need to load the image from a JPG file into a two-dimensional array so that I can access each pixel. I am trying (unsuccessfully) to load the image into an NSBitmapImageRep. I have tried several variations on the following two lines of code: NSString *filePath = [NSString stringWithFormat: @"%@%@",@"/Users/adam/Documents/phoneimages/", [outLabel stringValue]]; //this coming from a window control NSImageRep *controlBitmap = [[NSImageRep alloc] imageRepWithContentsOfFile:filePath]; With the code shown, I get a runtime error: -[NSImageRep imageRepWithContentsOfFile:]: unrecognized selector sent to instance 0x100147070. I have tried replacing the second line of code with: NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath]; NSBitmapImageRep *controlBitmap = [[NSBitmapImageRep alloc] initWithData:controlImage]; But this yields a compiler error 'incompatible type' saying that initWithData wants an NSData variable, not an NSImage. I have also tried various other ways to get this done, but all are unsuccessful due to either compiler or runtime errors. Can someone help me with this? I will eventually need to load some PNG files in the same way (so it would be nice to have a consistent technique for both). And if you know of an easier / simpler way to accomplish what I am trying to do (i.e., get the images into a two-dimensional array), rather than using NSBitmapImageRep, then please let me know! And by the way, I know the path is valid (confirmed with fileExistsAtPath) -- and the filename in outLabel is a file with a .jpg extension. Thanks for any help!


  • Conditional macro expansion

    - by Dave DeLong
    Heads up: This is a weird question. I've got some really useful macros that I like to use to simplify some logging. For example I can do Log(@"My message with arguments: %@, %@, %@", @"arg1", @"arg2", @"arg3"), and that will get expanded into a more complex method invocation that includes things like self, _cmd, __FILE__, __LINE__, etc, so that I can easily track where things are getting logged. This works great. Now I'd like to expand my macros to not only work with Objective-C methods, but general C functions. The problem is the self and _cmd portions that are in the macro expansion. These two parameters don't exist in C functions. Ideally, I'd like to be able to use this same set of macros within C functions, but I'm running into problems. When I use (for example) my Log() macro, I get compiler warnings about self and _cmd being undeclared (which makes total sense). My first thought was to do something like the following (in my macro): if (thisFunctionIsACFunction) { DoLogging(nil, nil, format, ##__VA_ARGS__); } else { DoLogging(self, _cmd, format, ##__VA_ARGS__); } This still produces compiler warnings, since the entire if() statement is substituted in place of the macro, resulting in errors with the self and _cmd keywords (even though they will never be executed during function execution). My next thought was to do something like this (in my macro): if (thisFunctionIsACFunction) { #define SELF nil #define CMD nil } else { #define SELF self #define CMD _cmd } DoLogging(SELF, CMD, format, ##__VA_ARGS__); That doesn't work, unfortunately. I get "error: '#' is not followed by a macro parameter" on my first #define. My other thought was to create a second set of macros, specifically for use in C functions. This reeks of a bad code smell, and I really don't want to do this. Is there some way I can use the same set of macros from within both Objective-C methods and C functions, and only reference self and _cmd if the macro is in an Objective-C method?


  • How to macro-ify ant targets?

    - by Jonas Byström
    I want to be able to have different targets doing nearly the same thing, like so: ant build <- this would be a normal (default) build ant safari <- building the safari target. The targets look like this: <target name="build" depends="javac" description="GWT compile to JavaScript"> <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler"> <classpath> <pathelement location="src"/> <path refid="project.class.path"/> </classpath> <jvmarg value="-Xmx256M"/> <arg value="${lhs.target}"/> </java> </target> <target name="safari" depends="javac" description="GWT compile to Safari/JavaScript"> <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler"> <classpath> <pathelement location="src"/> <path refid="project.class.path"/> </classpath> <jvmarg value="-Xmx256M"/> <arg value="${lhs.safari.target}"/> </java> </target> (Never mind the first thought that strikes: throw out ant! That's not an option just yet.) I tried using macrodef, but got a strange error message (even though the message didn't imply it, I think it had to do with putting a target inside sequential). I don't want to do ant -Dwhatever=nevermind. Any ideas?


  • How can I get dtrace to run the traced command with non-root privileges?

    - by Gyom
    OS X lacks Linux's strace, but it has dtrace, which is supposed to be so much better. However, I miss the ability to do simple tracing on individual commands. For example, on Linux I can write strace -f gcc hello.c to capture all system calls, which gives me the list of all the filenames needed by the compiler to compile my program (the excellent memoize script is built upon this trick). I want to port memoize to the Mac, so I need some kind of strace. What I actually need is the list of files gcc reads and writes to, so what I need is more of a truss. Sure enough, I can say dtruss -f gcc hello.c and get somewhat the same functionality, but then the compiler is run with root privileges, which is obviously undesirable (apart from the massive security risk, one issue is that the a.out file is now owned by root :-) I then tried dtruss -f sudo -u myusername gcc hello.c, but this feels a bit wrong, and does not work anyway (I get no a.out file at all this time, not sure why). All that long story tries to motivate my original question: how do I get dtrace to run my command with normal user privileges, just like strace does on Linux? Edit: it seems that I'm not the only one wondering how to do this: question #1204256 is pretty much the same as mine (and has the same suboptimal sudo answer :-)

