Search Results

Search found 1848 results on 74 pages for 'printf'.

Page 66/74

  • Need help with a Java solution (newbie)

    - by Racket
    Hi, I'm new to programming in general, so I'm trying to be as specific as possible in this question. There's a book that I'm doing some exercises from. I managed to do more than half of this one, but there is one requirement I have been struggling to work out. I'll write the question and then my code: "Write an application that creates and prints a random phone number of the form XXX-XXX-XXXX. Include the dashes in the output. Do not let the first three digits contain an 8 or 9 (but don't be more restrictive than that), and make sure that the second set of three digits is not greater than 742. Hint: Think through the easiest way to construct the phone number. Each digit does not have to be determined separately." OK, the highlighted sentence is what I'm looking at. Here's my code:

        import java.util.Random;

        public class PP33 {
            public static void main(String[] args) {
                Random rand = new Random();
                int num1, num2, num3;
                num1 = rand.nextInt(900) + 100;
                num2 = rand.nextInt(643) + 100;
                num3 = rand.nextInt(9000) + 1000;
                System.out.println(num1 + "-" + num2 + "-" + num3);
            }
        }

    How am I supposed to do this? I'm on chapter 3, so we have not yet discussed if statements and so on, only aliases, the String class, packages, import declarations, the Random class, the Math class, formatting output (DecimalFormat and NumberFormat), printf, enumerations, and wrapper classes with autoboxing. So please answer the question based only on these assumptions. The code doesn't have any errors. Thank you!
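
    Not an answer from the original post, but one way to meet the digit restriction using only the Random class and printf (both on the chapter 3 list) is to draw each of the first three digits from 0 to 7, so an 8 or 9 can never appear. A sketch with a made-up class name; note that it does determine the first three digits separately, which the book's hint merely says is not required:

        import java.util.Random;

        public class PhoneNumber {
            public static void main(String[] args) {
                Random rand = new Random();
                // Each of the first three digits is drawn from 0-7, so the
                // first group can never contain an 8 or a 9.
                int first = rand.nextInt(8) * 100 + rand.nextInt(8) * 10 + rand.nextInt(8);
                int second = rand.nextInt(743);   // 000 to 742 inclusive
                int third = rand.nextInt(10000);  // 0000 to 9999
                // %03d / %04d pad with leading zeros so e.g. 7 prints as 007.
                System.out.printf("%03d-%03d-%04d%n", first, second, third);
            }
        }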

  • Hide struct definition in static library.

    - by BobMcLaury
    Hi, I need to provide a C static library to a client, and I need to make a struct definition unavailable to them. On top of that, I need to execute code before main(), at library initialization, using a global variable. Here's my code:

        /* private.h */
        #ifndef PRIVATE_H
        #define PRIVATE_H
        typedef struct TEST test;
        #endif

        /* private.c (this should end up in the static library) */
        #include "private.h"
        #include <stdio.h>

        struct TEST {
            TEST() {
                printf("Execute before main and have to be unavailable to the user.\n");
            }
            int a; // Can be modified by the user
            int b; // Can be modified by the user
            int c; // Can be modified by the user
        } TEST;

        /* main.c */
        test t;

        int main(void) {
            t.a = 0;
            t.b = 0;
            t.c = 0;
            return 0;
        }

    Obviously this code doesn't work (C structs have no constructors), but it shows what I need to do. Does anybody know how to make this work? I've googled quite a bit but can't find an answer; any help would be greatly appreciated. TIA!
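
    A sketch of the usual approach, under the assumption that GCC or Clang is acceptable: keep the struct definition inside the library (an opaque type behind the forward declaration), expose accessor functions instead of the members, and use the non-standard constructor attribute to run code before main(). The accessor names here are hypothetical:

        /* private.h -- only the forward declaration is public */
        #ifndef PRIVATE_H
        #define PRIVATE_H
        typedef struct TEST test;
        test *test_instance(void);              /* hypothetical accessors */
        void  test_set_a(test *t, int value);
        #endif

        /* private.c -- compiled into the static library */
        #include <stdio.h>
        #include "private.h"

        struct TEST { int a, b, c; };           /* layout hidden from the client */
        static test the_instance;

        /* GCC/Clang extension: runs before main() */
        __attribute__((constructor))
        static void test_init(void) {
            printf("Executed before main; the struct stays opaque.\n");
        }

        test *test_instance(void) { return &the_instance; }
        void  test_set_a(test *t, int value) { t->a = value; }

    Since the definition is hidden, the client can only touch a, b and c through accessors like these; there is no way to keep the members directly assignable and the layout hidden at the same time.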

  • Problem loading HTML into a web view in the iPhone SDK

    - by Monish Kumar
    Hi guys,

        NSString *appendString = @"";
        appendString = [appendString stringByAppendingString:@"<body>"];
        appendString = [appendString stringByAppendingString:@"<table background='footer.png' width='320' height='45' style='background-repeat:no-repeat'>"];
        appendString = [appendString stringByAppendingString:@"<tr>"];
        appendString = [appendString stringByAppendingString:@"<td align='left' width='57' height='31' style='padding: 6px 0 0 0' ><a href='/map/'><img src='details_Back.png'/></a></td>"];
        appendString = [appendString stringByAppendingString:@"<td align='left' valign='middle' style='padding: 0 0 0 65px; font-family:Helvetica; font-size:21px ; font-weight:bold ; color:#FFF'>Details</td>"];
        appendString = [appendString stringByAppendingString:@"</tr>"];
        appendString = [appendString stringByAppendingString:@"</table>"];
        appendString = [appendString stringByAppendingString:@"<br>"];

        returnString = [returnString stringByReplacingOccurrencesOfString:@"<body>" withString:appendString];
        printf("\n return string :%s", [returnString UTF8String]);
        [myWebView loadHTMLString:returnString baseURL:[NSURL URLWithString:@"http://abc.api.abcdefg.com/"]];

    In the code above, footer.png and details_Back.png are local images stored in my resource folder. The problem: the background image comes from the server link that I pass to the web view as the baseURL, but then the local images footer.png and details_Back.png from the resource bundle are not displayed. If I instead use the resource bundle as the baseURL, the background image from the server link is not displayed. Can anyone suggest how to get rid of this problem? Thanks to all. Monish.
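
    A web view resolves every relative URL against the single baseURL, so bundle images and server images cannot both be relative at once. A hedged sketch of one workaround (this matched UIWebView behavior at the time; newer web views may block file:// subresources in pages with a remote base URL): give the bundled images absolute file:// URLs and keep the remote baseURL for everything else.

        NSString *resourcePath = [[NSBundle mainBundle] resourcePath];
        NSString *footerPath = [NSString stringWithFormat:@"file://%@/footer.png",
                                                          resourcePath];
        // Use footerPath (and an equivalent for details_Back.png) inside the
        // HTML, and keep http://abc.api.abcdefg.com/ as the baseURL for the
        // server-hosted images.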

  • Write to memory buffer instead of file with libjpeg?

    - by Richard Knop
    I have found this function, which uses libjpeg to write to a file:

        int write_jpeg_file(char *filename)
        {
            struct jpeg_compress_struct cinfo;
            struct jpeg_error_mgr jerr;
            /* this is a pointer to one row of image data */
            JSAMPROW row_pointer[1];
            FILE *outfile = fopen(filename, "wb");

            if (!outfile) {
                printf("Error opening output jpeg file %s\n!", filename);
                return -1;
            }
            cinfo.err = jpeg_std_error(&jerr);
            jpeg_create_compress(&cinfo);
            jpeg_stdio_dest(&cinfo, outfile);

            /* Setting the parameters of the output file here */
            cinfo.image_width = width;
            cinfo.image_height = height;
            cinfo.input_components = bytes_per_pixel;
            cinfo.in_color_space = color_space;

            /* default compression parameters, we shouldn't be worried about these */
            jpeg_set_defaults(&cinfo);

            /* Now do the compression .. */
            jpeg_start_compress(&cinfo, TRUE);

            /* like reading a file, this time write one row at a time */
            while (cinfo.next_scanline < cinfo.image_height) {
                row_pointer[0] = &raw_image[cinfo.next_scanline * cinfo.image_width * cinfo.input_components];
                jpeg_write_scanlines(&cinfo, row_pointer, 1);
            }

            /* similar to read file, clean up after we're done compressing */
            jpeg_finish_compress(&cinfo);
            jpeg_destroy_compress(&cinfo);
            fclose(outfile);

            /* success code is 1! */
            return 1;
        }

    I actually need to write the compressed JPEG image to a memory buffer, without saving it to a file, to save time. Could somebody give me an example of how to do that? I have been searching the web for a while, but the documentation is very sparse, if it exists at all, and examples are also difficult to come by.
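
    libjpeg 8 and libjpeg-turbo ship jpeg_mem_dest(), which swaps the stdio destination for a library-managed memory buffer; older libjpeg versions lack it and need a hand-written destination manager instead. A hedged sketch of the change to the function above:

        unsigned char *mem = NULL;   /* libjpeg allocates and grows this buffer */
        unsigned long mem_size = 0;

        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_compress(&cinfo);
        jpeg_mem_dest(&cinfo, &mem, &mem_size);   /* instead of jpeg_stdio_dest */

        /* ... same parameter setup and scanline loop as above ... */

        jpeg_finish_compress(&cinfo);
        /* mem now holds mem_size bytes of JPEG data; free(mem) when done */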

  • Using an SHA1 with Microsoft CAPI

    - by Erik Jõgi
    I have an SHA1 hash and I need to sign it. The CryptSignHash() method requires an HCRYPTHASH handle for signing. I create one, and since I already have the actual hash value, I set it:

        CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash);
        CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0);

    hashBytes is an array of 20 bytes. The problem is that the signature produced from this HCRYPTHASH handle is incorrect. I traced the problem down to the fact that CAPI doesn't actually use all 20 bytes from my hashBytes array. For some reason it thinks that SHA1 is only 4 bytes. To verify this I wrote this small program:

        HCRYPTPROV cryptoProvider;
        CryptAcquireContext(&cryptoProvider, NULL, NULL, PROV_RSA_FULL, 0);

        HCRYPTHASH hash;
        HCRYPTKEY keyForHash;
        CryptCreateHash(cryptoProvider, CALG_SHA1, keyForHash, 0, &hash);

        DWORD hashLength;
        CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0);
        printf("hashLength: %d\n", hashLength);

    And this prints out "hashLength: 4"! Can anyone explain what I am doing wrong, or why Microsoft CAPI thinks that SHA1 is 4 bytes (32 bits) instead of 20 bytes (160 bits)?
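
    A hedged reading of the CryptGetHashParam contract: HP_HASHSIZE yields a DWORD whose value is the hash size, and passing NULL for pbData makes the length out-parameter report the size of that DWORD, which is 4. Querying into a real buffer should report 20:

        DWORD hashSize = 0;
        DWORD lenOfHashSize = sizeof(hashSize);
        /* Pass a real buffer: with pbData == NULL the call only reports how
           many bytes the HP_HASHSIZE value itself needs, i.e. sizeof(DWORD). */
        CryptGetHashParam(hash, HP_HASHSIZE, (BYTE *)&hashSize, &lenOfHashSize, 0);
        printf("hash size: %lu\n", (unsigned long)hashSize);   /* 20 for SHA-1 */

    (Separately, the test program passes the uninitialized keyForHash where the first snippet correctly passed 0; SHA-1 takes no key.)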

  • When does the call() method get called in a Java Executor using Callable objects?

    - by MalcomTucker
    This is some sample code from an example. What I need to know is: when does call() get called on the Callable? What triggers it?

        public class CallableExample {

            public static class WordLengthCallable implements Callable {
                private String word;

                public WordLengthCallable(String word) {
                    this.word = word;
                }

                public Integer call() {
                    return Integer.valueOf(word.length());
                }
            }

            public static void main(String args[]) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(3);
                Set<Future<Integer>> set = new HashSet<Future<Integer>>();
                for (String word : args) {
                    Callable<Integer> callable = new WordLengthCallable(word);
                    Future<Integer> future = pool.submit(callable); // DOES THIS CALL call()?
                    set.add(future);
                }
                int sum = 0;
                for (Future<Integer> future : set) {
                    sum += future.get(); // OR DOES THIS CALL call()?
                }
                System.out.printf("The sum of lengths is %s%n", sum);
                System.exit(sum);
            }
        }
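
    For what it's worth, a hedged sketch of the usual answer: submit() is what schedules the task, and a pool thread invokes call() asynchronously some time after submission; get() invokes nothing, it only blocks until the result of that earlier invocation is available. A minimal self-contained demonstration (assumes Java 8+ for the lambda; the class name is made up):

        import java.util.concurrent.*;

        public class WhenIsCallCalled {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(1);
                Future<Integer> f = pool.submit(() -> {   // submit() schedules the task;
                    System.out.println("call() running"); // a pool thread invokes call()
                    return 42;                            // asynchronously after this point
                });
                Thread.sleep(100);  // "call() running" usually prints during this sleep
                int v = f.get();    // get() only waits for the result of that earlier call
                System.out.println("result: " + v);
                pool.shutdown();
            }
        }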

  • UnicodeEncodeError: 'ascii' codec can't encode character [...]

    - by user1461135
    I have read the HOWTO on Unicode from the official docs, and a full, very detailed article as well. Still I don't get why it throws this error. Here is what I attempt: I open an XML file that contains chars outside the ASCII range (but inside the allowed XML range). I do that with

        cfg = codecs.open(filename, encoding='utf-8', mode='r')

    which runs fine. Looking at the string with repr() also shows me a unicode string. Now I go ahead and parse it with

        parseString(cfg.read().encode('utf-8'))

    Of course, my XML file starts with <?xml version="1.0" encoding="utf-8"?>. Although I suppose it is not relevant, I also declared utf-8 for my Python script itself, but since I am not writing unicode characters directly in it, this should not apply here. Same for the following line, which is also right at the beginning: from __future__ import unicode_literals. Next I pass the generated object to my own class, where I read tags into variables like this:

        xmldata.getElementsByTagName(tagName)[0].firstChild.data

    and assign the result to a variable in my class. Now, what works perfectly are these commands (obj is an instance of the class):

        for element in obj:
            print element

    And this command works as well:

        print obj.__repr__()

    I defined __iter__() to just yield every variable, while __repr__() uses the typical printf-style formatting: "%s" % self.varname. Both commands print perfectly and can output the unicode character. What does not work is this:

        print obj

    And now I am stuck, because this throws the dreaded UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 47. So what am I missing? What am I doing wrong? I am looking for a general solution; I always want to handle strings as unicode, just to avoid any possible errors and write a compatible program. Edit: I also defined this:

        def __str__(self):
            return self.__repr__()

        def __unicode__(self):
            return self.__repr__()

    From documentation I got that this
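
    A hedged reading of the symptom, assuming Python 2: print obj calls str(obj), and str() must produce a byte string. The __str__ defined in the edit hands back a unicode object, so Python implicitly encodes it with the ascii codec, which fails on u'\xfc'. Returning explicitly encoded bytes from __str__ sidesteps the implicit conversion:

        # Python 2 sketch: str(obj) must return bytes, so encode explicitly
        # instead of letting the ascii codec do it implicitly.
        def __str__(self):
            return unicode(self).encode('utf-8')

        def __unicode__(self):
            return self.__repr__()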

  • struct and rand()

    - by teoz
    I have a struct with an array of 100 ints (b) and an int variable (a). I have a function that checks whether the value of a is in the array, and I generate the array elements and the variable with random values. But it doesn't work; can someone help me fix it?

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        typedef struct {
            int a;
            int b[100];
        } h;

        int func(h v) {
            int i;
            for (i = 0; i < 100; i++) {
                if (v.b[i] == v.a)
                    return 1;
                else
                    return 0;
            }
        }

        int main(int argc, char **argv) {
            h str;
            srand(time(0));
            int i;
            for (i = 0; 0 < 100; i++) {
                str.b[i] = (rand() % 10) + 1;
            }
            str.a = (rand() % 10) + 1;
            str.a = 1;
            printf("%d\n", func(str));
            return 0;
        }
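
    A hedged diagnosis: the fill loop's condition is 0 < 100, which is always true, so i runs far past b[99] and corrupts memory; and func() returns on the very first iteration either way, so only b[0] is ever compared. The second assignment str.a = 1 also overwrites the random value just drawn. A sketch of the fix:

        int func(h v) {
            int i;
            for (i = 0; i < 100; i++) {
                if (v.b[i] == v.a)
                    return 1;      /* found a match */
            }
            return 0;              /* only after checking the whole array */
        }

        /* ...and in main(): */
        for (i = 0; i < 100; i++) {    /* i < 100, not 0 < 100 */
            str.b[i] = (rand() % 10) + 1;
        }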

  • Linker error: wants C++ virtual base class destructor

    - by jdmuys
    Hi, I have a link error where the linker complains that my concrete class's destructor is calling its abstract superclass's destructor, the code for which is missing. This is with GCC 4.2 on Mac OS X, from Xcode. I saw http://stackoverflow.com/questions/307352/g-undefined-reference-to-typeinfo but it's not quite the same thing. Here is the linker error message:

        Undefined symbols:
          "ConnectionPool::~ConnectionPool()", referenced from:
              AlwaysConnectedConnectionZPool::~AlwaysConnectedConnectionZPool() in RKConnector.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Here is the abstract base class declaration:

        class ConnectionPool {
        public:
            static ConnectionPool* newPool(std::string h, short p, std::string u, std::string pw, std::string b);
            virtual ~ConnectionPool() = 0;
            virtual int keepAlive() = 0;
            virtual int disconnect() = 0;
            virtual sql::Connection* getConnection(char* compression_scheme = NULL) = 0;
            virtual void releaseConnection(sql::Connection* theConnection) = 0;
        };

    Here is the concrete class declaration:

        class AlwaysConnectedConnectionZPool : public ConnectionPool {
        protected:
            <snip data members>
        public:
            AlwaysConnectedConnectionZPool(std::string h, short p, std::string u, std::string pw, std::string b);
            virtual ~AlwaysConnectedConnectionZPool();
            virtual int keepAlive();    // will make sure the connection doesn't time out. Call regularly
            virtual int disconnect();   // disconnects/destroys all connections.
            virtual sql::Connection* getConnection(char* compression_scheme = NULL);
            virtual void releaseConnection(sql::Connection* theConnection);
        };

    Needless to say, all those members are implemented. Here is the destructor:

        AlwaysConnectedConnectionZPool::~AlwaysConnectedConnectionZPool()
        {
            printf("AlwaysConnectedConnectionZPool destructor call");
            // nothing to destruct in fact
        }

    and also the factory routine:

        ConnectionPool* ConnectionPool::newPool(std::string h, short p, std::string u, std::string pw, std::string b)
        {
            return new AlwaysConnectedConnectionZPool(h, p, u, pw, b);
        }

    I can fix this by artificially making my abstract base class concrete. But I'd rather do something better. Any ideas? Thanks.
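
    The standard resolution, assuming plain C++ semantics rather than anything GCC- or Mac-specific: a destructor may be declared pure virtual, but it still needs a definition, because every derived destructor calls the base destructor directly (non-virtually) when the derived object is destroyed:

        // ConnectionPool.cpp -- a pure virtual destructor still needs a body
        ConnectionPool::~ConnectionPool() {}

    The class remains abstract; the = 0 on the declaration is what keeps it so.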

  • C Programming - Return a libcurl HTTP response as a string to the calling function

    - by empty set
    I have a homework assignment where I need to compare two HTTP responses. I am writing it in C (dumb decision) and I use libcurl to make things easier. I call the function that uses libcurl for the HTTP request/response from another function, and I want to return the HTTP response to it. Anyway, the code below doesn't work; any ideas?

        #include <stdio.h>
        #include <curl/curl.h>
        #include <string.h>

        size_t write_data(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            size_t written;
            written = fwrite(ptr, size, nmemb, stream);
            return written;
        }

        char *handle_url(void)
        {
            CURL *curl;
            char *fp;
            CURLcode res;
            char *url = "http://www.yahoo.com";

            curl = curl_easy_init();
            if (curl) {
                curl_easy_setopt(curl, CURLOPT_URL, url);
                curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);
                curl_easy_setopt(curl, CURLOPT_WRITEDATA, fp);

                res = curl_easy_perform(curl);
                if (res != CURLE_OK)
                    fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res));

                curl_easy_cleanup(curl);
                //printf("\n%s", fp);
            }
            return fp;
        }

    The solution in "C libcurl get output into a string" works, but not in my case, because I want to return this string to the calling function.
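
    A hedged sketch of the usual pattern: write_data() passes its userp straight to fwrite(), but fp here is an uninitialized char *, not a FILE *, so the callback writes through garbage. To hand a string back to the caller, grow a heap buffer inside the callback instead:

        #include <stdlib.h>
        #include <string.h>

        struct buffer { char *data; size_t len; };

        static size_t write_data(void *ptr, size_t size, size_t nmemb, void *userp)
        {
            struct buffer *buf = userp;
            size_t chunk = size * nmemb;
            char *grown = realloc(buf->data, buf->len + chunk + 1);
            if (grown == NULL)
                return 0;                      /* tells libcurl to abort */
            buf->data = grown;
            memcpy(buf->data + buf->len, ptr, chunk);
            buf->len += chunk;
            buf->data[buf->len] = '\0';        /* keep it a valid C string */
            return chunk;
        }

        /* in handle_url(): */
        struct buffer buf = { NULL, 0 };
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf);
        /* ...curl_easy_perform, cleanup... */
        return buf.data;   /* the caller free()s this */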

  • Passing argument 1 of 'atoi' makes pointer from integer without a cast... can anybody help me?

    - by somasekhar
        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>

        int main() {
            int n;
            int a, b, ans[10000];
            char *c, *d, *e;
            int i = 0;
            c = (char *)(malloc(20 * sizeof(char)));
            d = (char *)(malloc(20 * sizeof(char)));
            scanf("%d", &n);
            while (i < n) {
                scanf("%d", &a);
                scanf("%d", &b);
                itoa(a, c, 10);
                itoa(b, d, 10);
                a = atoi(strrev(c)) + atoi(strrev(d));
                itoa(a, c, 10);
                e = c;
                while (*e == '0')
                    e++;
                ans[i] = atoi(strrev(e));
                i++;
            }
            i = 0;
            while (i < n) {
                printf("%d\n", ans[i]);
                i++;
            }
        }
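
    A hedged diagnosis: itoa() and strrev() are not part of standard C, so unless some header declares them, the compiler implicitly declares strrev() as returning int, and passing that int to atoi() (which expects a pointer) produces exactly this warning. Including a header that declares them, or supplying portable replacements, should clear it. A sketch of a portable strrev():

        #include <string.h>

        /* Reverse s in place and return it; stands in for the non-standard
           strrev(). snprintf(buf, size, "%d", n) can likewise stand in for
           the non-standard itoa(). */
        char *strrev(char *s)
        {
            size_t i = 0, j = strlen(s);
            while (i < j) {            /* swap outer pairs, moving inward */
                char t = s[i];
                s[i++] = s[--j];
                s[j] = t;
            }
            return s;
        }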

  • Passing parameter to pthread

    - by Andrei Ciobanu
    Hello, I have the following code:

        #include <stdlib.h>
        #include <stdio.h>
        #include <pthread.h>

        #define NUM_THREADS 100

        struct thread_param {
            char *f1;
            char *f2;
            int x;
        };

        void *thread_function(void *arg)
        {
            printf("%d\n", ((struct thread_param *)arg)->x);
        }

        int main(int argc, char *argvs[])
        {
            int i, thread_cr_res = 0;
            pthread_t *threads;

            threads = malloc(100 * sizeof(*threads));
            if (threads == NULL) {
                fprintf(stderr, "MALLOC THREADS ERROR");
                return (-1);
            }

            for (i = 0; i < NUM_THREADS; i++) {
                struct thread_param *tp;
                if ((tp = malloc(sizeof(*tp))) == NULL) {
                    fprintf(stderr, "MALLOC THREAD_PARAM ERROR");
                    return (-1);
                }
                tp->f1 = "f1";
                tp->f2 = "f2";
                tp->x = i;
                thread_cr_res = pthread_create(&threads[i], NULL, thread_function, (void *)tp);
                if (thread_cr_res != 0) {
                    fprintf(stderr, "THREAD CREATE ERROR");
                    return (-1);
                }
            }
            return (0);
        }

    What I want to achieve is to print all the numbers from 0 to 99 from the threads. I am also experimenting with passing a structure as a thread input parameter. What I find curious is that not all the numbers are shown, e.g.:

        ./a.out | grep 9
        9
        19
        29
        39
        49

    and sometimes some numbers are shown twice:

        ...
        75
        74
        89
        77
        78
        79
        91
        91

    Can you please explain why this is happening? No errors are shown.
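
    A hedged explanation: main() returns as soon as the creation loop finishes, and returning from main() terminates the whole process, so any thread that hasn't run yet never prints; that accounts for the missing numbers. The duplicated lines are plausibly a race between still-running threads and the process-exit flush of the same stdio buffer, since piping makes stdout fully buffered. Joining each thread before returning should make all 100 numbers appear:

        /* After the creation loop, wait for every thread to finish. */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);
        free(threads);
        return 0;

    (thread_function should also free(arg) when done and return NULL, since it is declared to return void *.)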

  • Lifetime of a Temporary Variable

    - by Yan Cheng CHEOK
        #include <cstdio>
        #include <string>

        void fun(const char* c) {
            printf("--> %s\n", c);
        }

        std::string get() {
            std::string str = "Hello World";
            return str;
        }

        int main() {
            const char *cc = get().c_str();
            // cc is not valid at this point, as it is pointing to the
            // temporary string's internal buffer, and the temporary string
            // has already been destroyed at this point.
            fun(cc);

            // But I am surprised this call yields a valid result.
            // It seems that the returned temporary string is valid within
            // scope (...)
            // My understanding is that scope means {...}
            // Is this valid behavior guaranteed by the C++ standard? Or does
            // it depend on your compiler vendor's implementation?
            fun(get().c_str());

            getchar();
        }

    The output is:

        -->
        --> Hello World

    Hello, may I know whether the correct behavior is guaranteed by the C++ standard, or whether it depends on your compiler vendor's implementation? I have tested this under VC2008 and VC6; it works fine in both.
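
    What the standard guarantees, for what it's worth: a temporary lives until the end of the full-expression that creates it, so in fun(get().c_str()) the string is still alive for the whole call; that behavior is required, not vendor-specific. The first form dangles, because the temporary is destroyed once the declaration statement ends; braces have nothing to do with it. A guaranteed-safe variant:

        // Bind the result to a named object first; the pointer from
        // c_str() then stays valid for as long as s is in scope.
        std::string s = get();
        fun(s.c_str());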

  • C++ file input/output

    - by Myx
    Hi: I am trying to read from a file using fgets and sscanf. Each line of the file contains characters that I wish to put into a vector. So far, I have the following:

        FILE *fp;
        fp = fopen(filename, "r");
        if (!fp) {
            fprintf(stderr, "Unable to open file %s\n", filename);
            return 0;
        }

        // Read file
        int line_count = 0;
        char buffer[1024];
        while (fgets(buffer, 1023, fp)) {
            // Increment line counter
            line_count++;
            char *bufferp = buffer;
            ...
            while (*bufferp != '\n') {
                char *tmp;
                if (sscanf(bufferp, "%c", tmp) != 1) {
                    fprintf(stderr, "Syntax error reading axiom on "
                            "line %d in file %s\n", line_count, filename);
                    return 0;
                }
                axiom.push_back(tmp);
                printf("put %s in axiom vector\n", axiom[axiom.size() - 1]);
                // increment buffer pointer
                bufferp++;
            }
        }

    My axiom vector is defined as vector<char *> axiom;. When I run my program, I get a seg fault. It happens when I do the sscanf. Any suggestions on what I'm doing wrong?
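
    A hedged diagnosis: %c stores a character through its argument, so sscanf needs the address of a real char; here tmp is an uninitialized char *, and sscanf writes through that garbage pointer. Since single characters are being stored, the vector should also hold char rather than char *. A sketch (the extra '\0' check guards against a final line with no newline):

        std::vector<char> axiom;            // chars, not pointers

        while (*bufferp != '\n' && *bufferp != '\0') {
            char tmp;                       // a real char to write into
            if (sscanf(bufferp, "%c", &tmp) != 1) {
                // (error handling as before)
            }
            axiom.push_back(tmp);
            printf("put %c in axiom vector\n", axiom[axiom.size() - 1]);
            bufferp++;
        }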

  • Can someone tell me why I am seg faulting in this simple C program?

    - by user299648
    I keep getting a seg fault after I end my first for loop, and for the life of me I don't know why. The file I'm scanning is just 18 strings on 18 lines. I think the problem is the way I'm mallocing the double pointer called picks, but I don't know exactly why. I'm only trying to scanf strings that are less than 15 chars long, so I don't see the problem. Can someone please help?

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_LENGTH 100

        int main(int argc, char *argv[])
        {
            char *string = malloc(15 * sizeof(char));
            char **picks = malloc(15 * sizeof(char *));
            FILE *pick_file = fopen(argv[l], "r");
            int num_picks;

            for (num_picks = 0; fgets(string, MAX_LENGTH, pick_file) != NULL; num_picks++) {
                scanf("%s", picks + num_picks);
            }
            // this is where I seg fault

            int x;
            for (x = 0; x < num_picks; x++)
                printf("s\n", picks + x);
        }
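
    Several things conspire here; a hedged rundown: string is 15 bytes but fgets() is told it may write up to MAX_LENGTH (100); picks has slots for only 15 pointers while the file has 18 lines, and no picks[i] buffer is ever allocated, so scanf("%s", picks + num_picks) scribbles over the pointer array; the final printf is missing its %; and argv[l] looks like it was meant to be argv[1]. A sketch that keeps the line fgets() just read instead of calling scanf() again (strdup() is POSIX):

        char string[MAX_LENGTH];
        char **picks = malloc(20 * sizeof(char *));   /* room for all 18 lines */
        FILE *pick_file = fopen(argv[1], "r");
        int num_picks;

        for (num_picks = 0; num_picks < 20 &&
                 fgets(string, MAX_LENGTH, pick_file) != NULL; num_picks++) {
            string[strcspn(string, "\n")] = '\0';     /* strip the newline */
            picks[num_picks] = strdup(string);        /* own copy of each line */
        }

        for (int x = 0; x < num_picks; x++)
            printf("%s\n", picks[x]);                 /* "%s", not "s" */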

  • What characters are NOT escaped with a mysqli prepared statement?

    - by barfoon
    Hey everyone, I'm trying to harden some of my PHP code and use mysqli prepared statements to better validate user input and prevent injection attacks. I switched away from mysqli_real_escape_string as it does not escape % and _. However, when I create my query as a mysqli prepared statement, the same flaw is still present. The query pulls a user's salt value based on their username; I'd do something similar for passwords and other lookups. Code:

        $db = new sitedatalayer();

        if ($stmt = $db->_conn->prepare("SELECT `salt` FROM admins WHERE `username` LIKE ? LIMIT 1")) {
            $stmt->bind_param('s', $username);
            $stmt->execute();
            $stmt->bind_result($salt);

            while ($stmt->fetch()) {
                printf("%s\n", $salt);
            }

            $stmt->close();
        } else return false;

    Am I composing the statement correctly? If I am, what other characters need to be examined? What other flaws are there? What is best practice for doing these types of selects? Thanks,
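
    Worth separating two issues, as a hedged note: bound parameters already prevent injection, and no quoting is needed for that; % and _ are simply LIKE pattern metacharacters, so a username such as 'a%' matches more rows than intended. For an exact lookup, compare with = instead, or neutralize the wildcards when a LIKE pattern is genuinely wanted:

        // Exact match: wildcards have no special meaning with '='.
        $stmt = $db->_conn->prepare(
            "SELECT `salt` FROM admins WHERE `username` = ? LIMIT 1");

        // Or, if a LIKE pattern is really required, escape the
        // metacharacters in the user-supplied value first:
        $escaped = addcslashes($username, '%_\\');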

  • Haskell lazy I/O and closing files

    - by Jesse
    I've written a small Haskell program to print the MD5 checksums of all files in the current directory (searched recursively), basically a Haskell version of md5deep. All is fine and dandy, except that if the current directory has a very large number of files, I get an error like:

        <program>: <currentFile>: openBinaryFile: resource exhausted (Too many open files)

    It seems Haskell's laziness is causing it not to close files, even after the corresponding line of output has been completed. The relevant code is below; the function of interest is getList.

        import qualified Data.ByteString.Lazy as BS

        main :: IO ()
        main = putStr . unlines =<< getList "."

        getList :: FilePath -> IO [String]
        getList p =
            let getFileLine path =
                    liftM (\c -> (hex $ hash $ BS.unpack c) ++ " " ++ path)
                          (BS.readFile path)
            in mapM getFileLine =<< getRecursiveContents p

        hex :: [Word8] -> String
        hex = concatMap (\x -> printf "%0.2x" (toInteger x))

        getRecursiveContents :: FilePath -> IO [FilePath]
        -- ^ Just gets the paths to all the files in the given directory.

    Are there any ideas on how I could solve this problem? The entire program is available here: http://haskell.pastebin.com/PAZm0Dcb
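
    A hedged sketch of one fix: lazy BS.readFile closes its handle only when the whole file has been consumed, and since the hashing happens on demand, many files can sit half-read with open handles. Reading each file strictly closes the handle before the next file is opened, at the cost of holding one whole file in memory at a time:

        import qualified Data.ByteString as Strict

        getFileLine :: FilePath -> IO String
        getFileLine path = do
            c <- Strict.readFile path   -- strict: reads fully, closes the handle
            return (hex (hash (Strict.unpack c)) ++ " " ++ path)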

  • SQLite iPhone data insertion problem

    - by Asad Khan
    Hi, I have a function which basically tries to insert some data returned from a REST call:

        - (void)syncLocalDatabase {
            NSString *file = [[NSBundle mainBundle] pathForResource:@"pickuplines" ofType:@"db"];
            NSMutableString *query = [[NSMutableString alloc] initWithFormat:@""];
            sqlite3 *database = NULL;
            char *errorMsg = NULL;

            if (sqlite3_open([file UTF8String], &database) == SQLITE_OK) {
                for (PickUpLine *pickupline in pickUpLines) {
                    [query appendFormat:@"INSERT INTO pickuplines VALUES(%d,%d,%d,'%@','YES')",
                        pickupline.line_id, pickupline.thumbsUps, pickupline.thumbsDowns,
                        [pickupline.line stringByReplacingOccurrencesOfString:@"'" withString:@"`"]];
                    NSLog(query);
                    int result = sqlite3_exec(database, [query UTF8String], NULL, NULL, &errorMsg);
                    if (result != SQLITE_OK) {
                        printf("\n%s", errorMsg);
                        sqlite3_free(errorMsg);
                    }
                    //sqlite3_step([query UTF8String]);
                    [query setString:@""];
                } // end for
            } // end if
            [query release];
            sqlite3_close(database);
        }

    Everything seems fine, and the query string in the log statement also looks right, but the data does not get inserted, whereas a counterpart of this function for a SELECT statement works well. Here is the counterpart:

        - (void)loadLinesFromDatabase {
            NSString *file = [[NSBundle mainBundle] pathForResource:@"pickuplines" ofType:@"db"];
            sqlite3 *database = NULL;
            if (sqlite3_open([file UTF8String], &database) == SQLITE_OK) {
                sqlite3_exec(database, "SELECT * FROM pickuplines", MyCallback, linesFromDatabase, NULL);
            }
            sqlite3_close(database);
        }

    I have implemented the callback and it works fine. I am a little new to SQLite; can someone please point out what I am doing wrong? Thanks.
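
    A hedged guess at the cause, given that SELECT works while INSERT fails: on the device the application bundle is read-only, so a database opened from pathForResource: can be read but not written. The usual pattern is to copy the bundled database into the Documents directory once and open the writable copy:

        NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                              NSUserDomainMask, YES) objectAtIndex:0];
        NSString *writablePath = [docs stringByAppendingPathComponent:@"pickuplines.db"];
        NSFileManager *fm = [NSFileManager defaultManager];

        if (![fm fileExistsAtPath:writablePath]) {
            NSString *bundled = [[NSBundle mainBundle] pathForResource:@"pickuplines"
                                                                ofType:@"db"];
            [fm copyItemAtPath:bundled toPath:writablePath error:NULL];
        }
        // open writablePath instead of the bundle path
        sqlite3_open([writablePath UTF8String], &database);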

  • What is the minimal licensable source code?

    - by Hernán Eche
    Let's suppose I want to "protect" this code from being used without attribution, from being patented, or through some open source licence...

        #include <stdio.h>

        int main(void)
        {
            int version = 2;
            printf("\r\n.Hello world, ver:(%d).", version);
            return 0;
        }

    It's a little obvious, just a language-definition example. When does source code stop being "trivial, banal, commonplace, obvious" and start being something over which you may claim "rights"? Perhaps it depends on who reads it: something that would seem like great genius to someone who has never programmed could be merely obvious to an expert. It's easy when two sources share 10,000 identical lines of code; that's theft. But it's not always so obvious. How do you measure the amount of "ownness"? Is it about creativity? Line count? Complexity? I can't imagine objective answers for that, only some patches. For example, perhaps complexity: it's not fair to replace "years of engineering" with "copy and paste". But is there any objective index for an objective determination of this subject? (As a joke, I imagine this criterion: if the licence is longer than the code, then there is no owner, just to punish not caring about storage space and world resources =P)

  • gcc -finline-functions behaviour?

    - by user176168
    I'm using gcc with the -finline-functions optimization for release builds. To combat code bloat (I work on an embedded system), I want to say: don't inline particular functions. The obvious way to do this would be through function attributes, i.e. __attribute__((noinline)). The problem is that this doesn't seem to work when I switch on the global -finline-functions optimization, which is part of the -O3 switch. It also has something to do with the function being templated, since a non-templated version of the same function does not get inlined, as expected. Does anybody have any idea how to control inlining when this global switch is on? Here's the code:

        #include <cstdlib>
        #include <iostream>

        using namespace std;

        class Base
        {
        public:
            template<typename _Type_>
            static _Type_ fooT(_Type_ x, _Type_ y) __attribute__((noinline));
        };

        template<typename _Type_>
        _Type_ Base::fooT(_Type_ x, _Type_ y)
        {
            asm("");
            return x + y;
        }

        int main(int argc, char *argv[])
        {
            int test = Base::fooT(1, 2);
            printf("test = %d\n", test);

            system("PAUSE");
            return EXIT_SUCCESS;
        }

  • Reference a GNU C DLL built in GCC against Cygwin, from C#/NET

    - by Dale Halliwell
    Here is what I want: I have a huge legacy C/C++ codebase written for POSIX, including some very POSIX-specific stuff like pthreads. It can be compiled with Cygwin/GCC and run as an executable under Windows with the Cygwin DLL. What I would like to do is build the codebase itself into a Windows DLL that I can then reference from C#, and write a wrapper around it to access some parts of it programmatically. I have tried this approach with the very simple "hello world" example at http://www.cygwin.com/cygwin-ug-net/dll.html and it doesn't seem to work.

        #include <stdio.h>

        extern "C" __declspec(dllexport) int hello();

        int hello()
        {
            printf("Hello World!\n");
            return 42;
        }

    I believe I should be able to reference a DLL built from the above code in C# using something like:

        [DllImport("kernel32.dll")]
        public static extern IntPtr LoadLibrary(string dllToLoad);

        [DllImport("kernel32.dll")]
        public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);

        [DllImport("kernel32.dll")]
        public static extern bool FreeLibrary(IntPtr hModule);

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        private delegate int hello();

        static void Main(string[] args)
        {
            var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "helloworld.dll");
            IntPtr pDll = LoadLibrary(path);
            IntPtr pAddressOfFunctionToCall = GetProcAddress(pDll, "hello");
            hello hello = (hello)Marshal.GetDelegateForFunctionPointer(
                pAddressOfFunctionToCall, typeof(hello));
            int theResult = hello();
            Console.WriteLine(theResult.ToString());
            bool result = FreeLibrary(pDll);
            Console.ReadKey();
        }

    But this approach doesn't seem to work: LoadLibrary returns null. It can find the DLL (helloworld.dll); it just seems unable to load it or find the exported function. I am sure that if I get this basic case working, I can reference the rest of my codebase this way. Any suggestions or pointers, or does anyone know if what I want is even possible? Thanks.
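
    A hedged first step: ask Windows why LoadLibrary failed, since a zero handle on its own leaves you guessing. A very common cause with Cygwin-built DLLs is that cygwin1.dll, the runtime every Cygwin binary depends on, is not on the loader's search path:

        [DllImport("kernel32.dll", SetLastError = true)]
        public static extern IntPtr LoadLibrary(string dllToLoad);

        // ...
        IntPtr pDll = LoadLibrary(path);
        if (pDll == IntPtr.Zero)
        {
            // With SetLastError = true this returns the Win32 error code,
            // e.g. 126 (ERROR_MOD_NOT_FOUND) when a dependency such as
            // cygwin1.dll cannot be located.
            Console.WriteLine("LoadLibrary failed: {0}", Marshal.GetLastWin32Error());
        }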

  • Converting C code to CUDA C gives unpredictable results

    - by Abhi
    (Only part of the posted listing survives; the #include/#define block and the two grid-update functions are garbled in the source, leaving fragments that mention pi, a 2.25 MHz frequency, grid steps dh and dt derived from the wavelength, elastic constants lambda and mu, and stencil updates of the 2D arrays U, V, X, Y, Z. The legible portion is the driver:)

        int main() {
            clock_t start, end;
            long double time_taken;
            start = clock();

            long double **X, **Y, **U, **V, **Z;
            int n = 1;
            X = Make2DDoubleArray(xno + 2, yno + 2);
            Y = Make2DDoubleArray(xno + 2, yno + 2);
            Z = Make2DDoubleArray(xno + 1, yno + 1);
            U = Make2DDoubleArray(xno + 2, yno + 2);
            V = Make2DDoubleArray(xno + 2, yno + 2);

            for (n = 1; n <= timesteps; n++) {
                /* time-stepping calls lost in the original post */
            }

            end = clock();
            time_taken = (long double)(end - start) / CLOCKS_PER_SEC;
            printf("Time elapsed is %Lf\nGRID Size:%Lf*%Lf\nTime Steps Taken:%d\n",
                   time_taken, (xno), floor(yno), n);
            return 0;
        }

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with one more PL/pgSQL question. I have a PHP script, run as a daily cron job, that deletes old records from one main table and a few further tables referencing its "id" column:

        create or replace function quincytrack_clean()
        returns integer as $BODY$
        begin
            create temp table old_ids (id varchar(20)) on commit drop;

            insert into old_ids
            select id from quincytrack
            where age(QDATETIME) > interval '30 days';

            delete from hide_id where id in (select id from old_ids);
            delete from related_mks where id in (select id from old_ids);
            delete from related_cl where id in (select id from old_ids);
            delete from related_comment where id in (select id from old_ids);
            delete from quincytrack where id in (select id from old_ids);

            return select count(*) from old_ids;
        end;
        $BODY$ language plpgsql;

    And here is how I call it from the PHP script:

        $sth = $pg->prepare('select quincytrack_clean()');
        $sth->execute();
        if ($row = $sth->fetch(PDO::FETCH_ASSOC))
            printf("removed %u old rows\n", $row['count']);

    Why do I get the following error?

        SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select"
        at character 9
        QUERY: SELECT select count(*) from old_ids
        CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23

    Thank you! Alex
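
    A hedged explanation: PL/pgSQL wraps the expression after RETURN in an implicit SELECT (hence the doubled "SELECT select" in the error message), so a bare query is not allowed there. Parenthesizing it as a scalar subquery, or selecting into a variable first, should work:

        -- either parenthesize, making it a scalar subquery...
        return (select count(*) from old_ids);

        -- ...or capture the value first (the variable goes in the
        -- declare section before begin):
        declare
            removed integer;
        ...
        select count(*) into removed from old_ids;
        return removed;

    (On the PHP side, note the result column will be named quincytrack_clean rather than count, unless the query aliases it: select quincytrack_clean() as count.)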

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand the limits to parallelization on a 48-core system (4 x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main() {
            const uint64_t umin = 1;
            const uint64_t umax = 10000000000LL;
            double sum = 0.;
            #pragma omp parallel for reduction(+:sum)
            for (uint64_t u = umin; u < umax; u++)
                sum += 1. / u / u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9 s for the code to run with 48 threads, 3.1 s with 36 threads, 3.7 s with 24 threads, 4.9 s with 12 threads, and 57 s with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19~20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies finish after 2.9 s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?
