Search Results

Search found 861 results on 35 pages for 'argc'.

Page 29 of 35

  • Multiplying matrices: error: expected primary-expression before 'struct'

    - by justin
    I am trying to write a program that is supposed to multiply matrices using threads. I am supposed to fill the matrices with random numbers in a thread. I am compiling with g++ and using pthreads. I have also created a struct to pass the data from my command-line input to the thread so it can generate the matrix of random numbers. The sizes of the two matrices are also passed on the command line. I keep getting: main.cpp:7: error: expected primary-expression before 'struct'. My code at line 7 is: struct a{ int Arow; int Acol; int low; int high; }; My inputs are: the sizes of the two matrices (4 arguments) and the high and low range between which to generate the random numbers. Complete code: [headers] using namespace std; void *matrixACreate(struct *); void *status; int main(int argc, char * argv[]) { int Arow = atoi(argv[1]); // Matrix A int Acol = atoi(argv[2]); // WxX int Brow = atoi(argv[3]); // Matrix B int Bcol = atoi(argv[4]); // XxZ, int low = atoi(argv[5]); // Range low int high = atoi(argv[6]); struct a{ int Arow; // Matrix A int Acol; // WxX int low; // Range low int high; }; pthread_t matrixAthread; //pthread_t matrixBthread; pthread_t runner; int error, retValue; if (Acol != Brow) { cout << " This matrix cannot be multiplied. FAIL" << endl; return 0; } error = pthread_create(&matrixAthread, NULL, matrixACreate, struct *a); //error = pthread_create(&matrixAthread, NULL, matrixBCreate, sendB); retValue = pthread_join(matrixAthread, &status); //retValue = pthread_join(matrixBthread, &status); return 0; } void matrixACreate(struct * a) { struct a *data = (struct a *) malloc(sizeof(struct a)); data->Arow = Arow; data->Acol = Acol; data->low = low; data->high = high; int range = ((high - low) + 1); cout << Arow << endl<< Acol << endl; }// just trying to print to see if I am in the thread
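
    A likely cause of the line-7 error is that "struct *" is not a type name: the prototype void *matrixACreate(struct *); and the expression struct *a handed to pthread_create are not valid C++. A minimal sketch of the usual pattern, assuming the same field names (the struct is defined once before any prototype, filled in, and its address is passed to the thread):

        #include <pthread.h>
        #include <cstdlib>
        #include <iostream>

        // Define the struct first, then refer to it by name everywhere.
        struct ThreadArgs {
            int Arow;
            int Acol;
            int low;
            int high;
        };

        // The thread routine must match pthread_create's expected signature.
        void *matrixACreate(void *arg) {
            ThreadArgs *data = static_cast<ThreadArgs *>(arg);
            std::cout << data->Arow << '\n' << data->Acol << '\n';
            return NULL;
        }

        int main(int argc, char *argv[]) {
            if (argc < 7) return 1;
            // Fill the argument struct from the command line and pass its address.
            ThreadArgs args = { std::atoi(argv[1]), std::atoi(argv[2]),
                                std::atoi(argv[5]), std::atoi(argv[6]) };
            pthread_t matrixAthread;
            pthread_create(&matrixAthread, NULL, matrixACreate, &args);
            pthread_join(matrixAthread, NULL);
            return 0;
        }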

    Read the article

  • Calling cdecl Functions That Have Different Numbers of Arguments

    - by KlaxSmashing
    I have functions that I wish to call based on some input. Each function has different number of arguments. In other words, if (strcmp(str, "funcA") == 0) funcA(a, b, c); else if (strcmp(str, "funcB") == 0) funcB(d); else if (strcmp(str, "funcC") == 0) funcC(f, g); This is a bit bulky and hard to maintain. Ideally, these are variadic functions (e.g., printf-style) and can use varargs. But they are not. So exploiting the cdecl calling convention, I am stuffing the stack via a struct full of parameters. I'm wondering if there's a better way to do it. Note that this is strictly for in-house (e.g., simple tools, unit tests, etc.) and will not be used for any production code that might be subjected to malicious attacks. Example: #include <stdio.h> typedef struct __params { unsigned char* a; unsigned char* b; unsigned char* c; } params; int funcA(int a, int b) { printf("a = %d, b = %d\n", a, b); return a; } int funcB(int a, int b, const char* c) { printf("a = %d, b = %d, c = %s\n", a, b, c); return b; } int funcC(int* a) { printf("a = %d\n", *a); *a *= 2; return 0; } typedef int (*f)(params); int main(int argc, char**argv) { int val; int tmp; params myParams; f myFuncA = (f)funcA; f myFuncB = (f)funcB; f myFuncC = (f)funcC; myParams.a = (unsigned char*)100; myParams.b = (unsigned char*)200; val = myFuncA(myParams); printf("val = %d\n", val); myParams.c = (unsigned char*)"This is a test"; val = myFuncB(myParams); printf("val = %d\n", val); tmp = 300; myParams.a = (unsigned char*)&tmp; val = myFuncC(myParams); printf("a = %d, val = %d\n", tmp, val); return 0; } Output: gcc -o func func.c ./func a = 100, b = 200 val = 100 a = 100, b = 200, c = This is a test val = 200 a = 300 a = 600, val = 0
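
    One alternative that avoids leaning on the calling convention (still in-house quality, not hardened) is a small dispatch table of uniform wrapper functions: each wrapper unpacks only the fields its target needs, so the compiler checks every real call. A sketch, assuming the same funcA/funcB/funcC signatures as above:

        #include <cstdio>
        #include <cstring>

        // Targets as in the question (bodies shortened).
        int funcA(int a, int b)                { std::printf("a=%d b=%d\n", a, b); return a; }
        int funcB(int a, int b, const char *c) { std::printf("a=%d b=%d c=%s\n", a, b, c); return b; }
        int funcC(int *a)                      { *a *= 2; return 0; }

        struct params { int a; int b; const char *c; int *p; };

        // Uniform wrappers: the only place that knows how to unpack params per target.
        static int callA(const params &p) { return funcA(p.a, p.b); }
        static int callB(const params &p) { return funcB(p.a, p.b, p.c); }
        static int callC(const params &p) { return funcC(p.p); }

        struct entry { const char *name; int (*fn)(const params &); };
        static const entry table[] = { {"funcA", callA}, {"funcB", callB}, {"funcC", callC} };

        int dispatch(const char *str, const params &p) {
            for (std::size_t i = 0; i < sizeof table / sizeof table[0]; ++i)
                if (std::strcmp(str, table[i].name) == 0)
                    return table[i].fn(p);
            return -1;  // unknown function name
        }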

    Read the article

  • gcc optimization? bug? and its practical implication to project

    - by kumar_m_kiran
    Hi All, My questions are divided into three parts. Question 1: Consider the code below, #include <iostream> using namespace std; int main( int argc, char *argv[]) { const int v = 50; int i = 0X7FFFFFFF; cout<<(i + v)<<endl; if ( i + v < i ) { cout<<"Number is negative"<<endl; } else { cout<<"Number is positive"<<endl; } return 0; } No specific compiler optimisation options or -O flags are used; the basic compilation command g++ -o test main.cpp is used to form the executable. This seemingly very simple code has odd behaviour on SUSE 64-bit OS, gcc version 4.1.2. The expected output is "Number is negative"; instead, only on SUSE 64-bit OS, the output is "Number is positive". After some analysis and doing a 'disass' of the code, I find that the compiler optimises in the following way: since i is the same on both sides of the comparison and cannot change within the same expression, it removes 'i' from the equation. The comparison then reduces to if ( v < 0 ), where v is a positive constant, so during compilation itself the address of the else-branch cout is placed in the register; no cmp/jmp instructions can be found. I see this behaviour only with gcc 4.1.2 on SUSE 10. When tried on AIX 5.1/5.3 and HP IA64, the result is as expected. Is the above optimisation valid? Or is using the overflow mechanism for int not a valid use case? Question 2: When I change the conditional statement from if (i + v < i) to if ( (i + v) < i ), the behaviour is still the same. With this at least I would personally disagree: since additional parentheses are provided, I expect the compiler to create a temporary built-in variable and then compare, thus nullifying the optimisation. Question 3: Suppose I have a huge code base and I migrate my compiler version; such a bug/optimisation can cause havoc in my system behaviour. Of course, from a business perspective it is very inefficient to test all lines of code again just because of a compiler upgrade. I think for all practical purposes these kinds of errors are very difficult to catch (during an upgrade) and will invariably leak into the production site. Can anyone suggest any possible way to ensure that this kind of bug/optimisation does not have any impact on my existing system/code base? PS: When the const on v is removed from the code, the optimisation is not done by the compiler. I believe it is perfectly fine to use the overflow mechanism to find whether the variable is beyond MAX - 50 (in my case).
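
    For question 1: gcc is allowed to assume that signed int addition never overflows (overflow is undefined behaviour), so i + v < i may legitimately be folded to false; that is also why the extra parentheses in question 2 change nothing, the problem is the addition itself, not the grouping. A portable check compares against the limits before adding, for example:

        #include <climits>
        #include <iostream>

        // True if a + b would overflow int; the addition is never performed.
        bool add_would_overflow(int a, int b) {
            if (b > 0) return a > INT_MAX - b;
            if (b < 0) return a < INT_MIN - b;
            return false;
        }

        int main() {
            const int v = 50;
            int i = 0x7FFFFFFF;
            if (add_would_overflow(i, v))
                std::cout << "Number would wrap (overflow)" << std::endl;
            else
                std::cout << "Number is positive" << std::endl;
            return 0;
        }

    For question 3, compiling the whole code base with -fwrapv (which gives signed overflow defined wrap-around semantics) or auditing with -Wstrict-overflow warnings during the compiler migration are two practical ways to flush out code that relies on overflow.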

    Read the article

  • zlib gzgets extremely slow?

    - by monkeyking
    I'm doing stuff related to parsing huge globs of textfiles, and was testing what input method to use. There is not much of a difference using c++ std::ifstreams vs c FILE, According to the documentation of zlib, it supports uncompressed files, and will read the file without decompression. I'm seeing a difference from 12 seconds using non zlib to more than 4 minutes using zlib.h This I've tested doing multiple runs, so its not a disk cache issue. Am I using zlib in some wrong way? thanks #include <zlib.h> #include <cstdio> #include <cstdlib> #include <fstream> #define LENS 1000000 size_t fg(const char *fname){ fprintf(stderr,"\t-> using fgets\n"); FILE *fp =fopen(fname,"r"); size_t nLines =0; char *buffer = new char[LENS]; while(NULL!=fgets(buffer,LENS,fp)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } size_t is(const char *fname){ fprintf(stderr,"\t-> using ifstream\n"); std::ifstream is(fname,std::ios::in); size_t nLines =0; char *buffer = new char[LENS]; while(is. getline(buffer,LENS)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } size_t iz(const char *fname){ fprintf(stderr,"\t-> using zlib\n"); gzFile fp =gzopen(fname,"r"); size_t nLines =0; char *buffer = new char[LENS]; while(0!=gzgets(fp,buffer,LENS)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } int main(int argc,char**argv){ if(atoi(argv[2])==0) fg(argv[1]); if(atoi(argv[2])==1) is(argv[1]); if(atoi(argv[2])==2) iz(argv[1]); }
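
    gzgets goes through a fairly small internal buffer, which can make line-at-a-time reads far slower than fgets on an uncompressed file. Two things worth trying, sketched below on the assumption of zlib 1.2.4 or newer: enlarge the internal buffer with gzbuffer, and/or read large chunks with gzread and count newlines yourself:

        #include <zlib.h>
        #include <cstdio>

        size_t iz_chunked(const char *fname) {
            gzFile fp = gzopen(fname, "r");
            if (!fp) return 0;
            gzbuffer(fp, 256 * 1024);           // bigger internal buffer (zlib >= 1.2.4)
            static char buf[1 << 20];           // 1 MiB read chunks
            size_t nLines = 0;
            int n;
            while ((n = gzread(fp, buf, sizeof buf)) > 0)
                for (int i = 0; i < n; i++)     // count newlines instead of one gzgets per line
                    if (buf[i] == '\n') nLines++;
            gzclose(fp);
            std::fprintf(stderr, "%lu\n", (unsigned long)nLines);
            return nLines;
        }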

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly...a lot of junk gets appended to the output. How do I fix this? // wmfParser.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include "wmfParser.h" #include <cstring> #ifdef _DEBUG #define new DEBUG_NEW #endif // The one and only application object CWinApp theApp; using namespace std; int _tmain(int argc, TCHAR* argv[], TCHAR* envp[]) { int nRetCode = 0; // initialize MFC and print and error on failure if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0)) { // TODO: change error code to suit your needs _tprintf(_T("Fatal Error: MFC initialization failed\n")); nRetCode = 1; } else { // TODO: code your application's behavior here. CFile file; CFileException exp; if( !file.Open( _T("c:\\sample.txt"), CFile::modeRead, &exp ) ){ exp.ReportError(); cout<<'\n'; cout<<"Aborting..."; system("pause"); return 0; } ULONGLONG dwLength = file.GetLength(); cout<<"Length of file to read = " << dwLength << '\n'; /* BYTE* buffer; buffer=(BYTE*)calloc(dwLength, sizeof(BYTE)); file.Read(buffer, 25); char* str = (char*)buffer; cout<<"length of string : " << strlen(str) << '\n'; cout<<"string from file: " << str << '\n'; */ char str[100]; file.Read(str, sizeof(str)); cout << "Data : " << str <<'\n'; file.Close(); cout<<"File was closed\n"; //AfxMessageBox(_T("This is a test message box")); system("pause"); } return nRetCode; }
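
    The junk is most likely because CFile::Read does not null-terminate the buffer: printing str streams whatever bytes happen to follow the data that was actually read. A sketch of the usual fix for the else branch, using the byte count that Read returns (names as in the question):

        ULONGLONG dwLength = file.GetLength();
        char *buf = new char[(size_t)dwLength + 1];   // +1 leaves room for the terminator
        UINT bytesRead = file.Read(buf, (UINT)dwLength);
        buf[bytesRead] = '\0';                        // Read() does not terminate the data itself
        cout << "Data : " << buf << '\n';
        delete[] buf;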

    Read the article

  • Running out of memory... How?

    - by maxdj
    I'm attempting to write a solver for a particular puzzle. It tries to find a solution by trying every possible move one at a time until it finds a solution. The first version tried to solve it depth-first by continually trying moves until it failed, then backtracking, but this turned out to be too slow. I have rewritten it to be breadth-first using a queue structure, but I'm having problems with memory management. Here are the relevant parts: int main(int argc, char *argv[]) { ... int solved = 0; do { solved = solver(queue); } while (!solved && !pblListIsEmpty(queue)); ... } int solver(PblList *queue) { state_t *state = (state_t *) pblListPoll(queue); if (is_solution(state->pucks)) { print_solution(state); return 1; } state_t *state_cp; puck new_location; for (int p = 0; p < puck_count; p++) { for (dir i = NORTH; i <= WEST; i++) { if (!rules(state->pucks, p, i)) continue; new_location = in_dir(state->pucks, p, i); if (new_location.x != -1) { state_cp = (state_t *) malloc(sizeof(state_t)); state_cp->move.from = state->pucks[p]; state_cp->move.direction = i; state_cp->prev = state; state_cp->pucks = (puck *) malloc (puck_count * sizeof(puck)); memcpy(state_cp->pucks, state->pucks, puck_count * sizeof(puck)); /*CRASH*/ state_cp->pucks[p] = new_location; pblListPush(queue, state_cp); } } } return 0; } When I run it I get the error: ice(90175) malloc: *** mmap(size=2097152) failed (error code=12) *** error: can't allocate region *** set a breakpoint in malloc_error_break to debug Bus error The error happens around iteration 93,000. From what I can tell, the error message is from malloc failing, and the bus error is from the memcpy after it. I have a hard time believing that I'm running out of memory, since each game state is only ~400 bytes. Yet that does seem to be what's happening, seeing as the activity monitor reports that it is using 3.99GB before it crashes. I'm using http://www.mission-base.com/peter/source/ for the queue structure (it's a linked list). Clearly I'm doing something dumb. Any suggestions?

    Read the article

  • Problem separating C++ code into header, inline functions, and source files.

    - by YuppieNetworking
    Hello all, I have the simplest code that I want to separate in three files: Header file: class and struct declarations. No implementations at all. Inline functions file: implementation of inline methods in header. Code file: normal C++ code for more complicated implementations. When I was about to implement an operator[] method, I couldn't manage to compile it. Here is a minimal example that shows the same problem: Header (myclass.h): #ifndef _MYCLASS_H_ #define _MYCLASS_H_ class MyClass { public: MyClass(const int n); virtual ~MyClass(); double& operator[](const int i); double operator[](const int i) const; void someBigMethod(); private: double* arr; }; #endif /* _MYCLASS_H_ */ Inline functions (myclass-inl.h): #include "myclass.h" inline double& MyClass::operator[](const int i) { return arr[i]; } inline double MyClass::operator[](const int i) const { return arr[i]; } Code (myclass.cpp): #include "myclass.h" #include "myclass-inl.h" #include <iostream> inline MyClass::MyClass(const int n) { arr = new double[n]; } inline MyClass::~MyClass() { delete[] arr; } void MyClass::someBigMethod() { std::cout << "Hello big method that is not inlined" << std::endl; } And finally, a main to test it all: #include "myclass.h" #include <iostream> using namespace std; int main(int argc, char *argv[]) { MyClass m(123); double x = m[1]; m[1] = 1234; cout << "m[1]=" << m[1] << endl; x = x + 1; return 0; } void nothing() { cout << "hello world" << endl; } When I compile it, it says: main.cpp:(.text+0x1b): undefined reference to 'MyClass::MyClass(int)' main.cpp:(.text+0x2f): undefined reference to 'MyClass::operator[](int)' main.cpp:(.text+0x49): undefined reference to 'MyClass::operator[](int)' main.cpp:(.text+0x65): undefined reference to 'MyClass::operator[](int)' However, when I move the main method to the MyClass.cpp file, it works. Could you guys help me spot the problem? Thank you.
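
    The undefined references appear because inline definitions have to be visible in every translation unit that uses them: main.cpp only includes myclass.h, so it never sees operator[], and marking the constructor/destructor inline inside myclass.cpp hides them from every other file. A common layout, sketched with the same file names, is to pull the inline file in at the end of the header and keep only genuinely out-of-line code in the .cpp:

        // myclass.h (end of file)
        #include "myclass-inl.h"    // inline definitions travel with the header
        #endif /* _MYCLASS_H_ */

        // myclass-inl.h -- included *by* the header, so it does not include myclass.h itself
        inline MyClass::MyClass(const int n) : arr(new double[n]) {}
        inline MyClass::~MyClass() { delete[] arr; }
        inline double& MyClass::operator[](const int i) { return arr[i]; }
        inline double  MyClass::operator[](const int i) const { return arr[i]; }

        // myclass.cpp -- only non-inline members, with no 'inline' keyword
        #include "myclass.h"
        #include <iostream>
        void MyClass::someBigMethod() { std::cout << "Hello big method that is not inlined" << std::endl; }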

    Read the article

  • Making Global Struct in C++ Program

    - by mosg
    Hello world! I am trying to make global structure, which will be seen from any part of the source code. I need it for my big Qt project, where some global variables needed. Here it is: 3 files (global.h, dialog.h & main.cpp). For compilation I use Visual Studio (Visual C++). global.h #ifndef GLOBAL_H_ #define GLOBAL_H_ typedef struct TNumber { int g_nNumber; } TNum; TNum Num; #endif dialog.h #ifndef DIALOG_H_ #define DIALOG_H_ #include <iostream> #include "global.h" using namespace std; class ClassB { public: ClassB() {}; void showNumber() { Num.g_nNumber = 82; cout << "[ClassB][Change Number]: " << Num.g_nNumber << endl; } }; #endif and main.cpp #include <iostream> #include "global.h" #include "dialog.h" using namespace std; class ClassA { public: ClassA() { cout << "Hello from class A!\n"; }; void showNumber() { cout << "[ClassA]: " << Num.g_nNumber << endl; } }; int main(int argc, char **argv) { ClassA ca; ClassB cb; ca.showNumber(); cb.showNumber(); ca.showNumber(); cout << "Exit.\n"; return 0; } When I`m trying to build this little application, compilation works fine, but the linker gives me back an error: 1>dialog.obj : error LNK2005: "struct TNumber Num" (?Num@@3UTNumber@@A) already defined in main.obj Is there exists any solution? Thanks.
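
    The LNK2005 happens because TNum Num; in global.h is a definition, so every .cpp that includes the header defines its own copy of Num and the linker sees it twice. The usual fix is to declare it extern in the header and define it exactly once in a source file, sketched here:

        // global.h
        #ifndef GLOBAL_H_
        #define GLOBAL_H_
        typedef struct TNumber {
            int g_nNumber;
        } TNum;
        extern TNum Num;   // declaration only: no storage is allocated here
        #endif

        // main.cpp (or any single .cpp file)
        #include "global.h"
        TNum Num;          // the one and only definition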

    Read the article

  • C++ compile error

    - by Niranjan
    Hi, I am trying to develop abstract design pattern code for one of my project as below.. But, I am not able to compile the code ..giving some compile errors(like "conversion from 'ProductA1 *' to 'ProductA *' exists, but is inaccessible" ).. Can any one please help me out in this... #include "stdafx.h" #include <iostream> using namespace std; class ProductA { public: virtual void Operation1()=0; virtual void Operation2()=0; }; class ProductA1 : ProductA { public: virtual void Operation1() {cout<<"PD ProductA1 Operation1"<<endl; } virtual void Operation2() {cout<<"PD ProductA1 Operation2"<<endl; } }; class ProductA2 : ProductA { public: virtual void Operation1() {cout<<"DT ProductA2 Operation1"<<endl; } virtual void Operation2() {cout<<"DT ProductA2 Operation2"<<endl; } }; //------------------------------------------------------------- class ProductB { public: virtual void Operation3()=0; virtual void Operation4()=0; }; class ProductB1 : ProductB { public: void Operation3() { cout<<"PD ProductB1 Operation3"<<endl; } void Operation4() { cout<<"PD ProductB1 Operation4"<<endl; } }; class ProductB2 : ProductB { public: void Operation3() { cout<<"DT ProductB2 Operation3"<<endl; } void Operation4() { cout<<"DT ProductB2 Operation4"<<endl; } }; //--------------- abstrct factory --------------------------- class Factory { public: virtual ProductA* CreateA () =0; virtual ProductB* CreateB ()=0; }; class Factory1 : Factory { public: ProductA* CreateA () { return new ProductA1(); } ProductB* CreateB () { return new ProductB1(); } }; class Factory2 : Factory { public: ProductA* CreateA () { return new ProductA2(); } ProductB* CreateB () { return new ProductB2(); } }; //--------------------- client -------------------------------- int _tmain(int argc, _TCHAR* argv[]) { Factory* pf = new Factory1(); ProductA *pa = pf->CreateA(); pa->Operation1(); pa->Operation2(); ProductB *pb = pf->CreateB(); pb->Operation3(); pb->Operation4(); return 0; }
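
    The "conversion exists, but is inaccessible" message is because for class (unlike struct) a base class defaults to private inheritance, so a ProductA1* cannot be converted to ProductA* outside the class itself. Making every derivation explicitly public should clear all of those errors, e.g.:

        class ProductA1 : public ProductA { /* ... */ };
        class ProductA2 : public ProductA { /* ... */ };
        class ProductB1 : public ProductB { /* ... */ };
        class ProductB2 : public ProductB { /* ... */ };
        class Factory1  : public Factory  { /* ... */ };
        class Factory2  : public Factory  { /* ... */ };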

    Read the article

  • Function returning MYSQL_ROW

    - by Gabe
    I'm working on a system using lots of MySQL queries and I'm running into some memory problems I'm pretty sure have to do with me not handling pointers right... Basically, I've got something like this: MYSQL_ROW function1() { string query="SELECT * FROM table limit 1;"; MYSQL_ROW return_row; mysql_init(&connection); // "connection" is a global variable if (mysql_real_connect(&connection,HOST,USER,PASS,DB,0,NULL,0)){ if (mysql_query(&connection,query.c_str())) cout << "Error: " << mysql_error(&connection); else{ resp = mysql_store_result(&connection); //"resp" is also global if (resp) return_row = mysql_fetch_row(resp); mysql_free_result(resp); } mysql_close(&connection); }else{ cout << "connection failed\n"; if (mysql_errno(&connection)) cout << "Error: " << mysql_errno(&connection) << " " << mysql_error(&connection); } return return_row; } And function2(): MYSQL_ROW function2(MYSQL_ROW row) { string query = "select * from table2 where code = '" + string(row[2]) + "'"; MYSQL_ROW retorno; mysql_init(&connection); if (mysql_real_connect(&connection,HOST,USER,PASS,DB,0,NULL,0)){ if (mysql_query(&connection,query.c_str())) cout << "Error: " << mysql_error(&conexao); else{ // My "debugging" shows me at this point `row[2]` is already fubar resp = mysql_store_result(&connection); if (resp) return_row = mysql_fetch_row(resp); mysql_free_result(resp); } mysql_close(&connection); }else{ cout << "connection failed\n"; if (mysql_errno(&connection)) cout << "Error : " << mysql_errno(&connection) << " " << mysql_error(&connection); } return return_row; } And main() is an infinite loop basically like this: int main( int argc, char* args[] ){ MYSQL_ROW row = NULL; while (1) { row = function1(); if(row != NULL) function2(row); } } (variable and function names have been generalized to protect the innocent) But after the 3rd or 4th call to function2, that only uses row for reading, row starts losing its value coming to a segfault error... Anyone's got any ideas why? I'm not sure the amount of global variables in this code is any good, but I didn't design it and only got until tomorrow to fix and finish it, so workarounds are welcome! Thanks!
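
    A MYSQL_ROW is just an array of char* that points into the MYSQL_RES, so once function1 calls mysql_free_result (and mysql_close), the row handed back to main points at freed memory; the first few calls working is luck. One way to return owned data instead is to copy the columns into std::string before freeing the result set; a sketch, assuming an already-open connection:

        #include <mysql/mysql.h>
        #include <string>
        #include <vector>

        std::vector<std::string> fetchFirstRow(MYSQL &connection) {
            std::vector<std::string> out;
            if (mysql_query(&connection, "SELECT * FROM table LIMIT 1"))
                return out;                                 // empty vector signals failure
            MYSQL_RES *resp = mysql_store_result(&connection);
            if (!resp) return out;
            if (MYSQL_ROW row = mysql_fetch_row(resp)) {
                unsigned int n = mysql_num_fields(resp);
                for (unsigned int i = 0; i < n; ++i)
                    out.push_back(row[i] ? row[i] : "");    // copy; NULL columns become ""
            }
            mysql_free_result(resp);                        // safe: nothing points into it any more
            return out;
        }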

    Read the article

  • OpenCV error (unhandled exception) after execution

    - by hamza
    Hi, I'm using VS2010 and I'm trying to make a gray image brighter. The code compiles normally, but there is no change in the second picture, and an error message (unhandled exception ...) shows up after execution finishes. Here is a piece of my code: int main(int argc, _TCHAR* argv[]) { IplImage *img = cvLoadImage("mra.jpg"); if (!img) { printf("Error: Couldn't open the image file.\n"); return 1; } //IplImage* new_image = getlargersize(img); double Min , Max ; Min = Max = 0 ; Max_Min (img , &Min , &Max); cout<<"the max value in the picture is :"<<Min<<" and the minimum value is :"<<Max<<endl ; IplImage* img2 = eclaircir(Min ,Max ,img); cvNamedWindow("Image:", CV_WINDOW_AUTOSIZE); cvNamedWindow("Image2:", CV_WINDOW_AUTOSIZE); cvShowImage("Image2:", img2); cvShowImage("Image:", img); cvWaitKey(0); cvDestroyWindow("Image2:"); cvDestroyWindow("Image:"); cvReleaseImage(&img2); cvReleaseImage(&img); return 0; } void Max_Min(IplImage* temp , double *min , double *max ){ CvScalar pix ; for (int i = 0 ; i < temp->height ; i++){ for (int j = 0 ; j < temp->width ; j++){ pix = cvGet2D(temp , i , j); if ( pix.val[0] >= *max ){ *max = pix.val[0]; } if ( pix.val[0] <= *min){ *min = pix.val[0]; } } } } IplImage* eclaircir (double min , double max , IplImage* image){ double temp = max - min ; CvScalar pix ; for (int i = 0 ; i < image->height ; i++){ for (int j = 0 ; j < image->width ; j++){ pix = cvGet2D(image , i , j); pix.val[0] = ( pix.val[0] - min)*255 ; pix.val[0] = pix.val[0]/temp ; cvSet2D(image , i , j , pix ); } } return image ; } Thanks
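
    One thing that produces exactly this pattern (the windows work, then an unhandled exception as the program is finishing) is the double release at the end: eclaircir returns the same IplImage* it was given, so img2 and img are the same image and cvReleaseImage runs twice on one pointer. A sketch of main with a real copy, assuming the original should stay untouched for comparison (initialising Min to something large such as 255 would also let Max_Min actually find the minimum):

        IplImage *img  = cvLoadImage("mra.jpg", CV_LOAD_IMAGE_GRAYSCALE);  // single channel, so val[0] is the gray value
        IplImage *img2 = cvCloneImage(img);          // separate copy that will be brightened
        double Min = 255, Max = 0;
        Max_Min(img, &Min, &Max);
        eclaircir(Min, Max, img2);                   // modifies img2 in place
        cvShowImage("Image:",  img);
        cvShowImage("Image2:", img2);
        cvWaitKey(0);
        cvReleaseImage(&img2);                       // now these really are two different images
        cvReleaseImage(&img);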

    Read the article

  • Are Objective-C initializers allowed to share the same name?

    - by NattKatt
    I'm running into an odd issue in Objective-C when I have two classes using initializers of the same name, but differently-typed arguments. For example, let's say I create classes A and B: A.h: #import <Cocoa/Cocoa.h> @interface A : NSObject { } - (id)initWithNum:(float)theNum; @end A.m: #import "A.h" @implementation A - (id)initWithNum:(float)theNum { self = [super init]; if (self != nil) { NSLog(@"A: %f", theNum); } return self; } @end B.h: #import <Cocoa/Cocoa.h> @interface B : NSObject { } - (id)initWithNum:(int)theNum; @end B.m: #import "B.h" @implementation B - (id)initWithNum:(int)theNum { self = [super init]; if (self != nil) { NSLog(@"B: %d", theNum); } return self; } @end main.m: #import <Foundation/Foundation.h> #import "A.h" #import "B.h" int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; A *a = [[A alloc] initWithNum:20.0f]; B *b = [[B alloc] initWithNum:10]; [a release]; [b release]; [pool drain]; return 0; } When I run this, I get the following output: 2010-04-26 20:44:06.820 FnTest[14617:a0f] A: 20.000000 2010-04-26 20:44:06.823 FnTest[14617:a0f] B: 1 If I reverse the order of the imports so it imports B.h first, I get: 2010-04-26 20:45:03.034 FnTest[14635:a0f] A: 0.000000 2010-04-26 20:45:03.038 FnTest[14635:a0f] B: 10 For some reason, it seems like it's using the data type defined in whichever @interface gets included first for both classes. I did some stepping through the debugger and found that the isa pointer for both a and b objects ends up the same. I also found out that if I no longer make the alloc and init calls inline, both initializations seem to work properly, e.g.: A *a = [A alloc]; [a initWithNum:20.0f]; If I use this convention when I create both a and b, I get the right output and the isa pointers seem to be different for each object. Am I doing something wrong? I would have thought multiple classes could have the same initializer names, but perhaps that is not the case.

    Read the article

  • Daemonize() issues on Debian

    - by djTeller
    Hi, I'm currently writing a multi-process client and a multi-treaded server for some project i have. The server is a Daemon. In order to accomplish that, i'm using the following daemonize() code: static void daemonize(void) { pid_t pid, sid; /* already a daemon */ if ( getppid() == 1 ) return; /* Fork off the parent process */ pid = fork(); if (pid < 0) { exit(EXIT_FAILURE); } /* If we got a good PID, then we can exit the parent process. */ if (pid > 0) { exit(EXIT_SUCCESS); } /* At this point we are executing as the child process */ /* Change the file mode mask */ umask(0); /* Create a new SID for the child process */ sid = setsid(); if (sid < 0) { exit(EXIT_FAILURE); } /* Change the current working directory. This prevents the current directory from being locked; hence not being able to remove it. */ if ((chdir("/")) < 0) { exit(EXIT_FAILURE); } /* Redirect standard files to /dev/null */ freopen( "/dev/null", "r", stdin); freopen( "/dev/null", "w", stdout); freopen( "/dev/null", "w", stderr); } int main( int argc, char *argv[] ) { daemonize(); /* Now we are a daemon -- do the work for which we were paid */ return 0; } I have a strange side effect when testing the server on Debian (Ubuntu). The accept() function always fail to accept connections, the pid returned is -1 I have no idea what causing this, since in RedHat & CentOS it works well. When i remove the call to daemonize(), everything works well on Debian, when i add it back, same accept() error reproduce. I've been monitring the /proc//fd, everything looks good. Something in the daemonize() and the Debian release just doesn't seem to work. (Debian GNU/Linux 5.0, Linux 2.6.26-2-286 #1 SMP) Any idea what causing this? Thank you
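
    Whatever the root cause turns out to be, the daemon redirects stderr to /dev/null, so the reason accept() returns -1 is currently being thrown away; logging errno through syslog usually points straight at it. A minimal sketch that binds a throwaway port and reports the failure to the system log (visible in /var/log/syslog or /var/log/messages):

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <syslog.h>
        #include <string.h>
        #include <errno.h>
        #include <stdlib.h>

        int main(void) {
            /* daemonize() would run here; stderr then goes to /dev/null, so use syslog */
            openlog("mydaemon", LOG_PID, LOG_DAEMON);

            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(5555);              /* example port */

            if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 || listen(fd, 16) < 0) {
                syslog(LOG_ERR, "setup failed: %s", strerror(errno));
                exit(1);
            }
            int client = accept(fd, NULL, NULL);
            if (client < 0)
                syslog(LOG_ERR, "accept failed: %s", strerror(errno));
            return 0;
        }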

    Read the article

  • No Matching Function Error for inserting into a list in C++

    - by Josh Curren
    I am getting an error when I try to insert an item into a list (in C++). The error is that there is no matching function for call to the insert(). I also tried push_front() but got the same error. Here is the error message: main.cpp:38: error: no matching function for call to ‘std::list<Salesperson, std::allocator<Salesperson> >::insert(Salesperson&)’ /usr/lib/gcc/i686-pc-cygwin/4.3.4/include/c++/bits/list.tcc:99: note: candidates are: std::_List_iterator<_Tp> std::list<_Tp, _Alloc>::insert(std::_List_iterator<_Tp>, const _Tp&) [with _Tp = Salesperson, _Alloc = std::allocator<Salesperson>] /usr/lib/gcc/i686-pc-cygwin/4.3.4/include/c++/bits/stl_list.h:961: note: void std::list<_Tp, _Alloc>::insert(std::_List_iterator<_Tp>, size_t, const _Tp&) [with _Tp = Salesperson, _Alloc = std::allocator<Salesperson>] Here is the code: #include <stdlib.h> #include <iostream> #include <fstream> #include <string> #include <list> #include "Salesperson.h" #include "Salesperson.cpp" #include "OrderedList.h" #include "OrderedList.cpp" using namespace std; int main(int argc, char** argv) { cout << "\n------------ Asn 8 - Sales Report ------------" << endl; list<Salesperson> s; int id; string fName, lName; int numOfSales; string year; std::ifstream input("Sales.txt"); while( !std::getline(input, year, ',').eof() ) { input >> id; input >> lName; input >> fName; input >> numOfSales; Salesperson sp = Salesperson( id, fName, lName ); s.insert( sp ); //THIS IS LINE 38 ************************** for( int i = 0; i < numOfSales; i++ ) { double sale; input >> sale; sp.sales.insert( sale ); } } cout << endl; return (EXIT_SUCCESS); }
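
    std::list::insert always needs an iterator for the position, which is why a call with just a Salesperson finds no match; for a plain append the member to use is push_back (or push_front). Note also that sp has to be filled in before it is copied into the list, otherwise the copy stored in s never gets the sales. A sketch of the loop body with those two changes (sp.sales is assumed to be the custom OrderedList, so its insert call is left alone):

        Salesperson sp = Salesperson(id, fName, lName);
        for (int i = 0; i < numOfSales; i++) {
            double sale;
            input >> sale;
            sp.sales.insert(sale);     // unchanged: OrderedList's own insert
        }
        s.push_back(sp);               // std::list append; insert() would need an iterator position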

    Read the article

  • Multi-threaded random_r is slower than the single-threaded version.

    - by Nixuz
    The following program is essentially the same the one described here. When I run and compile the program using two threads (NTHREADS == 2), I get the following run times: real 0m14.120s user 0m25.570s sys 0m0.050s When it is run with just one thread (NTHREADS == 1), I get run times significantly better even though it is only using one core. real 0m4.705s user 0m4.660s sys 0m0.010s My system is dual core, and I know random_r is thread safe and I am pretty sure it is non-blocking. When the same program is run without random_r and a calculation of cosines and sines is used as a replacement, the dual-threaded version runs in about 1/2 the time as expected. #include <pthread.h> #include <stdlib.h> #include <stdio.h> #define NTHREADS 2 #define PRNG_BUFSZ 8 #define ITERATIONS 1000000000 void* thread_run(void* arg) { int r1, i, totalIterations = ITERATIONS / NTHREADS; for (i = 0; i < totalIterations; i++){ random_r((struct random_data*)arg, &r1); } printf("%i\n", r1); } int main(int argc, char** argv) { struct random_data* rand_states = (struct random_data*)calloc(NTHREADS, sizeof(struct random_data)); char* rand_statebufs = (char*)calloc(NTHREADS, PRNG_BUFSZ); pthread_t* thread_ids; int t = 0; thread_ids = (pthread_t*)calloc(NTHREADS, sizeof(pthread_t)); /* create threads */ for (t = 0; t < NTHREADS; t++) { initstate_r(random(), &rand_statebufs[t], PRNG_BUFSZ, &rand_states[t]); pthread_create(&thread_ids[t], NULL, &thread_run, &rand_states[t]); } for (t = 0; t < NTHREADS; t++) { pthread_join(thread_ids[t], NULL); } free(thread_ids); free(rand_states); free(rand_statebufs); } I am confused why when generating random numbers the two threaded version performs much worse than the single threaded version, considering random_r is meant to be used in multi-threaded applications.
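
    A common explanation for exactly this slowdown is false sharing: calloc packs both random_data structs (and both 8-byte state buffers) into the same cache line, so the two cores invalidate each other's line on every random_r call. Spacing the per-thread state out is a cheap experiment; a sketch of the changed allocation in main, assuming a 64-byte line size:

        #define CACHE_LINE 64               /* assumed line size; typical on x86 */

        struct padded_prng {                /* one thread's PRNG state, kept well apart in memory */
            struct random_data state;
            char statebuf[PRNG_BUFSZ];
            char pad[2 * CACHE_LINE];       /* generous padding so neighbours never share a line */
        };

        struct padded_prng *prng = (struct padded_prng *)calloc(NTHREADS, sizeof(struct padded_prng));
        for (int t = 0; t < NTHREADS; t++) {
            initstate_r(random(), prng[t].statebuf, PRNG_BUFSZ, &prng[t].state);
            pthread_create(&thread_ids[t], NULL, &thread_run, &prng[t].state);
        }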

    Read the article

  • Illegal Instruction When Programming C++ on Linux

    - by remagen
    Heyo, My program, which does exactly the same thing every time it runs (moves a point sprite into the distance) will randomly fail with the text on the terminal 'Illegal Instruction'. My googling has found people encountering this when writing assembly which makes sense because assembly throws those kinds of errors. But why would g++ be generating an illegal instruction like this? It's not like I'm compiling for Windows then running on Linux (which even then, as long as both are on x86 shouldn't AFAIK cause an Illegal Instruction). I'll post the main file below. I can't reliably reproduce the error. Although, if I make random changes (add a space here, change a constant there) that force a recompile I can get a binary which will fail with Illegal Instruction every time it is run, until I try setting a break point, which makes the illegal instruction 'dissapear'. :( #include <stdio.h> #include <stdlib.h> #include <GL/gl.h> #include <GL/glu.h> #include <SDL/SDL.h> #include "Screen.h" //Simple SDL wrapper #include "Textures.h" //Simple OpenGL texture wrapper #include "PointSprites.h" //Simple point sprites wrapper double counter = 0; /* Here goes our drawing code */ int drawGLScene() { /* These are to calculate our fps */ static GLint T0 = 0; static GLint Frames = 0; /* Move Left 1.5 Units And Into The Screen 6.0 */ glLoadIdentity(); glTranslatef(0.0f, 0.0f, -6); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); glEnable(GL_POINT_SPRITE_ARB); glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); glBegin( GL_POINTS ); /* Drawing Using Triangles */ glVertex3d(0.0,0.0, 0); glVertex3d(1.0,0.0, 0); glVertex3d(1.0,1.0, counter); glVertex3d(0.0,1.0, 0); glEnd( ); /* Finished Drawing The Triangle */ /* Move Right 3 Units */ /* Draw it to the screen */ SDL_GL_SwapBuffers( ); /* Gather our frames per second */ Frames++; { GLint t = SDL_GetTicks(); if (t - T0 >= 50) { GLfloat seconds = (t - T0) / 1000.0; GLfloat fps = Frames / seconds; printf("%d frames in %g seconds = %g FPS\n", Frames, seconds, fps); T0 = t; Frames = 0; counter -= .1; } } return 1; } GLuint objectID; int main( int argc, char **argv ) { Screen screen; screen.init(); screen.resize(800,600); LoadBMP("./dist/Debug/GNU-Linux-x86/particle.bmp"); InitPointSprites(); while(true){drawGLScene();} }

    Read the article

  • Undefined template methods trick ?

    - by Matthieu M.
    A colleague of mine told me about a little piece of design he has used with his team that sent my mind boiling. It's a kind of traits class that they can specialize in an extremely decoupled way. I've had a hard time understanding how it could possibly work, and I am still unsure of the idea I have, so I thought I would ask for help here. We are talking g++ here, specifically the versions 3.4.2 and 4.3.2 (it seems to work with both). The idea is quite simple: 1- Define the interface // interface.h template <class T> struct Interface { void foo(); // the method is not implemented, it could not work if it was }; // // I do not think it is necessary // but they prefer free-standing methods with templates // because of the automatic argument deduction // template <class T> void foo(Interface<T>& interface) { interface.foo(); } 2- Define a class, and in the source file specialize the interface for this class (defining its methods) // special.h class Special {}; // special.cpp #include "interface.h" #include "special.h" // // Note that this specialization is not visible outside of this translation unit // template <> struct Interface<Special> { void foo() { std::cout << "Special" << std::endl; } }; 3- To use, it's simple too: // main.cpp #include "interface.h" class Special; // yes, it only costs a forward declaration // which helps much in term of dependencies int main(int argc, char* argv[]) { Interface<Special> special; foo(special); return 0; }; It's an undefined symbol if no translation unit defined a specialization of Interface for Special. Now, I would have thought this would require the export keyword, which to my knowledge has never been implemented in g++ (and only implemented once in a C++ compiler, with its authors advising anyone not to, given the time and effort it took them). I suspect it's got something to do with the linker resolving the templates methods... Do you have ever met anything like this before ? Does it conform to the standard or do you think it's a fortunate coincidence it works ? I must admit I am quite puzzled by the construct...

    Read the article

  • Messing with the stack in assembly and C++

    - by user246100
    I want to do the following: I have a function that is not mine (it really doesn't matter here but just to say that I don't have control over it) and that I want to patch so that it calls a function of mine, preserving the arguments list (jumping is not an option). What I'm trying to do is, to put the stack pointer as it was before that function is called and then call mine (like going back and do again the same thing but with a different function). This doesn't work straight because the stack becomes messed up. I believe that when I do the call it replaces the return address. So, I did a step to preserve the return address saving it in a globally variable and it works but this is not ok because I want it to resist to recursitivy and you know what I mean. Anyway, i'm a newbie in assembly so that's why I'm here. Please, don't tell me about already made software to do this because I want to make things my way. Of course, this code has to be compiler and optimization independent. My code (If it is bigger than what is acceptable please tell me how to post it): // A function that is not mine but to which I have access and want to patch so that it calls a function of mine with its original arguments void real(int a,int b,int c,int d) { } // A function that I want to be called, receiving the original arguments void receiver(int a,int b,int c,int d) { printf("Arguments %d %d %d %d\n",a,b,c,d); } long helper; // A patch to apply in the "real" function and on which I will call "receiver" with the same arguments that "real" received. __declspec( naked ) void patch() { _asm { // This first two instructions save the return address in a global variable // If I don't save and restore, the program won't work correctly. // I want to do this without having to use a global variable mov eax, [ebp+4] mov helper,eax push ebp mov ebp, esp // Make that the stack becomes as it were before the real function was called add esp, 8 // Calls our receiver call receiver mov esp, ebp pop ebp // Restores the return address previously saved mov eax, helper mov [ebp+4],eax ret } } int _tmain(int argc, _TCHAR* argv[]) { FlushInstructionCache(GetCurrentProcess(),&real,5); DWORD oldProtection; VirtualProtect(&real,5,PAGE_EXECUTE_READWRITE,&oldProtection); // Patching the real function to go to my patch ((unsigned char*)real)[0] = 0xE9; *((long*)((long)(real) + sizeof(unsigned char))) = (char*)patch - (char*)real - 5; // calling real function (I'm just calling it with inline assembly because otherwise it seems to works as if it were un patched // that is strange but irrelevant for this _asm { push 666 push 1337 push 69 push 100 call real add esp, 16 } return 0; }

    Read the article

  • Counting number of searches

    - by shinjuo
    I am trying to figure out how to get the total number of tests each search makes in this algorithm. I am not sure how I can pass that information back from this algorithm though. I need to count how many times while runs and then pass that number back into an array to be added together and determine the average number of test. main.c #include <stdio.h> #include <stdlib.h> #include <time.h> #include <stdbool.h> #include "percentage.h" #include "sequentialSearch.h" #define searchAmount 100 int main(int argc, char *argv[]) { int numbers[100]; int searches[searchAmount]; int i; int where; int searchSuccess; int searchUnsuccess; int percent; srand(time(NULL)); for (i = 0; i < 100; i++){ numbers[i] = rand() % 200; } for (i = 0; i < searchAmount; i++){ searches[i] = rand() % 200; } searchUnsuccess = 0; searchSuccess = 0; for(i = 0; i < searchAmount; i++){ if(seqSearch(numbers, 100, searches[i], &where)){ searchSuccess++; }else{ searchUnsuccess++; } } percent = percentRate(searchSuccess, searchAmount); printf("Total number of searches: %d\n", searchAmount); printf("Total successful searches: %d\n", searchSuccess); printf("Success Rate: %d%%\n", percent); system("PAUSE"); return 0; } sequentialSearch.h bool seqSearch (int list[], int last, int target, int* locn){ int looker; looker = 0; while(looker < last && target != list[looker]){ looker++; } *locn = looker; return(target == list[looker]); }
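
    One way to get the test count out is to have seqSearch report it through an extra out-parameter and accumulate it in main; a sketch keeping the existing names (the extra looker < last in the return also guards the out-of-range read when the target is absent):

        // sequentialSearch.h
        bool seqSearch(int list[], int last, int target, int* locn, int* tests) {
            int looker = 0;
            int count = 0;
            while (looker < last && target != list[looker]) {
                looker++;
                count++;                        // one count per loop iteration
            }
            *locn = looker;
            *tests = count;                     // hand the number of tests back to the caller
            return (looker < last && target == list[looker]);
        }

        // in main():
        int tests, totalTests = 0;
        for (i = 0; i < searchAmount; i++) {
            if (seqSearch(numbers, 100, searches[i], &where, &tests))
                searchSuccess++;
            else
                searchUnsuccess++;
            totalTests += tests;                // running sum, for the average later
        }
        printf("Average tests per search: %.2f\n", (double)totalTests / searchAmount);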

    Read the article

  • Perfect Forwarding to async lambda

    - by Alexander Kondratskiy
    I have a function template, where I want to do perfect forwarding into a lambda that I run on another thread. Here is a minimal test case which you can directly compile: #include <thread> #include <future> #include <utility> #include <iostream> #include <vector> /** * Function template that does perfect forwarding to a lambda inside an * async call (or at least tries to). I want both instantiations of the * function to work (one for lvalue references T&, and rvalue reference T&&). * However, I cannot get the code to compile when calling it with an lvalue. * See main() below. */ template <typename T> std::string accessValueAsync(T&& obj) { std::future<std::string> fut = std::async(std::launch::async, [](T&& vec) mutable { return vec[0]; }, std::forward<T>(obj)); return fut.get(); } int main(int argc, char const *argv[]) { std::vector<std::string> lvalue{"Testing"}; // calling with what I assume is an lvalue reference does NOT compile std::cout << accessValueAsync(lvalue) << std::endl; // calling with rvalue reference compiles std::cout << accessValueAsync(std::move(lvalue)) << std::endl; // I want both to compile. return 0; } For the non-compiling case, here is the last line of the error message which is intelligible: main.cpp|13 col 29| note: no known conversion for argument 1 from ‘std::vector<std::basic_string<char> >’ to ‘std::vector<std::basic_string<char> >&’ I have a feeling it may have something to do with how T&& is deduced, but I can't pinpoint the exact point of failure and fix it. Any suggestions? Thank you! EDIT: I am using gcc 4.7.0 just in case this could be a compiler issue (probably not)
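
    What trips this up is that std::async decay-copies its arguments and then hands those stored copies to the callable as rvalues, while the lambda's parameter type T&& collapses to std::vector<std::string>& for the lvalue call, and an lvalue reference cannot bind to that rvalue. One fix that stays within C++11 (so it should work on gcc 4.7) is to let the lambda take the decayed type by value; a sketch:

        #include <future>
        #include <iostream>
        #include <string>
        #include <type_traits>
        #include <utility>
        #include <vector>

        template <typename T>
        std::string accessValueAsync(T&& obj) {
            // async stores a decayed copy of obj; a by-value parameter of the decayed
            // type binds to it whether the original call site passed an lvalue or an rvalue.
            std::future<std::string> fut = std::async(std::launch::async,
                [](typename std::decay<T>::type vec) { return vec[0]; },
                std::forward<T>(obj));
            return fut.get();
        }

        int main() {
            std::vector<std::string> lvalue{"Testing"};
            std::cout << accessValueAsync(lvalue) << std::endl;            // lvalue: now compiles
            std::cout << accessValueAsync(std::move(lvalue)) << std::endl; // rvalue: still fine
            return 0;
        }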

    Read the article

  • Tokenizing a string with variable whitespace

    - by Ron Holcomb
    I've read through a few threads detailing how to tokenize strings, but I'm apparently too thick to adapt their suggestions and solutions into my program. What I'm attempting to do is tokenize each line from a large (5k+) line file into two strings. Here's a sample of the lines: 0 -0.11639404 9.0702948e-05 0.00012207031 0.0001814059 0.051849365 0.00027210884 0.062103271 0.00036281179 0.034423828 0.00045351474 0.035125732 The difference I'm finding between my lines and the other sample input from other threads is that I have a variable amount of whitespace between the parts that I want to tokenize. Anyways, here's my attempt at tokenizing: #include <iostream> #include <iomanip> #include <fstream> #include <string> using namespace std; int main(int argc, char *argv[]) { ifstream input; ofstream output; string temp2; string temp3; input.open(argv[1]); output.open(argv[2]); if (input.is_open()) { while (!input.eof()) { getline(input, temp2, ' '); while (!isspace(temp2[0])) getline(input, temp2, ' '); getline (input, temp3, '\n'); } input.close(); cout << temp2 << endl; cout << temp3 << endl; return 0; } I've clipped it some, since the troublesome bits are here. The issue that I'm having is that temp2 never seems to catch a value. Ideally, it should get populated with the first column of numbers, but it doesn't. Instead, it is blank, and temp3 is populated with the entire line. Unfortunately, in my course we haven't learned about vectors, so I'm not quite sure how to implement them in the other solutions for this I've seen, and I'd like to not just copy-paste code for assignments to get things work without actually understanding it. So, what's the extremely obvious/already been answered/simple solution I'm missing? I'd like to stick to standard libraries that g++ uses if at all possible.

    Read the article

  • What is user gcc's purpose in requesting code possibly like this?

    - by James Morris
    In the question between syntax, are there any equal function the user gcc is requesting only what I can imagine to be the following code: #include <stdio.h> #include <string.h> /* estimated magic values */ #define MAXFUNCS 8 #define MAXFUNCLEN 3 int the_mainp_compare_func(char** mainp) { char mainp0[MAXFUNCS][MAXFUNCLEN] = { 0 }; char mainp1[MAXFUNCS][MAXFUNCLEN] = { 0 }; char* psrc, *pdst; int i = 0; int func = 0; psrc = mainp[0]; printf("scanning mainp[0] for functions...\n"); while(*psrc) { if (*psrc == '\0') break; else if (*psrc == ',') ++psrc; else { mainp0[func][0] = *psrc++; if (*psrc == ',') { mainp0[func][1] = '\0'; psrc++; } else if (*psrc !='\0') { mainp0[func][1] = *psrc++; mainp0[func][2] = '\0'; } printf("function: '%s'\n", mainp0[func]); } ++func; } printf("\nscanning mainp[1] for functions...\n"); psrc = mainp[1]; func = 0; while(*psrc) { if (*psrc == '\0') break; else if (*psrc == ',') ++psrc; else { mainp1[func][0] = *psrc++; if (*psrc == ',') { mainp1[func][1] = '\0'; psrc++; } else if (*psrc !='\0') { mainp1[func][1] = *psrc++; mainp1[func][2] = '\0'; } printf("function: '%s'\n", mainp1[func]); } ++func; } printf("\ncomparing functions in '%s' with those in '%s'\n", mainp[0], mainp[1] ); int func2; func = 0; while (*mainp0[func] != '\0') { func2 = 0; while(*mainp1[func2] != '\0') { printf("comparing %s with %s\n", mainp0[func], mainp1[func2]); if (strcmp(mainp0[func], mainp1[func2++]) == 0) return 1; /* not sure what to return here */ } ++func; } /* no matches == failure */ return -1; /* not sure what to return on failure */ } int main(int argc, char** argv) { char* mainp[] = { "P,-Q,Q,-R", "R,A,P,B,F" }; if (the_mainp_compare_func(mainp) == 1) printf("a match was found, but I don't know what to do with it!\n"); else printf("no match found, and I'm none the wiser!\n"); return 0; } My question is, what is it's purpose?

    Read the article

  • Specify a base class's template parameters while instantiating a derived class?

    - by DaClown
    Hi, I have no idea if the title makes any sense but I can't find the right words to descibe my "problem" in one line. Anyway, here is my problem. There is an interface for a search: template <typename InputType, typename ResultType> class Search { public: virtual void search (InputType) = 0; virtual void getResult(ResultType&) = 0; }; and several derived classes like: template <typename InputType, typename ResultType> class XMLSearch : public Search<InputType, ResultType> { public: void search (InputType) { ... }; void getResult(ResultType&) { ... }; }; The derived classes shall be used in the source code later on. I would like to hold a simple pointer to a Search without specifying the template parameters, then assign a new XMLSearch and thereby define the template parameters of Search and XMLSearch Search *s = new XMLSearch<int, int>(); I found a way that works syntactically like what I'm trying to do, but it seems a bit odd to really use it: template <typename T> class Derived; class Base { public: template <typename T> bool GetValue(T &value) { Derived<T> *castedThis=dynamic_cast<Derived<T>* >(this); if(castedThis) return castedThis->GetValue(value); return false; } virtual void Dummy() {} }; template <typename T> class Derived : public Base { public: Derived<T>() { mValue=17; } bool GetValue(T &value) { value=mValue; return true; } T mValue; }; int main(int argc, char* argv[]) { Base *v=new Derived<int>; int i=0; if(!v->GetValue(i)) std::cout<<"Wrong type int."<<std::endl; float f=0.0; if(!v->GetValue(f)) std::cout<<"Wrong type float."<<std::endl; std::cout<<i<<std::endl<<f; char c; std::cin>>c; return 0; } Is there a better way to accomplish this?

    Read the article

  • get return value from 2 threads in C

    - by polslinux
    #include <stdio.h> #include <stdlib.h> #include <pthread.h> #include <stdint.h> #include <inttypes.h> typedef struct tmp_num{ int tmp_1; int tmp_2; }t_num; t_num t_nums; void *num_mezzo_1(void *num_orig); void *num_mezzo_2(void *num_orig); int main(int argc, char *argv[]){ pthread_t thread1, thread2; int tmp=0,rc1,rc2,num; num=atoi(argv[1]); if(num <= 3){ printf("Questo è un numero primo: %d\n", num); exit(0); } if( (rc1=pthread_create( &thread1, NULL, &num_mezzo_1, (void *)&num)) ){ printf("Creazione del thread fallita: %d\n", rc1); exit(1); } if( (rc2=pthread_create( &thread2, NULL, &num_mezzo_2, (void *)&num)) ){ printf("Creazione del thread fallita: %d\n", rc2); exit(1); } t_nums.tmp_1 = 0; t_nums.tmp_2 = 0; pthread_join(thread1, (void **)(&t_nums.tmp_1)); pthread_join(thread2, (void **)(&t_nums.tmp_2)); tmp=t_nums.tmp_1+t_nums.tmp_2; printf("%d %d %d\n", tmp, t_nums.tmp_1, t_nums.tmp_2); if(tmp>2){ printf("Questo NON è un numero primo: %d\n", num); } else{ printf("Questo è un numero primo: %d\n", num); } exit(0); } void *num_mezzo_1(void *num_orig){ int cont_1; int *n_orig=(int *)num_orig; t_nums.tmp_1 = 0; for(cont_1=1; cont_1<=(*n_orig/2); cont_1++){ if((*n_orig % cont_1) == 0){ (t_nums.tmp_1)++; } } pthread_exit((void *)(&t_nums.tmp_1)); return NULL; } void *num_mezzo_2(void *num_orig){ int cont_2; int *n_orig=(int *)num_orig; t_nums.tmp_2 = 0; for(cont_2=((*n_orig/2)+1); cont_2<=*n_orig; cont_2++){ if((*n_orig % cont_2) == 0){ (t_nums.tmp_2)++; } } pthread_exit((void *)(&t_nums.tmp_2)); return NULL; } How this program works: i have to input a number and this program will calculate if it is a prime number or not (i know that it is a bad algorithm but i only need to learn pthread). The problem is that the returned values are too much big.For example if i write "12" the value of tmp tmp_1 tmp_2 into the main are 12590412 6295204 6295208.Why i got those numbers??
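
    The oversized numbers are the giveaway: (void **)(&t_nums.tmp_1) makes pthread_join store a pointer value (8 bytes on 64-bit) into a 4-byte int, so what gets printed is an address, and the neighbouring field gets stomped as well. Since the threads already write their counts into the global t_nums, the joins do not need to return anything; a sketch of the two options:

        /* simplest: ignore the return slots, the threads already filled t_nums */
        pthread_join(thread1, NULL);
        pthread_join(thread2, NULL);
        tmp = t_nums.tmp_1 + t_nums.tmp_2;

        /* or, if the value really should come back through the join, receive a void* and cast: */
        void *r1, *r2;
        pthread_join(thread1, &r1);
        pthread_join(thread2, &r2);
        tmp = *(int *)r1 + *(int *)r2;    /* r1/r2 point at t_nums.tmp_1 / t_nums.tmp_2 */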

    Read the article

  • C++ stream as a parameter when overloading operator<<

    - by TheOm3ga
    I'm trying to write my own logging class and use it as a stream: logger L; L << "whatever" << std::endl; This is the code I started with: #include <iostream> using namespace std; class logger{ public: template <typename T> friend logger& operator <<(logger& log, const T& value); }; template <typename T> logger& operator <<(logger& log, T const & value) { // Here I'd output the values to a file and stdout, etc. cout << value; return log; } int main(int argc, char *argv[]) { logger L; L << "hello" << '\n' ; // This works L << "bye" << "alo" << endl; // This doesn't work return 0; } But I was getting an error when trying to compile, saying that there was no definition for operator<<: pruebaLog.cpp:31: error: no match for ‘operator<<’ in ‘operator<< [with T = char [4]](((logger&)((logger*)operator<< [with T = char [4]](((logger&)(& L)), ((const char (&)[4])"bye")))), ((const char (&)[4])"alo")) << std::endl’ So, I've been trying to overload operator<< to accept this kind of streams, but it's driving me mad. I don't know how to do it. I've been loking at, for instance, the definition of std::endl at the ostream header file and written a function with this header: logger& operator <<(logger& log, const basic_ostream<char,char_traits<char> >& (*s)(basic_ostream<char,char_traits<char> >&)) But no luck. I've tried the same using templates instead of directly using char, and also tried simply using "const ostream& os", and nothing. Another thing that bugs me is that, in the error output, the first argument for operator<< changes, sometimes it's a reference to a pointer, sometimes looks like a double reference...
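
    std::endl is not a value but a function template (a manipulator), so the templated operator<< cannot deduce a type for it; the standard streams solve this with an extra overload that accepts a function pointer of the manipulator's shape. A sketch of that overload for this logger (written as members for brevity):

        #include <iostream>

        class logger {
        public:
            template <typename T>
            logger& operator<<(const T& value) {            // ordinary values
                std::cout << value;                          // file output would go here too
                return *this;
            }
            // Extra overload for manipulators such as std::endl and std::flush.
            logger& operator<<(std::ostream& (*manip)(std::ostream&)) {
                manip(std::cout);
                return *this;
            }
        };

        int main() {
            logger L;
            L << "bye" << "alo" << std::endl;                // the std::endl case now resolves
            return 0;
        }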

    Read the article
