Search Results

Search found 2886 results on 116 pages for 'std'.


  • Linux time-based sampling profiler.

    - by Caspin
    short version: Is there a good time-based sampling profiler for Linux?
    long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second, about a 60x speedup. Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and std::string calls (which, by the way, once optimized dropped the runtime to less than a second, more than a 2x speedup). Here is my OProfile configuration:

        $ sudo opcontrol --status
        Daemon not running
        Event 0: CPU_CLK_UNHALTED:90000:0:1:1
        Separate options: library
        vmlinux file: none
        Image filter: /path/to/excutable
        Call-graph depth: 7
        Buffer size: 65536

    Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only logs its samples to the currently running process. I'd like it to always log its samples to the process I'm profiling, so if the process is currently switched out (blocking on IO or a popen call), OProfile would just place its sample at the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU. It can't help with executables that have inefficient blocking calls.
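    For reference, a minimal sketch of the in-process demangling the asker switched to: abi::__cxa_demangle from <cxxabi.h> replaces the popen of c++filt entirely (the mangled name below is just an illustrative example):

        #include <cxxabi.h>
        #include <cstdlib>
        #include <iostream>
        #include <string>

        // Demangle a C++ symbol name in-process; no fork/exec of c++filt.
        std::string demangle(const char* mangled)
        {
            int status = 0;
            // __cxa_demangle mallocs the result buffer, so we must free() it.
            char* out = abi::__cxa_demangle(mangled, 0, 0, &status);
            std::string result = (status == 0 && out) ? out : mangled;
            std::free(out);
            return result;
        }

        int main()
        {
            std::cout << demangle("_ZNSt6vectorIiSaIiEE9push_backERKi") << std::endl;
        }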

    Read the article

  • Implementing eval and load functions inside a scripting engine with Flex and Bison.

    - by Simone Margaritelli
    Hi guys, I'm developing a scripting engine with flex and bison, and now I'm implementing the eval and load functions for this language. Just to give you an example, the syntax is like:

        import std.*;
        load( "some_script.hy" );
        eval( "foo = 123;" );
        println( foo );

    So, in my lexer I've implemented the function:

        void hyb_parse_string( const char *str ){
            extern int yyparse(void);
            YY_BUFFER_STATE prev, next;
            /*
             * Save current buffer.
             */
            prev = YY_CURRENT_BUFFER;
            /*
             * yy_scan_string will call yy_switch_to_buffer.
             */
            next = yy_scan_string( str );
            /*
             * Do actual parsing (yyparse calls yylex).
             */
            yyparse();
            /*
             * Restore previous buffer.
             */
            yy_switch_to_buffer(prev);
        }

    But it does not seem to work. Well, it does, but when the string (loaded from a file or directly evaluated) is finished, I get a SIGSEGV:

        Program received signal SIGSEGV, Segmentation fault.
        0xb7f2b801 in yylex () at src/lexer.cpp:1658
        1658    if ( YY_CURRENT_BUFFER_LVALUE->yy_buffer_status == YY_BUFFER_NEW )

    As you may notice, the SIGSEGV is generated by the flex/bison code, not mine... any hints, or at least any example of how to implement these kinds of functions?
    PS: I've successfully implemented the include directive, but I need eval and load to work not at parsing time but at execution time (kind of like PHP's include/require directives).
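    One plausible cause, sketched below: the temporary buffer from yy_scan_string is never freed, and if hyb_parse_string runs when no file buffer is active, prev is NULL, so switching "back" to it leaves flex dereferencing a dead buffer on the next yylex call. A hedged rework of the same function:

        void hyb_parse_string( const char *str )
        {
            extern int yyparse(void);
            /* Save whatever buffer flex is currently reading; may be NULL. */
            YY_BUFFER_STATE prev = YY_CURRENT_BUFFER;
            /* yy_scan_string switches to a fresh buffer over 'str'. */
            YY_BUFFER_STATE mine = yy_scan_string( str );
            yyparse();
            /* Free the temporary buffer; the original version leaked it. */
            yy_delete_buffer( mine );
            /* Only restore a buffer that actually existed. */
            if( prev )
                yy_switch_to_buffer( prev );
        }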

    Read the article

  • C++ Numerical truncation error

    - by Andrew
    Hello everyone, sorry if this is dumb, but I could not find an answer.

        #include <iostream>
        using namespace std;

        int main()
        {
            double a(0);
            double b(0.001);
            cout << a - 0.0 << endl;
            for (; a < 1.0; a += b);
            cout << a - 1.0 << endl;
            for (; a < 10.0; a += b);
            cout << a - 10.0 << endl;
            cout << a - 10.0 - b << endl;
            return 0;
        }

    Output:

        0
        6.66134e-16
        0.001
        -1.03583e-13

    I tried compiling it with MSVC9, MSVC10, and Borland C++ 2010. All of them arrive in the end at an error of about 1e-13. Is it normal to have such significant error accumulation over only 1000 or 10000 increments?
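    It is normal: 0.001 has no exact binary representation, so every a += b adds a small rounding error, and ten thousand additions can accumulate to the observed 1e-13. A sketch of two standard ways to keep the error from building up, recomputing from an integer counter and Kahan compensated summation:

        #include <iostream>

        int main()
        {
            const double b = 0.001;

            // 1) Recompute from an integer counter: one rounding per value,
            //    so the error never accumulates.
            double a = 0.0;
            for (int i = 1; a < 10.0; ++i)
                a = i * b;

            // 2) Kahan summation: 'c' carries the low-order bits that a
            //    plain running sum would lose at each step.
            double sum = 0.0, c = 0.0;
            for (int i = 0; i < 10000; ++i) {
                double y = b - c;
                double t = sum + y;
                c = (t - sum) - y;
                sum = t;
            }

            std::cout << a - 10.0 << " " << sum - 10.0 << std::endl;
            return 0;
        }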

    Read the article

  • converting a timestring to a duration

    - by radman
    Hi. At the moment I am trying to read in a formatted time string and create a duration from it. I am currently trying to use the boost date_time time_duration class to read and store the value. boost date_time provides a method, time_duration duration_from_string(std::string), that allows a time_duration to be created from a time string, and it accepts strings formatted appropriately ("[-]h[h][:mm][:ss][.fff]"). Now this method works fine if you use a correctly formatted time string. However, if you submit something invalid like "ham_sandwich" or "100", then you will still be returned a time_duration that is not valid. Specifically, if you try to pass it to a standard output stream, an assertion will occur. My question is: does anyone know how to test the validity of the boost time_duration? And failing that, can you suggest another method of reading a time string and getting a duration from it?
    Note: I have tried the obvious testing methods that time_duration provides (is_not_a_date_time(), is_special(), etc.) and they don't pick up that there is an issue. Using boost 1.38.0.
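    One workaround is to validate the string's shape before calling duration_from_string at all. A sketch below, using boost::regex since the asker is on Boost 1.38 (which predates std::regex); the pattern is a loose rendering of the documented "[-]h[h][:mm][:ss][.fff]" format and is an assumption to tighten as needed:

        #include <boost/date_time/posix_time/posix_time.hpp>
        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        bool parse_duration(const std::string& s,
                            boost::posix_time::time_duration& out)
        {
            // Roughly "[-]h[h][:mm][:ss][.fff]".
            static const boost::regex fmt(
                "-?\\d{1,2}(:\\d{2}(:\\d{2}(\\.\\d+)?)?)?");
            if (!boost::regex_match(s, fmt))
                return false;               // "ham_sandwich" and "100" stop here
            out = boost::posix_time::duration_from_string(s);
            return true;
        }

        int main()
        {
            boost::posix_time::time_duration d;
            std::cout << parse_duration("1:23:45", d) << " "
                      << parse_duration("ham_sandwich", d) << std::endl;
        }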

    Read the article

  • boost::asio::async_resolve Problem

    - by Moo-Juice
    Hi all, I'm in the process of constructing a Socket class that uses boost::asio. To start with, I made a connect method that took a host and a port and resolved it to an IP address. This worked well, so I decided to look into async_resolve. However, my callback always gets an error code of 995 (using the same destination host/port as when it worked synchronously).
    Function that starts the resolution:

        // resolve a host asynchronously
        template<typename ResolveHandler>
        void resolveHost(const String& _host, Port _port, ResolveHandler _handler) const
        {
            boost::asio::ip::tcp::endpoint ret;
            boost::asio::ip::tcp::resolver::query query(_host,
                boost::lexical_cast<std::string>(_port));
            boost::asio::ip::tcp::resolver r(m_IOService);
            r.async_resolve(query, _handler);
        }; // eo resolveHost

    Code that calls this function:

        void Socket::connect(const String& _host, Port _port)
        {
            // Anon function for resolution of the host-name and
            // asynchronous calling of the above
            auto anonResolve = [this](const boost::system::error_code& _errorCode,
                                      boost::asio::ip::tcp::resolver_iterator _epIt)
            {
                // raise event
                onResolve.raise(SocketResolveEventArgs(*this,
                    !_errorCode ? (*_epIt).host_name() : String(""), _errorCode));

                // perform connect, calling back to anonymous function
                if(!_errorCode)
                    connect(*_epIt);
            };

            // Resolve the host calling back to anonymous function
            Root::instance().resolveHost(_host, _port, anonResolve);
        }; // eo connect

    The message() function of the error_code is:

        The I/O operation has been aborted because of either a thread exit or an application request

    And my main.cpp looks like this:

        int _tmain(int argc, _TCHAR* argv[])
        {
            morse::Root root;
            TextSocket s;
            s.connect("somehost.com", 1234);

            while(true)
            {
                root.performIO(); // calls io_service::run_one()
            }
            return 0;
        }

    Thanks in advance!
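    The likely culprit, sketched below: the resolver r is a local variable, and destroying a boost::asio resolver cancels its outstanding requests, which is exactly what error 995 (operation aborted) reports. The usual fix keeps the resolver alive until the handler has run, for example via a shared_ptr captured by the completion handler (types from the question assumed; needs <boost/shared_ptr.hpp>):

        // A sketch: the captured shared_ptr keeps the resolver alive
        // until the completion handler has actually been invoked.
        template<typename ResolveHandler>
        void resolveHost(const String& _host, Port _port, ResolveHandler _handler) const
        {
            typedef boost::asio::ip::tcp::resolver resolver_t;
            boost::shared_ptr<resolver_t> r(new resolver_t(m_IOService));
            resolver_t::query query(_host, boost::lexical_cast<std::string>(_port));
            r->async_resolve(query,
                [r, _handler](const boost::system::error_code& ec,
                              resolver_t::iterator it)
                {
                    // 'r' is captured by value, so the resolver outlives
                    // the request instead of dying when resolveHost returns.
                    _handler(ec, it);
                });
        }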

    Read the article

  • How to avoid using the plld.exe utility in VS2008 (for linking C++ and Prolog code)

    - by Joshua Green
    Here is my code in its entirety. Trying "listing." at the Prolog prompt that pops up when I run the program confirms that my Prolog source code has been loaded (consulted).

        #include <iostream>
        #include <fstream>
        #include <string>
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <stdafx.h>

        using namespace std;

        #include "Windows.h"
        #include "ctype.h"
        #include "SWI-cpp.h"
        #include "SWI-Prolog.h"
        #include "SWI-Stream.h"

        int main(int argc, char** argv)
        {
            argc = 4;
            argv[0] = "libpl.dll";
            argv[1] = "-G32m";
            argv[2] = "-L32m";
            argv[3] = "-T32m";
            PL_initialise(argc, argv);   /* note: PL_initialise is called twice here;
                                            this first call's return value is ignored */
            if ( !PL_initialise(argc, argv) )
                PL_halt(1);
            PlCall( "consult(swi('plwin.rc'))" );
            PlCall( "consult('hello.pl')" );
            PL_halt( PL_toplevel() ? 0 : 1 );
        }

    So this is how to load Prolog source code (hello.pl) at run time into VS2008 without having to use plld at the VS command prompt.

    Read the article

  • Noob boost::bind member function callback question

    - by shaz
        #include <boost/bind.hpp>
        #include <iostream>
        using namespace std;
        using boost::bind;

        class A
        {
        public:
            void print(string &s)
            {
                cout << s.c_str() << endl;
            }
        };

        typedef void (*callback)();

        class B
        {
        public:
            void set_callback(callback cb) { m_cb = cb; }
            void do_callback() { m_cb(); }
        private:
            callback m_cb;
        };

        void main()
        {
            A a;
            B b;
            string s("message");
            b.set_callback(bind(A::print, &a, s));
            b.do_callback();
        }

    So what I'm trying to do is to have the print method of A stream "message" to cout when b's callback is activated. I'm getting an "unexpected number of arguments" error from MSVC10. I'm sure this is super noob basic and I'm sorry in advance.
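    Two things are going on: a member function must be named with the address-of operator (&A::print), and, more fundamentally, the object boost::bind returns is a function object, which can never convert to a plain void(*)() pointer. The usual pairing stores the callback in a boost::function instead; a sketch:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <iostream>
        #include <string>

        class A
        {
        public:
            void print(const std::string& s) { std::cout << s << std::endl; }
        };

        class B
        {
        public:
            // boost::function<void()> holds any nullary callable,
            // including the result of boost::bind.
            void set_callback(boost::function<void()> cb) { m_cb = cb; }
            void do_callback() { m_cb(); }
        private:
            boost::function<void()> m_cb;
        };

        int main()
        {
            A a;
            B b;
            std::string s("message");
            b.set_callback(boost::bind(&A::print, &a, s));  // note the '&'
            b.do_callback();
            return 0;
        }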

    Read the article

  • Garbled text when constructing emails with vmime

    - by Klaus Fiedler
    Hey, my Qt C++ program has a part where it needs to send the first 128 characters or so of the output of a bash command to an email address. The output from the tty is captured in a text box in my GUI called textEdit_displayOutput and put into my message, which I built using the message builder (the object m_vmMessage). Here is the relevant code snippet:

        m_vmMessage.getTextPart()->setCharset( vmime::charsets::US_ASCII );
        m_vmMessage.getTextPart()->setText( vmime::create <vmime::stringContentHandler> (
            ui->textEdit_displayOutput->toPlainText().toStdString() ) );
        vmime::ref <vmime::message> msg = m_vmMessage.construct();
        vmime::utility::outputStreamAdapter out( std::cout );
        msg->generate( out );

    Giving bash 'ls /' and a newline makes vmime give terminal output like this:

        ls /=0Abin etc=09 initrd.img.old mnt=09 sbin=09 tmp=09 vmlinuz.o=
        ld=0Aboot farts=09 lib=09=09 opt=09 selinux usr=0Acdrom home=09 =
        lost+found=09 proc srv=09 var=0Adev initrd.img media=09 root =

    Whereas it should look more like this:

        ls /
        bin    etc         initrd.img.old  mnt   sbin     tmp  vmlinuz.old
        boot   farts       lib             opt   selinux  usr
        cdrom  home        lost+found      proc  srv      var
        dev    initrd.img  media           root  sys      vmlinuz
        18:22>

    How do I encode the email properly? Does vmime just display it like that on purpose, and the actual content of the email is OK?

    Read the article

  • Pattern for UI configuration

    - by TERACytE
    I have a Win32 C++ program that validates user input and updates the UI with status information and options. Currently it is written like this:

        void ShowError()
        {
            SetIcon(kError);
            SetMessageString("There was an error");
            HideButton(kButton1);
            HideButton(kButton2);
            ShowButton(kButton3);
        }

        void ShowSuccess()
        {
            SetIcon(kError);
            std::string statusText (GetStatusText());
            SetMessageString(statusText);
            HideButton(kButton1);
            HideButton(kButton2);
            ShowButton(kButton3);
        }

        // plus several more methods to update the UI using similar mechanisms

    I do not like this because it duplicates code and causes me to update several methods if something changes in the UI. I am wondering if there is a design pattern or best practice to remove the duplication and make the functionality easier to understand and update. I could consolidate the code inside a config function and pass in flags to enable/disable UI items, but I am not convinced this is the best approach. Any suggestions and ideas?
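    One common refinement of the "config function with flags" idea, sketched here with assumed names: describe each screen state as plain data and funnel every update through a single apply function, so the widget logic only has to change in one place:

        struct UIState
        {
            int         icon;       // e.g. kError, kSuccess
            std::string message;
            bool        button1, button2, button3;  // visibility per button
        };

        // The only function that touches the widgets.
        void ApplyState(const UIState& s)
        {
            SetIcon(s.icon);
            SetMessageString(s.message);
            s.button1 ? ShowButton(kButton1) : HideButton(kButton1);
            s.button2 ? ShowButton(kButton2) : HideButton(kButton2);
            s.button3 ? ShowButton(kButton3) : HideButton(kButton3);
        }

        void ShowError()
        {
            UIState s;
            s.icon = kError;
            s.message = "There was an error";
            s.button1 = false; s.button2 = false; s.button3 = true;
            ApplyState(s);
        }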

    Read the article

  • g++ Linking Error on Mac while compiling FFMPEG

    - by Saptarshi Biswas
    g++ on Snow Leopard is throwing linking errors on the following piece of code.
    test.cpp:

        #include <iostream>
        using namespace std;

        #include <libavcodec/avcodec.h>   // required headers
        #include <libavformat/avformat.h>

        int main(int argc, char** argv)
        {
            av_register_all(); // offending library call
            return 0;
        }

    When I try to compile this using the following command:

        g++ test.cpp -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I get the error:

        Undefined symbols:
          "av_register_all()", referenced from:
              _main in ccUD1ueX.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Interestingly, if I have an equivalent C file, test.c:

        #include <stdio.h>
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>

        int main(int argc, char** argv)
        {
            av_register_all();
            return 0;
        }

    gcc compiles it just fine:

        gcc test.c -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I am using Mac OS X 10.6.5.

        $ g++ --version
        i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
        $ gcc --version
        i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)

    FFMPEG's libavcodec, libavformat, etc. are C libraries, and I have built them on my machine like this:

        ./configure --enable-gpl --enable-pthreads --enable-shared \
            --disable-doc --enable-libx264
        make && sudo make install

    As one would expect, libavformat indeed contains the symbol av_register_all:

        $ nm /usr/local/lib/libavformat.a | grep av_register_all
        0000000000000000 T _av_register_all
        00000000000089b0 S _av_register_all.eh

    I am inclined to believe g++ and gcc have different views of the libraries on my machine. g++ is not able to pick up the right libraries. Any clue?
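    The error message itself carries the clue: g++ is looking for av_register_all() with C++ linkage, while the library exports the plain C symbol _av_register_all. FFmpeg's headers are C headers, so when included from C++ they need C linkage declared explicitly. A sketch of the fix:

        #include <iostream>
        using namespace std;

        // Give the C headers C linkage so g++ doesn't mangle the names.
        extern "C" {
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>
        }

        int main(int argc, char** argv)
        {
            av_register_all();   // now resolves against _av_register_all
            return 0;
        }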

    Read the article

  • Extract a specific string from a curl'd result

    - by allentown
    Given this curl command:

        curl --user-agent "fogent" --silent -o page.html "http://www.google.com/search?q=insansiate"

    (Spelling is intentionally incorrect.) I want to grab the suggestion as my result. I want to be able to either grep into the page.html file, perhaps with grep -oE, or pipe it right from curl and never store a file. The result should be: 'instantiate'. I need only the word 'instantiate', or the phrase, whatever Google is auto-correcting, is what I am after. Here is the basic HTML that is returned:

        <span class=spell style="color:#cc0000">Did you mean: </span><a href="/search?hl=en&amp;ie=UTF-8&amp;&amp;sa=X&amp;ei=VEMUTMDqGoOINraK3NwL&amp;ved=0CB0QBSgA&amp;q=instantiate&amp;spell=1"class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;<span class=std>Top 2 results shown</span>

    So perhaps from/to of the string below, which I hope is unique enough to cover all my bases:

        class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;

    I keep running into issues with greedy grep; perhaps I should run it through an HTML prettify tool first to get a line break or 50 in there. I don't know of any simple way to do so in bash, which is what I would ideally like this to be in. I really don't want to deal with firing up Perl and making sure I have the correct module. Any suggestions? Thank you.
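    A sketch of one bash pipeline for the exact markup shown above; it is fragile by construction, since it depends on Google's current HTML, but it avoids both the temp file and greedy matching by letting grep -o emit only the <b><i>...</i></b> span before sed strips the tags:

        curl --user-agent "fogent" --silent "http://www.google.com/search?q=insansiate" \
          | grep -oE '<b><i>[^<]+</i></b>' \
          | sed -E 's/<[^>]+>//g'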

    Read the article

  • Thoughts about alternatives to barplot-with-error-bars

    - by gd047
    I was thinking of an alternative to the barplot-with-error-bars plot. To get an idea by example, I roughly 'sketched' what I mean using the following code:

        library(plotrix)
        plot(0:12,type="n",axes=FALSE)
        gradient.rect(1,0,3,8,col=smoothColors("red",38,"red"),border=NA,gradient="y")
        gradient.rect(4,0,6,6,col=smoothColors("blue",38,"blue"),border=NA,gradient="y")
        lines(c(2,2),c(5.5,10.5))
        lines(c(2-.5,2+.5),c(10.5,10.5))
        lines(c(2-.5,2+.5),c(5.5,5.5))
        lines(c(5,5),c(4.5,7.5))
        lines(c(5-.5,5+.5),c(7.5,7.5))
        lines(c(5-.5,5+.5),c(4.5,4.5))
        gradient.rect(7,8,9,10.5,col=smoothColors("red",100,"white"),border=NA,gradient="y")
        gradient.rect(7,5.5,9,8,col=smoothColors("white",100,"red"),border=NA,gradient="y")
        lines(c(7,9),c(8,8),lwd=3)
        gradient.rect(10,6,12,7.5,col=smoothColors("blue",100,"white"),border=NA,gradient="y")
        gradient.rect(10,4.5,12,6,col=smoothColors("white",100,"blue"),border=NA,gradient="y")
        lines(c(10,12),c(6,6),lwd=3)

    The idea was to use bars like the ones in the second pair, instead of those in the first. However, there is something that I would like to change in the colors. Instead of a linear gradient fill, I would like to adjust the color intensity in accordance with the values of the pdf of the mean estimator. Do you think it is possible? A slightly different idea (where gradient fill isn't an issue) was to use one (or 2 back-to-back) bell curve(s) filled with (solid) color, instead of a rectangle. See for example the shape that corresponds to the letter F here. In that case the bell curve(s) should (ideally) be drawn using something like:

        plot(x, dnorm(x, mean = my.mean, sd = std.error.of.the.mean))

    I have no idea, though, of a way to draw rotated (and filled with color) bell curves. Of course, all of the above may be freely judged as midnight springtime dreams :-)

    Read the article

  • Having issues with initializing character array

    - by quandrum
    OK, this is for homework about hashtables, but this is the simple stuff I thought I was able to do from earlier classes, and I'm tearing my hair out. The professor is not being responsive enough, so I thought I'd try here. We have a hashtable of stock objects. The stock objects are created like so:

        stock("IBM", "International Business Machines", 2573,
              date(date::MAY, 23, 1967))

    My constructor looks like:

        stock::stock(char const * const symbol, char const * const name,
                     int sharePrice, date priceDate)
            : symbol(NULL), name(NULL), sharePrice(sharePrice), dateOfPrice(priceDate)
        {
            setSymbol(symbol);
            setName(name);
        }

    and setSymbol looks like this (setName is identical):

        void stock::setSymbol(const char* symbol)
        {
            if (this->symbol)
                delete [] this->symbol;
            this->symbol = new char[strlen(symbol)+1];
            strcpy(this->symbol,symbol);
        }

    and it refuses to allocate on the line

        this->symbol = new char[strlen(symbol)+1];

    with a std::bad_alloc. name and symbol are declared:

        char * name;
        char * symbol;

    I feel like this is exactly how I've done it in previous code. I'm sure it's something silly with pointers. Can anyone help?
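    For what it's worth, the allocation shown is valid on its own; a bad_alloc on a tiny new[] usually means the heap was already corrupted somewhere else (an earlier overflow or stray delete). A sketch of the usual way to sidestep this whole class of bugs, swapping the raw char* members for std::string (the date type is taken from the question):

        #include <string>

        class stock
        {
        public:
            stock(const char* symbol, const char* name,
                  int sharePrice, date priceDate)
                : symbol(symbol), name(name),
                  sharePrice(sharePrice), dateOfPrice(priceDate) {}

            // No new/delete/strcpy to get wrong.
            void setSymbol(const char* s) { symbol = s; }
            void setName(const char* n)   { name = n; }

        private:
            std::string symbol;
            std::string name;
            int         sharePrice;
            date        dateOfPrice;
        };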

    Read the article

  • Injecting a runtime exception into a pthread sometimes fails. How can I fix that?

    - by lionbest
    I try to inject an exception into a thread using signals, but sometimes the exception does not get caught. For example, the following code:

        void _sigthrow(int sig)
        {
            throw runtime_error(strsignal(sig));
        }

        struct sigaction sigthrow = {{&_sigthrow}};

        void* thread1(void*)
        {
            sigaction(SIGINT,&sigthrow,NULL);
            try {
                while(1) usleep(1);
            } catch(exception &e) {
                cerr << "Thread1 catched " << e.what() << endl;
            }
        };

        void* thread2(void*)
        {
            sigaction(SIGINT,&sigthrow,NULL);
            try {
                while(1);
            } catch(exception &e) {
                cerr << "Thread2 catched " << e.what() << endl; // never goes here
            }
        };

    If I try to execute it like:

        int main()
        {
            pthread_t p1,p2;
            pthread_create( &p1, NULL, &thread1, NULL );
            pthread_create( &p2, NULL, &thread2, NULL );
            sleep(1);
            pthread_kill( p1, SIGINT);
            pthread_kill( p2, SIGINT);
            sleep(1);
            return EXIT_SUCCESS;
        }

    I get the following output:

        Thread1 catched Interrupt
        terminate called after throwing an instance of 'std::runtime_error'
          what():  Interrupt
        Aborted

    How can I make the second thread catch the exception? Is there a better way to inject exceptions?
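    Background worth knowing: throwing out of an asynchronous signal handler is undefined behavior; it only appears to work in thread1 because the signal happens to land inside usleep. The portable pattern sets an async-signal-safe flag in the handler and throws from ordinary code. A sketch:

        #include <csignal>
        #include <stdexcept>

        static volatile sig_atomic_t got_sigint = 0;

        extern "C" void on_sigint(int)
        {
            got_sigint = 1;   // the only async-signal-safe thing we do
        }

        void* thread2(void*)
        {
            std::signal(SIGINT, on_sigint);
            try {
                while (1) {
                    if (got_sigint)                       // poll the flag
                        throw std::runtime_error("Interrupt");
                    /* ... do the actual work here ... */
                }
            } catch (std::exception& e) {
                /* handle the injected "exception" normally */
            }
            return 0;
        }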

    Read the article

  • How do I get the screen to pause?

    - by Dakota
    So I'm learning C++ and I was given this example and wanted to run it. But I cannot get the window to stay up unless I change the program. How do I get Microsoft Visual Studio 2010 to keep the screen up when it gets to the end of the program, after I release it?

        #include <iostream>
        using namespace std;

        int area(int length, int width);   /* function declaration */

        /* MAIN PROGRAM: */
        int main()
        {
            int this_length, this_width;
            cout << "Enter the length: ";  /* <--- line 9 */
            cin >> this_length;
            cout << "Enter the width: ";
            cin >> this_width;
            cout << "\n";                  /* <--- line 13 */
            cout << "The area of a " << this_length << "x" << this_width;
            cout << " rectangle is " << area(this_length, this_width);
            return 0;
        }
        /* END OF MAIN PROGRAM */

        /* FUNCTION TO CALCULATE AREA: */
        int area(int length, int width)    /* start of function definition */
        {
            int number;
            number = length * width;
            return number;
        }                                  /* end of function definition */
        /* END OF FUNCTION */
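    Two common fixes, sketched below: run with Ctrl+F5 ("Start Without Debugging"), which makes Visual Studio hold the console open for you, or make the program itself wait for a keypress before returning:

        #include <iostream>

        int main()
        {
            // ... the program's real work ...

            std::cout << "Press Enter to exit.";
            std::cin.ignore();  // swallow the newline left behind by cin >> reads
            std::cin.get();     // wait for Enter before the window closes
            return 0;
        }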

    Read the article

  • How to produce 64-bit masks?

    - by egiakoum1984
    Based on the following simple program, the bitwise left shift operator seems to work only on 32 bits. Is that true?

        #include <iostream>
        #include <stdlib.h>
        using namespace std;

        int main(void)
        {
            long long currentTrafficTypeValueDec;
            int input;
            cout << "Enter input:" << endl;
            cin >> input;
            currentTrafficTypeValueDec = 1 << (input - 1);
            cout << currentTrafficTypeValueDec << endl;
            cout << (1 << (input - 1)) << endl;
            return 0;
        }

    The output of the program:

        Enter input:
        30
        536870912
        536870912

        Enter input:
        62
        536870912
        536870912

    How could I produce 64-bit masks?
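    The shift happens in the type of the left operand, and the literal 1 is a 32-bit int, so 1 << (input - 1) is computed in 32 bits (a shift by 61 there is undefined behavior) before it is ever widened for the assignment. Making the operand 64-bit first fixes it; a sketch:

        #include <iostream>

        int main()
        {
            int input;
            std::cin >> input;
            long long mask = 1LL << (input - 1);  // LL: shift happens in 64 bits
            std::cout << mask << std::endl;       // input 62 -> 2305843009213693952
            return 0;
        }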

    Read the article

  • C++ Check Substring of a String

    - by user69514
    I'm trying to check whether or not the second argument of my program is a substring of the first argument. The problem is that it only works if the substring starts with the same letter as the string, i.e.:

        Michigan - Mich (this works)
        Michigan - Mi   (this works)
        Michigan - igan (this doesn't work)

        #include <stdio.h>
        #include <string.h>
        #include <string>
        using namespace std;

        bool my_strstr( string str, string sub )
        {
            bool flag = true;
            int startPosition = -1;
            char subStart = str.at(0);
            char strStart;

            // find starting position
            for(int i=0; i<str.length(); i++){
                if(str.at(i) == subStart){
                    startPosition = i;
                    break;
                }
            }

            for(int i=0; i<sub.size(); i++){
                if(sub.at(i) != str.at(startPosition)){
                    flag = false;
                    break;
                }
                startPosition++;
            }
            return flag;
        }

        int main(int argc, char **argv)
        {
            if (argc != 3) {
                printf ("Usage: check <string one> <string two>\n");
            }
            string str1 = argv[1];
            string str2 = argv[2];
            bool result = my_strstr(str1, str2);
            if(result == 1){
                printf("%s is a substring of %s\n", argv[2], argv[1]);
            }
            else{
                printf("%s is not a substring of %s\n", argv[2], argv[1]);
            }
            return 0;
        }
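    For the record, the standard library already solves this: std::string::find tries every starting position, which is exactly what the hand-rolled loop is missing (it anchors on a single first-character match and never retries). A sketch of the same check:

        #include <string>

        bool my_strstr(const std::string& str, const std::string& sub)
        {
            // npos means "not found at any position".
            return str.find(sub) != std::string::npos;
        }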

    Read the article

  • Problem with futures in C++0x

    - by Eternal Learner
    Hi, I have written a small program to understand how futures work in C++0x. While running the code I get an error like "error: 'printEn' was not declared in this scope". I am unable to understand what the problem is. Kindly point out what I am doing wrong here, and if possible write the correct code for the same.

        #include <future>
        #include <iostream>
        using namespace std;

        int printFn()
        {
            for(int i = 0; i < 100; i++)
            {
                cout << "thread " << i << endl;
            }
            return 1;
        }

        int main()
        {
            future<int> the_answer2=async(printEn);
            future<int> the_answer1=async(printEn);
            return 0;
        }
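    The compiler is being literal: the function is defined as printFn, but main asks for printEn, which doesn't exist. A sketch of the corrected program, with .get() added so the futures are actually waited on:

        #include <future>
        #include <iostream>
        using namespace std;

        int printFn()
        {
            for(int i = 0; i < 100; i++)
                cout << "thread " << i << endl;
            return 1;
        }

        int main()
        {
            future<int> the_answer2 = async(printFn);  // was printEn
            future<int> the_answer1 = async(printFn);
            the_answer2.get();
            the_answer1.get();
            return 0;
        }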

    Read the article

  • Getting base address of a process

    - by yoni0505
    I'm trying to make a program that reads the timer value from Minesweeper (OS: Windows 7 64-bit). Using Cheat Engine I found the base address of the variable, but it changes every time I run Minesweeper. What do I need to do to find out the base address automatically? Does it have something to do with the executable's base address? Here's my code:

        #include <windows.h>
        #include <iostream>
        using namespace std;

        int main()
        {
            DWORD baseAddress = 0xFF1DAA38; // always changing
            DWORD offset1 = 0x18;
            DWORD offset2 = 0x20;
            DWORD pAddress1;
            DWORD pAddress2;
            float value = 0;
            DWORD pid;
            HWND hwnd;

            hwnd = FindWindow(NULL,"Minesweeper");
            if(!hwnd) // didn't find the window
            {
                cout << "Window not found!\n";
                cin.get();
            }
            else
            {
                GetWindowThreadProcessId(hwnd,&pid);
                HANDLE phandle = OpenProcess(PROCESS_VM_READ,0,pid); // get permission to read
                if(!phandle) // failed to get permission
                {
                    cout << "Could not get handle!\n";
                    cin.get();
                }
                else
                {
                    ReadProcessMemory(phandle,(void*)(baseAddress),&pAddress1,sizeof(pAddress1),0);
                    ReadProcessMemory(phandle,(void*)(pAddress1 + offset1),&pAddress2,sizeof(pAddress2),0);
                    while(1)
                    {
                        ReadProcessMemory(phandle,(void*)(pAddress2 + offset2),&value,sizeof(value),0);
                        cout << value << "\n";
                        Sleep(1000);
                    }
                }
            }
        }
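    It does: Windows 7 randomizes the executable's load address (ASLR), so what Cheat Engine shows is really module base plus a fixed offset, and the base has to be looked up on each run. A sketch using the ToolHelp snapshot API (module name assumed; this also presumes an ANSI build, since MODULEENTRY32's szModule is a TCHAR string):

        #include <windows.h>
        #include <tlhelp32.h>
        #include <cstring>

        // Find the load address of a module inside process 'pid'.
        DWORD_PTR GetModuleBase(DWORD pid, const char* moduleName)
        {
            DWORD_PTR base = 0;
            HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE, pid);
            if (snap == INVALID_HANDLE_VALUE)
                return 0;

            MODULEENTRY32 me;
            me.dwSize = sizeof(me);
            if (Module32First(snap, &me))
            {
                do
                {
                    if (_stricmp(me.szModule, moduleName) == 0)
                    {
                        base = (DWORD_PTR)me.modBaseAddr;
                        break;
                    }
                } while (Module32Next(snap, &me));
            }
            CloseHandle(snap);
            return base;
        }

        // Usage sketch: the static offset is whatever Cheat Engine reported
        // relative to the module, e.g. "Minesweeper.exe"+offset:
        // DWORD_PTR baseAddress = GetModuleBase(pid, "Minesweeper.exe") + staticOffset;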

    Read the article

  • Get an array of structures from native dll to c# application

    - by PaulH
    I have a C# .NET 2.0 CF project where I need to invoke a method in a native C++ DLL. This native method returns an array of type TableEntry. At the time the native method is called, I do not know how large the array will be. How can I get the table from the native DLL to the C# project? Below is effectively what I have now.

        // in C# .NET 2.0 CF project
        [StructLayout(LayoutKind.Sequential)]
        public struct TableEntry
        {
            [MarshalAs(UnmanagedType.LPWStr)]
            public string description;
            public int item;
            public int another_item;
            public IntPtr some_data;
        }

        [DllImport("MyDll.dll", CallingConvention = CallingConvention.Winapi, CharSet = CharSet.Auto)]
        public static extern bool GetTable(ref TableEntry[] table);

        SomeFunction()
        {
            TableEntry[] table = null;
            bool success = GetTable( ref table );
            // at this point, the table is empty
        }

        // In Native C++ DLL
        std::vector< TABLE_ENTRY > global_dll_table;

        extern "C" __declspec(dllexport) bool GetTable( TABLE_ENTRY* table )
        {
            table = &global_dll_table.front();
            return true;
        }

    Thanks,
    PaulH
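    One standard shape for this when the caller can't know the size up front is a two-call protocol: the managed side first asks for the count, allocates the array, and the native side fills the caller's buffer. Note that assigning to the table parameter, as in the original, only changes a local copy of the pointer and never reaches C#. A sketch of the native half under the question's names; the C# side would then declare GetTable as taking an [In, Out] TableEntry[] plus a capacity:

        // In the native C++ DLL
        #include <vector>

        std::vector<TABLE_ENTRY> global_dll_table;

        extern "C" __declspec(dllexport) int GetTableSize()
        {
            return (int)global_dll_table.size();
        }

        extern "C" __declspec(dllexport) bool GetTable(TABLE_ENTRY* buffer, int capacity)
        {
            // Fill memory the caller owns instead of reseating a by-value pointer.
            if (!buffer || capacity < (int)global_dll_table.size())
                return false;
            for (size_t i = 0; i < global_dll_table.size(); ++i)
                buffer[i] = global_dll_table[i];
            return true;
        }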

    Read the article

  • Poco library for C++, declare namespace for custom element

    - by Mikhail
    I want to create an XML document by building a DOM document from scratch, with syntax like:

        AutoPtr<Document> doc = new Document;
        AutoPtr<Element> root = doc->createElement("root");
        doc->appendChild(root);
        AutoPtr<Element> element1 = doc->createElementNS("http://ns1", "ns1:element1");
        root->appendChild(element1);
        AutoPtr<Element> element2 = doc->createElementNS("http://ns1", "ns1:element2");
        root->appendChild(element2);

        DOMWriter writer;
        writer.setNewLine("\n");
        writer.setOptions(XMLWriter::PRETTY_PRINT);
        writer.writeNode(std::cout, doc);

    But when I write it, I get this result:

        <root>
            <ns1:element1 xmlns:ns1="http://ns1"/>
            <ns1:element2 xmlns:ns1="http://ns1"/>
        </root>

    So namespace ns1 is declared twice, and I want to declare it once, inside the "root" element. Is there a way to get the following representation?

        <root xmlns:ns1="http://ns1">
            <ns1:element1/>
            <ns1:element2/>
        </root>
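    A sketch of one way to get that shape, assuming Poco's DOM follows the usual DOM Level 2 behavior here: declare the prefix once on the root element via setAttributeNS with the xmlns namespace URI, so the writer has no reason to redeclare it on each child:

        AutoPtr<Document> doc = new Document;
        AutoPtr<Element> root = doc->createElement("root");
        doc->appendChild(root);

        // Declare the ns1 prefix once, on the root.
        root->setAttributeNS("http://www.w3.org/2000/xmlns/",
                             "xmlns:ns1", "http://ns1");

        AutoPtr<Element> element1 = doc->createElementNS("http://ns1", "ns1:element1");
        root->appendChild(element1);
        AutoPtr<Element> element2 = doc->createElementNS("http://ns1", "ns1:element2");
        root->appendChild(element2);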

    Read the article

  • Why can I call a non-const member function pointer from a const method?

    - by sdg
    A co-worker asked about some code like this that originally had templates in it. I have removed the templates, but the core question remains: why does this compile OK?

        #include <iostream>

        class X
        {
        public:
            void foo()
            {
                std::cout << "Here\n";
            }
        };

        typedef void (X::*XFUNC)();

        class CX
        {
        public:
            explicit CX(X& t, XFUNC xF) : object(t), F(xF) {}
            void execute() const
            {
                (object.*F)();
            }
        private:
            X& object;
            XFUNC F;
        };

        int main(int argc, char* argv[])
        {
            X x;
            const CX cx(x, &X::foo);
            cx.execute();
            return 0;
        }

    Given that CX is a const object, and its member function execute is const, inside CX::execute the this pointer is const. But I am able to call a non-const member function through a member function pointer. Are member function pointers a documented hole in the const-ness of the world? What (presumably obvious to others) issue have we missed?
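    A sketch that may clarify the usual answer: const-ness of cx makes the members const, but object is a reference, and a const reference member still refers to a non-const X, so calling a non-const member through it is fine. Store the X by value instead and the very same call stops compiling:

        class CX2
        {
        public:
            explicit CX2(X t, XFUNC xF) : object(t), F(xF) {}
            void execute() const
            {
                // error: inside a const member function, 'object' is a
                // const X here, but F points at a non-const member of X.
                // (object.*F)();
            }
        private:
            X object;   // by value this time, so const propagates to it
            XFUNC F;
        };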

    Read the article

  • C89, Mixing Variable Declarations and Code

    - by rutski
    I'm very curious to know why exactly C89 compilers will dump on you when you try to mix variable declarations and code, like this for example:

        rutski@imac:~$ cat test.c
        #include <stdio.h>

        int main(void)
        {
            printf("Hello World!\n");
            int x = 7;
            printf("%d!\n", x);
            return 0;
        }
        rutski@imac:~$ gcc -std=c89 -pedantic test.c
        test.c: In function ‘main’:
        test.c:7: warning: ISO C90 forbids mixed declarations and code
        rutski@imac:~$

    Yes, you can avoid this sort of thing by staying away from -pedantic. But then your code is no longer standards compliant. And as anybody capable of answering this post probably already knows, this is not just a theoretical concern. Platforms like Microsoft's C compiler enforce this quirk of the standard under any and all circumstances. Given how ancient C is, I would imagine that this feature is due to some historical issue dating back to the extraordinary hardware limitations of the 70's, but I don't know the details. Or am I totally wrong there?

    Read the article

  • Class Assignment Operators

    - by Maxpm
    I made the following operator overloading test:

        #include <iostream>
        #include <string>
        using namespace std;

        class TestClass
        {
            string ClassName;
        public:
            TestClass(string Name)
            {
                ClassName = Name;
                cout << ClassName << " constructed." << endl;
            }

            ~TestClass()
            {
                cout << ClassName << " destructed." << endl;
            }

            void operator=(TestClass Other)
            {
                cout << ClassName << " in operator=" << endl;
                cout << "The address of the other class is " << &Other << "." << endl;
            }
        };

        int main()
        {
            TestClass FirstInstance("FirstInstance");
            TestClass SecondInstance("SecondInstance");

            FirstInstance = SecondInstance;
            SecondInstance = FirstInstance;

            return 0;
        }

    The assignment operator behaves as expected, outputting the address of the other class. Now, how would I actually assign something from the other class? For example, something like this:

        void operator=(TestClass Other)
        {
            ClassName = Other.ClassName;
        }
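    Exactly like that; for completeness, a sketch of the conventional signature, which takes the source by const reference and returns *this so assignments can chain:

        #include <iostream>
        #include <string>

        class TestClass
        {
            std::string ClassName;
        public:
            TestClass(std::string Name) : ClassName(Name) {}

            // Canonical copy assignment: const& parameter, member-wise
            // copy, return *this so "a = b = c;" works.
            TestClass& operator=(const TestClass& other)
            {
                ClassName = other.ClassName;
                return *this;
            }

            const std::string& name() const { return ClassName; }
        };

        int main()
        {
            TestClass a("First"), b("Second");
            a = b;
            std::cout << a.name() << std::endl;  // prints "Second"
            return 0;
        }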

    Read the article

  • scanf segfaults and various other anomalies inside while loop

    - by Shadow
        while(1)
        {
            // Command prompt
            char *command;
            printf("%s>", current_working_directory);
            scanf("%s", command);  // <-- segfaults after input has been received
            printf("\ncommand:%s\n", command);
        }

    I am getting a few different errors, and they don't really seem reproducible (except for the segfault at this point). This code worked fine about 10 minutes ago, then it infinite-looped on the printf command, and now it segfaults on the line mentioned above. The only thing I changed was scanf("%s",command); to what it currently is. If I change the command variable to be an array it works; obviously this is because the storage is set aside for it.
    1) I got prosecuted for telling someone that they needed to malloc a pointer (but that usually seems to solve the problem, such as making it an array).
    2) The command I am entering is "magic", 5 characters, so there shouldn't be any crazy stack overflow.
    3) I am running on Mac OS X 10.6 with the newest version of Xcode (non-OS4) and standard gcc.
    4) This is how I compile the program:

        gcc --std=c99 -W sfs.c

    Just trying to figure out what is going on. Being that this is for a school project I am never going to have to see again, I will just code some noob workaround that would make my boss cry :) But afterwards I would love to figure out why this is happening, not just make some fix for it, and if there is some fix for it, why that fix works.
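    The short answer, sketched below: command is an uninitialized pointer, so scanf writes through whatever garbage happens to be on the stack, which is why the symptom keeps changing (loop, segfault, or silently "working"). Give scanf real storage and bound the read:

        #include <stdio.h>

        int main(void)
        {
            char command[256];                     /* actual storage, not a wild pointer */
            while (1) {
                printf("> ");
                if (scanf("%255s", command) != 1)  /* %255s leaves room for the '\0' */
                    break;
                printf("\ncommand:%s\n", command);
            }
            return 0;
        }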

    Read the article
