Search Results

Search found 3511 results on 141 pages for 'const correctness'.


  • Can operator<< in a derived class call another operator<< in a base class in C++?

    - by ivory
    In my code, Manager is derived from Employee and each of them has an operator<< overload.

        class Employee {
        protected:
            int salary;
            int rank;
        public:
            int getSalary() const { return salary; }
            int getRank() const { return rank; }
            Employee(int s, int r) : salary(s), rank(r) {}
        };

        ostream& operator<<(ostream& out, Employee& e) {
            out << "Salary: " << e.getSalary() << " Rank: " << e.getRank() << endl;
            return out;
        }

        class Manager : public Employee {
        public:
            Manager(int s, int r) : Employee(s, r) {}
        };

        ostream& operator<<(ostream& out, Manager& m) {
            out << "Manager: ";
            cout << (Employee)m << endl; // does not compile; how do I call the Employee overload?
            return out;
        }

    I hoped cout << (Employee)m << endl; would call ostream& operator<<(ostream& out, Employee& e), but it fails to compile.
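
    A note not part of the original question: the cast (Employee)m slices m into a temporary Employee, and a temporary cannot bind to the non-const Employee& parameter of the base overload, which is why the line does not compile. A minimal sketch of one common fix, taking the operands by const reference and casting to a base-class reference:

        ostream& operator<<(ostream& out, const Employee& e) {
            out << "Salary: " << e.getSalary() << " Rank: " << e.getRank();
            return out;
        }

        ostream& operator<<(ostream& out, const Manager& m) {
            out << "Manager: ";
            // A reference cast involves no slicing and no temporary,
            // so the Employee overload above is selected.
            out << static_cast<const Employee&>(m);
            return out;
        }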

    Read the article

  • PostMessage does not seem to be working.

    - by Vaccano
    I am trying to use PostMessage to send a tab key. Here is my code:

        // This class allows us to send a tab key when the enter key
        // is pressed for the mooseworks mask control.
        public class MaskKeyControl : MaskedEdit
        {
            // [DllImport("coredll.dll", SetLastError = true, CharSet = CharSet.Auto)]
            // static extern IntPtr SendMessage(IntPtr hWnd, UInt32 Msg, Int32 wParam, Int32 lParam);

            [return: MarshalAs(UnmanagedType.Bool)]
            // I am calling this on a Windows Mobile device so the dll is coredll.dll
            [DllImport("coredll.dll", SetLastError = true)]
            static extern bool PostMessage(IntPtr hWnd, uint Msg, Int32 wParam, Int32 lParam);

            public const Int32 VK_TAB = 0x09;
            public const Int32 WM_KEYDOWN = 0x100;

            protected override void OnKeyDown(KeyEventArgs e)
            {
                if (e.KeyData == Keys.Enter)
                {
                    PostMessage(this.Handle, WM_KEYDOWN, VK_TAB, 0);
                    return;
                }
                base.OnKeyDown(e);
            }

            protected override void OnKeyPress(KeyPressEventArgs e)
            {
                if (e.KeyChar == '\r')
                    e.Handled = true;
                base.OnKeyPress(e);
            }
        }

    When I press Enter the code gets called, but nothing happens. Then I press Tab and it works fine. (So there is something wrong with my sending of the Tab message.)

    Read the article

  • C++: passing an unknown type to a function, and the generic class type

    - by user259789
    I am trying to create a generic class to write and read objects to/from a file. I called it ActiveRecord; the class only has one method, which saves the object itself:

        void ActiveRecord::saveRecord(){
            string fileName = "data.dat";
            ofstream stream(fileName.c_str(), ios::out);
            if (!stream) {
                cerr << "Error opening file: " << fileName << endl;
                exit(1);
            }
            stream.write(reinterpret_cast<const char *> (this), sizeof(ActiveRecord));
            stream.close();
        }

    Now I'm extending this class with a User class:

        class User : public ActiveRecord
        {
        public:
            User(void);
            ~User(void);
            string name;
            string lastName;
        };

    To create and save the user I would like to do something like:

        User user = User();
        user.name = "John";
        user.lastName = "Smith";
        user.save();

    How can I get this ActiveRecord::saveRecord() method to take any object, and its class definition, so that it writes whatever I send it? It should look like:

        void ActiveRecord::saveRecord(foo_instance, FooClass){
            string fileName = "data.dat";
            ofstream stream(fileName.c_str(), ios::out);
            if (!stream) {
                cerr << "Error opening file: " << fileName << endl;
                exit(1);
            }
            stream.write(reinterpret_cast<const char *> (foo_instance), sizeof(FooClass));
            stream.close();
        }

    And while we're at it, what is the default object type in C++? E.g. in Objective-C it's id, in Java it's Object, in AS3 it's Object. What is it in C++?
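
    A note not part of the original question: C++ has no universal root class comparable to Java's Object or Objective-C's id; the usual substitutes are a template parameter (a compile-time "any type") or void* (a type-erased pointer). A hedged sketch of the template approach, which lets one saveRecord work for any derived class without passing the class explicitly; note that writing raw bytes this way only round-trips trivially copyable data, so the std::string members of User would not actually be persisted correctly:

        class ActiveRecord {
        public:
            // Hypothetical template member: T is deduced from the argument,
            // so the pointer and sizeof(T) cover the complete derived object.
            template <class T>
            void saveRecord(const T& obj, const std::string& fileName = "data.dat") {
                std::ofstream stream(fileName.c_str(), std::ios::out | std::ios::binary);
                if (!stream) {
                    std::cerr << "Error opening file: " << fileName << std::endl;
                    exit(1);
                }
                stream.write(reinterpret_cast<const char*>(&obj), sizeof(T));
            }
        };

        // usage sketch
        User user;
        user.name = "John";
        user.lastName = "Smith";
        user.saveRecord(user);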

    Read the article

  • Strategy pattern with type reflection affecting performance?

    - by Aurélien Ribon
    Hello! I am building graphs. A graph consists of nodes linked to each other with links (indeed my dear). In order to assign a given behavior to each node, I implemented the strategy pattern:

        class Node {
            public BaseNodeBehavior Behavior { get; set; }
        }

    As a result, in many parts of the application, I am extensively using type reflection (is checks) to know which behavior a node has:

        if (node.Behavior is NodeDataOutputBehavior)
            workOnOutputNode(node)
        ...

    My graph can grow to thousands of nodes. Is type reflection greatly affecting performance? Should I use something other than the strategy pattern? I'm using strategy because I need behavior inheritance. Basically, a behavior can be Data or Operator, a Data behavior can be IO, Const or Intermediate, and finally an IO behavior can be Input or Output. So if I use an enumeration, I won't be able to test whether a node's behavior is of the Data kind; I will need to test whether it is one of [Input, Output, Const or Intermediate]. And if I later want to add another behavior of the Data kind, I'm screwed: every data-testing method will need to be changed.

    Read the article

  • non-copyable objects and value initialization: g++ vs msvc

    - by R Samuel Klatchko
    I'm seeing some different behavior between g++ and MSVC around value-initializing non-copyable objects. Consider a class that is non-copyable:

        class noncopyable_base
        {
        public:
            noncopyable_base() {}
        private:
            noncopyable_base(const noncopyable_base &);
            noncopyable_base &operator=(const noncopyable_base &);
        };

        class noncopyable : private noncopyable_base
        {
        public:
            noncopyable() : x_(0) {}
            noncopyable(int x) : x_(x) {}
        private:
            int x_;
        };

    and a template that uses value initialization so that the value will get a known value even when the type is POD:

        template <class T>
        void doit()
        {
            T t = T();
            ...
        }

    and trying to use those together:

        doit<noncopyable>();

    This works fine on MSVC as of VC++ 9.0 but fails on every version of g++ I tested this with (including version 4.5.0) because the copy constructor is private. Two questions:

    1. Which behavior is standards compliant?
    2. Any suggestion of how to work around this in gcc? (To be clear, changing it to T t; is not an acceptable solution, as this breaks POD types.)

    P.S. I see the same problem with boost::noncopyable.
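
    A note not part of the original question: in C++03, T t = T(); is copy-initialization from a temporary, which formally requires an accessible copy constructor even when the copy is elided, so g++ appears to be the stricter, conforming compiler here. A hedged sketch of a common workaround that value-initializes without ever copying, by performing the initialization in a member initializer list (essentially what boost::value_initialized does; the helper name below is made up):

        // Minimal sketch, assuming C++03.
        template <class T>
        struct value_initialized_holder {
            T value;
            value_initialized_holder() : value() {}   // value-initializes the member, no copy needed
        };

        template <class T>
        void doit()
        {
            value_initialized_holder<T> t;
            // use t.value instead of t ...
        }

        // works for POD types (zero-initialized) and for non-copyable classes
        doit<noncopyable>();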

    Read the article

  • Project Euler Problem #11

    - by SoulBeaver
    Source: http://projecteuler.net/index.php?section=problems&id=11

    Quick overview: take a 20x20 grid of numbers and compute the largest product of 4 adjacent numbers in a horizontal, vertical, or diagonal line.

    My current approach is to divide the 20x20 grid up into single rows and single columns and go from there with a much more manageable grid. The code I'm using to divide the field into rows is:

        void fillRows ( string::const_iterator& fieldIter, list<int>& rowElements, vector<list<int>>& rows )
        {
            int count(0);

            for( ; fieldIter < field.end(); ++fieldIter )
            {
                if(isdigit(field[*fieldIter]))
                {
                    rowElements.push_back(toInt(field[*fieldIter]));
                    ++count;
                }

                if(count == 40)
                {
                    rows.push_back(rowElements);
                    count = 0;
                    rowElements.clear();
                }
            }
        }

    Short explanation: I have the field set as static const std::string field, and I am filling a vector with lists of rows. Why a list? Because the queue doesn't have a clear function; also, it's practice using STL containers and not ones I write myself.

    However, this thing isn't working. Oftentimes I see it omitting a character (the function toInt parses the const char as int) and I end up with 18 rows, two rows short of the 20x20 grid. The length of the rows seems good:

        Rows: 18
        RowElements[0]: 40 (instead of pairs I saved each number individually. Will fix that later)

    What am I doing wrong?
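
    A note not part of the original question: fieldIter already points at a character, so field[*fieldIter] indexes the string with that character's numeric value (e.g. field['7'] is field[55]) instead of testing the character itself, which would explain the dropped digits. A hedged sketch of the loop with the iterator dereferenced directly:

        void fillRows( string::const_iterator& fieldIter, list<int>& rowElements, vector<list<int> >& rows )
        {
            int count(0);

            for( ; fieldIter != field.end(); ++fieldIter )
            {
                if( isdigit(*fieldIter) )                     // test the character itself
                {
                    rowElements.push_back(toInt(*fieldIter)); // and convert it directly
                    ++count;
                }

                if( count == 40 )
                {
                    rows.push_back(rowElements);
                    count = 0;
                    rowElements.clear();
                }
            }
        }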

    Read the article

  • [MFC] Creating multiple dialogs in an MFC app with no main window, they become children of each other

    - by John
    (title updated) Following on from this question, now I have a clearer picture of what's going on...

    I have an MFC application with no main window, which exposes an API to create dialogs. When I call some of these methods repeatedly, the dialogs created are parented to each other instead of all being parented to the desktop... I have no idea why. But anyway, even after creation I am unable to change the parent back to NULL or CWnd::GetDesktopWindow()... if I call SetParent followed by GetParent, nothing has changed.

    So apart from the really weird question of why Windows is magically parenting each dialog to the last one created, is there anything I'm missing to be able to set these windows as children of the desktop?

    UPDATED: I have found the reason for all this, but not the solution. From my dialog constructor, we end up in:

        BOOL CDialog::CreateIndirect(LPCDLGTEMPLATE lpDialogTemplate, CWnd* pParentWnd,
                                     void* lpDialogInit, HINSTANCE hInst)
        {
            ASSERT(lpDialogTemplate != NULL);
            if (pParentWnd == NULL)
                pParentWnd = AfxGetMainWnd();
            m_lpDialogInit = lpDialogInit;
            return CreateDlgIndirect(lpDialogTemplate, pParentWnd, hInst);
        }

    Note: if (pParentWnd == NULL) pParentWnd = AfxGetMainWnd();

    The call stack from my dialog constructor looks like this:

        mfc80d.dll!CDialog::CreateIndirect(const DLGTEMPLATE * lpDialogTemplate=0x005931a8, CWnd * pParentWnd=0x00000000, void * lpDialogInit=0x00000000, HINSTANCE__ * hInst=0x00400000)
        mfc80d.dll!CDialog::CreateIndirect(void * hDialogTemplate=0x005931a8, CWnd * pParentWnd=0x00000000, HINSTANCE__ * hInst=0x00400000)
        mfc80d.dll!CDialog::Create(const char * lpszTemplateName=0x0000009d, CWnd * pParentWnd=0x00000000)
        mfc80d.dll!CDialog::Create(unsigned int nIDTemplate=157, CWnd * pParentWnd=0x00000000)
        MyApp.exe!CMyDlg::CMyDlg(CWnd * pParent=0x00000000)

    Running in the debugger, if I manually change pParentWnd back to 0 in CDialog::CreateIndirect, everything works fine... but how do I stop it happening in the first place?
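
    A note not part of the original question: with no registered main window, AfxGetMainWnd() evidently hands back the previously created dialog here, and that becomes the parent. A hedged sketch of one way around it, passing an explicit parent so the NULL fallback in CreateIndirect never runs (IDD_MYDLG stands in for the real template ID):

        // Hypothetical creation call: hand CDialog::Create the desktop window
        // explicitly instead of letting pParentWnd default to NULL.
        CMyDlg* pDlg = new CMyDlg();
        pDlg->Create(IDD_MYDLG, CWnd::GetDesktopWindow());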

    Read the article

  • Problem Loading multiple textures using multiple shaders with GLSL

    - by paj777
    I am trying to use multiple textures in the same scene, but no matter what I try the same texture is loaded for each object. This is what I am doing at the moment. I initialise each shader:

        rightWall.SendShaders("wall.vert", "wall.frag", "brick3.bmp", "wallTex", 0);
        demoFloor.SendShaders("floor.vert", "floor.frag", "dirt1.bmp", "floorTex", 1);

    The code in SendShaders is:

        GLuint vert, frag;
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);

        char *vs = NULL, *fs = NULL;

        vert = glCreateShader(GL_VERTEX_SHADER);
        frag = glCreateShader(GL_FRAGMENT_SHADER);

        vs = textFileRead(vertFile);
        fs = textFileRead(fragFile);

        const char * ff = fs;
        const char * vv = vs;

        glShaderSource(vert, 1, &vv, NULL);
        glShaderSource(frag, 1, &ff, NULL);

        free(vs);
        free(fs);

        glCompileShader(vert);
        glCompileShader(frag);

        program = glCreateProgram();
        glAttachShader(program, frag);
        glAttachShader(program, vert);
        glLinkProgram(program);
        glUseProgram(program);

        LoadGLTexture(textureImage, texture);

        GLint location = glGetUniformLocation(program, textureName);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture);
        glUniform1i(location, 0);

    And then in the main loop:

        rightWall.UseShader();
        rightWall.Draw();

        demoFloor.UseShader();
        demoFloor.Draw();

    Whichever texture is initialised last is the one used for both objects. Thank you for your time and I appreciate any comments.
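
    A note not part of the original question: glBindTexture only changes what is bound at that moment, so binding once in SendShaders means the second initialisation overwrites the first, and nothing rebinds per object at draw time. A hedged sketch (the class name SceneObject, the method UseShader, and the members program/texture are assumptions about the surrounding code) that rebinds the object's own texture just before drawing it:

        void SceneObject::UseShader()
        {
            glUseProgram(program);                 // this object's shader program
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture); // this object's texture, rebound every frame
            // the sampler uniform can stay at 0, set once after linking
        }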

    Read the article

  • Access Violation in std::pair

    - by sameer karjatkar
    I have an application which is trying to populate a pair. Out of nowhere the application crashes. The WinDbg analysis of the crash dump suggests:

        PRIMARY_PROBLEM_CLASS: INVALID_POINTER_READ
        DEFAULT_BUCKET_ID: INVALID_POINTER_READ
        STACK_TEXT: 0389f1dc EPFilter32!std::vector<std::pair<unsigned int,unsigned int>,std::allocator<std::pair<unsigned int,unsigned int> > >::size+0xc
        INVALID_POINTER_READ_c0000005_Test.DLL!std::vector_std::pair_unsigned_int, unsigned_int_,std::allocator_std::pair_unsigned_int,unsigned_int_____::size

    Following is the code snippet where it fails:

        for (unsigned i1 = 0; i1 < size1; ++i1) {
            for (unsigned i2 = 0; i2 < size2; ++i2) {
                const branch_info& b1 = en1.m_branches[i1]; // Exception here: crash
                const branch_info& b2 = en2.m_branches[i2];
            }
        }

    where branch_info is std::pair<unsigned int,unsigned int> and en1.m_branches[i1] fetches me a pair value.
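
    A note not part of the original question: the faulting frame is vector::size called through an invalid pointer, which points at en1/en2 (or the object holding m_branches) being dangling or not yet constructed, rather than at the pair itself. A hedged defensive sketch that at least removes stale external sizes from the picture, by taking the bounds from the containers and using checked access:

        for (unsigned i1 = 0; i1 < en1.m_branches.size(); ++i1) {
            for (unsigned i2 = 0; i2 < en2.m_branches.size(); ++i2) {
                const branch_info& b1 = en1.m_branches.at(i1); // at() throws instead of reading wild memory
                const branch_info& b2 = en2.m_branches.at(i2);
            }
        }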

    Read the article

  • How can I create a properly sizing QWebView?

    - by chacham15
    I want to make a QWebView appear expanding to the width and height so that ideally it will have no scroll bars. Some websites may have fixed widths that won't allow this, but I am not concerned with those. In any case, I cannot do as I wish because QWebView implements sizeHint as follows:

        QSize QWebView::sizeHint() const
        {
            return QSize(800, 600); // ####...
        }

    This is incorrect on a number of levels:

    1. It doesn't at all take into account the size of the actual web page.
    2. It doesn't take into account that the height and width are related to each other. (To prove #2, think about text in web pages that wraps to the next line.)

    As a simple fix I tried to do (where QResizingWebView extends QWebView):

        QSize QResizingWebView::sizeHint() const
        {
            return this->page()->mainFrame()->contentsSize();
        }

    While this is closer to the result, it also has two flaws:

    1. It doesn't take into account the relation between the displayed width and height.
    2. this->page()->mainFrame()->contentsSize() is inaccurate, from what I can tell from my preliminary testing (it has returned heights larger than it should in many cases, although this may be related to #1).

    Does anyone have any tips to fix this?

    Read the article

  • Binary files printing and desired precision

    - by yCalleecharan
    Hi, I'm printing a variable, say z1, which is a 1-D array containing floating point numbers, to a text file so that I can import it into Matlab or GNUPlot for plotting. I've heard that binary files (.dat) are smaller than .txt files. The function that I currently use for printing to a .txt file is:

        void create_out_file(const char *file_name, const long double *z1, size_t z_size){
            FILE *out;
            size_t i;

            if((out = _fsopen(file_name, "w+", _SH_DENYWR)) == NULL){
                fprintf(stderr, "***> Open error on output file %s", file_name);
                exit(-1);
            }

            for(i = 0; i < z_size; i++)
                fprintf(out, "%.16Le\n", z1[i]);
            fclose(out);
        }

    I have three questions:

    1. Are binary files really more compact than text files?
    2. If yes, I would like to know how to modify the above code so that I can print the values of the array z1 to a binary file. I've read that fprintf has to be replaced with fwrite. My output file, say dodo.dat, should contain the values of array z1 with one floating-point number per line.
    3. I have %.16Le in my code, but I think that %.15Le is right, as I have 15 precision digits with long double. I have put a dot (.) in the width position, as I believe that this allows expansion to an arbitrary field to hold the desired number. Am I right? As an example, with %.16Le I can have an output like 1.0047914240730432e-002, which gives me 16 precision digits and the field has the right width to display the number correctly. Is placing a dot (.) in the width position instead of a width value a good practice?

    Thanks a lot...
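
    A hedged sketch of the fwrite variant, not part of the original question (note that a binary file has no lines or decimal digits at all: it is simply sizeof(long double) raw bytes per value, and it must be read back with fread on a platform with the same long double layout):

        void create_out_file_bin(const char *file_name, const long double *z1, size_t z_size){
            FILE *out;

            if((out = _fsopen(file_name, "wb", _SH_DENYWR)) == NULL){
                fprintf(stderr, "***> Open error on output file %s", file_name);
                exit(-1);
            }

            /* one call writes the whole array: z_size elements, each sizeof(long double) bytes */
            fwrite(z1, sizeof(long double), z_size, out);
            fclose(out);
        }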

    Read the article

  • How to asynchronously read to std::string using Boost::asio?

    - by SpyBot
    Hello. I'm learning Boost::asio and all that async stuff. How can I asynchronously read into a variable user_ of type std::string? boost::asio::buffer(user_) works only with async_write(), but not with async_read(). It works with a vector, so what is the reason for it not working with a string? Is there another way to do that, besides declaring char user_[max_len] and using boost::asio::buffer(user_, max_len)?

    Also, what's the point of inheriting from boost::enable_shared_from_this<Connection> and using shared_from_this() instead of this in async_read() and async_write()? I've seen that a lot in the examples.

    Here is a part of my code:

        class Connection
        {
        public:
            Connection(tcp::acceptor &acceptor) :
                acceptor_(acceptor),
                socket_(acceptor.get_io_service(), tcp::v4())
            { }

            void start()
            {
                acceptor_.get_io_service().post(
                    boost::bind(&Connection::start_accept, this));
            }

        private:
            void start_accept()
            {
                acceptor_.async_accept(socket_,
                    boost::bind(&Connection::handle_accept, this, placeholders::error));
            }

            void handle_accept(const boost::system::error_code& err)
            {
                if (err)
                {
                    disconnect();
                }
                else
                {
                    async_read(socket_, boost::asio::buffer(user_),
                        boost::bind(&Connection::handle_user_read, this,
                            placeholders::error, placeholders::bytes_transferred));
                }
            }

            void handle_user_read(const boost::system::error_code& err,
                                  std::size_t bytes_transferred)
            {
                if ( err or (bytes_transferred != sizeof(user_)) )
                {
                    disconnect();
                }
                else
                {
                    ...
                }
            }

            ...

            void disconnect()
            {
                socket_.shutdown(tcp::socket::shutdown_both);
                socket_.close();
                socket_.open(tcp::v4());
                start_accept();
            }

            tcp::acceptor &acceptor_;
            tcp::socket socket_;
            std::string user_;
            std::string pass_;
            ...
        };
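
    A note not part of the original question: boost::asio::buffer(std::string) yields a read-only buffer (the string is not treated as writable storage), which is why it can be a source for async_write but not a target for async_read; shared_from_this() is used so that pending handlers keep the Connection object alive instead of letting it be destroyed mid-operation. A hedged sketch of reading into the string via a streambuf, assuming a line-delimited protocol (buf_ is an added member):

        boost::asio::streambuf buf_;   // extra member alongside user_

        void handle_accept(const boost::system::error_code& err)
        {
            if (err) { disconnect(); return; }
            boost::asio::async_read_until(socket_, buf_, '\n',
                boost::bind(&Connection::handle_user_read, this,
                    placeholders::error, placeholders::bytes_transferred));
        }

        void handle_user_read(const boost::system::error_code& err, std::size_t /*bytes_transferred*/)
        {
            if (err) { disconnect(); return; }
            std::istream is(&buf_);
            std::getline(is, user_);   // user_ now holds one received line
            // ...
        }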

    Read the article

  • Compile Assembly Output generated by VC++?

    - by SDD
    I have a simple hello world C program and compile it with /FA. As a consequence, the compiler also generates the corresponding assembly listing. Now I want to use masm/link to assemble an executable from the generated .asm listing. The following command lines yield 3 linker errors:

        \masm32\bin\ml /I"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include" /c /coff asm_test.asm
        \masm32\bin\link /SUBSYSTEM:CONSOLE /LIBPATH:"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\lib" asm_test.obj

    indicating that the C-runtime functions were not linked to the object files produced earlier:

        asm_test.obj : error LNK2001: unresolved external symbol @__security_check_cookie@4
        asm_test.obj : error LNK2001: unresolved external symbol _printf
        LINK : error LNK2001: unresolved external symbol _wmainCRTStartup
        asm_test.exe : fatal error LNK1120: 3 unresolved externals

    Here is the generated assembly listing:

        ; Listing generated by Microsoft (R) Optimizing Compiler Version 15.00.30729.01
        TITLE   c:\asm_test\asm_test\asm_test.cpp
            .686P
            .XMM
            include listing.inc
            .model  flat

        INCLUDELIB OLDNAMES

        PUBLIC  ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@       ; `string'
        EXTRN   @__security_check_cookie@4:PROC
        EXTRN   _printf:PROC
        ;       COMDAT ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@
        CONST   SEGMENT
        ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@ DB 'hello world!', 0aH, 00H ; `string'
        CONST   ENDS
        PUBLIC  _wmain
        ; Function compile flags: /Ogtpy
        ;       COMDAT _wmain
        _TEXT   SEGMENT
        _argc$ = 8      ; size = 4
        _argv$ = 12     ; size = 4
        _wmain  PROC                    ; COMDAT
        ; File c:\users\octon\desktop\asm_test\asm_test\asm_test.cpp
        ; Line 21
            push    OFFSET ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@
            call    _printf
            add     esp, 4
        ; Line 22
            xor     eax, eax
        ; Line 23
            ret     0
        _wmain  ENDP
        _TEXT   ENDS
        END

    I am using the latest masm32 version (6.14.8444).

    Read the article

  • Best way to detect similar email addresses?

    - by Chris
    I have a list of ~20,000 email addresses, some of which I know to be fraudulent attempts to get around a "1 per e-mail" limit ([email protected], [email protected], [email protected], etc...). I want to find similar email addresses for evaluation. Currently I'm using a Levenshtein algorithm to check each e-mail against the others in the list and report any with an edit distance of less than 2. However, this is painstakingly slow. Is there a more efficient approach?

    The test code I'm using now is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Threading;

        namespace LevenshteinAnalyzer
        {
            class Program
            {
                const string INPUT_FILE = @"C:\Input.txt";
                const string OUTPUT_FILE = @"C:\Output.txt";

                static void Main(string[] args)
                {
                    var inputWords = File.ReadAllLines(INPUT_FILE);
                    var outputWords = new SortedSet<string>();

                    for (var i = 0; i < inputWords.Length; i++)
                    {
                        if (i % 100 == 0)
                            Console.WriteLine("Processing record #" + i);

                        var word1 = inputWords[i].ToLower();

                        for (var n = i + 1; n < inputWords.Length; n++)
                        {
                            if (i == n) continue;

                            var word2 = inputWords[n].ToLower();

                            if (word1 == word2) continue;
                            if (outputWords.Contains(word1)) continue;
                            if (outputWords.Contains(word2)) continue;

                            var distance = LevenshteinAlgorithm.Compute(word1, word2);

                            if (distance <= 2)
                            {
                                outputWords.Add(word1);
                                outputWords.Add(word2);
                            }
                        }
                    }

                    File.WriteAllLines(OUTPUT_FILE, outputWords.ToArray());
                    Console.WriteLine("Found {0} words", outputWords.Count);
                }
            }
        }

    Read the article

  • Function pointer callbacks in C

    - by robUK
    Hello, I have started to review callbacks. I found this link: http://stackoverflow.com/questions/142789/what-is-a-callback-in-c-and-how-are-they-implemented which has a good example of a callback, very similar to what we use at work. However, when I tried to get it to work, I got many errors.

        #include <stdio.h>

        /* Is this the actual function pointer? */
        typedef void (*event_cb_t)(const struct event *evt, void *user_data);

        struct event_cb {
            event_cb_t cb;
            void *data;
        };

        int event_cb_register(event_ct_t cb, void *user_data);

        static void my_event_cb(const struct event *evt, void *data)
        {
            /* do some stuff */
        }

        int main(void)
        {
            event_cb_register(my_event_cb, &my_custom_data);

            struct event_cb *callback;
            callback->cb(event, callback->data);

            return 0;
        }

    I know that callbacks use function pointers to store the address of a function. But there are a few things I don't understand: what is meant by "registering the callback" and by "event dispatcher"? Many thanks for any advice.
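
    A hedged, self-contained sketch, not part of the original question, of what "registering a callback" and "the event dispatcher" usually mean in this pattern (the names dispatch, registered_cb and my_custom_data are invented for illustration):

        #include <stdio.h>

        /* The "event": whatever data the dispatcher hands to the callback. */
        struct event {
            int id;
        };

        /* Signature every callback must match. */
        typedef void (*event_cb_t)(const struct event *evt, void *user_data);

        /* Registration simply stores the function pointer plus caller context. */
        static struct {
            event_cb_t cb;
            void      *data;
        } registered_cb;

        void event_cb_register(event_cb_t cb, void *user_data)
        {
            registered_cb.cb   = cb;
            registered_cb.data = user_data;
        }

        /* The "event dispatcher": when something happens, it invokes whatever
           callback was registered, passing the stored user data back. */
        static void dispatch(const struct event *evt)
        {
            if (registered_cb.cb)
                registered_cb.cb(evt, registered_cb.data);
        }

        static void my_event_cb(const struct event *evt, void *data)
        {
            printf("event %d, user data %s\n", evt->id, (const char *)data);
        }

        int main(void)
        {
            static char my_custom_data[] = "hello";
            event_cb_register(my_event_cb, my_custom_data);

            struct event e = { 42 };
            dispatch(&e);      /* prints: event 42, user data hello */
            return 0;
        }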

    Read the article

  • Crash generated during destruction of hash_map

    - by Alien01
    I am using hash_map in my application as:

        typedef hash_map<DWORD,CComPtr<IInterfaceXX>> MapDword2Interface;

    In the main application I am using a static instance of this map:

        static MapDword2Interface m_mapDword2Interface;

    I have got one crash dump from one of the client machines which points to a crash in the clearing of this map. I opened that crash dump, and here is the assembly during debugging:

        call std::list<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> >,std::allocator<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> > > >::clear
        mov  eax,dword ptr [CMainApp::m_mapDword2Interface+8 (49XXXXX)]

    Here is the code the crash dump is pointing to; the code below is from the STL <list> file:

        void clear()
        {   // erase all
         #if _HAS_ITERATOR_DEBUGGING
            this->_Orphan_ptr(*this, 0);
         #endif /* _HAS_ITERATOR_DEBUGGING */

            _Nodeptr _Pnext;
            _Nodeptr _Pnode = _Nextnode(_Myhead);

            _Nextnode(_Myhead) = _Myhead;
            _Prevnode(_Myhead) = _Myhead;
            _Mysize = 0;

            for (; _Pnode != _Myhead; _Pnode = _Pnext)
            {   // delete an element
                _Pnext = _Nextnode(_Pnode);
                this->_Alnod.destroy(_Pnode);
                this->_Alnod.deallocate(_Pnode, 1);
            }
        }

    The crash is pointing to the this->_Alnod.destroy(_Pnode); statement in the above code. I am not able to guess the reason. Any ideas? How can I make sure that, even if there is something wrong with the map, it does not crash?
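
    A note not part of the original question: destroying the elements of a static map releases the COM interface pointers during static destruction, i.e. very late in shutdown, possibly after COM or the module owning the interfaces has already been torn down. A hedged sketch of one mitigation, emptying the static map at a controlled point in shutdown instead of relying on the destruction order of statics (the hook name is invented):

        // Hypothetical shutdown hook, called before CoUninitialize()/module unload.
        void CMainApp::ReleaseInterfaceMap()
        {
            MapDword2Interface empty;
            m_mapDword2Interface.swap(empty);   // all CComPtrs released here, deterministically
        }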

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either in EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval)
        {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address)
        {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial: I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ...and so my array of defaults has these indecipherable values like:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        };

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It would be great to be able to do something like:

        const unsigned int DEFAULTS[] = {
            0x00000001,                  // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        };

    ...but I'm completely at a loss, and I don't even know if such a thing is possible.

    Notes: Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation-defined behaviour is fine, but undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
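
    A note not part of the original question: there is no standard way to obtain the bit pattern of a float literal in a C90 constant expression, but one commonly used alternative is to make the array elements a union so the compiler itself performs the conversion at compile time. A hedged sketch, relying on C99 designated initializers (which GCC-based compilers generally accept) and on float and unsigned int both being 32 bits wide on the target (the type name default_t is invented):

        typedef union {
            unsigned int u;
            float        f;
        } default_t;

        static const default_t DEFAULTS[] = {
            { .u = 0x00000001 }, /* some default integer value, 1 */
            { .f = 0.005f },     /* some default float value, stored as its bit pattern */
        };

        /* callers read DEFAULTS[i].u when they want the raw word,
           or DEFAULTS[i].f when they want the float */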

    Read the article

  • No output after execution

    - by wilson88
    I am still stuck on getting output for a copied vector. Probably something I am not doing right. I get no output.

        void Auctioneer::accept_bids(const BidList& bid){
            BidList list;
            BidList list2;
            BidList::const_iterator iter;

            copy (list.begin(), list.end(), back_inserter(list2));

            for(iter = list2.begin(); iter != list2.end(); iter++) {
                // Get a reference to the Bid object that the iterator points to
                const Bid& bid = *iter;
                cout << "Bid id : " << bid.bidId << endl;
                cout << "Trd id : " << bid.trdId << endl;
                cout << "Quantity: " << bid.qty << endl;
                cout << "Price : " << bid.price << endl;
                cout << "Type : " << bid.type << endl;
            }
        }
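
    A note not part of the original question: the copy reads from list, a freshly constructed empty BidList, rather than from the bid parameter that was passed in, so list2 stays empty and the loop body never runs (the parameter is also shadowed by the per-element const Bid& bid inside the loop). A hedged sketch with the source corrected and the parameter renamed:

        void Auctioneer::accept_bids(const BidList& bids){
            BidList list2;

            copy(bids.begin(), bids.end(), back_inserter(list2));   // copy from the parameter

            for(BidList::const_iterator iter = list2.begin(); iter != list2.end(); ++iter) {
                const Bid& bid = *iter;
                cout << "Bid id : " << bid.bidId << endl;
                cout << "Trd id : " << bid.trdId << endl;
                cout << "Quantity: " << bid.qty << endl;
                cout << "Price : " << bid.price << endl;
                cout << "Type : " << bid.type << endl;
            }
        }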

    Read the article

  • What makes this "declarator invalid"? C++

    - by nieldw
    I have a Vertex template in vertex.h. From my graph.h:

        20 template<class edgeDecor, class vertexDecor, bool dir>
        21 class Vertex;

    which I use in my Graph template. I've used the Vertex template successfully throughout my Graph, returned pointers to Vertices, etc. Now, for the first time, I am trying to declare and instantiate a Vertex object, and gcc is telling me that my 'declarator' is 'invalid'. How can this be?

        81 template<class edgeDecor, class vertexDecor, bool dir>
        82 Graph<edgeDecor,int,dir> Graph<edgeDecor,vertexDecor,dir>::Dijkstra(vertex s, bool print = false) const
        83 {
        84     /* Construct new Graph with apropriate decorators */
        85     Graph<edgeDecor,int,dir> span = new Graph<edgeDecor,int,dir>();
        86     span.E.reserve(this->E.size());
        87
        88     typename Vertex<edgeDecor,int,dir> v = new Vertex(INT_MAX);
        89     span.V = new vector<Vertex<edgeDecor,int,dir> >(this->V.size,v);
        90 };

    And gcc is saying:

        graph.h: In member function ‘Graph<edgeDecor, int, dir> Graph<edgeDecor, vertexDecor, dir>::Dijkstra(Vertex<edgeDecor, vertexDecor, dir>, bool) const’:
        graph.h:88: error: invalid declarator before ‘v’
        graph.h:89: error: ‘v’ was not declared in this scope

    I know this is probably another noob question, but I'll appreciate any help.
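
    A note not part of the original question: the 'invalid declarator' on line 88 comes from the leading typename, which is only used to name dependent nested types (e.g. typename T::iterator), not to declare a variable; in addition, new returns a pointer, so it cannot initialise an object declared by value (the same mismatch exists on line 85). A hedged sketch of those declarations with both issues removed, using stack objects:

        Graph<edgeDecor,int,dir> span;          // was: ... span = new Graph<edgeDecor,int,dir>();
        span.E.reserve(this->E.size());

        Vertex<edgeDecor,int,dir> v(INT_MAX);   // was: typename Vertex<...> v = new Vertex(INT_MAX);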

    Read the article

  • Why won't MSMQ send a space character?

    - by cyclotis04
    I'm exploring MSMQ services, and I wrote a simple console client-server application that sends each of the client's keystrokes to the server. Whenever I hit a control character (DEL, ESC, INS, etc) the server understandably throws an error. However, whenever I type a space character, the server receives the packet but doesn't throw an error and doesn't display the space.

    Server:

        namespace QIM
        {
            class Program
            {
                const string QUEUE = @".\Private$\qim";
                static MessageQueue _mq;
                static readonly object _mqLock = new object();
                static XmlSerializer xs;

                static void Main(string[] args)
                {
                    lock (_mqLock)
                    {
                        if (!MessageQueue.Exists(QUEUE))
                            _mq = MessageQueue.Create(QUEUE);
                        else
                            _mq = new MessageQueue(QUEUE);
                    }

                    xs = new XmlSerializer(typeof(string));
                    _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive);

                    while (Console.ReadKey().Key != ConsoleKey.Escape) { }
                }

                static void OnReceive(IAsyncResult result)
                {
                    Message msg;
                    lock (_mqLock)
                    {
                        try
                        {
                            msg = _mq.EndReceive(result);
                            Console.Write(".");
                            Console.Write(xs.Deserialize(msg.BodyStream));
                        }
                        catch (Exception ex)
                        {
                            Console.Write(ex);
                        }
                    }
                    _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive);
                }
            }
        }

    Client:

        namespace QIM_Client
        {
            class Program
            {
                const string QUEUE = @".\Private$\qim";
                static MessageQueue _mq;

                static void Main(string[] args)
                {
                    if (!MessageQueue.Exists(QUEUE))
                        _mq = MessageQueue.Create(QUEUE);
                    else
                        _mq = new MessageQueue(QUEUE);

                    ConsoleKeyInfo key = new ConsoleKeyInfo();

                    while (key.Key != ConsoleKey.Escape)
                    {
                        key = Console.ReadKey();
                        _mq.Send(key.KeyChar.ToString());
                    }
                }
            }
        }

    Client input:

        Testing, Testing...

    Server output:

        .T.e.s.t.i.n.g.,..T.e.s.t.i.n.g......

    You'll notice that the space character sends a message, but the character isn't displayed.

    Read the article

  • implement SIMD in C++

    - by Hristo
    I'm working on a bit of code and I'm trying to optimize it as much as possible, basically get it running under a certain time limit. The following makes the call... static affinity_partitioner ap; parallel_for(blocked_range<size_t>(0, T), LoopBody(score), ap); ... and the following is what is executed. void operator()(const blocked_range<size_t> &r) const { int temp; int i; int j; size_t k; size_t begin = r.begin(); size_t end = r.end(); for(k = begin; k != end; ++k) { // for each trainee temp = 0; for(i = 0; i < N; ++i) { // for each sample int trr = trRating[k][i]; int ei = E[i]; for(j = 0; j < ei; ++j) { // for each expert temp += delta(i, trr, exRating[j][i]); } } myscore[k] = temp; } } I'm using Intel's TBB to optimize this. But I've also been reading about SIMD and SSE2 and things along that nature. So my question is, how do I store the variables (i,j,k) in registers so that they can be accessed faster by the CPU? I think the answer has to do with implementing SSE2 or some variation of it, but I have no idea how to do that. Any ideas? Thanks, Hristo

    Read the article

  • visual studio macro - copy a definition or declaration from/to .h to/from .cpp

    - by Michael
    Is it possible to write a macro that copies a definition of a function to a declaration, and also the opposite? For instance:

        class Foo
        {
            Foo(int aParameter, int aDefaultParameter = 0);
            int someMethod(char aCharacter) const;
        };

    in the .h file would become:

        Foo::Foo(int aParameter, int aDefaultParameter){
            //
        }

        int Foo::someMethod(char aCharacter) const {
            return 0;
        }

    in the .cpp file. The opposite direction wouldn't work with the default value, but it would still be cool if it copied the declaration into the class in the header file. Also if it could return a default value, as in someMethod (based on the return type in the declaration).

    Personally, I tried to do macro coding some years ago (I think it was around 2005), but the tutorials and documentation on macros were thin (or I hadn't searched enough). I ended up going through the examples they had in the IDE, but gave up when I figured it would take too long to learn. I would however like to give it a try again. So if anyone knows of good tutorials or documentation aimed at Visual Studio .NET (and maybe also covering the above problem), I would probably accept that as an answer as well :)

    Read the article

  • What are all the optimization tricks that you know for ASP.NET code?

    - by Aristos
    After some time programming on ASP.NET, I discovered the very big speed difference between string and StringBuilder. I know that this is very common and well known, but I just mention it as a starting point.

    The second thing that I have found to speed up the code is to use const, and not static, to declare my configuration constants (especially the strings). With const, the compiler does not create a new object but just places the value at the point where you ask for it, while with a static declaration a new object is created and kept in memory.

    My third trick is that when I search for strings, I use hash values and not the strings themselves. For example, if I need a List<string> SomeValues holding strings that I need to search, I prefer to use List<int> SomeHashValues and use the hash value to locate the strings.

    My fourth thought was whether it is better to place big strings on one line, or to separate them across different lines with the + symbol so they are easier to read. I made some tests and saw that the compiler does a good job when you split the string across many lines using the + symbol.

    What other tricks/tips do you know and use in your programming to make it run faster, and maybe use less memory? Well, I know that sometimes, to make something run faster, you need more memory, more cache. My priority is on speed. Because speed counts.

    Read the article

  • Qt QSslError being signaled with the error code set to NoError

    - by Nantucket
    My Problem

    I compiled OpenSSL into Qt to enable OpenSSL support. Everything appeared to go correctly in the compile. However, when I try to use the official HTTP example application that can be found here, every time I try to download an https page it signals two QSslErrors, each with contents NoError. The types of QSslError, including NoError, are documented here, poorly. There is no explanation of why they even included an error type called NoError, or what it means. Bizarrely, the NoError error code seems to be accurate, as it downloads the remote https document perfectly even while signaling the error. Does anyone have any idea what this means and what could possibly be causing it?

    Optional Background Reading

    Here is the relevant part of the code from the example app (this is connected to the network connection's sslErrors signal by the constructor):

        void HttpWindow::sslErrors(QNetworkReply*, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }

            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }

    I have tried the old version of this example, and it produced the same result. I have tried OpenSSL 1.0.0a and 0.9.8o. I have tried compiling OpenSSL myself, and I have tried using pre-compiled versions of OpenSSL from the net. All produce the same result. If this were my first time using Qt with SSL, I would almost think this is the intended result (even though the example application is popping up error warning windows), if not for the fact that the last time I played with Qt, using what would now be an old version of Qt with an old version of SSL, I distinctly remember everything working fine with no error windows.

    My system is running Windows 7 x64.

    Read the article

  • How to give a textbox a fixed width of 17,5 cm?

    - by Natrium
    I have an application with a textbox, and the width of the textbox must always be 17,5 centimeters on the screen of the user. This is what I have tried so far:

        const double centimeter = 17.5;                  // the width I need
        const double inches = centimeter * 0.393700787;  // convert centimeters to inches
        float dpi = GetDpiX();                           // get the DPI; 96 in my case
        var pixels = dpi * inches;                       // this should give me the amount of pixels
        textbox1.Width = Convert.ToInt32(pixels);        // set it; done

        private float GetDpiX()
        {
            float returnValue;
            Graphics graphics = CreateGraphics();
            returnValue = graphics.DpiX;
            graphics.Dispose(); // don't forget to release the unnecessary resources
            return returnValue;
        }

    But this gives me different sizes with different resolutions: it gives me 13 cm at 1680 x 1050 and 19,5 cm at 1024 x 768. What am I doing wrong?

    Read the article
