Search Results

Search found 2566 results on 103 pages for 'struct'.

  • Reading and writing a C++ vector to a file

    - by JB
    For some graphics work I need to read in a large amount of data as quickly as possible and would ideally like to directly read and write the data structures to disk. Basically I have a load of 3d models in various file formats which take too long to load so I want to write them out in their "prepared" format as a cache that will load much faster on subsequent runs of the program. Is it safe to do it like this? My worries are around directly reading into the data of the vector. I've removed error checking, hard coded 4 as the size of the int and so on so that I can give a short working example, I know it's bad code, my question really is if it is safe in C++ to read a whole array of structures directly into a vector like this? I believe it to be so, but C++ has so many traps and undefined behaviour when you start going low level and dealing directly with raw memory like this. I realise that number formats and sizes may change across platforms and compilers but this will only ever be read and written by the same compiled program to cache data that may be needed on a later run of the same program. #include <fstream> #include <vector> using namespace std; struct Vertex { float x, y, z; }; typedef vector<Vertex> VertexList; int main() { // Create a list for testing VertexList list; Vertex v1 = {1.0f, 2.0f, 3.0f}; list.push_back(v1); Vertex v2 = {2.0f, 100.0f, 3.0f}; list.push_back(v2); Vertex v3 = {3.0f, 200.0f, 3.0f}; list.push_back(v3); Vertex v4 = {4.0f, 300.0f, 3.0f}; list.push_back(v4); // Write out a list to a disk file ofstream os ("data.dat", ios::binary); int size1 = list.size(); os.write((const char*)&size1, 4); os.write((const char*)&list[0], size1 * sizeof(Vertex)); os.close(); // Read it back in VertexList list2; ifstream is("data.dat", ios::binary); int size2; is.read((char*)&size2, 4); list2.resize(size2); // Is it safe to read a whole array of structures directly into the vector? is.read((char*)&list2[0], size2 * sizeof(Vertex)); }
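
    One cheap guard worth adding (my suggestion, not from the question, and it uses a C++11 trait so it goes beyond the question's C++03 setting): assert the property the raw I/O depends on, so that a later change to Vertex that adds a vtable or a non-trivial member becomes a compile error instead of a silently corrupted cache:

        #include <type_traits>

        struct Vertex { float x, y, z; };   // as defined in the question

        // The raw read()/write() above is only well-defined while Vertex
        // stays memcpy-able; this makes that assumption explicit.
        static_assert(std::is_trivially_copyable<Vertex>::value,
                      "Vertex must remain trivially copyable for raw file I/O");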

  • Efficient list compacting

    - by Patrik
    Suppose you have a list of unsigned ints. Suppose some elements are equal to 0 and you want to push them to the back. Currently I use this code (list is a pointer to an array of unsigned ints of size n): for (i = 0; i < n; ++i) { if (list[i]) continue; int j; for (j = i + 1; j < n && !list[j]; ++j); int z; for (z = j + 1; z < n && list[z]; ++z); if (j == n) break; memmove(&(list[i]), &(list[j]), sizeof(unsigned int) * (z - j)); int s = z - j + i; for(j = s; j < z; ++j) list[j] = 0; i = s - 1; } Can you think of a more efficient way to perform this task? The snippet is purely theoretical; in the production code, each element of list is a 64-byte struct. EDIT: I'll post my solution. Many thanks to Jonathan Leffler. void RemoveDeadParticles(int * list, int * n) { int i, j = *n - 1; for (; j >= 0 && list[j] == 0; --j); for (i = 0; i < j; ++i) { if (list[i]) continue; memcpy(&(list[i]), &(list[j]), sizeof(int)); list[j] = 0; for (; j >= 0 && list[j] == 0; --j); if (i == j) break; } *n = i + 1; }
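
    For comparison, the classic stable compaction (a sketch of the textbook two-pointer pass, not taken from the thread) touches each element once, preserves the order of the non-zero elements, and then zero-fills the freed tail; the same shape works when the elements are 64-byte structs:

        void compact(unsigned int* list, int n) {
            int w = 0;                      // next write position
            for (int r = 0; r < n; ++r)     // single left-to-right pass
                if (list[r] != 0)
                    list[w++] = list[r];    // keep non-zero elements, in order
            for (; w < n; ++w)
                list[w] = 0;                // zero-fill the tail
        }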

  • OpenGL ES functions not accepting values originating outside of its view

    - by Josh Elsasser
    I've been unable to figure this out on my own. I currently have an OpenGL ES setup where a view controller both updates a game world (with a dt), fetches the data I need to render, passes it off to an EAGLView through two structures (built off of Apple's ES1Renderer), and draws the scene. Whenever a value originates outside of the OpenGL view, it can't be used to either translate objects using glTranslatef, or set up the scene using glOrthof. If I assign a new value to something, it will work - even if it is the exact same number. The two structures I have each contain a variety of floating-point numbers and booleans, along with two arrays. I can log the values from within my renderer - they make it there - but I receive errors from OpenGL if I try to do anything with them. No crashes result, but the glOrthof call doesn't work if I don't set the camera values to anything different. Code used to set up scene: [EAGLContext setCurrentContext:context]; glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); //clears the color buffer bit glClear(GL_COLOR_BUFFER_BIT); glMatrixMode(GL_PROJECTION); //sets up the scene w/ ortho projection glViewport(0, 0, 320, 480); glLoadIdentity(); glOrthof(320, 0, dynamicData.cam_x2, dynamicData.cam_x1, 1.0, -1.0); glClearColor(1.0, 1.0, 1.0, 1.0); /*error checking code here*/ "dynamicData" (which is replaced every frame) is created within my game simulation. From within my controller, I call a method (w/in my simulation) that returns it, and pass the result on to the EAGLView, which passes it on to the renderer. I haven't been able to come up with a better solution for this - suggestions in this regard would be greatly appreciated as well. Also, this function doesn't work either (the values originate in the same place): glTranslatef(dynamicData.ship_x, dynamicData.ship_y, 0.0); Thanks in advance. Additional Definitions: Structure (declared in a separate header): typedef struct { float ship_x, ship_y; float cam_x1, cam_x2; } dynamicRenderData; Render data getter (and builder) (every frame) - (dynamicData)getDynRenderData { //d_rd is an ivar, zeroed on initialization d_rd.ship_x = mainShip.position.x; d_rd.ship_y = mainShip.position.y; d_rd.cam_x1 = d_rd.ship_x - 30.0f; d_rd.cam_x2 = d_rd.cam_x1 + 480.0f; return d_rd; } Zeroed at start. (d_rd.ship_x = 0;, etc…) Setting up the view. Prototype (GLView): - (void)draw: (dynamicRenderData)dynamicData Prototype (Renderer): - (void)drawView: (dynamicRenderData)dynamicData How it's called w/in the controller: //controller [glview draw: [world getDynRenderData]]; //glview (within draw) [renderer drawView: dynamicData];

  • ReadFile doesn't work asynchronously on Win7 and Win2k8

    - by f0b0s
    According to MSDN, ReadFile can read data 2 different ways: synchronously and asynchronously. I need the second one. The following code demonstrates usage with an OVERLAPPED struct: #include <windows.h> #include <stdio.h> #include <time.h> void Read() { HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); if ( hFile == INVALID_HANDLE_VALUE ) { printf("Failed to open the file\n"); return; } int dataSize = 256 * 1024 * 1024; char* data = (char*)malloc(dataSize); memset(data, 0xFF, dataSize); OVERLAPPED overlapped; memset(&overlapped, 0, sizeof(overlapped)); printf("reading: %d\n", time(NULL)); BOOL result = ReadFile(hFile, data, dataSize, NULL, &overlapped); printf("sent: %d\n", time(NULL)); DWORD bytesRead; result = GetOverlappedResult(hFile, &overlapped, &bytesRead, TRUE); // wait until completion - returns immediately printf("done: %d\n", time(NULL)); CloseHandle(hFile); } int main() { Read(); } On Windows XP the output is: reading: 1296651896 sent: 1296651896 done: 1296651899 It means that ReadFile didn't block and returned immediately in the same second, whereas the reading process continued for 3 seconds. This is normal async reading. But on Windows 7 and Windows 2008 I get the following results: reading: 1296661205 sent: 1296661209 done: 1296661209. This is the behavior of sync reading. MSDN says that async ReadFile can sometimes behave as sync (when the file is compressed or encrypted, for example). But the return value in this situation should be TRUE and GetLastError() == NO_ERROR. On Windows 7 I get FALSE and GetLastError() == ERROR_IO_PENDING. So WinApi tells me that it is an async call, but when I look at the test I see that it is not! I'm not the only one who found this "bug": read the comment on the ReadFile MSDN page. So what's the solution? Does anybody know? It has been 14 months since Denis found this strange behavior.
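
    One commonly cited workaround (hedged; it matches the MSDN guidance on asynchronous disk I/O appearing synchronous, but I have not verified it against this exact test): buffered reads can be completed inline by the cache manager, so opening the file unbuffered tends to restore genuinely asynchronous behaviour, at the cost of sector-aligned buffers, sizes and offsets:

        HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL,
                                   OPEN_EXISTING,
                                   FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING,
                                   NULL);
        // With FILE_FLAG_NO_BUFFERING the buffer address, transfer size and
        // file offset must all be multiples of the volume sector size;
        // VirtualAlloc(NULL, dataSize, MEM_COMMIT, PAGE_READWRITE) returns
        // page-aligned memory, which satisfies the alignment requirement.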

  • Parallel programming in C#

    - by Alxandr
    I'm interested in learning about parallel programming in C#.NET (not like everything there is to know, but the basics and maybe some good-practices), therefore I've decided to reprogram an old program of mine which is called ImageSyncer. ImageSyncer is a really simple program, all it does is scan through a folder and find all files ending with .jpg, then it calculates the new position of the files based on the date they were taken (parsing of EXIF data, or whatever it's called). After a location has been generated the program checks for any existing files at that location, and if one exists it looks at the last write-time of both the file to copy, and the file "in its way". If those are equal the file is skipped. If not, an MD5 checksum of both files is created and matched. If there is no match the file to be copied is given a new location to be copied to (for instance, if it was to be copied to "C:\test.jpg" it's copied to "C:\test(1).jpg" instead). The result of this operation is populated into a queue of a struct-type that contains two strings, the original file and the position to copy it to. Then that queue is iterated over until it is empty and the files are copied. In other words there are 4 operations: 1. Scan directory for jpegs 2. Parse files for EXIF and generate copy-location 3. Check for file existence and if needed generate new path 4. Copy files And so I want to rewrite this program to make it parallel and be able to perform several of the operations at the same time, and I was wondering what the best way to achieve that would be. I've come up with two different models I can think of, but neither one of them might be any good at all. The first one is to parallelize the 4 steps of the old program, so that when step one is to be executed it's done on several threads, and when all of step 1 is finished, step 2 begins. The other one (which I find more interesting because I have no idea of how to do that) is to create a sort of worker and consumer model, so when a thread is finished with step 1 another one takes over and performs step 2 on that object (or something like that). But as said, I don't know if any of these are any good solutions. Also, I don't know much about parallel programming at all. I know how to make a thread, and how to make it perform a function taking in an object as its only parameter, and I've also used the BackgroundWorker-class on one occasion, but I'm not that familiar with any of them. Any input would be appreciated.

  • boost.asio error on read from socket.

    - by niXman
    The following is the client code: typedef boost::array<char, 10> header_packet; header_packet header; boost::system::error_code error; ... /** send header */ boost::asio::write( _socket, boost::asio::buffer(header, header.size()), boost::asio::transfer_all(), error ); /** send body */ boost::asio::write( _socket, boost::asio::buffer(buffer, buffer.length()), boost::asio::transfer_all(), error ); And this is the server code: struct header { boost::uint32_t header_length; boost::uint32_t id; boost::uint32_t body_length; }; static header unpack_header(const header_packet& data) { header hdr; sscanf(data.data(), "%02d%04d%04d", &hdr.header_length, &hdr.id, &hdr.body_length); return hdr; } void connection::start() { boost::asio::async_read( _socket, boost::asio::buffer(_header, _header.size()), boost::bind( &connection::read_header_handler, shared_from_this(), boost::asio::placeholders::error ) ); } /***************************************************************************/ void connection::read_header_handler(const boost::system::error_code& e) { if ( !e ) { std::cout << "read header: " << _header.c_array() << std::endl; std::cout << constants::unpack_header(_header); boost::asio::async_read( _socket, boost::asio::buffer(_body, constants::unpack_header(_header).body_length), boost::bind( &connection::read_body_handler, shared_from_this(), boost::asio::placeholders::error ) ); } else { /** report error */ std::cout << "read header finished with error: " << e.message() << std::endl; } } /***************************************************************************/ void connection::read_body_handler(const boost::system::error_code& e) { if ( !e ) { std::cout << "read body: " << _body.c_array() << std::endl; start(); } else { /** report error */ std::cout << "read body finished with error: " << e.message() << std::endl; } } On the server side the method read_header_handler() is called, but the method read_body_handler() is never called, even though the client has written the data to the socket. The header is read and decoded successfully. What's the error?
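
    One plausible culprit (a guess on my part, not a confirmed diagnosis): unpack_header() passes boost::uint32_t* to sscanf conversions that expect int*, which is undefined behaviour; if body_length comes out wrong, the second async_read simply never accumulates enough bytes to complete. A hedged sketch of a type-correct parse, assuming the client really packs 2+4+4 digits:

        #include <inttypes.h>   // SCNu32 (C99; available alongside boost::uint32_t)
        #include <stdio.h>

        struct header {
            uint32_t header_length;
            uint32_t id;
            uint32_t body_length;
        };

        static header unpack_header(const char* data) {
            header hdr;
            // field widths assumed to mirror the client's "%02d%04d%04d" packing
            sscanf(data, "%2" SCNu32 "%4" SCNu32 "%4" SCNu32,
                   &hdr.header_length, &hdr.id, &hdr.body_length);
            return hdr;
        }

    It is also worth checking that _body is at least body_length bytes long before handing it to boost::asio::buffer.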

  • Problem separating C++ code in header, inline functions and code.

    - by YuppieNetworking
    Hello all, I have the simplest code that I want to separate into three files: Header file: class and struct declarations. No implementations at all. Inline functions file: implementation of the inline methods from the header. Code file: normal C++ code for more complicated implementations. When I was about to implement an operator[] method, I couldn't manage to compile it. Here is a minimal example that shows the same problem: Header (myclass.h): #ifndef _MYCLASS_H_ #define _MYCLASS_H_ class MyClass { public: MyClass(const int n); virtual ~MyClass(); double& operator[](const int i); double operator[](const int i) const; void someBigMethod(); private: double* arr; }; #endif /* _MYCLASS_H_ */ Inline functions (myclass-inl.h): #include "myclass.h" inline double& MyClass::operator[](const int i) { return arr[i]; } inline double MyClass::operator[](const int i) const { return arr[i]; } Code (myclass.cpp): #include "myclass.h" #include "myclass-inl.h" #include <iostream> inline MyClass::MyClass(const int n) { arr = new double[n]; } inline MyClass::~MyClass() { delete[] arr; } void MyClass::someBigMethod() { std::cout << "Hello big method that is not inlined" << std::endl; } And finally, a main to test it all: #include "myclass.h" #include <iostream> using namespace std; int main(int argc, char *argv[]) { MyClass m(123); double x = m[1]; m[1] = 1234; cout << "m[1]=" << m[1] << endl; x = x + 1; return 0; } void nothing() { cout << "hello world" << endl; } When I compile it, it says: main.cpp:(.text+0x1b): undefined reference to 'MyClass::MyClass(int)' main.cpp:(.text+0x2f): undefined reference to 'MyClass::operator[](int)' main.cpp:(.text+0x49): undefined reference to 'MyClass::operator[](int)' main.cpp:(.text+0x65): undefined reference to 'MyClass::operator[](int)' However, when I move the main method to the MyClass.cpp file, it works. Could you guys help me spot the problem? Thank you.
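
    The usual explanation (my reading of the errors, not a verified answer): an inline function must be defined in every translation unit that uses it. Marking the constructor and destructor inline inside myclass.cpp hides their definitions from main.cpp, and main.cpp never sees myclass-inl.h at all, so the operator[] definitions are missing too. A hedged sketch of the conventional fix:

        // myclass.h -- keep the declarations, and have the header itself pull
        // in the inline definitions as its last line:
        // #include "myclass-inl.h"

        // myclass.cpp -- drop the 'inline' keyword so these definitions are
        // emitted out-of-line and can be linked from main.cpp:
        MyClass::MyClass(const int n) { arr = new double[n]; }
        MyClass::~MyClass() { delete[] arr; }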

  • Across-process marshalling problem with an array of points

    - by ElMagn
    Hi All, We have what we think is a marshalling problem with a renderer object when called across process boundaries. The renderer is an ATL COM server with a COM object that implements the IPoints interface defined below: typedef [uuid(B0E01719-005A-427c-B9DD-B42A18E969AE)] struct Point { double X; double Y; } Point; [ object, uuid(3BFECFE3-B4FB-4f14-8257-6E065D02E3B3), helpstring("IPoints Interface"), dual, ] interface IPoints : IDispatch { HRESULT DrawPolyLine([in] long hDC, [in] short count, [in, size_is(count)] Point * points ); // many more like DrawLine } The count parameter represents the number of points and the points parameter represents an array of the actual points. We have two processes running, a graphical display process (GDP) and a tabular (grid) display process (TDP). A factory in the GDP, written in C#, creates the renderer and the clients of the renderer in the GDP. When the clients call into the renderer, everything displays correctly. The renderer is created at start up BTW. There is another factory in the TDP, written in VB6, that calls into the factory in the GDP to create the clients. When the clients call into the renderer, only the first point in the array is marshaled correctly; all the other points are garbage. It seems that the rendering works only when the client creation is started from the same process as the renderer. Now, I am not sure what the solution to this problem is. It seems that if we can guarantee that the clients are always created from a thread in the same GDP process as the renderer then the points are marshaled correctly. We tried using a background thread from the Thread Pool in C# and it indeed worked. The problem is that Windows Forms created from the clients stopped working because accessing the form's controls from a thread other than the thread that created the control is not allowed. We might change the calls to access the forms but we have quite a few of them and are trying to look into a different solution that might involve making changes to the renderer. The other problem is that the renderer is legacy code and we can't just change the interface. I am wondering what we can do to the renderer's interface that would help with marshalling across process calls. Any ideas would be greatly appreciated. Regards, ElMagn

  • C++ - Breaking code implementation into different parts

    - by Kotti
    Hi! The question plot (a bit abstract, but answering this question will help me in my real app): So, I have some abstract superclass for objects that can be rendered on the screen. Let's call it IRenderable. struct IRenderable { // (...) virtual void Render(RenderingInterface& ri) = 0; virtual ~IRenderable() { } }; And suppose I also have some other objects that derive from IRenderable, e.g. Cat and Dog. So far so good. I add some Cat and Dog specific methods, like SeekForWhiskas(...) and Bark(...). After that I add a specific Render(...) method for them, so my code looks this way: class Cat : public IRenderable { public: void SeekForWhiskas(...) { // Implementation could be here or moved // to a source file (depends on me wanting // to inline it or not) } virtual void Render(...) { // Here comes the rendering routine, that // is specific for cats SomehowDrawAppropriateCat(...); } }; class Dog : public IRenderable { public: void Bark(...) { // Same as for 'SeekForWhiskas(...)' } virtual void Render(...) { // Here comes the rendering routine, that // is specific for dogs DrawMadDog(...); } }; And then somewhere else I can do drawing the way that an appropriate rendering routine is called: IRenderable* dog = new Dog(); dog->Render(...); My question is about logical wrapping of such kind of code. I want to break apart the code that corresponds to rendering of the current object and its own methods (Render and Bark in this example), so that my class implementation doesn't turn into a mess (imagine that I have 10 methods like Bark and of course my Render method doesn't fit in their company and would be hard to find). Two ways of doing what I want (as far as I know) are: Making appropriate routines that look like RenderCat(Cat& cat, RenderInterface* ri), joining them into a render namespace and then the functions inside a class would look like virtual void Render(...) { RenderCat(*this, ...); }, but this is plain stupid, because I'll lose access to Cat's private members and friending these functions looks like a total design disaster. Using the visitor pattern, but this would also mean I have to rebuild my app's design and looks like an inadequate way to make my code complicated from the very beginning. Any brilliant ideas? :)

  • C++ game design & polymorphism question

    - by Kotti
    Hi! I'm trying to implement some sort of 'just-for-me' game engine and the problem's plot goes the following way: Suppose I have some abstract interface for a renderable entity, e.g. IRenderable. And it's declared the following way: interface IRenderable { // (...) // Suppose that Backend is some abstract backend used // for rendering, and its implementation is not important virtual void Render(Backend& backend) = 0; }; What I'm doing right now is something like declaring different classes like class Ball : public IRenderable { virtual void Render(Backend& backend) { // Rendering implementation, that is specific for // the Ball object // (...) } }; And then everything looks fine. I can easily do something like std::vector<IRenderable*> items, push some items like new Ball() in this vector and then make a call similar to foreach (IRenderable* in items) { item->Render(backend); } Ok, I guess it is the 'polymorphic' way, but what if I want to have different types of objects in my game and an ability to manipulate their state, where every object can be manipulated via its own interface? I could do something like struct GameState { Ball ball; Bonus bonus; // (...) }; and then easily change objects' state via their own methods, like ball.Move(...) or bonus.Activate(...), where Move(...) is specific for only Ball and Activate(...) - for only Bonus instances. But in this case I lose the opportunity to write foreach IRenderable* simply because I store these balls and bonuses as instances of their derived, not base classes. And in this case the rendering procedure turns into a mess like ball.Render(backend); bonus.Render(backend); // (...) and it is bad because we actually lose our polymorphism this way (no actual need for making the Render function virtual, etc.). The other approach means invoking downcasting via dynamic_cast or something with typeid to determine the type of object you want to manipulate, and this looks even worse to me and also breaks the 'polymorphic' idea. So, my question is - is there some kind of (probably) alternative approach to what I want to do or can my current pattern be somehow modified so that I would actually store IRenderable* for my game objects (so that I can invoke the virtual Render method on each of them) while preserving the ability to easily change the state of these objects? Maybe I'm doing something absolutely wrong from the beginning, if so, please point it out :) Thanks in advance!

  • .NET Extension Objects with XSLT -- how to iterate over a collection?

    - by Pandincus
    Help me, Stackoverflow! I have a simple .NET 3.5 console app that reads some data and sends emails. I'm representing the email format in an XSLT stylesheet so that we can easily change the wording of the email without needing to recompile the app. We're using Extension Objects to pass data to the XSLT when we apply the transformation: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl" xmlns:EmailNotification="ext:EmailNotification"> -- this way, we can have statements like: <p> Dear <xsl:value-of select="EmailNotification:get_FullName()" />: </p> The above works fine. I pass the object via code like this (some irrelevant code omitted for brevity): // purely an example structure public struct EmailNotification { public string FullName { get; set; } } // Somewhere in some method ... var notification = new Notification("John Smith"); // ... XsltArgumentList xslArgs = new XsltArgumentList(); xslArgs.AddExtensionObject("ext:EmailNotification", notification); // ... // The part where it breaks! (This is where we do the transformation) xslt.Transform(fakeXMLDocument.CreateNavigator(), xslArgs, XmlWriter.Create(transformedXMLString)); So, all of the above code works. However, I wanted to get a little fancy (always my downfall) and pass a collection, so that I could do something like this: <p>The following accounts need to be verified:</p> <xsl:for-each select="EmailNotification:get_SomeCollection()"> <ul> <li> <xsl:value-of select="@SomeAttribute" /> </li> </ul> </xsl:for-each> When I pass the collection in the extension object and attempt to transform, I get the following error: "Extension function parameters or return values which have Clr type 'String[]' are not supported." or List, or IEnumerable, or whatever I try to pass in. So, my questions are: How can I pass in a collection to my XSLT? What do I put for the xsl:value-of select="" inside the xsl:for-each ? Is what I am trying to do impossible?

  • UNIX pipes in C block on read

    - by Toni Cárdenas
    I'm struggling to implement a shell with pipelines for class. typedef struct { char** cmd; int in[2]; int out[2]; } cmdio; cmdio cmds[MAX_PIPE + 1]; Commands in the pipeline are read and stored in cmds. cmds[i].in is the pair of file descriptors of the input pipe returned by pipe(). For the first command, which reads from terminal input, it is just {fileno(stdin), -1}. cmds[i].out is similar for the output pipe/terminal output. cmds[i].in is the same as cmds[i-1].out. For example: $ ls -l | sort | wc CMD: ls -l IN: 0 -1 OUT: 3 4 CMD: sort IN: 3 4 OUT: 5 6 CMD: wc IN: 5 6 OUT: -1 1 We pass each command to process_command, which does a number of things: for (cmdi = 0; cmds[cmdi].cmd != NULL; cmdi++) { process_command(&cmds[cmdi]); } Now, inside process_command: if (!(pid_fork = fork())) { dup2(cmd->in[0], fileno(stdin)); dup2(cmd->out[1], fileno(stdout)); if (cmd->in[1] >= 0) { if (close(cmd->in[1])) { perror(NULL); } } if (cmd->out[0] >= 0) { if (close(cmd->out[0])) { perror(NULL); } } execvp(cmd->cmd[0], cmd->cmd); exit(-1); } The problem is that reading from the pipe blocks forever: COMMAND $ ls | wc Created pipe, in: 5 out: 6 Foreground pid: 9042, command: ls, Exited, info: 0 [blocked running read() within wc] If, instead of replacing the process with execvp, I just do this: if (!(pid_fork = fork())) { dup2(cmd->in[0], fileno(stdin)); dup2(cmd->out[1], fileno(stdout)); if (cmd->in[1] >= 0) { if (close(cmd->in[1])) { perror(NULL); } } if (cmd->out[0] >= 0) { if (close(cmd->out[0])) { perror(NULL); } } char buf[6]; read(fileno(stdin), buf, 5); buf[5] = '\0'; printf("%s\n", buf); exit(0); } It happens to work: COMMAND $ cmd1 | cmd2 | cmd3 | cmd4 | cmd5 Created pipe, in: 11 out: 12 Created pipe, in: 13 out: 14 Created pipe, in: 15 out: 16 Created pipe, in: 17 out: 18 hola! Foreground pid: 9251, command: cmd1, Exited, info: 0 Foreground pid: 9252, command: cmd2, Exited, info: 0 Foreground pid: 9253, command: cmd3, Exited, info: 0 Foreground pid: 9254, command: cmd4, Exited, info: 0 hola! Foreground pid: 9255, command: cmd5, Exited, info: 0 What could be the problem?
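
    A hedged guess at the cause (the classic pipeline mistake, though I cannot confirm it from the snippet alone): each child, and the shell itself, still holds write ends of other pipes open, so the reader at the end of the pipeline never sees EOF and read() blocks forever. The exec'ed program (wc) reads until EOF, while the hard-coded read() of 5 bytes returns as soon as 5 bytes arrive, which would explain why the second version "works". The usual pattern is to close every pipe descriptor a process does not use:

        /* after fork(), in the child (sketch; cmds and cmd as defined above) */
        dup2(cmd->in[0], STDIN_FILENO);
        dup2(cmd->out[1], STDOUT_FILENO);
        /* close ALL inherited pipe fds, not just this command's spares */
        for (int k = 0; cmds[k].cmd != NULL; ++k) {
            if (cmds[k].in[0] > 2) close(cmds[k].in[0]);
            if (cmds[k].in[1] > 2) close(cmds[k].in[1]);
            if (cmds[k].out[0] > 2) close(cmds[k].out[0]);
            if (cmds[k].out[1] > 2) close(cmds[k].out[1]);
        }
        execvp(cmd->cmd[0], cmd->cmd);
        /* ...and in the parent, close both ends of each pipe once the two
           commands it connects have been forked. */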

  • C++ string sort like a human being?

    - by Walter Nissen
    I would like to sort alphanumeric strings the way a human being would sort them. I.e., "A2" comes before "A10", and "a" certainly comes before "Z"! Is there any way to do this without writing a mini-parser? Ideally it would also put "A1B1" before "A1B10". I see the question "Natural (human alpha-numeric) sort in Microsoft SQL 2005" with a possible answer, but it uses various library functions, as does "Sorting Strings for Humans with IComparer". Below is a test case that currently fails: #include <set> #include <iterator> #include <iostream> #include <sstream> #include <cctype> #include <vector> #include <cassert> template <typename T> struct LexicographicSort { inline bool operator() (const T& lhs, const T& rhs) const{ std::ostringstream s1,s2; s1 << toLower(lhs); s2 << toLower(rhs); bool less = s1.str() < s2.str(); std::cout<<s1.str()<<" "<<s2.str()<<" "<<less<<"\n"; return less; } inline std::string toLower(const std::string& str) const { std::string newString(""); for (std::string::const_iterator charIt = str.begin(); charIt!=str.end();++charIt) { newString.push_back(std::tolower(*charIt)); } return newString; } }; int main(void) { const std::string reference[5] = {"ab","B","c1","c2","c10"}; std::vector<std::string> referenceStrings(&(reference[0]), &(reference[5])); //Insert in reverse order so we know they get sorted std::set<std::string,LexicographicSort<std::string> > strings(referenceStrings.rbegin(), referenceStrings.rend()); std::cout<<"Items:\n"; std::copy(strings.begin(), strings.end(), std::ostream_iterator<std::string>(std::cout, "\n")); std::vector<std::string> sortedStrings(strings.begin(), strings.end()); assert(sortedStrings == referenceStrings); }
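
    For reference, a self-contained comparator sketch (mine, not validated against the test suite above; leading zeros such as "007" vs "7" are deliberately not handled) that compares digit runs numerically and everything else case-insensitively, which puts "A2" before "A10" and "A1B1" before "A1B10":

        #include <cctype>
        #include <string>

        bool natural_less(const std::string& a, const std::string& b) {
            std::string::size_type i = 0, j = 0;
            while (i < a.size() && j < b.size()) {
                if (std::isdigit((unsigned char)a[i]) &&
                    std::isdigit((unsigned char)b[j])) {
                    std::string::size_type i2 = i, j2 = j;  // span the digit runs
                    while (i2 < a.size() && std::isdigit((unsigned char)a[i2])) ++i2;
                    while (j2 < b.size() && std::isdigit((unsigned char)b[j2])) ++j2;
                    if (i2 - i != j2 - j)          // more digits => larger number
                        return i2 - i < j2 - j;
                    int c = a.compare(i, i2 - i, b, j, j2 - j);
                    if (c != 0) return c < 0;
                    i = i2; j = j2;                // equal runs, keep scanning
                } else {
                    int ca = std::tolower((unsigned char)a[i]);
                    int cb = std::tolower((unsigned char)b[j]);
                    if (ca != cb) return ca < cb;
                    ++i; ++j;
                }
            }
            return (a.size() - i) < (b.size() - j); // shorter prefix sorts first
        }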

  • Why are static classes considered “classes” and “reference types”?

    - by Timwi
    I’ve been pondering about the C# and CIL type system today and I’ve started to wonder why static classes are considered classes. There are many ways in which they are not really classes: A “normal” class can contain non-static members, a static class can’t. In this respect, a class is more similar to a struct than it is to a static class, and yet structs have a separate name. You can have a reference to an instance of a “normal” class, but not a static class (despite it being considered a “reference type”). In this respect, a class is more similar to an interface than it is to a static class, and yet interfaces have a separate name. The name of a static class can never be used in any place where a type name would normally fit: you can’t declare a variable of this type, you can’t use it as a base type, and you can’t use it as a generic type parameter. In this respect, static classes are somewhat more like namespaces. A “normal” class can implement interfaces. Once again, that makes classes more similar to structs than to static classes. A “normal” class can inherit from another class. It is also bizarre that static classes are considered to derive from System.Object. Although this allows them to “inherit” the static methods Equals and ReferenceEquals, the purpose of that inheritance is questionable as you would call those methods on object anyway. C# even allows you to specify that useless inheritance explicitly on static classes, but not on interfaces or structs, where the implicit derivation from object and System.ValueType, respectively, actually has a purpose. Regarding the subset-of-features argument: Static classes have a subset of the features of classes, but they also have a subset of the features of structs. All of the things that make a class distinct from the other kinds of type, do not seem to apply to static classes. Regarding the typeof argument: Making a static class into a new and different kind of type does not preclude it from being used in typeof. Given the sheer oddity of static classes, and the scarcity of similarities between them and “normal” classes, shouldn’t they have been made into a separate kind of type instead of a special kind of class?

  • template pass by const reference

    - by 7vies
    Hi, I've looked over a few similar questions, but I'm still confused. I'm trying to figure out how to explicitly (not by compiler optimizations) and in a C++03-compatible way avoid copying an object when passing it to a template function. Here is my test code: #include <iostream> using namespace std; struct C { C() { cout << "C()" << endl; } C(const C&) { cout << "C(C)" << endl; } ~C() { cout << "~C()" << endl; } }; template<class T> void f(T) { cout << "f<T>" << endl; } template<> void f(C c) { cout << "f<C>" << endl; } // (1) template<> void f(const C& c) { cout << "f<C&>" << endl; } // (2) int main() { C c; f(c); return 0; } (1) accepts the object of type C, and makes a copy. Here is the output: C() C(C) f<C> ~C() ~C() So I've tried to specialize with a const C& parameter (2) to avoid this, but this simply doesn't work (apparently the reason is explained in this question). Well, I could "pass by pointer", but that's kind of ugly. So is there some trick that would allow me to do that nicely? EDIT: Oh, probably I wasn't clear. I already have a templated function template<class T> void f(T) {...} But now I want to specialize this function to accept a const& to another object: template<> void f(const SpecificObject&) {...} But it only gets called if I define it as template<> void f(SpecificObject) {...}
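
    The standard workaround (a sketch of the usual advice, not quoted from the thread): use an ordinary overload instead of a specialization. An explicit specialization must match a signature the primary template would deduce, but a plain non-template overload may have any signature and is preferred by overload resolution when it matches, so no copy is made:

        template<class T> void f(T) { /* generic version, copies */ }

        // A plain overload, not a specialization: chosen for SpecificObject
        // (the asker's placeholder name), and the argument is not copied.
        void f(const SpecificObject&) { /* ... */ }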

  • Basic data alignment question

    - by Broken Logic
    I've been playing around to see how my computer works under the hood. What I'm interested in seeing is what happens on the stack inside a function. To do this I've written the following toy program: #include <stdio.h> void __cdecl Test1(char a, unsigned long long b, char c) { char c1; unsigned long long b1; char a1; c1 = 'b'; b1 = 4; a1 = 'r'; printf("%d %d - %d - %d %d Total: %d\n", (long)&b1 - (long)&a1, (long)&c1 - (long)&b1, (long)&a - (long)&c1, (long)&b - (long)&a, (long)&c - (long)&b, (long)&c - (long)&a1 ); }; struct TestStruct { char a; unsigned long long b; char c; }; void __cdecl Test2(char a, unsigned long long b, char c) { TestStruct locals; locals.a = 'b'; locals.b = 4; locals.c = 'r'; printf("%d %d - %d - %d %d Total: %d\n", (long)&locals.b - (long)&locals.a, (long)&locals.c - (long)&locals.b, (long)&a - (long)&locals.c, (long)&b - (long)&a, (long)&c - (long)&b, (long)&c - (long)&locals.a ); }; int main() { Test1('f', 0, 'o'); Test2('f', 0, 'o'); return 0; } And this spits out the following: 9 19 - 13 - 4 8 Total: 53 8 8 - 24 - 4 8 Total: 52 The function args are well behaved, but as the calling convention is specified, I'd expect this. But the local variables are a bit wonky. My question is, why wouldn't these be the same? The second call seems to produce a more compact and better aligned stack. Looking at the ASM is unenlightening (at least to me), as the variable addresses are still aliased there. So I guess this is really a question about how the compiler allocates the stack for local variables. I realise that any specific answer is likely to be platform specific. I'm more interested in a general explanation unless this quirk really is platform specific. For the record though, I'm compiling with VS2010 on a 64-bit Intel machine.
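
    One part of the picture is guaranteed and easy to inspect (a side sketch of mine, not from the post): TestStruct's layout follows fixed alignment-and-padding rules, whereas the compiler is free to reorder, pad and (with VS2010's buffer-security checks) space out independent locals. offsetof shows the mandated part:

        #include <cstddef>
        #include <cstdio>

        struct TestStruct { char a; unsigned long long b; char c; };

        int main() {
            // With 8-byte alignment for unsigned long long, the usual layout
            // is a@0 (+7 padding), b@8, c@16 (+7 tail padding) => size 24.
            std::printf("a:%u b:%u c:%u size:%u\n",
                        (unsigned)offsetof(TestStruct, a),
                        (unsigned)offsetof(TestStruct, b),
                        (unsigned)offsetof(TestStruct, c),
                        (unsigned)sizeof(TestStruct));
            return 0;
        }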

  • Calling CryptUIWizDigitalSign from .NET on x64

    - by Joe Kuemerle
    I am trying to digitally sign files using the CryptUIWizDigitalSign function from a .NET 2.0 application compiled to AnyCPU. The call works fine when running on x86 but fails on x64; it also works on an x64 OS when compiled to x86. Any idea on how to better marshal or call from x64? The Win32Exception returned is "Error encountered during digital signing of the file ..." with a native error code of -2146762749. The relevant portions of the code are: [StructLayout(LayoutKind.Sequential)] public struct CRYPTUI_WIZ_DIGITAL_SIGN_INFO { public Int32 dwSize; public Int32 dwSubjectChoice; [MarshalAs(UnmanagedType.LPWStr)] public string pwszFileName; public Int32 dwSigningCertChoice; public IntPtr pSigningCertContext; [MarshalAs(UnmanagedType.LPWStr)] public string pwszTimestampURL; public Int32 dwAdditionalCertChoice; public IntPtr pSignExtInfo; } [DllImport("Cryptui.dll", CharSet=CharSet.Unicode, SetLastError=true)] public static extern bool CryptUIWizDigitalSign(int dwFlags, IntPtr hwndParent, string pwszWizardTitle, ref CRYPTUI_WIZ_DIGITAL_SIGN_INFO pDigitalSignInfo, ref IntPtr ppSignContext); CRYPTUI_WIZ_DIGITAL_SIGN_INFO digitalSignInfo = new CRYPTUI_WIZ_DIGITAL_SIGN_INFO(); digitalSignInfo = new CRYPTUI_WIZ_DIGITAL_SIGN_INFO(); digitalSignInfo.dwSize = Marshal.SizeOf(digitalSignInfo); digitalSignInfo.dwSubjectChoice = 1; digitalSignInfo.dwSigningCertChoice = 1; digitalSignInfo.pSigningCertContext = pSigningCertContext; digitalSignInfo.pwszTimestampURL = timestampUrl; digitalSignInfo.dwAdditionalCertChoice = 0; digitalSignInfo.pSignExtInfo = IntPtr.Zero; digitalSignInfo.pwszFileName = filepath; CryptUIWizDigitalSign(1, IntPtr.Zero, null, ref digitalSignInfo, ref pSignContext); And here is how the SigningCertContext is retrieved (minus various error handling): public IntPtr GetCertContext(String pfxfilename, String pswd) { IntPtr hMemStore = IntPtr.Zero; IntPtr hCertCntxt = IntPtr.Zero; IntPtr pProvInfo = IntPtr.Zero; uint provinfosize = 0; try { byte[] pfxdata = PfxUtility.GetFileBytes(pfxfilename); CRYPT_DATA_BLOB ppfx = new CRYPT_DATA_BLOB(); ppfx.cbData = pfxdata.Length; ppfx.pbData = Marshal.AllocHGlobal(pfxdata.Length); Marshal.Copy(pfxdata, 0, ppfx.pbData, pfxdata.Length); hMemStore = Win32.PFXImportCertStore(ref ppfx, pswd, CRYPT_USER_KEYSET); pswd = null; if (hMemStore != IntPtr.Zero) { Marshal.FreeHGlobal(ppfx.pbData); while ((hCertCntxt = Win32.CertEnumCertificatesInStore(hMemStore, hCertCntxt)) != IntPtr.Zero) { if (Win32.CertGetCertificateContextProperty(hCertCntxt, CERT_KEY_PROV_INFO_PROP_ID, IntPtr.Zero, ref provinfosize)) pProvInfo = Marshal.AllocHGlobal((int)provinfosize); else continue; if (Win32.CertGetCertificateContextProperty(hCertCntxt, CERT_KEY_PROV_INFO_PROP_ID, pProvInfo, ref provinfosize)) break; } } } finally { if (pProvInfo != IntPtr.Zero) Marshal.FreeHGlobal(pProvInfo); if (hMemStore != IntPtr.Zero) Win32.CertCloseStore(hMemStore, 0); } return hCertCntxt; }

  • g++ linker error--typeinfo, but not vtable

    - by James
    I know the standard answer for a linker error about missing typeinfo usually also involves vtable and some virtual function that I forgot to actually define. I'm fairly certain that's not the situation this time. Here's the error: UI.o: In function `boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::Resource::GroupByState>(boost::shared_ptr<Graphics::Resource::GroupByState> const&, boost::detail::dynamic_cast_tag)': UI.cpp:(.text._ZN5boost10shared_ptrIN8Graphics7Widgets9WidgetSetEEC1INS1_8Resource12GroupByStateEEERKNS0_IT_EENS_6detail16dynamic_cast_tagE[boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::Resource::GroupByState>(boost::shared_ptr<Graphics::Resource::GroupByState> const&, boost::detail::dynamic_cast_tag)]+0x30): undefined reference to `typeinfo for Graphics::Widgets::WidgetSet' Running c++filt on the obnoxious mangled name shows that it actually is looking at boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::Resource::GroupByState>(boost::shared_ptr<Graphics::Resource::GroupByState> const&, boost::detail::dynamic_cast_tag) The inheritance hierarchy looks something like class AbstractGroup { typedef boost::shared_ptr<AbstractGroup> Ptr; ... }; class WidgetSet : public AbstractGroup { typedef boost::shared_ptr<WidgetSet> Ptr; ... }; class GroupByState : public AbstractGroup { ... }; Then there's this: class UI : public GroupByState { ... void LoadWidgets( GroupByState::Ptr resource ); }; Then the original implementation: void UI::LoadWidgets( GroupByState::Ptr resource ) { WidgetSet::Ptr tmp( boost::dynamic_pointer_cast< WidgetSet >(resource) ); if( tmp ) { ... } } Stupid error on my part (trying to cast to a sibling class with a shared parent), even if the error is kind of cryptic. Changing to this: void UI::LoadWidgets( AbstractGroup::Ptr resource ) { WidgetSet::Ptr tmp( boost::dynamic_pointer_cast< WidgetSet >(resource) ); if( tmp ) { ... } } (which I'm fairly sure is what I actually meant to be doing) left me with a very similar error: UI.o: In function `boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::_Drawer::Group>(boost::shared_ptr<Graphics::_Drawer::Group> const&, boost::detail::dynamic_cast_tag)': UI.cpp:(.text._ZN5boost10shared_ptrIN8Graphics7Widgets9WidgetSetEEC1INS1_7_Drawer5GroupEEERKNS0_IT_EENS_6detail16dynamic_cast_tagE[boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::_Drawer::Group>(boost::shared_ptr<Graphics::_Drawer::Group> const&, boost::detail::dynamic_cast_tag)]+0x30): undefined reference to `typeinfo for Graphics::Widgets::WidgetSet' collect2: ld returned 1 exit status dynamic_cast_tag is just an empty struct in boost/shared_ptr.hpp. It's just a guess that boost might have anything at all to do with the error. Passing in a WidgetSet::Ptr totally eliminates the need for a cast, and it builds fine (which is why I think there's more going on than the standard answer for this question). Obviously, I'm trimming away a lot of details that might be important. My next step is to cut it down to the smallest example that fails to build, but I figured I'd try the lazy way out and take a stab on here first. TIA!

  • Having trouble wrapping functions in the Linux kernel

    - by Corey Henderson
    I've written an LKM that implements Trusted Path Execution (TPE) into your kernel: https://github.com/cormander/tpe-lkm I run into an occasional kernel OOPS (described at the end of this question) when I define WRAP_SYSCALLS to 1, and am at my wit's end trying to track it down. A little background: Since the LSM framework doesn't export its symbols, I had to get creative with how I insert the TPE checking into the running kernel. I wrote a find_symbol_address() function that gives me the address of any function I need, and it works very well. I can call functions like this: int (*my_printk)(const char *fmt, ...); my_printk = find_symbol_address("printk"); (*my_printk)("Hello, world!\n"); And it works fine. I use this method to locate the security_file_mmap, security_file_mprotect, and security_bprm_check functions. I then overwrite those functions with an asm jump to my function to do the TPE check. The problem is, the currently loaded LSM will no longer execute the code for its hook to that function, because it's been totally hijacked. Here is an example of what I do: int tpe_security_bprm_check(struct linux_binprm *bprm) { int ret = 0; if (bprm->file) { ret = tpe_allow_file(bprm->file); if (IS_ERR(ret)) goto out; } #if WRAP_SYSCALLS stop_my_code(&cs_security_bprm_check); ret = cs_security_bprm_check.ptr(bprm); start_my_code(&cs_security_bprm_check); #endif out: return ret; } Notice the section between the #if WRAP_SYSCALLS section (it's defined as 0 by default). If set to 1, the LSM's hook is called because I write the original code back over the asm jump and call that function, but I run into an occasional kernel OOPS with an "invalid opcode": invalid opcode: 0000 [#1] SMP RIP: 0010:[<ffffffff8117b006>] [<ffffffff8117b006>] security_bprm_check+0x6/0x310 I don't know what the issue is. I've tried several different types of locking methods (see the inside of start/stop_my_code for details) to no avail. To trigger the kernel OOPS, write a simple bash while loop that endlessly starts a backgrounded "ls" command. After a minute or so, it'll happen. I'm testing this on a RHEL6 kernel; it also happens on Ubuntu 10.04 LTS (2.6.32 x86_64). While this method has been the most successful so far, I have tried another method of simply copying the kernel function to a pointer I created with kmalloc, but when I try to execute it, I get: kernel tried to execute NX-protected page - exploit attempt? (uid: 0). If anyone can tell me how to kmalloc space and have it marked as executable, that would also help me solve the above problem. Any help is appreciated!

  • stxxl Assertion `it != root_node_.end()' failed

    - by Fabrizio Silvestri
    I am receiving this failed assertion when trying to insert an element into a stxxl map. The entire assertion error is the following: resCache: /usr/include/stxxl/bits/containers/btree/btree.h:470: std::pair , bool stxxl::btree::btree::insert(const value_type&) [with KeyType = e_my_key, DataType = unsigned int, CompareType = comp_type, unsigned int RawNodeSize = 16384u, unsigned int RawLeafSize = 131072u, PDAllocStrategy = stxxl::SR, stxxl::btree::btree::value_type = std::pair]: Assertion `it != root_node_.end()' failed. Aborted Any idea? Edit: Here's the code fragment void request_handler::handle_request(my_key& query, reply& rep) { c_++; strip(query.content); std::cout << "Received query " << query.content << " by thread " << boost::this_thread::get_id() << ". It is number " << c_ << "\n"; strcpy(element.first.content, query.content); element.second = c_; testcache_.insert(element); STXXL_MSG("Records in map: " << testcache_.size()); } Edit 2: here are more details (constants such as MAX_QUERY_LEN are omitted) struct comp_type : std::binary_function<my_key, my_key, bool> { bool operator () (const my_key & a, const my_key & b) const { return strncmp(a.content, b.content, MAX_QUERY_LEN) < 0; } static my_key max_value() { return max_key; } static my_key min_value() { return min_key; } }; typedef stxxl::map<my_key, my_data, comp_type> cacheType; cacheType testcache_; request_handler::request_handler() :testcache_(NODE_CACHE_SIZE, LEAF_CACHE_SIZE) { c_ = 0; memset(max_key.content, (std::numeric_limits<unsigned char>::max)(), MAX_QUERY_LEN); memset(min_key.content, (std::numeric_limits<unsigned char>::min)(), MAX_QUERY_LEN); testcache_.enable_prefetching(); STXXL_MSG("Records in map: " << testcache_.size()); }

  • Why do we have reinterpret_cast in C++ when two chained static_cast can do its job?

    - by Nawaz
    Say I want to cast A* to char* and vice-versa, we have two choices (I mean, many of us think we've two choices, because both seem to work! Hence the confusion!): struct A { int age; char name[128]; }; A a; char *buffer = static_cast<char*>(static_cast<void*>(&a)); //choice 1 char *buffer = reinterpret_cast<char*>(&a); //choice 2 Both work fine. //convert back A *pA = static_cast<A*>(static_cast<void*>(buffer)); //choice 1 A *pA = reinterpret_cast<A*>(buffer); //choice 2 Even this works fine! So why do we have reinterpret_cast in C++ when two chained static_casts can do its job? Some of you might think this topic is a duplicate of the previous topics such as those listed at the bottom of this post, but it's not. Those topics discuss it only theoretically, but none of them gives even a single example demonstrating why reinterpret_cast is really needed, and two static_casts would surely fail. I agree, one static_cast would fail. But how about two? If the syntax of two chained static_casts looks cumbersome, then we can write a function template to make it more programmer-friendly: template<class To, class From> To any_cast(From v) { return static_cast<To>(static_cast<void*>(v)); } And then we can use this, as: char *buffer = any_cast<char*>(&a); //choice 1 char *buffer = reinterpret_cast<char*>(&a); //choice 2 //convert back A *pA = any_cast<A*>(buffer); //choice 1 A *pA = reinterpret_cast<A*>(buffer); //choice 2 Also, see this situation where any_cast can be useful: Proper casting for fstream read and write member functions. So my question basically is, Why do we have reinterpret_cast in C++? Please show me even a single example where two chained static_casts would surely fail to do the same job? Which cast to use; static_cast or reinterpret_cast? Cast from Void* to TYPE* : static_cast or reinterpret_cast
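
    Two textbook cases (my examples, not drawn from the linked questions) where no chain of static_casts compiles at all, because the conversion cannot pass through void* in the first place:

        #include <cstdio>

        int main() {
            // 1. Integer -> pointer: static_cast cannot convert an integer
            //    to void*, so there is nothing to chain.
            unsigned long addr = 0x1000;              // hypothetical address
            int* p = reinterpret_cast<int*>(addr);    // OK (impl-defined)
            // int* q = static_cast<int*>(static_cast<void*>(addr)); // error

            // 2. Function pointer -> void*: no standard conversion exists,
            //    so the inner static_cast is already ill-formed;
            //    reinterpret_cast here is conditionally-supported.
            void (*fp)() = 0;
            // void* v = static_cast<void*>(fp);      // error
            void* v = reinterpret_cast<void*>(fp);
            (void)p; (void)v;
            return 0;
        }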

  • Drawing with element array in OpenGL ES

    - by FatalMojo
    Hello! I am trying to use OpenGL ES to draw an x by y matrix of squares about an arbitrary point. I have an array sideVertice[] that holds a series of vertex structs defined as such: typedef struct { GLfloat x; GLfloat y; GLfloat z; } Vertex3D; and an element array defined as such: GLubyte elementArray[]; My draw loop is as such: glLoadIdentity(); glVertexPointer(3, GL_FLOAT, 0, cube.sideVertice); for (int i=0; i<((cube.cubeSize + 1)*(cube.cubeSize + 1)); i++) { for (int j=0; j<=3; j++) { elementArray[j] = j + i*4; glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, elementArray); } } for (int i=0; i<=3; i++) elementArray[i] = i; However, the visual output is corrupted and I cannot figure out what the problem is. Here is the output of the vertices held in the array: 2010-04-15 23:44:48.816 RubixGL[4203:20b] vertex[0][0] x:-26.000000 y:1.000000 2010-04-15 23:44:48.817 RubixGL[4203:20b] vertex[1][1] x:-26.000000 y:26.000000 2010-04-15 23:44:48.826 RubixGL[4203:20b] vertex[2][2] x:-1.000000 y:1.000000 2010-04-15 23:44:48.829 RubixGL[4203:20b] vertex[3][3] x:-1.000000 y:26.000000 2010-04-15 23:44:48.830 RubixGL[4203:20b] Next Face 2010-04-15 23:44:48.830 RubixGL[4203:20b] vertex[0][4] x:1.000000 y:1.000000 2010-04-15 23:44:48.832 RubixGL[4203:20b] vertex[1][5] x:1.000000 y:26.000000 2010-04-15 23:44:48.837 RubixGL[4203:20b] vertex[2][6] x:26.000000 y:1.000000 2010-04-15 23:44:48.838 RubixGL[4203:20b] vertex[3][7] x:26.000000 y:26.000000 2010-04-15 23:44:48.848 RubixGL[4203:20b] Next Face 2010-04-15 23:44:48.849 RubixGL[4203:20b] vertex[0][8] x:-26.000000 y:-26.000000 2010-04-15 23:44:48.850 RubixGL[4203:20b] vertex[1][9] x:-26.000000 y:-1.000000 2010-04-15 23:44:48.851 RubixGL[4203:20b] vertex[2][10] x:-1.000000 y:-26.000000 2010-04-15 23:44:48.852 RubixGL[4203:20b] vertex[3][11] x:-1.000000 y:-1.000000 2010-04-15 23:44:48.853 RubixGL[4203:20b] Next Face 2010-04-15 23:44:48.853 RubixGL[4203:20b] vertex[0][12] x:1.000000 y:-26.000000 2010-04-15 23:44:48.854 RubixGL[4203:20b] vertex[1][13] x:1.000000 y:-1.000000 2010-04-15 23:44:48.854 RubixGL[4203:20b] vertex[2][14] x:26.000000 y:-26.000000 2010-04-15 23:44:48.855 RubixGL[4203:20b] vertex[3][15] x:26.000000 y:-1.000000 Any ideas?
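
    If I read the loops right (an outside guess, not a confirmed fix): the inner loop calls glDrawElements once per partially filled elementArray, and the final reset loop caps the indices at 0..3, so most squares are drawn from wrong or half-written indices. Filling all four indices for square i before a single draw call looks closer to the intent:

        for (int i = 0; i < (cube.cubeSize + 1) * (cube.cubeSize + 1); i++) {
            GLubyte quad[4];
            for (int j = 0; j < 4; j++)
                quad[j] = (GLubyte)(i * 4 + j);   // 4 vertices per square
            glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, quad);
        }
        // Note: GL_UNSIGNED_BYTE caps indices at 255, i.e. 64 squares;
        // past that, switch to GL_UNSIGNED_SHORT.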

  • overloading new/delete problem

    - by hidayat
    This is my scenario: I'm trying to overload new and delete globally. I have written my allocator class in a file called allocator.h. And what I am trying to achieve is that if a file includes this header file, my version of new and delete should be used. So in a header file "allocator.h" I have declared the two functions extern void* operator new(std::size_t size); extern void operator delete(void *p, std::size_t size); In the same header file I have a class that does all the allocator stuff, class SmallObjAllocator { ... }; I want to call this class from the new and delete functions and I would like the class to be static, so I have done this: template<unsigned dummy> struct My_SmallObjectAllocatorImpl { static SmallObjAllocator myAlloc; }; template<unsigned dummy> SmallObjAllocator My_SmallObjectAllocatorImpl<dummy>::myAlloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE); typedef My_SmallObjectAllocatorImpl<0> My_SmallObjectAllocator; and in the cpp file it looks like this: allocator.cc void* operator new(std::size_t size) { std::cout << "using my new" << std::endl; if(size > MAX_OBJ_SIZE) return malloc(size); else return My_SmallObjectAllocator::myAlloc.allocate(size); } void operator delete(void *p, std::size_t size) { if(size > MAX_OBJ_SIZE) free(p); else My_SmallObjectAllocator::myAlloc.deallocate(p, size); } The problem is when I try to call the constructor for the class SmallObjAllocator, which is a static object. For some reason the compiler is calling my overloaded operator new when initializing it. So it then tries to use My_SmallObjectAllocator::myAlloc.deallocate(p, size) on an object that is not yet initialized, so the program crashes. So why is the compiler calling new when I define a static object, and how can I solve it?
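
    A plausible reading (an assumption on my part): the template trick does not control initialization order across translation units, so the replaced operator new can run, during the construction of some other static object, before myAlloc has been constructed. A function-local static ("construct on first use") sidesteps the ordering problem; a sketch using the question's names:

        static SmallObjAllocator& allocator() {
            // constructed the first time operator new needs it, never before
            static SmallObjAllocator alloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE);
            return alloc;
        }

        void* operator new(std::size_t size) {
            // caveat: SmallObjAllocator's own constructor must not allocate
            // through this operator new, or the call recurses
            return size > MAX_OBJ_SIZE ? malloc(size)
                                       : allocator().allocate(size);
        }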

  • C++ vector and segmentation faults

    - by Headspin
    I am working on a simple mathematical parser. Something that just reads number = 1 + 2; I have a vector containing these tokens. They store a type and string value of the character. I am trying to step through the vector to build an AST of these tokens, and I keep getting segmentation faults, even when I am under the impression my code should prevent this from happening. Here is the bit of code that builds the AST: struct ASTGen { const vector<Token> &Tokens; unsigned int size, pointer; ASTGen(const vector<Token> &t) : Tokens(t), pointer(0) { size = Tokens.size() - 1; } unsigned int next() { return pointer + 1; } Node* Statement() { if(next() <= size) { switch(Tokens[next()].type) { case EQUALS : Node* n = Assignment_Expr(); return n; } } advance(); } void advance() { if(next() <= size) ++pointer; } Node* Assignment_Expr() { Node* lnode = new Node(Tokens[pointer], NULL, NULL); advance(); Node* n = new Node(Tokens[pointer], lnode, Expression()); return n; } Node* Expression() { if(next() <= size) { advance(); if(Tokens[next()].type == SEMICOLON) { Node* n = new Node(Tokens[pointer], NULL, NULL); return n; } if(Tokens[next()].type == PLUS) { Node* lnode = new Node(Tokens[pointer], NULL, NULL); advance(); Node* n = new Node(Tokens[pointer], lnode, Expression()); return n; } } } }; ... ASTGen AST(Tokens); Node* Tree = AST.Statement(); cout << Tree->Right->Data.svalue << endl; I can access Tree->Data.svalue and get the = Node's token info, so I know that node is getting spawned, and I can also get Tree->Left->Data.svalue and get the variable to the left of the = I have re-written it many times trying out different methods for stepping through the vector, but I always get a segmentation fault when I try to access the = right node (which should be the + node) Any help would be greatly appreciated.

  • reverse a linked list?

    - by sil3nt
    Hi there, I'm trying to reverse the order of the following linked list. I've done so, but the reversed list does not seem to print out. Where have I gone wrong? //reverse the linked list #include <iostream> using namespace std; struct node{ int number; node *next; }; node *A; void addNode(node *&listpointer, int num){ node *temp; temp = new node; temp->number = num; temp->next = listpointer; listpointer = temp; } void reverseNode(node *&listpointer){ node *temp,*current; current = listpointer; temp = new node; while (true){ if (current == NULL){ temp = NULL; break; } temp->number = current->number; current = current->next; temp = temp->next; } listpointer = temp; } int main(){ A = NULL; addNode(A,1); addNode(A,2); addNode(A,3); while (true){ if (A == NULL){break;} cout<< A->number << endl; A = A->next; } cout<< "****" << endl; reverseNode(A); while (true){ if (A == NULL){break;} cout<< A->number << endl; A = A->next; } cout<< "****"<< endl; return 0; }
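
    Two things stand out (my diagnosis, not an accepted answer): reverseNode() allocates one node and then walks temp = temp->next into uninitialized memory instead of relinking the existing nodes, and the first print loop advances A itself, so the list head is already NULL by the time reverseNode(A) runs. A sketch of the standard in-place reversal, plus printing with a separate cursor:

        void reverseNode(node*& listpointer) {
            node* prev = NULL;
            node* current = listpointer;
            while (current != NULL) {
                node* next = current->next;  // remember the rest of the list
                current->next = prev;        // flip this node's link
                prev = current;
                current = next;
            }
            listpointer = prev;              // the old tail is the new head
        }

        // print without destroying the head pointer:
        // for (node* p = A; p != NULL; p = p->next) cout << p->number << endl;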
