Search Results

Search found 1621 results on 65 pages for 'cout'.

  • Linked List manipulation, issues retrieving data c++

    - by floatfil
    I'm trying to implement some functions to manipulate a linked list. The implementation is a template (typename T) and the class is 'List', which includes a 'head' pointer and also a struct:

```cpp
struct Node {     // the node in a linked list
    T* data;      // pointer to actual data, operations in T
    Node* next;   // pointer to a Node
};
```

    Since it is a template, and 'T' can be any data, how do I go about checking the data of a list to see if it matches the data input into the function? The function is called 'retrieve' and takes two parameters, the data and a pointer:

```cpp
bool retrieve(T target, T*& ptr); // This is the prototype we need to use for the project
```

    "bool retrieve: similar to remove, but not removed from list. If there are duplicates in the list, the first one encountered is retrieved. Second parameter is unreliable if return value is false." E.g.:

```cpp
Employee target("duck", "donald");
success = company1.retrieve(target, oneEmployee);
if (success) {
    cout << "Found in list: " << *oneEmployee << endl;
}
```

    And the function is called like this:

```cpp
company4.retrieve(emp3, oneEmployee)
```

    So that when you cout *oneEmployee, you get the data of that pointer (in this case the data is of type Employee). (Also, this is assuming all data types have the appropriate overloaded operators.)

    I hope this makes sense so far, but my issue is in comparing the data in the parameter and the data while going through the list. (The data types that we use all include overloads for the equality operators, so oneData == twoData is valid.) This is what I have so far:

```cpp
template <typename T>
bool List<T>::retrieve(T target, T*& ptr) {
    List<T>::Node* dummyPtr = head;        // point dummy pointer to what the list's head points to
    for (;;) {
        if (*dummyPtr->data == target) {   // EDIT: it now compiles, but it breaks here and I get an Access Violation error.
            ptr = dummyPtr->data;          // set the parameter pointer to the dummy pointer
            return true;
        } else {
            dummyPtr = dummyPtr->next;     // else, move to the next data node
        }
    }
    return false;
}
```

    Here is the implementation for the Employee class:

```cpp
//-------------------------- constructor -----------------------------------
Employee::Employee(string last, string first, int id, int sal) {
    idNumber = (id >= 0 && id <= MAXID ? id : -1);
    salary = (sal >= 0 ? sal : -1);
    lastName = last;
    firstName = first;
}

//-------------------------- destructor ------------------------------------
// Needed so that memory for strings is properly deallocated
Employee::~Employee() { }

//---------------------- copy constructor -----------------------------------
Employee::Employee(const Employee& E) {
    lastName = E.lastName;
    firstName = E.firstName;
    idNumber = E.idNumber;
    salary = E.salary;
}

//-------------------------- operator= ---------------------------------------
Employee& Employee::operator=(const Employee& E) {
    if (&E != this) {
        idNumber = E.idNumber;
        salary = E.salary;
        lastName = E.lastName;
        firstName = E.firstName;
    }
    return *this;
}

//----------------------------- setData ------------------------------------
// set data from file
bool Employee::setData(ifstream& inFile) {
    inFile >> lastName >> firstName >> idNumber >> salary;
    return idNumber >= 0 && idNumber <= MAXID && salary >= 0;
}

//------------------------------- < ----------------------------------------
// < defined by value of name
bool Employee::operator<(const Employee& E) const {
    return lastName < E.lastName || (lastName == E.lastName && firstName < E.firstName);
}

//------------------------------- <= ---------------------------------------
bool Employee::operator<=(const Employee& E) const {
    return *this < E || *this == E;
}

//------------------------------- > ----------------------------------------
// > defined by value of name
bool Employee::operator>(const Employee& E) const {
    return lastName > E.lastName || (lastName == E.lastName && firstName > E.firstName);
}

//------------------------------- >= ---------------------------------------
bool Employee::operator>=(const Employee& E) const {
    return *this > E || *this == E;
}

//----------------- operator == (equality) ----------------
// if name of calling and passed object are equal, return true, otherwise false
bool Employee::operator==(const Employee& E) const {
    return lastName == E.lastName && firstName == E.firstName;
}

//----------------- operator != (inequality) ----------------
// return opposite value of operator==
bool Employee::operator!=(const Employee& E) const {
    return !(*this == E);
}

//------------------------------- << ---------------------------------------
// display Employee object
ostream& operator<<(ostream& output, const Employee& E) {
    output << setw(4) << E.idNumber << setw(7) << E.salary << " " << E.lastName << " " << E.firstName << endl;
    return output;
}
```

    I will include a check for a NULL pointer, but I just want to get this working and will test it on a list that includes the data I am checking. Thanks to whoever can help, and as usual, this is for a course so I don't expect or want the answer, but any tips as to what might be going wrong will help immensely!
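
    The access violation is consistent with the for (;;) loop walking off the end of the list: when target is not present, dummyPtr eventually becomes NULL and *dummyPtr->data dereferences it. A minimal sketch of the traversal with the missing termination check (assuming, as is conventional, that head is NULL for an empty list and the last node's next is NULL):

```cpp
template <typename T>
bool List<T>::retrieve(T target, T*& ptr) {
    for (Node* cur = head; cur != NULL; cur = cur->next) {  // stop at the end of the list
        if (cur->data != NULL && *cur->data == target) {    // guard before dereferencing
            ptr = cur->data;
            return true;
        }
    }
    return false;  // reached only when no node matched
}
```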

  • GLSL subroutine not being used

    - by amoffat
    I'm using a gaussian blur fragment shader. In it, I thought it would be concise to include 2 subroutines: one for selecting the horizontal texture coordinate offsets, and another for the vertical texture coordinate offsets. This way, I just have one gaussian blur shader to manage. Here is the code for my shader. The {{NAME}} bits are template placeholders that I substitute in at shader compile time:

```glsl
#version 420

subroutine vec2 sample_coord_type(int i);
subroutine uniform sample_coord_type sample_coord;

in vec2 texcoord;
out vec3 color;

uniform sampler2D tex;
uniform int texture_size;

const float offsets[{{NUM_SAMPLES}}] = float[]({{SAMPLE_OFFSETS}});
const float weights[{{NUM_SAMPLES}}] = float[]({{SAMPLE_WEIGHTS}});

subroutine(sample_coord_type) vec2 vertical_coord(int i) {
    return vec2(0.0, offsets[i] / texture_size);
}

subroutine(sample_coord_type) vec2 horizontal_coord(int i) {
    //return vec2(offsets[i] / texture_size, 0.0);
    return vec2(0.0, 0.0); // just for testing if this subroutine gets used
}

void main(void) {
    color = vec3(0.0);
    for (int i = 0; i < {{NUM_SAMPLES}}; i++) {
        color += texture(tex, texcoord + sample_coord(i)).rgb * weights[i];
        color += texture(tex, texcoord - sample_coord(i)).rgb * weights[i];
    }
}
```

    Here is my code for selecting the subroutine:

```cpp
blur_program->start();
blur_program->set_subroutine("sample_coord", "vertical_coord", GL_FRAGMENT_SHADER);
blur_program->set_int("texture_size", width);
blur_program->set_texture("tex", *deferred_output);
blur_program->draw(); // draws a quad for the fragment shader to run on
```

    and:

```cpp
void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target) {
    GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str());
    GLuint uniform_index = glGetSubroutineUniformLocation(id, target, name.c_str());
    glUniformSubroutinesuiv(target, 1, &routine_index);

    // debugging
    int num_subs;
    glGetActiveSubroutineUniformiv(id, target, uniform_index, GL_NUM_COMPATIBLE_SUBROUTINES, &num_subs);
    std::cout << uniform_index << " " << routine_index << " " << num_subs << "\n";
}
```

    I've checked for errors, and there are none. When I pass in vertical_coord as the routine to use, my scene is blurred vertically, as it should be. The routine_index variable is also 1 (which is weird, because the vertical_coord subroutine is the first listed in the shader code... but no matter, maybe the compiler is switching things around). However, when I pass in horizontal_coord, my scene is STILL blurred vertically, even though the value of routine_index is 0, suggesting that a different subroutine is being used. Yet the horizontal_coord subroutine explicitly does not blur. What's more, whichever subroutine comes first in the shader is the one the shader uses permanently. Right now, vertical_coord comes first, so the shader always blurs vertically. If I put horizontal_coord first, the scene is unblurred, as expected, but then I cannot select the vertical_coord subroutine! :) Also, the value of num_subs is 2, suggesting that there are 2 subroutines compatible with my sample_coord subroutine uniform. Just to reiterate, all of my return values are fine, and there are no glGetError() errors happening. Any ideas?
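
    One known pitfall worth checking: glUniformSubroutinesuiv must supply an index for every active subroutine uniform location in the stage, in a single call whose count equals GL_ACTIVE_SUBROUTINE_UNIFORM_LOCATIONS, and that state is lost every time the program is re-bound with glUseProgram, so it has to be re-sent after each bind. A sketch of set_subroutine under that rule (an assumption about your setup: 'id' is the linked program and it is currently bound; also needs <vector>):

```cpp
void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target)
{
    // how many subroutine uniform locations does this stage have?
    GLint num_locations = 0;
    glGetProgramStageiv(id, target, GL_ACTIVE_SUBROUTINE_UNIFORM_LOCATIONS, &num_locations);

    // one index per location; place the chosen routine at its uniform's location
    std::vector<GLuint> indices(num_locations, 0);
    GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str());
    GLint uniform_location = glGetSubroutineUniformLocation(id, target, name.c_str());
    if (uniform_location >= 0)
        indices[uniform_location] = routine_index;

    // the count argument must equal GL_ACTIVE_SUBROUTINE_UNIFORM_LOCATIONS
    glUniformSubroutinesuiv(target, num_locations, indices.data());
}
```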

  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that texture to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from the FBO, it only renders everything blue, but doesn't show the triangle. I can't seem to figure out why this is happening. Can someone please help me out with this? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I set up the FBO. I've pasted the setup code at the end. Here is my rendering code:

```cpp
void FrameBufferObject::Render(unsigned int elapsedGameTime)
{
    glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
    glClearColor(0.0, 0.6, 0.5, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // adjust viewport and projection matrices to texture dimensions
    glPushAttrib(GL_VIEWPORT_BIT);
    glViewport(0, 0, m_FBOWidth, m_FBOHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    DrawTriangle();

    glPopAttrib();

    // setting FrameBuffer back to window-specified Framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind

    // back to normal viewport and projection matrix
    //glViewport(0, 0, 1280, 768);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 1.33, 1.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    render(elapsedGameTime);
}

void FrameBufferObject::DrawTriangle()
{
    glPushMatrix();
    glBegin(GL_TRIANGLES);
        glColor3f(1, 0, 0);
        glVertex2d(0, 0);
        glVertex2d(m_FBOWidth, 0);
        glVertex2d(m_FBOWidth, m_FBOHeight);
    glEnd();
    glPopMatrix();
}

void FrameBufferObject::render(unsigned int elapsedTime)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, m_TextureID);
    glPushMatrix();
    glTranslated(0, 0, -20);
    glBegin(GL_QUADS);
        glColor4f(1, 1, 1, 1);
        glTexCoord2f(1, 1); glVertex3f(1, 1, 1);
        glTexCoord2f(0, 1); glVertex3f(-1, 1, 1);
        glTexCoord2f(0, 0); glVertex3f(-1, -1, 1);
        glTexCoord2f(1, 0); glVertex3f(1, -1, 1);
    glEnd();
    glPopMatrix();
    glBindTexture(GL_TEXTURE_2D, 0);
    glDisable(GL_TEXTURE_2D);
}
```

    And here is the setup code:

```cpp
void FrameBufferObject::Initialize()
{
    // Generate FBO
    glGenFramebuffers(1, &m_FBO);
    glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);

    // Add depth buffer as a renderbuffer to the FBO
    glGenRenderbuffers(1, &m_DepthBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer);
    // allocate space in the render buffer for the depth buffer
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight);

    // attach depth buffer to the FBO at the depth attachment point
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer);

    // Adding a texture to the FBO: create a texture
    glGenTextures(1, &m_TextureID);
    glBindTexture(GL_TEXTURE_2D, m_TextureID);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // only allocating space
    glBindTexture(GL_TEXTURE_2D, 0);

    // attach texture to FBO
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0);

    // Check FBO status
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        std::cout << "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n";

    // switch back to window-system framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```

    Thanks!

  • Numbers not adding up? (What am I not understanding here?) [closed]

    - by Milo
    I have the following output. Short version: the last numbers on the S= lines increase by H and SHOULD theoretically be linearly decreasing, e.g. -285, -290, -295... but the fourth one jumps to -252. Yet every other number increases linearly. Why is that, and how could I fix it?

    To explain the numbers: this comes from the slider's value-changed handler. I have a slider whose value is used to generate the float on the next line. Everything should be growing linearly here. This value is used to determine the size of a flow layout, and it is also used in conjunction with a scrollbar. But basically I have a background for the flow layout, and that number is the start location for rendering it. The numbers should change linearly to create a smooth transition, but when that one jumps, it looks weird on screen, and I don't understand why the numbers jump every X slider value changes. Mathematically, what could be causing this?

    Here is the code for rendering the background and the function that is called when the value changes:

```cpp
void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect )
{
    float scale = 0.35f;

    int w = m_bgSprite->getWidth() * getTableScale() * scale;
    int h = m_bgSprite->getHeight() * getTableScale() * scale;

    int numX = ceil(absRect.getWidth() / (float)w) + 2;
    int numY = ceil(absRect.getHeight() / (float)h) + 2;

    int startY = childRect.getY();
    int numAttempts = 0;
    while(startY + h < absRect.getY() && numAttempts < 1000)
    {
        startY += h;
        if(moo)
        {
            std::cout << startY << ",";
        }
        numAttempts++;
    }

    g->holdDrawing();
    for(int i = 0; i < numX; ++i)
    {
        for(int j = 0; j < numY; ++j)
        {
            g->drawScaledSprite(m_bgSprite, 0, 0, m_bgSprite->getWidth(), m_bgSprite->getHeight(),
                absRect.getX() + (i * w) + (offsetX), absRect.getY() + (j * h) + startY, w, h, 0);
        }
    }
    g->unholdDrawing();
    g->setClippingRect(cx, cy, cw, ch);
}

void LobbyTableManager::setTableScale( float scale )
{
    scale += 0.3f;
    scale *= 2.0f;
    float scrollRel = m_vScroll->getRelativeValue();
    setScale(scale);
    rescaleTables();
    resizeFlow();
    updateScrollBars();
    float newVal = scrollRel * m_vScroll->getMaxValue();
    m_vScroll->setValue(newVal);
}

void LobbyTableManager::valueChanged( agui::VScrollBar* source, int val )
{
    m_flow->setLocation(0, -val);
}
```

    Any insight into mathematically why the anomaly might happen every Nth time would be helpful. I just don't understand why, if every number increases linearly, it jumps from -295 to -252! Thanks
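
    A possible cause (an assumption, since the sprite and scale values aren't shown): w and h are truncated to int, so as the slider moves, h occasionally steps by a whole pixel; startY is built by repeatedly adding h, so a one-pixel change in h gets multiplied by the number of loop iterations, which can make startY jump while every other value still moves smoothly. A tiny demonstration of that truncation step (hypothetical numbers):

```cpp
#include <iostream>

int main() {
    const float spriteH = 128.0f;                 // hypothetical sprite height
    for (float scale = 0.60f; scale < 0.66f; scale += 0.01f) {
        int h = int(spriteH * scale * 0.35f);     // same int truncation as the render code
        std::cout << "scale=" << scale << "  h=" << h << '\n';  // h steps in whole pixels
    }
}
```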

  • Getting Wrong Answer in range maximum query [on hold]

    - by user3186829
    I've just learnt range minimum and maximum queries using segment trees, but when I implemented them on my own I got a wrong answer. Logically I don't find any mistake in my code, but if anyone can point one out I would be really thankful. Code in C++:

```cpp
#include<bits/stdc++.h>
using namespace std;
#define LL long long
#define mp make_pair
#define pb push_back
#define gc getchar_unlocked
#define pc putchar_unlocked
#define LD long double
#define MAXN 19999999
#define max(a,b) ((a)>(b)?(a):(b))

LL P[MAXN+15];
LL ST[2*MAXN+25];
long N,M,i,A,B,K;

void build(long id,long L,long R)
{
    long M=(L+R)>>1L;
    long LCT=id<<1L;
    long RCT=LCT+1L;
    if(L==R)
    {
        ST[id]=P[L];
        return;
    }
    build(LCT,L,M);
    build(RCT,M+1,R);
}

// Range update of the segment tree
void updateST(long id,long L,long R,long Q1,long Q2,long val)
{
    long M=(L+R)>>1L;
    long LCT=id<<1L;
    long RCT=LCT+1L;
    if(L>Q2||R<Q1) return;
    if(L==Q1&&R==Q2)
    {
        ST[id]+=val;
        return;
    }
    if(Q2<=M)      updateST(LCT,L,M,Q1,Q2,val);
    else if(Q1>M)  updateST(RCT,M+1,R,Q1,Q2,val);
    else
    {
        updateST(LCT,L,M,Q1,M,val);
        updateST(RCT,M+1,R,M+1,Q2,val);
    }
}

// Query for the maximum element in a given range [Q1,Q2], 1<=Q1,Q2<=N
LL query2(long id,long L,long R,long Q1,long Q2)
{
    long M=(L+R)>>1;
    long LCT=id<<1;
    long RCT=LCT+1;
    if(L>Q2||R<Q1) return 0;
    if(L==Q1&&R==Q2) return ST[id];
    if(Q2<=M)      return query2(LCT,L,M,Q1,Q2);
    else if(Q1>M)  return query2(RCT,M+1,R,Q1,Q2);
    else
    {
        LL G=query2(LCT,L,M,Q1,M);
        LL H=query2(RCT,M+1,R,M+1,Q2);
        LL RES=max(G,H);
        return RES;
    }
}

int main()
{
    scanf("%ld %ld",&N,&M);
    build(1,1,N);
    for(i=0;i<M;i++)
    {
        scanf("%ld %ld %ld",&A,&B,&K);
        updateST(1,1,N,A,B,K);
    }
    // Finding the maximum element in range [1,N]
    cout<<query2(1,1,N,1,N);
    return 0;
}
```
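
    One likely source of wrong answers (hedged, since only part of the intent is visible): updateST stores additions lazily at covering nodes, but query2 returns a single node's ST value without adding the pending additions stored on the path above it, and nothing ever maintains a maximum over children. A standard scheme for range-add / range-max keeps, per node, a pending addition plus the subtree maximum, and carries the pending additions down during queries. A minimal sketch with hypothetical names (arrays start zeroed, matching an initial array of zeros like the one built above):

```cpp
#include <algorithm>

const int MAXN_ = 1 << 18;
long long add_[2 * MAXN_];   // pending addition applied to this whole segment
long long mx_[2 * MAXN_];    // max over the segment, including this node's add_

void update(int id, int L, int R, int q1, int q2, long long val) {
    if (q2 < L || R < q1) return;
    if (q1 <= L && R <= q2) {           // fully covered: record the addition lazily
        add_[id] += val;
        mx_[id]  += val;
        return;
    }
    int M = (L + R) / 2;
    update(2 * id, L, M, q1, q2, val);
    update(2 * id + 1, M + 1, R, q1, q2, val);
    mx_[id] = std::max(mx_[2 * id], mx_[2 * id + 1]) + add_[id];  // re-combine children
}

long long query(int id, int L, int R, int q1, int q2) {
    if (q1 <= L && R <= q2) return mx_[id];
    int M = (L + R) / 2;
    long long best = -(1LL << 62);
    if (q1 <= M) best = std::max(best, query(2 * id, L, M, q1, q2));
    if (q2 > M)  best = std::max(best, query(2 * id + 1, M + 1, R, q1, q2));
    return best + add_[id];             // carry this node's pending addition down
}
```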

  • OpenGL Fast-Object Instancing Error

    - by HJ Media Studios
    I have some code that loops through a set of objects and renders instances of those objects. The list of objects that needs to be rendered is stored as a std::map<MeshResource*, std::vector<MeshRenderer*> >, where an object of class MeshResource contains the vertices and indices with the actual data, and an object of class MeshRenderer defines the point in space the mesh is to be rendered at. My rendering code is as follows:

```cpp
glDisable(GL_BLEND);
glEnable(GL_CULL_FACE);
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);

for (std::map<MeshResource*, std::vector<MeshRenderer*> >::iterator it = renderables.begin(); it != renderables.end(); it++)
{
    it->first->setupBeforeRendering();
    cout << "<";
    for (unsigned long i = 0; i < it->second.size(); i++)
    {
        // Pass in an identity matrix to the vertex shader - used here only for
        // debugging purposes; the real code correctly inputs any matrix.
        uniformizeModelMatrix(Matrix4::IDENTITY);
        /**
         * StartHere fix rendering problem.
         * Ruled out:
         *   Vertex buffers correctly.
         *   Index buffers correctly.
         *   Matrices correct?
         */
        it->first->render();
    }
    it->first->cleanupAfterRendering();
}
geometryPassShader->disable();
glDepthMask(GL_FALSE);
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
```

    The function in MeshResource that handles setting up the uniforms is as follows:

```cpp
void MeshResource::setupBeforeRendering()
{
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);
    glEnableVertexAttribArray(3);
    glEnableVertexAttribArray(4);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID);
    glBindBuffer(GL_ARRAY_BUFFER, vboID);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);                            // Vertex position
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12);           // Vertex normal
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24);           // UV layer 0
    glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32);           // Vertex color
    glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44);  // Material index
}
```

    The code that renders the object is this:

```cpp
void MeshResource::render()
{
    glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
}
```

    And the code that cleans up is this:

```cpp
void MeshResource::cleanupAfterRendering()
{
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glDisableVertexAttribArray(2);
    glDisableVertexAttribArray(3);
    glDisableVertexAttribArray(4);
}
```

    The end result of this is that I get a black screen, although the end of my rendering pipeline after the rendering code (essentially just drawing axes and lines on the screen) works properly, so I'm fairly sure it's not an issue with the passing of uniforms. If, however, I change the code slightly so that the rendering code calls the setup immediately before rendering, like so:

```cpp
void MeshResource::render()
{
    setupBeforeRendering();
    glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
}
```

    the program works as desired. I don't want to have to do this, though, as my aim is to set up vertex, material, etc. data once per object type and then render each instance, updating only the transformation information. The uniformizeModelMatrix works as follows:

```cpp
void RenderManager::uniformizeModelMatrix(Matrix4 matrix)
{
    glBindBuffer(GL_UNIFORM_BUFFER, globalMatrixUBOID);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Matrix4), matrix.ptr());
    glBindBuffer(GL_UNIFORM_BUFFER, 0);
}
```
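
    If something between setupBeforeRendering() and the draw call rebinds GL_ARRAY_BUFFER or re-points an attribute (which would explain why repeating the setup per draw fixes it), wrapping the per-mesh state in a vertex array object makes it immune to that. A sketch, assuming a GL 3.0+ context and a new GLuint vaoID member on MeshResource:

```cpp
// Capture the element buffer, array buffer, and attribute layout once in a
// vertex array object, so the once-per-type setup cannot be clobbered later.
void MeshResource::createVAO()
{
    glGenVertexArrays(1, &vaoID);
    glBindVertexArray(vaoID);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID);  // element binding is recorded in the VAO
    glBindBuffer(GL_ARRAY_BUFFER, vboID);
    for (GLuint i = 0; i <= 4; ++i)
        glEnableVertexAttribArray(i);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24);
    glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32);
    glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44);
    glBindVertexArray(0);
}

void MeshResource::setupBeforeRendering()  { glBindVertexArray(vaoID); }
void MeshResource::cleanupAfterRendering() { glBindVertexArray(0); }
```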

  • Exporting makefile from Eclipse CDT

    - by Alex Farber
    I have a C++ project in Ubuntu, built with Eclipse CDT. My final goal is to build the project binaries for FreeBSD.

    The first test: I create a simple C++ CDT project with a main.cpp file containing cout << "OK" << endl; and build it. Then I open a terminal window in the Release directory:

```sh
alex@alex-linux:~/workspace/HelloWorld/Release$ ls
HelloWorld  main.d  main.o  makefile  objects.mk  sources.mk  subdir.mk
alex@alex-linux:~/workspace/HelloWorld/Release$ rm HelloWorld main.d main.o
alex@alex-linux:~/workspace/HelloWorld/Release$ ls
makefile  objects.mk  sources.mk  subdir.mk
alex@alex-linux:~/workspace/HelloWorld/Release$ make
Building file: ../main.cpp
Invoking: GCC C++ Compiler
g++ -O3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
Finished building: ../main.cpp

Building target: HelloWorld
Invoking: GCC C++ Linker
g++ -o"HelloWorld" ./main.o
Finished building target: HelloWorld

alex@alex-linux:~/workspace/HelloWorld/Release$ ./HelloWorld
OK
```

    So far, so good. Now I copy the whole project tree to FreeBSD and try to build it:

```sh
$ cd /home/alex/project
$ ls
main.cpp  release
$ cd release
$ ls
makefile  objects.mk  sources.mk  subdir.mk
$ make
"makefile", line 5: Need an operator
"makefile", line 10: Need an operator
"makefile", line 11: Need an operator
"makefile", line 12: Need an operator
```

    The CDT-generated makefile doesn't work. This is the beginning of the makefile:

```make
# Automatically-generated file. Do not edit!
-include ../makefile.init

RM := rm -rf

# All of the sources participating in the build are defined here
-include sources.mk
-include subdir.mk
-include objects.mk
...
```

    Line 5 is -include ../makefile.init. Really, there is no such file, but it works by some means on the Ubuntu computer. What is the trick? How can I build this? BTW, a manually written makefile works:

```make
all:
	g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
	g++ -o"HelloWorld" ./main.o
```
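
    For what it's worth, the usual explanation for "Need an operator" errors is that FreeBSD's make is BSD make, while CDT generates GNU makefiles (-include and := are GNU syntax). Under that assumption about your setup, installing and invoking GNU make should build the exported project unchanged:

```sh
$ pkg_add -r gmake     # on newer FreeBSD releases: pkg install gmake
$ cd /home/alex/project/release
$ gmake
```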

  • Mixing Objective-C and C++

    - by helixed
    Hello, I'm trying to mix together some Objective-C code with C++. I've always heard it was possible, but I've never actually tried it before. When I try to compile the code, I get a bunch of errors. Here's a simple example I've created which illustrates my problems:

    AView.h

```objc
#import <Cocoa/Cocoa.h>
#include "B.h"

@interface AView : NSView {
    B *b;
}
- (void)setB:(B *)theB;
@end
```

    AView.m

```objc
#import "AView.h"

@implementation AView

- (id)initWithFrame:(NSRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code here.
    }
    return self;
}

- (void)drawRect:(NSRect)dirtyRect {
    // Drawing code here.
}

- (void)setB:(B *)theB {
    b = theB;
}

@end
```

    B.h

```cpp
#include <iostream>

class B {
    B() {
        std::cout << "Hello from C++";
    }
};
```

    Here's the list of errors I get when I try to compile this:

```text
/Users/helixed/Desktop/Example/B.h:1:0
/Users/helixed/Desktop/Example/B.h:1:20: error: iostream: No such file or directory
/Users/helixed/Desktop/Example/B.h:3:0
/Users/helixed/Desktop/Example/B.h:3: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'B'
/Users/helixed/Desktop/Example/AView.h:5:0
/Users/helixed/Desktop/Example/AView.h:5: error: expected specifier-qualifier-list before 'B'
/Users/helixed/Desktop/Example/AView.h:8:0
/Users/helixed/Desktop/Example/AView.h:8: error: expected ')' before 'B'
/Users/helixed/Desktop/Example/AView.m:26:0
/Users/helixed/Desktop/Example/AView.m:26: error: expected ')' before 'B'
/Users/helixed/Desktop/Example/AView.m:27:0
/Users/helixed/Desktop/Example/AView.m:27: error: 'b' undeclared (first use in this function)
```

    All I'm doing to compile this right now is using the default compiler built into Xcode. I didn't edit the Cocoa Application template in any way other than adding the two files I created and changing the NSView to AView in the xib file. Could somebody please tell me what I'm doing wrong? Thanks, helixed

  • rand() generating the same number – even with srand(time(NULL)) in my main!

    - by Nick Sweet
    So, I'm trying to create a random vector (think geometry, not an expandable array), and every time I call my random vector function I get the same x value, though y and z are different.

```cpp
int main() {
    srand((unsigned)time(NULL));
    Vector<double> a;
    a.randvec();
    cout << a << endl;
    return 0;
}
```

    using the function

```cpp
// random Vector
template <class T>
void Vector<T>::randvec() {
    const int min = -10, max = 10;
    int randx, randy, randz;
    const int bucket_size = RAND_MAX / (max - min);

    do randx = (rand() / bucket_size) + min;
    while (randx <= min && randx >= max);
    x = randx;

    do randy = (rand() / bucket_size) + min;
    while (randy <= min && randy >= max);
    y = randy;

    do randz = (rand() / bucket_size) + min;
    while (randz <= min && randz >= max);
    z = randz;
}
```

    For some reason, randx will consistently return 8, whereas the other numbers seem to be following the (pseudo) randomness perfectly. However, if I put the call to define, say, randy before randx, randy will always return 8. Why is my first random number always 8? Am I seeding incorrectly?
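
    A plausible explanation (hedged, since it depends on the C library): rand()/bucket_size keeps only the high-order bits of rand(), and on some implementations the high bits of the first value after srand(time(NULL)) change very slowly as the seed creeps forward one second at a time, so the first draw looks stuck. Discarding the first value, or taking the low-order bits with %, is a common workaround:

```cpp
#include <cstdlib>
#include <ctime>
#include <iostream>

int main() {
    std::srand((unsigned)std::time(NULL));
    std::rand();                                      // throw away the first, weakly-mixed value

    const int min = -10, max = 10;
    int r = min + std::rand() % (max - min + 1);      // low-order bits vary faster run to run
    std::cout << r << std::endl;                      // (slight modulo bias; fine for a toy range)
}
```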

  • C++ question: boost::bind receiving another boost::bind

    - by user355034
    I want to make this code work properly; what should I do? It gives an error on the last line. What am I doing wrong? I know boost::bind needs a type, but I'm not getting it. Help!

```cpp
class A {
public:
    template <class Handle>
    void bindA(Handle h) {
        h(1, 2);
    }
};

class B {
public:
    void bindB(int number, int number2) {
        std::cout << "1 " << number << "2 " << number2 << std::endl;
    }
};

template <class Han>
struct Wrap_ {
    Wrap_(Han h) : h_(h) {}

    template<typename Arg1, typename Arg2>
    void operator()(Arg1 arg1, Arg2 arg2) {
        h_(arg1, arg2);
    }

    Han h_;
};

template <class Handler>
inline Wrap_<Handler> make(Handler h) {
    return Wrap_<Handler>(h);
}

int main() {
    A a;
    B b;
    ((boost::bind)(&B::bindB, b, _1, _2))(1, 2);
    ((boost::bind)(&A::bindA, a, make(boost::bind(&B::bindB, b, _1, _2))))();
    // I want this code to compile and execute successfully
}
```

  • Should I redesign my code when my colleague says so?

    - by Kirill V. Lyadvinsky
    I wrote a function recently (with the help of one guy from SO) that finds the maximum of two ints. Here is the code:

```cpp
long get_max (long(*a)(long(*)(long(*)()),long(*)(long(*)(long**))),
              long(*b)(long(*)(long(*)()),long*,long(*)(long(*)())))
{
    return (long)((((long(*)(long(*)(long(*)()),long(*)(long(*)())))a) >
                   ((long(*)(long(*)(long(*)()),long(*)(long(*)())))b)) ?
                  ((long(*)(long(*)(long(*)()),long(*)(long(*)())))a) :
                  ((long(*)(long(*)(long(*)()),long(*)(long(*)())))b));
}

int main()
{
    long x = get_max(
        (long(*)(long(*)(long(*)()),long(*)(long(*)(long**)))) 500,
        (long(*)(long(*)(long(*)()),long*,long(*)(long(*)()))) 100 );
    cout << x << endl; // prints 500 as expected
    return 0;
}
```

    It works fine, but my colleague says that I shouldn't use C-style casts. I think that all those modern static_casts and reinterpret_casts would make my code too cumbersome. Who's right? Should I redesign my code using C++-style casts, or is the original code OK?

    EDIT: For those who mark this question as not a question, I'll try to be more clear: should I use C++-style casts instead of C-style casts in the code above?

  • Tail-recursive merge sort in OCaml

    - by CFP
    Hello world! I'm trying to implement a tail-recursive list-sorting function in OCaml, and I've come up with the following code:

```ocaml
let tailrec_merge_sort l =
  let split l =
    let rec _split source left right =
      match source with
      | [] -> (left, right)
      | head :: tail -> _split tail right (head :: left)
    in _split l [] []
  in
  let merge l1 l2 =
    let rec _merge l1 l2 result =
      match l1, l2 with
      | [], [] -> result
      | [], h :: t | h :: t, [] -> _merge [] t (h :: result)
      | h1 :: t1, h2 :: t2 ->
        if h1 < h2 then _merge t1 l2 (h1 :: result)
        else _merge l1 t2 (h2 :: result)
    in List.rev (_merge l1 l2 [])
  in
  let rec sort = function
    | [] -> []
    | [a] -> [a]
    | list ->
      let left, right = split list in
      merge (sort left) (sort right)
  in sort l
;;
```

    Yet it seems that it is not actually tail-recursive, since I encounter a "Stack overflow during evaluation (looping recursion?)" error. Could you please help me spot the non-tail-recursive call in this code? I've searched quite a lot without finding it. Could it be the let binding in the sort function? Thanks a lot, CFP.

  • Backtracking Problem

    - by Joshua Green
    Could someone please guide me through backtracking in Prolog-C using the simple example below? Here is my Prolog file:

```prolog
likes( john, mary ).
likes( john, emma ).
likes( john, ashley ).
```

    Here is my C file:

```cpp
#include...

term_t tx;
term_t tv;
term_t goal_term;
functor_t goal_functor;

int main( int argc, char** argv )
{
    argv[0] = "libpl.dll";
    PL_initialise( argc, argv );
    PlCall( "consult( swi( 'plwin.rc' ) )" );
    PlCall( "consult( 'likes.pl' )" );

    tv = PL_new_term_ref( );
    PL_put_atom_chars( tv, "john" );
    tx = PL_new_term_ref( );

    goal_term = PL_new_term_ref( );
    goal_functor = PL_new_functor( PL_new_atom( "likes" ), 2 );
    PL_cons_functor( goal_term, goal_functor, tv, tx );

    PlQuery q( "likes", ??? );
    while ( q.next_solution( ) )
    {
        char* solution;
        PL_get_atom_chars( tx, &solution );
        cout << solution << endl;
    }

    PL_halt( PL_toplevel() ? 0 : 1 );
}
```

    What should I replace ??? with? Or is this the right approach to get all the backtracking results generated and printed? Thank you,
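
    In case it helps, here is a sketch of the same enumeration using SWI-Prolog's C-level query API (PL_open_query / PL_next_solution) instead of the C++ PlQuery wrapper. Assumptions: 'likes.pl' is already consulted as above, and the argument pair is created as a contiguous block so the second argument stays unbound and collects answers on backtracking:

```cpp
predicate_t pred = PL_predicate("likes", 2, NULL);
term_t args = PL_new_term_refs(2);          // contiguous pair: args+0, args+1
PL_put_atom_chars(args, "john");            // bind the first argument
                                            // args+1 left unbound: collects answers
qid_t q = PL_open_query(NULL, PL_Q_NORMAL, pred, args);
while (PL_next_solution(q)) {
    char* solution;
    if (PL_get_atom_chars(args + 1, &solution))
        cout << solution << endl;           // mary, emma, ashley on backtracking
}
PL_close_query(q);
```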

  • Need help modifying C++ application to accept continuous piped input in Linux

    - by GreeenGuru
    The goal is to mine packet headers for URLs visited, using tcpdump. So far, I can save a packet header to a file using:

```sh
tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | tee -a ./Desktop/packets.txt
```

    And I've written a program to parse through the header and extract the URL when given the following command:

```sh
cat ~/Desktop/packets.txt | ./packet-parser.exe
```

    But what I want to be able to do is pipe tcpdump directly into my program, which will then log the data:

```sh
tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | ./packet-parser.exe
```

    Here is the program as it is. The question is: how do I need to change it to support continuous input from tcpdump?

```cpp
#include <boost/regex.hpp>
#include <fstream>
#include <cstdio> // Needed to define ios::app
#include <string>
#include <iostream>

int main() {
    // Make sure to open the file in append mode
    std::ofstream file_out("/var/local/GreeenLogger/url.log", std::ios::app);
    if (not file_out)
        std::perror("/var/local/GreeenLogger/url.log");
    else {
        std::string text;
        // Get multiple lines of input -- raw
        std::getline(std::cin, text, '\0');

        const boost::regex pattern("GET (\\S+) HTTP.*?[\\r\\n]+Host: (\\S+)");
        boost::smatch match_object;
        bool match = boost::regex_search(text, match_object, pattern);
        if (match) {
            std::string output;
            output = match_object[2] + match_object[1];
            file_out << output << '\n';
            std::cout << output << std::endl;
        }
        file_out.close();
    }
}
```

    Thank you ahead of time for the help!
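
    The single std::getline(std::cin, text, '\0') is the blocker for continuous input: it reads until EOF, which never arrives from a live pipe. A sketch that processes the stream incrementally instead, scanning a rolling buffer after each line (assumptions: each HTTP request header fits inside the capped window, and tcpdump is run with its -l flag so its output is line-buffered when piped; file logging omitted for brevity):

```cpp
#include <boost/regex.hpp>
#include <iostream>
#include <string>

int main() {
    const boost::regex pattern("GET (\\S+) HTTP.*?[\\r\\n]+Host: (\\S+)");
    std::string window;                        // rolling buffer of recent lines
    std::string line;
    while (std::getline(std::cin, line)) {     // returns as each line arrives
        window += line + '\n';
        boost::smatch m;
        if (boost::regex_search(window, m, pattern)) {
            std::cout << m[2] << m[1] << std::endl;
            window.clear();                    // start fresh after a hit
        }
        if (window.size() > 4096)              // cap memory between matches
            window.erase(0, window.size() - 4096);
    }
}
```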

  • Help with boost::lambda expression

    - by Venkat Shivanandan
    I tried to write a function that calculates the Hamming distance between two codewords using the Boost Lambda library. I have the following code:

```cpp
#include <iostream>
#include <numeric>
#include <boost/function.hpp>
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/if.hpp>
#include <boost/bind.hpp>
#include <boost/array.hpp>

template<typename Container>
int hammingDistance(Container & a, Container & b)
{
    return std::inner_product(
        a.begin(), a.end(), b.begin(),
        (_1 + _2),
        boost::lambda::if_then_else_return(_1 != _2, 1, 0)
    );
}

int main()
{
    boost::array<int, 3> a = {1, 0, 1}, b = {0, 1, 1};
    std::cout << hammingDistance(a, b) << std::endl;
}
```

    And the error I am getting is:

```text
HammingDistance.cpp: In function 'int hammingDistance(Container&, Container&)':
HammingDistance.cpp:15: error: no match for 'operator+' in '<unnamed>::_1 + <unnamed>::_2'
HammingDistance.cpp:17: error: no match for 'operator!=' in '<unnamed>::_1 != <unnamed>::_2'
/usr/include/c++/4.3/boost/function/function_base.hpp:757: note: candidates are: bool boost::operator!=(boost::detail::function::useless_clear_type*, const boost::function_base&)
/usr/include/c++/4.3/boost/function/function_base.hpp:745: note: bool boost::operator!=(const boost::function_base&, boost::detail::function::useless_clear_type*)
```

    This is the first time I am playing with Boost Lambda. Please tell me where I am going wrong. Thanks.
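
    The '<unnamed>::_1' in the diagnostic is the giveaway: those are the placeholders from boost/bind.hpp (which live in an unnamed namespace), not Boost.Lambda's, and the bind placeholders define no operator+ or operator!=. Qualifying the lambda placeholders, and supplying the initial value that the six-argument inner_product requires, should let it build. A sketch under that reading:

```cpp
#include <numeric>
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/if.hpp>

template <typename Container>
int hammingDistance(Container& a, Container& b)
{
    namespace bl = boost::lambda;
    return std::inner_product(
        a.begin(), a.end(), b.begin(),
        0,                                              // initial value was missing
        bl::_1 + bl::_2,                                // explicitly the lambda placeholders
        bl::if_then_else_return(bl::_1 != bl::_2, 1, 0));
}
```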

  • Beginner question - Loop invariants (Specifically Ch.3 of "Accelerated C++")

    - by Owen
    Hi - as I said, a complete beginner question here. I'm currently working my way through "Accelerated C++" and just came across this in chapter 3:

```cpp
// invariant:
// we have read count grades so far, and
// sum is the sum of the first count grades
while (cin >> x) {
    ++count;
    sum += x;
}
```

    The authors follow this by explaining that the invariant needs special attention paid to it, because when the input is read into the variable x, we will have read count + 1 grades, and thus the invariant will be untrue. Similarly, once we have incremented the counter, the variable sum will no longer be the sum of the first count grades (in case you hadn't guessed, it's the traditional program for calculating student marks).

    What I don't understand is why this matters. Surely for just about any other loop, a similar statement would be true? For example, here is the book's first while loop (the output is filled in later):

```cpp
// invariant: we have written r rows so far
while (r != rows) {
    // write a row of output
    std::cout << std::endl;
    ++r;
}
```

    Once we have written the appropriate row of output, surely the invariant is false until we have incremented r, just as in the other example? It's probably something really obvious; if anyone could enlighten me as to what makes these two cases different, that'd be great - and thanks in advance for taking the time to answer such a complete novice question. Owen

  • operator new for array of class without default constructor......

    - by skydoor
    For a class without a default constructor, operator new and placement new can be used to declare an array of such a class. When I read the code in More Effective C++, I found the code below (I modified some parts). My question is: why is [] needed after operator new? I tested it without the [], and it still works. Can anybody explain that?

```cpp
class A
{
public:
    int i;
    A(int i) : i(i) {}
};

int main()
{
    void *rawMemory = operator new[] (10 * sizeof(A)); // Why is [] needed here?
    A *p = static_cast<A*>(rawMemory);
    for (int i = 0; i < 10; i++)
    {
        new(&p[i]) A(i);
    }
    for (int i = 0; i < 10; i++)
    {
        cout << p[i].i << endl;
    }
    for (int i = 0; i < 10; i++)
    {
        p[i].~A();
    }
    return 0;
}
```
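
    As an aside (hedged): for raw storage like this, the default operator new[] obtains memory the same way operator new does, which is why omitting the [] appears to work; the form mainly matters for pairing with the matching deallocation function, and for classes that replace the array form separately. The snippet also never releases the storage; a sketch of the cleanup:

```cpp
// after the explicit destructor calls p[i].~A() have run:
operator delete[](rawMemory);  // pairs with the operator new[] allocation
```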

  • Binding a member signal to a function

    - by the_drow
    This line of code compiles correctly without a problem:

```cpp
boost::bind(boost::ref(connected_),
            boost::dynamic_pointer_cast<session<version> >(shared_from_this()),
            boost::asio::placeholders::error);
```

    However, when assigning it to a boost::function or passing it as a callback like this:

```cpp
socket_->async_connect(connection_->remote_endpoint(),
                       boost::bind(boost::ref(connected_),
                                   boost::dynamic_pointer_cast<session<version> >(shared_from_this()),
                                   boost::asio::placeholders::error));
```

    I'm getting a whole bunch of incomprehensible errors (linked, since the output is too long to fit here). On the other hand, I have succeeded in binding a free signal to a boost::function like this:

```cpp
void print(const boost::system::error_code& error)
{
    cout << "session connected";
}

int main()
{
    boost::signal<void(const boost::system::error_code &)> connected_;
    connected_.connect(boost::bind(&print, boost::asio::placeholders::error));

    client<>::connection_t::socket_ptr socket_(new client<>::connection_t::socket_t(conn->service())); // shared_ptr of a tcp socket
    socket_->async_connect(conn->remote_endpoint(),
                           boost::bind(boost::ref(connected_), boost::asio::placeholders::error));

    conn->service().run(); // io_service.run()
    return 0;
}
```

    This works and prints "session connected" correctly. What am I doing wrong here?

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have a piece of network software which uses UDP to communicate with other instances of the same program. For various reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it worked well, but I now encounter an issue when I have to send a lot of data chunks. I have the following algorithm:

    1. Split the message into small data chunks (around 1500 bytes).
    2. Iterate over the data chunks list and, for each, send it using sendto().

    However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. Anyway, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN network. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for a lot of reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here?

    - Implementing some acknowledgement mechanism (just like TCP) seems overkill.
    - Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance.
    - Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible. (See the sketch below.)
    - Something else?

    I really need your advice here. Thanks very much.
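
    On the third option: it is possible. The receiver's kernel socket buffer can be enlarged with setsockopt(SO_RCVBUF), which often helps exactly this loopback burst scenario where the sender outruns the reader; note the kernel may clamp the request (e.g. via net.core.rmem_max on Linux). A sketch, assuming a POSIX UDP socket descriptor 'sock':

```cpp
#include <sys/socket.h>
#include <cstdio>

int set_recv_buffer(int sock)
{
    int size = 4 * 1024 * 1024;  // request 4 MiB; the kernel may clamp this
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0) {
        std::perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}
```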

  • How to find string in a string

    - by owca
    I somehow need to find the longest occurrence of one string inside another, so if string1 is "Alibaba" and string2 is "ba", the longest string will be "baba". I have the lengths of the strings, but what next?

```cpp
char* fun(char* a, char& b)
{
    int length1 = 0;
    int length2 = 0;
    int longer;
    int shorter;
    char end = '\0';
    char* pb = &b;   // treat the reference as the start of the second string

    int i = 0;
    while (a[i] != end) {
        i++;
        length1++;
    }
    i = 0;
    while (pb[i] != end) {
        i++;
        length2++;
    }
    if (length1 > length2) {
        longer = length1;
        shorter = length2;
    } else {
        longer = length2;
        shorter = length1;
    }
    // logic here
}

int main()
{
    char name1[] = "Alibaba";
    char name2[] = "ba";
    char &oname = *name2;
    cout << fun(name1, oname) << endl;
    system("PAUSE");
    return 0;
}
```
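
    Going by the "Alibaba" / "ba" -> "baba" example, the goal looks like the longest run of consecutive copies of the second string inside the first. A sketch of that search (an interpretation of the requirement, returning a std::string for simplicity):

```cpp
#include <cstring>
#include <iostream>
#include <string>

std::string longest_run(const char* text, const char* pat)
{
    size_t n = std::strlen(text), m = std::strlen(pat);
    size_t bestStart = 0, bestLen = 0;
    for (size_t i = 0; i + m <= n; ++i) {
        size_t j = i;
        while (j + m <= n && std::strncmp(text + j, pat, m) == 0)
            j += m;                                   // extend the run by whole copies
        if (j - i > bestLen) { bestLen = j - i; bestStart = i; }
    }
    return std::string(text + bestStart, bestLen);
}

int main()
{
    std::cout << longest_run("Alibaba", "ba") << std::endl;  // prints "baba"
}
```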

  • Base class -> Derived class and vice-versa conversions in C++

    - by Ivan Nikolaev
    Hi! I have the following example code:

```cpp
#include <iostream>
#include <string>
using namespace std;

class Event
{
public:
    string type;
    string source;
};

class KeyEvent : public Event
{
public:
    string key;
    string modifier;
};

class MouseEvent : public Event
{
public:
    string button;
    int x;
    int y;
};

void handleEvent(KeyEvent e)
{
    if (e.key == "ENTER")
        cout << "Hello world! The Enter key was pressed ;)" << endl;
}

Event generateEvent()
{
    KeyEvent e;
    e.type = "KEYBOARD_EVENT";
    e.source = "Keyboard0";
    e.key = "SPACEBAR";
    e.modifier = "none";
    return e;
}

int main()
{
    KeyEvent e = generateEvent();
    return 0;
}
```

    I can't compile it; G++ throws an error of this kind:

```text
main.cpp: In function 'int main()':
main.cpp:47:29: error: conversion from 'Event' to non-scalar type 'KeyEvent' requested
```

    I know the error is obvious for C++ gurus, but I can't understand why I can't do the conversion from a base class object to a derived one. Can someone suggest a solution to the problem? Thanks in advance.
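
    The underlying issue is slicing: generateEvent returns by value as Event, so the KeyEvent members are cut off in the returned copy, and there is no implicit conversion back down. Returning through a pointer preserves the dynamic type. A sketch with std::shared_ptr (assumptions: C++11 is available, and Event is given a virtual destructor, which dynamic_cast requires):

```cpp
#include <memory>

// assumption: class Event gains 'virtual ~Event() {}' so it is polymorphic

std::shared_ptr<Event> generateEvent()
{
    std::shared_ptr<KeyEvent> e(new KeyEvent);
    e->type = "KEYBOARD_EVENT";
    e->source = "Keyboard0";
    e->key = "SPACEBAR";
    e->modifier = "none";
    return e;  // derived-to-base conversion is implicit and safe
}

int main()
{
    std::shared_ptr<Event> ev = generateEvent();
    // base-to-derived must be explicit, and is checked at runtime:
    if (std::shared_ptr<KeyEvent> ke = std::dynamic_pointer_cast<KeyEvent>(ev))
        handleEvent(*ke);
    return 0;
}
```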

  • openCV Won't copy to image after changed color ( opencv and c++ )

    - by user1656647
    I am a beginner at OpenCV. I have this task:

    1. Make a new image.
    2. Put a certain image in it at 0,0.
    3. Convert the certain image to grayscale.
    4. Put the grayscaled image next to it (at 300, 0).

    This is what I did. I have a class imagehandler that has the constructors and all the functions; cv::Mat m_image is the member field.

    Constructor to make a new image:

```cpp
imagehandler::imagehandler(int width, int height)
    : m_image(width, height, CV_8UC3)
{
}
```

    Constructor to read an image from file:

```cpp
imagehandler::imagehandler(const std::string& fileName)
    : m_image(imread(fileName, CV_LOAD_IMAGE_COLOR))
{
    if (!m_image.data)
    {
        cout << "Failed loading " << fileName << endl;
    }
}
```

    This is the function to convert to grayscale:

```cpp
void imagehandler::rgb_to_greyscale()
{
    cv::cvtColor(m_image, m_image, CV_RGB2GRAY);
}
```

    This is the function to copy/paste the image:

```cpp
// paste image into dst image at xLoc, yLoc
void imagehandler::copy_paste_image(imagehandler& dst, int xLoc, int yLoc)
{
    cv::Rect roi(xLoc, yLoc, m_image.size().width, m_image.size().height);
    cv::Mat imageROI(dst.m_image, roi);
    m_image.copyTo(imageROI);
}
```

    Now, in the main, this is what I did:

```cpp
imagehandler CSImg(600, 320);       // declare the new image
imagehandler myimg(filepath);
myimg.copy_paste_image(CSImg, 0, 0);
CSImg.displayImage();               // this one showed the full colour image correctly
myimg.rgb_to_greyscale();
myimg.displayImage();               // this shows the colour image in GRAY scale, works correctly
myimg.copy_paste_image(CSImg, 300, 0);
CSImg.displayImage();               // this one shows only the full colour image at 0,0 and does NOT show the greyscaled one at ALL!
```

    What seems to be the problem? I've been scratching my head for hours on this one!!!
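
    A likely culprit (hedged): cvtColor with CV_RGB2GRAY leaves m_image single-channel (CV_8UC1), while the ROI inside CSImg is CV_8UC3. Mat::copyTo with a mismatched type reallocates its destination rather than writing into the shared ROI memory, so the paste silently goes nowhere. Converting back to three channels keeps the gray look while matching the destination type:

```cpp
// a sketch: stay 3-channel so copyTo writes into the ROI in place
void imagehandler::rgb_to_greyscale()
{
    cv::cvtColor(m_image, m_image, CV_RGB2GRAY);  // now 1 channel (CV_8UC1)
    cv::cvtColor(m_image, m_image, CV_GRAY2RGB);  // back to 3 channels, still gray
}
```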

  • argv Memory Allocation

    - by Joshua Green
    I was wondering if someone could tell me what I am doing wrong that I get this Unhandled Exception error message:

    0xC0000005: Access violation reading location 0x0000000c.

    with a green pointer pointing at my first Prolog code (fid_t). Here is my header file:

```cpp
class UserTaskProlog
{
public:
    UserTaskProlog( ArRobot* r );
    ~UserTaskProlog( );
protected:
    int cycles;
    char* argv[ 1 ];
    term_t tf;
    term_t tx;
    term_t goal_term;
    functor_t goal_functor;
    ArRobot* robot;
    void logTask( );
};
```

    And here is my main code:

```cpp
UserTaskProlog::UserTaskProlog( ArRobot* r )
    : robot( r ), robotTaskFunc( this, &UserTaskProlog::logTask )
{
    cycles = 0;
    argv[ 0 ] = "libpl.dll";
    argv[ 1 ] = NULL;
    PL_initialise( 1, argv );
    PlCall( "consult( 'myPrologFile.pl' )" );
    robot->addSensorInterpTask( "UserTaskProlog", 50, &robotTaskFunc );
}

UserTaskProlog::~UserTaskProlog( )
{
    robot->remSensorInterpTask( &robotTaskFunc );
}

void UserTaskProlog::logTask( )
{
    cycles++;
    fid_t fid = PL_open_foreign_frame( );
    tf = PL_new_term_ref( );
    PL_put_integer( tf, 5 );
    tx = PL_new_term_ref( );
    goal_term = PL_new_term_ref( );
    goal_functor = PL_new_functor( PL_new_atom( "factorial" ), 2 );
    PL_cons_functor( goal_term, goal_functor, tf, tx );
    int fact;
    if ( PL_call( goal_term, NULL ) )
    {
        PL_get_integer( tx, &fact );
        cout << fact << endl;
    }
    PL_discard_foreign_frame( fid );
}

int main( int argc, char** argv )
{
    ArRobot robot;
    ArArgumentParser argParser( &argc, argv );
    UserTaskProlog talk( &robot );
}
```

    Thank you,
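
    One concrete problem visible in the header (hedged as to whether it is the crash you see): char* argv[ 1 ] has a single element, so argv[ 1 ] = NULL writes one slot past the end of the member array, corrupting the adjacent members (tf, tx, and so on). Sizing the array for the NULL terminator fixes that:

```cpp
// in the header:
char* argv[ 2 ];         // "libpl.dll" plus the NULL terminator

// in the constructor (unchanged otherwise):
argv[ 0 ] = "libpl.dll";
argv[ 1 ] = NULL;        // now within bounds
PL_initialise( 1, argv );
```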

  • Why is `A & a = a` valid?

    - by psaghelyi
```cpp
#include <iostream>
#include <assert.h>
using namespace std;

struct Base
{
    Base() : m_member1(1) {}
    Base(const Base & other)
    {
        assert(this != &other); // this should trigger
        m_member1 = other.m_member1;
    }
    int m_member1;
};

struct Derived
{
    Derived(Base & base) : m_base(m_base) {} // m_base(base)
    Base & m_base;
};

void main()
{
    Base base;
    Derived derived(base);
    cout << derived.m_base.m_member1 << endl; // crashes here
}
```

    The above example is a synthesized version of a mistyped constructor. I used a reference for the class member Derived::m_base because I wanted to make sure that the member would be initialized when the constructor was called. One problem is that neither GCC nor MSVC gives me a warning at m_base(m_base). But the more serious issue for me is that the assert finds everything fine and the application crashes later (sometimes far away from the mistake). Question: is there any way to detect such mistakes?
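
    For what it's worth, newer compilers can flag this when asked: Clang, for example, warns about a field initialized from itself under -Wuninitialized, and GCC has -Winit-self for the analogous local-variable case; exact coverage varies by compiler version, so treat this as a hint rather than a guarantee. The intended constructor is:

```cpp
// intended: initialize the member from the parameter, not from itself
Derived(Base & base) : m_base(base) {}

// a hypothetical compile command that surfaces the diagnostic on recent compilers:
//   g++ -Wall -Wextra -Wuninitialized main.cpp
```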
