Search Results

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-buffers, storing position (R32F), normal (G16R16F) and albedo (ARGB8), and a sphere-map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value. I have two questions. First, I want to avoid the per-pixel matrix multiplication when calculating the position: is there another way to store and calculate position in the G-buffer without needing a matrix multiplication? Second, I want to store the normal in view space: all lighting in my engine is currently done in world space, and I want to do the lighting in view space to speed up my lighting pass. In short, I want an optimized lighting pass for my deferred engine.
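
    A common way to avoid the per-pixel inverse view-projection multiply is to store linear view-space depth in the G-buffer and pass a ray to each far-plane corner as a vertex attribute of the light pass's fullscreen quad; the interpolated ray times the stored depth then recovers the view-space position with a single multiply. A minimal sketch of the math (shown as C++ with glm; the per-pixel function is what would run in the fragment shader, and all names are illustrative, not from the question):

        #include <glm/glm.hpp>
        #include <cmath>

        // Per-pixel step: 'ray' is the interpolated corner ray, scaled on the CPU
        // so that ray.z == 1 in view space; 'linearDepth' is the view-space Z
        // stored in the R32F channel. One multiply replaces the mat4 transform.
        glm::vec3 reconstructViewPos(const glm::vec3& ray, float linearDepth)
        {
            return ray * linearDepth;
        }

        // Per-frame step: one ray per fullscreen-quad corner, derived from the
        // projection parameters (sx, sy are -1 or +1 to pick the corner).
        glm::vec3 cornerRay(float fovY, float aspect, float sx, float sy)
        {
            float t = std::tan(fovY * 0.5f);
            return glm::vec3(sx * t * aspect, sy * t, 1.0f);
        }

    Storing view-space normals fits naturally with this, since both inputs to the lighting equation then live in the same space.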

  • Arbitrary projection matrix from 6 arbitrary frustum planes

    - by Doub
    A projection matrix represents a transformation from camera view space to the rendering system's clip space. In other words, it defines the transformation from a six-sided frustum to the clip cube. The glOrtho and glFrustum functions use only six parameters to define such a projection, but impose several constraints on the frustum that gets projected to the clip cube: the near and far planes are parallel, the left and right planes intersect on a vertical line, and the top and bottom planes intersect on a horizontal line, both lines being parallel to the near and far planes. I'd like to lift these restrictions. So, from the definition of the six frustum side planes (in whatever representation you see fit), how can I compute a general projection matrix?
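
    One route (a sketch of the idea, not a full answer): intersect each triple of adjacent side planes to recover the eight frustum corners, then solve the linear system that maps those eight points projectively onto the eight clip-cube corners (a 4x4 projective transform has 15 degrees of freedom, so the system is overdetermined but consistent for a true frustum). The corner-recovery step in C++ with glm; names are illustrative:

        #include <glm/glm.hpp>
        #include <cmath>
        #include <optional>

        // A plane in the form n.x*X + n.y*Y + n.z*Z + d = 0.
        struct Plane { glm::vec3 n; float d; };

        // Recover one frustum corner as the intersection of three side planes
        // (e.g. left/top/near). Returns nothing if the planes are near-parallel.
        std::optional<glm::vec3> intersect3(const Plane& a, const Plane& b, const Plane& c)
        {
            float denom = glm::dot(a.n, glm::cross(b.n, c.n));
            if (std::abs(denom) < 1e-6f)
                return std::nullopt;
            return (glm::cross(b.n, c.n) * -a.d +
                    glm::cross(c.n, a.n) * -b.d +
                    glm::cross(a.n, b.n) * -c.d) / denom;
        }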

  • Render full-screen gradient or texture

    - by Filip Skakun
    What's the simplest way to fill the background of the screen with a gradient or a texture in Direct3D 10/11? I'm building a Windows 8 Metro app in which the camera never moves; I render some content in D3D, but I need to fill the background with something other than a solid color. Do I need to figure out the size and position of a rectangle and position it in 3D space, or is there a simpler solution? I don't care about depth at all, and I don't use a depth buffer since all my content is sorted back to front, so I could just start by drawing the background.
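
    One common answer is a "fullscreen triangle": no vertex buffer at all, with the three positions generated in the vertex shader from SV_VertexID, drawn first each frame with depth writes off. A sketch under those assumptions (vs/ps are assumed to be the compiled shaders; the gradient colors are placeholders):

        #include <d3d11.h>

        // HLSL: one oversized triangle that covers the screen.
        const char* kShaderSrc = R"(
        float4 VSMain(uint id : SV_VertexID, out float2 uv : TEXCOORD0) : SV_Position
        {
            uv = float2((id << 1) & 2, id & 2);              // (0,0) (2,0) (0,2)
            return float4(uv.x * 2 - 1, 1 - uv.y * 2, 0, 1);
        }
        float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
        {
            // Vertical gradient; sample a texture here instead if preferred.
            return lerp(float4(0.1, 0.1, 0.3, 1), float4(0.6, 0.8, 1.0, 1), uv.y);
        }
        )";

        // C++ side: no input layout, no buffers, just three generated vertices.
        void drawBackground(ID3D11DeviceContext* ctx,
                            ID3D11VertexShader* vs, ID3D11PixelShader* ps)
        {
            ctx->IASetInputLayout(nullptr);
            ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
            ctx->VSSetShader(vs, nullptr, 0);
            ctx->PSSetShader(ps, nullptr, 0);
            ctx->Draw(3, 0);
        }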

  • Network Authentication when running exe from WMI

    - by Andy
    Hi, I have a C# exe that needs to be run using WMI and access a network share. However, when I access the share I get an UnauthorizedAccessException. If I run the exe directly, the share is accessible. I am using the same user account in both cases. There are two parts to my application: a GUI client that runs on a local PC, and a backend process that runs on a remote PC. When the client needs to connect to the backend, it first launches the remote process using WMI (code reproduced below). The remote process does a number of things, including accessing a network share using Directory.GetDirectories(), and reports back to the client. When the remote process is launched automatically by the client using WMI, it cannot access the network share. However, if I connect to the remote machine using Remote Desktop and manually launch the backend process, access to the network share succeeds. The user specified in the WMI call and the user logged in for the Remote Desktop session are the same, so the permissions should be the same, shouldn't they? I see the MSDN entry for Directory.Exists() states: "The Exists method does not perform network authentication. If you query an existing network share without being pre-authenticated, the Exists method will return false." I assume this is related? How can I ensure the user is authenticated correctly in a WMI session?

        ConnectionOptions opts = new ConnectionOptions();
        opts.Username = username;
        opts.Password = password;

        ManagementPath path = new ManagementPath(
            string.Format("\\\\{0}\\root\\cimv2:Win32_Process", remoteHost));
        ManagementScope scope = new ManagementScope(path, opts);
        scope.Connect();

        ObjectGetOptions getOpts = new ObjectGetOptions();
        using (ManagementClass mngClass = new ManagementClass(scope, path, getOpts))
        {
            ManagementBaseObject inParams = mngClass.GetMethodParameters("Create");
            inParams["CommandLine"] = commandLine;
            ManagementBaseObject outParams = mngClass.InvokeMethod("Create", inParams, null);
        }
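
    A likely explanation (hedged, since it depends on the domain setup): a process created via Win32_Process.Create gets a logon token without network credentials, so it cannot make the second hop to the file server, whereas an interactive Remote Desktop logon can. The usual workaround is to pre-authenticate against the share with explicit credentials before the first directory call. A Win32 sketch of that call (the C# process would P/Invoke the same API; server and account names are placeholders):

        #include <windows.h>
        #include <winnetwk.h>
        #pragma comment(lib, "mpr.lib")

        // Map the share with explicit credentials before calling
        // Directory.GetDirectories()-style code against it.
        bool connectShare()
        {
            NETRESOURCEW nr = {};
            nr.dwType       = RESOURCETYPE_DISK;
            nr.lpRemoteName = const_cast<LPWSTR>(L"\\\\server\\share");
            DWORD rc = WNetAddConnection2W(&nr, L"password", L"DOMAIN\\user", 0);
            return rc == NO_ERROR;
        }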

  • Game Code Design for Rendering

    - by kuroutadori
    I first created a game on the iPhone and I'm now porting it to Android. I wrote most of the code in C++, but the porting wasn't easy. The Android way is to have two threads, one for rendering and one for updating; this is because some devices block when updating the hardware. My problem comes from my iPhone habits. When I transition, say, from the Menu to the Game, I would stop the animation (rendering) and load up the next manager (the Menu has a Manager and so has the Game). I could implement the same thing on Android, but I have noticed that game ports like Quake don't do this, as far as I can tell. I have learnt that I cannot just dynamically add another Renderer class to the tree, because I will probably get a dequeuing buffer error, which I believe to be a problem on the OpenGL ES side. So how is it done?

  • Question about component-based design: handling object interaction

    - by Milo
    I'm not sure how objects do things to other objects in a component-based design. Say I have an Obj class. I do:

        Obj obj;
        obj.add(new Position());
        obj.add(new Physics());

    How could another object then not only move obj but also have those physics applied? I'm not looking for implementation details, but rather, abstractly, how objects communicate. In an entity-based design, you might just have:

        obj1.emitForceOn(obj2, 5.0, 0.0, 0.0);

    Any article or explanation that gives a better grasp of component-driven design, and how to do basic things with it, would be really helpful.
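
    One common pattern (a sketch, not the only answer): objects expose component lookup by type, and other code talks to the component's interface rather than to the object itself; the Physics component then integrates the Position component during the update step. Illustrative C++; every name here is made up:

        #include <memory>
        #include <typeindex>
        #include <typeinfo>
        #include <unordered_map>

        struct Component { virtual ~Component() = default; };

        struct Physics : Component {
            float vx = 0, vy = 0, vz = 0;
            void emitForce(float fx, float fy, float fz) { vx += fx; vy += fy; vz += fz; }
        };

        class Obj {
            std::unordered_map<std::type_index, std::unique_ptr<Component>> parts;
        public:
            template <class T> void add(std::unique_ptr<T> c) { parts[typeid(T)] = std::move(c); }
            template <class T> T* get() {
                auto it = parts.find(typeid(T));
                return it == parts.end() ? nullptr : static_cast<T*>(it->second.get());
            }
        };

        // obj1 "emits force on" obj2 by talking to obj2's Physics component:
        void emitForceOn(Obj& target, float fx, float fy, float fz)
        {
            if (Physics* p = target.get<Physics>()) p->emitForce(fx, fy, fz);
        }

    A message/event bus is the other classic answer; it decouples the sender from the component types entirely, at the cost of indirection.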

  • Mac mini 2012 graphics upgrade for UE4, Unity3D, Blender

    - by DaCrAn
    I have a Mac mini (late 2012), i7, 16 GB Vengeance RAM, with the Intel HD 4000 integrated graphics. I recently bought a Thunderbolt PCIe expansion chassis that supports a PCIe 2.0 x16 graphics card, with space for a full-length card. I have doubts about which graphics card is going to give me the best results for using Unreal Engine 4 (UE4) or Unity3D, and Blender. My budget covers an Nvidia Quadro K4000 3 GB or an ATI FirePro W7000 4 GB. Any recommendation? Which professional graphics card would be better for designing 3D games? Thanks. DaCrAn

  • Detecting collision between ball (circle) and brick (rectangle)?

    - by James Harrison
    OK, so this is for a small uni project. My lecturer provided me with a framework for a simple brick-breaker game. I am currently trying to overcome the problem of detecting a collision between two game objects. One object is always the ball, and the other object can be either a brick or the bat.

        public Collision hitBy(GameObject obj) {
            // obj is the bat or a brick; the current object is the ball

            // if ball hits top of object
            if (topX + width >= obj.topX && topX <= obj.topX + obj.width
                    && topY + height >= obj.topY - 2 && topY + height <= obj.topY) {
                return Collision.HITY;
            }
            // if ball hits left-hand side
            else if (topY + height >= obj.topY && topY <= obj.topY + obj.height
                    && topX + width >= obj.topX - 2 && topX + width <= obj.topX) {
                return Collision.HITX;
            }
            else
                return Collision.NO_HIT;
        }

    So far I have this method for detecting the collision. The current object is the ball, and the obj passed into the method is a brick or the bat. At the moment I have only added statements to check for top and left collisions, but I do not want to continue, because I have a few problems. The ball reacts perfectly if it hits the top of the bricks or bat, but when it hits the side it often does not change direction. It seems to happen toward the top of the left-hand edge, and I cannot figure out why. I would like to know if there is another way of approaching this, or if people can see where I'm going wrong. Lastly, Collision.HITX calls another method later on that changes the x direction, and likewise HITY for y.
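
    Treating the ball as a true circle often removes this class of corner bug: clamp the circle's centre to the rectangle to find the closest point, test the squared distance against the radius, and use the axis of deeper penetration to decide which velocity component to flip. A hedged C++ sketch of that test (names are illustrative, not from the framework):

        #include <algorithm>
        #include <cmath>

        enum class Hit { None, X, Y };

        // Circle (cx, cy, r) vs axis-aligned rectangle (left, top, w, h).
        Hit circleRectHit(float cx, float cy, float r,
                          float left, float top, float w, float h)
        {
            float px = std::clamp(cx, left, left + w);  // closest point on the box
            float py = std::clamp(cy, top,  top + h);
            float dx = cx - px, dy = cy - py;
            if (dx * dx + dy * dy > r * r) return Hit::None;
            // Flip along the axis the ball penetrated least from.
            return std::abs(dx) > std::abs(dy) ? Hit::X : Hit::Y;
        }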

  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating camera collision for my room model, which consists of sofas, tables and other models. The user can move the camera forward and back and rotate it, so I need to make sure that the camera does not collide with any of the models in the room. I have wrapped all the models inside the room in BoundingBox[] and the camera in a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html, which was great. But I guess the problem lies in the transformation part. I debugged and found some points at Vector(-XXX, -XXX, -XXX), where X is a digit. I also found the radius of some models to be too large (in the thousands; I looked at the radius value before converting to a BoundingBox). Do I need to scale the model for collision? Below is my code.

    In my LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);
        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);
        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();
            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    In the camera-position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.

  • My frustum culling is culling from the wrong point

    - by Xbetas
    I'm having problems with my frustum being centered on the wrong origin: it follows the rotation of my camera but not its position. In my camera class I generate a view matrix:

        void Camera::Update()
        {
            UpdateViewMatrix();
            glMatrixMode(GL_MODELVIEW);
            //glLoadIdentity();
            glLoadMatrixf(GetViewMatrix().m);
        }

    Then I extract the planes using the projection matrix and modelview matrix:

        void UpdateFrustum()
        {
            Matrix4x4 projection, model, clip;
            glGetFloatv(GL_PROJECTION_MATRIX, projection.m);
            glGetFloatv(GL_MODELVIEW_MATRIX, model.m);
            clip = model * projection;

            m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0];
            m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4];
            m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8];
            m_Planes[RIGHT][3] = clip.m[15] - clip.m[12];
            NormalizePlane(RIGHT);

            m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0];
            m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4];
            m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8];
            m_Planes[LEFT][3] = clip.m[15] + clip.m[12];
            NormalizePlane(LEFT);

            m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1];
            m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5];
            m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9];
            m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13];
            NormalizePlane(BOTTOM);

            m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1];
            m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5];
            m_Planes[TOP][2] = clip.m[11] - clip.m[ 9];
            m_Planes[TOP][3] = clip.m[15] - clip.m[13];
            NormalizePlane(TOP);

            m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2];
            m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6];
            m_Planes[NEAR][2] = clip.m[11] + clip.m[10];
            m_Planes[NEAR][3] = clip.m[15] + clip.m[14];
            NormalizePlane(NEAR);

            m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2];
            m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6];
            m_Planes[FAR][2] = clip.m[11] - clip.m[10];
            m_Planes[FAR][3] = clip.m[15] - clip.m[14];
            NormalizePlane(FAR);
        }

        void NormalizePlane(int side)
        {
            // Divide by the true plane-normal length, so that the fourth
            // coefficient is a real signed distance.
            float length = (float)sqrt(m_Planes[side][0] * m_Planes[side][0] +
                                       m_Planes[side][1] * m_Planes[side][1] +
                                       m_Planes[side][2] * m_Planes[side][2]);
            m_Planes[side][0] /= length;
            m_Planes[side][1] /= length;
            m_Planes[side][2] /= length;
            m_Planes[side][3] /= length;
        }

    And I check against it with:

        bool PointInFrustum(float x, float y, float z)
        {
            for (int i = 0; i < 6; i++) {
                if (m_Planes[i][0] * x + m_Planes[i][1] * y +
                    m_Planes[i][2] * z + m_Planes[i][3] <= 0)
                    return false;
            }
            return true;
        }

    Then I render using:

        camera->Update();
        UpdateFrustum();

        int numCulled = 0;
        for (int i = 0; i < (int)meshes.size(); i++)
        {
            if (!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z))
            {
                meshes[i]->SetDraw(false);
                numCulled++;
            }
            else
                meshes[i]->SetDraw(true);
        }

    What am I doing wrong?
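
    Independent of the origin bug, meshes have extent, so once the plane extraction works it is worth testing a bounding sphere rather than the centre point alone (this relies on the planes being properly normalized, as above). A small sketch extending the code above, using the same m_Planes array:

        // A sphere is outside only if it lies fully behind some plane.
        bool SphereInFrustum(float x, float y, float z, float radius)
        {
            for (int i = 0; i < 6; i++) {
                if (m_Planes[i][0] * x + m_Planes[i][1] * y +
                    m_Planes[i][2] * z + m_Planes[i][3] <= -radius)
                    return false;
            }
            return true;
        }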

  • How do I pass an object location into a vertex shader?

    - by Greg Kassapidis
    I am using the Blender Game Engine. I want to create a large flat plane and deform it locally near a moving object. So far (despite being a beginner at shaders) I've written a vertex shader for the plane which moves the vertices to their correct positions (constant positions, for now). I cannot find a way to replace that constant location with an object's location that is updated every frame while the shader is running. I am not even sure it's possible. I only want to access a specific object's center from the shader.
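
    The standard mechanism for per-frame values is a uniform: declare uniform vec3 targetPos; in the vertex shader and upload the object's world-space centre once per frame before the plane is drawn. The Blender Game Engine wraps this in its Python shader API; the raw OpenGL equivalent is this fragment of a sketch (prog and center are assumed to exist):

        // GLSL (vertex shader): uniform vec3 targetPos;
        //   ...deform vertices based on distance(worldVertex, targetPos)...

        // Host side, once per frame before drawing the plane:
        GLint loc = glGetUniformLocation(prog, "targetPos");
        glUseProgram(prog);
        glUniform3f(loc, center.x, center.y, center.z);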

  • Right way to create [self]respawning app in python

    - by grapescan
    I am using a Jabber bot written in Python to log some MUC talks. Sometimes it dies from network or XMPP problems, and in that case I have to start it again myself. The goal is to make it "self-respawning". I have some ideas about how to do it: the bot is one process, and another process monitors its activity and starts it again if the bot died; or a main process spawns the bot as a subprocess and controls it. I also think daemonizing the bot process would be useful here. The platform is Linux, as you could guess. What is the right way to solve this problem?
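
    The first variant is the classic supervisor pattern, which is also what tools like supervisord or a systemd unit with Restart=always give you for free. A minimal sketch of the loop, written here in portable POSIX C++ for concreteness (in Python the same structure is a while loop around subprocess.call; the bot path is a placeholder):

        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        // Spawn the bot, block until it dies, back off, restart.
        int main()
        {
            for (;;) {
                pid_t pid = fork();
                if (pid == 0) {
                    execlp("python", "python", "./bot.py", (char*)nullptr);
                    _exit(127);               // exec failed
                }
                int status = 0;
                waitpid(pid, &status, 0);     // returns when the bot exits or crashes
                sleep(5);                     // avoid a tight respawn loop
            }
        }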

  • How do I cap rendering of tiles in a 2D game with SDL?

    - by farmdve
    I have some boilerplate code working. I basically have a tile-based map composed of just three colors plus some walls, rendered with SDL. The tiles are in a BMP file, but each tile inside it corresponds to an internal number for the type of tile (color or wall). I have pretty basic collision detection, and it works; I can also detect continuous key presses, which lets me move pretty much anywhere I want. I also have a moving camera which follows the object. The problem is that the tile map is bigger than the resolution, so not all of the map can be displayed on the screen, but it is still all rendered. I would like to cap the rendering, but since this is new to me I pretty much have no idea how. Although I cannot post all the code (even though I am a newbie and the code is pretty basic, it's already quite a few lines), I can post what I tried to do:

        void set_camera()
        {
            // Center the camera over the dot
            camera.x = (player.box.x + DOT_WIDTH / 2) - SCREEN_WIDTH / 2;
            camera.y = (player.box.y + DOT_HEIGHT / 2) - SCREEN_HEIGHT / 2;

            // Keep the camera in bounds.
            if (camera.x < 0) camera.x = 0;
            if (camera.y < 0) camera.y = 0;
            if (camera.x > LEVEL_WIDTH - camera.w) camera.x = LEVEL_WIDTH - camera.w;
            if (camera.y > LEVEL_HEIGHT - camera.h) camera.y = LEVEL_HEIGHT - camera.h;
        }

    set_camera() is the function which calculates the camera position based on the player's position. I won't pretend I know much about it.

        Rectangle box = {0, 0, 0, 0};
        for (int t = 0; t < TOTAL_TILES; t++)
        {
            if (box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT))
                apply_surface(box.x - camera.x, box.y - camera.y, surface, screen, &clips[tiles[t]]);

            box.x += TILE_WIDTH;

            // If we've gone too far
            if (box.x >= LEVEL_WIDTH)
            {
                box.x = 0;             // move back
                box.y += TILE_HEIGHT;  // move to the next row
            }
        }

    This is basically my render code. The loop goes over 192 tiles stored in an int array, each with a unique value describing the tile type (wall or one of three possible colored tiles). box is a rectangle containing the current position of the tile, calculated during rendering. TILE_WIDTH and TILE_HEIGHT are both 80. So the cap is determined by the if condition above; however, that is just me playing with the values to see what doesn't break. I pretty much have no idea how to calculate it. My screen resolution is 1024x768, and the tile map is of size 1280x960.
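
    The usual way to cap it is to derive the first and last visible row and column from the camera rectangle and loop over only those, instead of testing all 192 tiles. A sketch using the names from the question (with 80-pixel tiles and a 1280x960 level the map is 16 columns by 12 rows; the clamps guard the edge case where the camera touches the map border):

        #include <algorithm>

        int cols = LEVEL_WIDTH / TILE_WIDTH;    // 16
        int rows = LEVEL_HEIGHT / TILE_HEIGHT;  // 12

        int firstCol = camera.x / TILE_WIDTH;
        int firstRow = camera.y / TILE_HEIGHT;
        int lastCol  = std::min((camera.x + camera.w) / TILE_WIDTH,  cols - 1);
        int lastRow  = std::min((camera.y + camera.h) / TILE_HEIGHT, rows - 1);

        for (int row = firstRow; row <= lastRow; row++)
            for (int col = firstCol; col <= lastCol; col++)
            {
                int t = row * cols + col;  // index into tiles[], row-major
                apply_surface(col * TILE_WIDTH - camera.x,
                              row * TILE_HEIGHT - camera.y,
                              surface, screen, &clips[tiles[t]]);
            }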

  • Is there any advantage in using DX10/11 for a 2D game?

    - by David Gouveia
    I'm not entirely familiar with the feature set introduced by DX10/11-class hardware. I'm vaguely familiar with the new stages added to the programmable graphics pipeline, such as the geometry shader, the compute shader, and the new tessellation stages. I don't see how any of these make much of a difference for a 2D game, though. Is there any compelling reason to make the switch to DX10/11 (or the OpenGL equivalents) for a 2D game, or would it be wiser to stick with DX9, considering that a significant share of the market still runs on older technology (e.g. the February 2012 Steam survey lists around 17% of users as still on Windows XP)?

  • Collision detection code style

    - by Marian Ivanov
    Not only are there two useful broad-phase algorithms and a lot of useful narrow-phase algorithms, there are also multiple code styles.

    Arrays vs. calling. (A1) Make an array of broad-phase checks, then filter them with narrow-phase checks, then resolve them:

        function resolveCollisions(thingyStructure *a, thingyStructure *b, int index){
            possibleCollisions = getPossibleCollisions(b, a->get(index));
            for(i = 0; i < possibleCollisionsNumber; i++){
                if(narrowphase(possibleCollisions[i], a[index])){
                    collisions->push(possibleCollisions[i]);
                }
            }
            for(i = 0; i < collisionsNumber; i++){
                //CODE FOR RESOLUTION
            }
        }

    (A2) Make the broad phase call the narrow phase, and the narrow phase call the resolution:

        function resolveCollisions(thingyStructure *a, thingyStructure *b, int index){
            broadphase(b, a->get(index));
        }

        function broadphase(thingy *with, thingy *what){
            while(blah){
                //blahcode
                narrowphase(what, collidingThing);
            }
        }

    Events vs. in-the-loop. (B1) Fire an event. This abstracts the check away, but it's trickier to make an equal interaction:

        a[index] -> collisionEvent(eventdata);
        //much later
        int collisionEvent(eventdata){
            //resolution gets here
        }

    (B2) Resolve the collision inside the loop. This glues the narrow phase and the resolution into one layer:

        if(narrowphase(possibleCollisions[i], a[index])){
            //CODE GOES HERE
        }

    The questions are: which of the first two is better, and how am I supposed to make a zero-sum Newtonian interaction under B1?

  • How is the gimbal lock problem solved using accumulated matrix transformations?

    - by Luke San Antonio
    I am reading the online book "Learning Modern 3D Graphics Programming" by Jason L. McKesson. As of now, I am up to the gimbal lock problem and how to solve it using quaternions. On the Quaternions page it says:

    "Part of the problem is that we are trying to store an orientation as a series of 3 accumulated axial rotations. Orientations are orientations, not rotations. And orientations are certainly not a series of rotations. So we need to treat the orientation of the ship as an orientation, as a specific quantity."

    I guess this is the first spot where I start to get confused, because I don't see the dramatic difference between orientations and rotations. I also don't understand why an orientation cannot be represented by a series of rotations... Also:

    "The first thought towards this end would be to keep the orientation as a matrix. When the time comes to modify the orientation, we simply apply a transformation to this matrix, storing the result as the new current orientation. This means that every yaw, pitch, and roll applied to the current orientation will be relative to that current orientation. Which is precisely what we need. If the user applies a positive yaw, you want that yaw to rotate them relative to where they are currently pointing, not relative to some fixed coordinate system."

    The concept I understand; what I don't understand is how, if accumulating matrix transformations is a solution to this problem, the code given on the previous page isn't exactly that. Here's the code:

        void display()
        {
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            glClearDepth(1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glutil::MatrixStack currMatrix;
            currMatrix.Translate(glm::vec3(0.0f, 0.0f, -200.0f));
            currMatrix.RotateX(g_angles.fAngleX);
            DrawGimbal(currMatrix, GIMBAL_X_AXIS, glm::vec4(0.4f, 0.4f, 1.0f, 1.0f));
            currMatrix.RotateY(g_angles.fAngleY);
            DrawGimbal(currMatrix, GIMBAL_Y_AXIS, glm::vec4(0.0f, 1.0f, 0.0f, 1.0f));
            currMatrix.RotateZ(g_angles.fAngleZ);
            DrawGimbal(currMatrix, GIMBAL_Z_AXIS, glm::vec4(1.0f, 0.3f, 0.3f, 1.0f));

            glUseProgram(theProgram);
            currMatrix.Scale(3.0, 3.0, 3.0);
            currMatrix.RotateX(-90);

            //Set the base color for this object.
            glUniform4f(baseColorUnif, 1.0, 1.0, 1.0, 1.0);
            glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, glm::value_ptr(currMatrix.Top()));

            g_pObject->Render("tint");

            glUseProgram(0);
            glutSwapBuffers();
        }

    To my understanding, isn't what he is doing (modifying a matrix on a stack) accumulating matrices, since the author combined all the individual rotation transformations into one matrix stored on the top of the stack? My understanding of a matrix is that it takes a point relative to one origin (let's say the model) and makes it relative to another origin (the camera). I'm pretty sure this is a safe definition, but I feel like something is missing that blocks me from understanding the gimbal lock problem. One thing that doesn't make sense to me: if a matrix determines the relative difference between two "spaces", how come a rotation around the Y axis for, let's say, roll doesn't put the point in "roll space", which can then be transformed once again in relation to this roll? In other words, shouldn't any further transformations to this point be relative to the new "roll space", and therefore not relative to the previous "model space", which is what causes the gimbal lock? That's why gimbal lock occurs, right? It's because we are rotating the object around fixed X, Y, and Z axes rather than around its own relative axes. Or am I wrong? Since apparently the linked code isn't an accumulation of matrix transformations, can you please give an example of a solution using that method? So, in summary: (1) What is the difference between a rotation and an orientation? (2) Why is the linked code not an example of accumulating matrix transformations? (3) What is the real, specific purpose of a matrix, if I had it wrong? (4) How could a solution to the gimbal lock problem be implemented by accumulating matrix transformations? As a bonus: why are the transformations after the rotation still relative to "model space"? Another bonus: am I wrong in the assumption that after a transformation, further transformations occur relative to the current one? Also, if it wasn't implied, I am using OpenGL, GLSL, C++, and GLM, so examples and explanations in terms of these are greatly appreciated, if not necessary. The more detail the better! Thanks in advance...
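
    For question 4, the key distinction is what is stored between frames. The display() code above rebuilds the matrix every frame from three stored Euler angles; that is what makes it "a series of rotations", and what gimbal-locks. Accumulation means storing the composed orientation itself and multiplying a small per-frame offset onto it, so each new rotation is applied about the ship's current local axes. A sketch with glm quaternions (recent glm takes radians; the function names here are mine, not the book's):

        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // The stored quantity is the composed orientation itself.
        glm::quat g_orientation(1.0f, 0.0f, 0.0f, 0.0f);  // identity (w, x, y, z)

        // Offset the orientation by a small rotation about a *local* axis.
        void offsetOrientation(const glm::vec3& localAxis, float degrees)
        {
            glm::quat offset = glm::angleAxis(glm::radians(degrees),
                                              glm::normalize(localAxis));
            // Right-multiplying applies the offset in the ship's local frame;
            // (offset * g_orientation) would rotate about fixed world axes instead.
            g_orientation = glm::normalize(g_orientation * offset);
        }

        // Feed the result to the existing matrix stack / shader uniform:
        glm::mat4 orientationMatrix() { return glm::mat4_cast(g_orientation); }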

  • Deferred Rendering With Diffuse, Specular, and Normal Maps

    - by John
    I have been reading up on deferred rendering, and I am trying to implement a renderer using the Sponza atrium model (which can be found here) as my sandbox. Note that I am using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp, and I extract all geometry information, including tangents and bitangents. For all the aiMaterials I extract the following information, which essentially comes from the sponza.mtl file: ambient/diffuse/specular/emissive reflectivity coefficients (Ka, Kd, Ks, Ke), shininess, diffuse map, specular map, and normal map. I understand that I must render vertex attributes such as position, normals, and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a G-buffer in the initial render pass, but don't you require the diffuse, specular and normal maps, and therefore the lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents, and normals into G-buffers, then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.
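
    The usual arrangement (stated as common practice, not the only option): in the geometry pass you do sample the diffuse, specular and normal maps, but no lights are involved; you write the sampled albedo, the specular factor/shininess, and the already-TBN-transformed normal into the G-buffer, so the light pass never needs tangents at all. A sketch of a matching MRT framebuffer in OpenGL 3.3 (formats and sizes are illustrative):

        #include <GL/glew.h>   // any GL 3.3 function loader works

        int w = 1280, h = 720; // placeholder render-target size
        GLuint fbo, tex[3];
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glGenTextures(3, tex);

        // 0: view-space position, 1: view-space normal, 2: albedo RGB + specular A
        const GLenum fmt[3] = { GL_RGB16F, GL_RGB16F, GL_RGBA8 };
        for (int i = 0; i < 3; i++) {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, fmt[i], w, h, 0, GL_RGBA, GL_FLOAT, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                   GL_TEXTURE_2D, tex[i], 0);
        }
        const GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                                 GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, bufs);   // a depth attachment is needed as well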

  • Cocos2d-x v3.1 on WinPhone 8: textures change after the app resumes from the background

    - by Bình Nguyên
    I have some sprites in my game (for Windows Phone 8). These are my steps to reproduce the problem: (1) open the game; (2) play (this step is optional); (3) press the Windows button to send the game to the background; (4) press the Back button to resume the game. The problem: after the game has resumed, some sprites exchange textures, and some sprites go black (as if no texture is bound). I'm using cocos2d-x version 3.1.1. Can someone help me solve this problem?

  • Isometric algorithm producing tiles in wrong draw order

    - by David
    I've been toying with isometric rendering, and I just can't get the tiles to draw in the right order. I'm probably missing something obvious and I just can't see it. Even at the risk of looking stupid, here's my code:

        for (int i = 0; i < Tile.MapSize; i++)
        {
            for (int j = 0; j < Tile.MapSize; j++)
            {
                spriteBatch.Draw(
                    Tile.TileSetTexture,
                    new Rectangle(
                        (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2),
                        (i * (Tile.TileHeight - 9) / 2) - (-j * (Tile.TileHeight - 9) / 2),
                        Tile.TileWidth,
                        Tile.TileHeight),
                    Tile.GetSourceRectangle(tileID),
                    Color.White,
                    0.0f,
                    new Vector2(-350, -60),
                    SpriteEffects.None,
                    1.0f);
            }
        }

    And here's what I end up with: (screenshot of the messed-up map). Yep, bit of an issue. If anyone could help, I'd appreciate it.

  • problem with piping in my own implementation of shell

    - by codemax
    Hey guys, I am implementing my own shell. I want it to support piping. I searched here and found some code, but it is not working. Can anyone help me? This is my code:

        #include <sys/types.h>
        #include <sys/wait.h>
        #include <sys/ipc.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <string.h>
        #include <iostream>
        #include <cstdlib>
        using namespace std;

        char temp1[81][81], temp2[81][81], *cmdptr1[40], *cmdptr2[40];
        void process(char**);
        int arg_count, count;
        int arg_cnt[2];
        int pip, tok;
        char input[81];
        int fds[2];

        void process(char* cmd[]) //, int arg_count)
        {
            pid_t pid;
            pid = fork();
            //char path[81];
            //getcwd(path, 81);
            //strcat(path, "/");
            //strcat(path, cmd[0]);
            if (pid < 0) {
                cout << "Fork Failed" << endl;
                exit(-1);
            }
            else if (pid == 0) {
                execvp(cmd[0], cmd);
            }
            else {
                wait(NULL);
            }
        }

        void pipe(char** cmd1, char** cmd2)
        {
            cout << endl << endl << "in pipe" << endl;
            for (int i = 0; i < arg_cnt[0]; i++)
                cout << cmdptr1[i] << " ";
            cout << endl;
            for (int i = 0; i < arg_cnt[1]; i++)
                cout << cmdptr2[i] << " ";

            pipe(fds);
            if (fork() == 0) {
                dup2(fds[1], 1);
                close(fds[0]);
                close(fds[1]);
                process(cmd1);
            }
            if (fork() == 0) {
                dup2(fds[0], 0);
                close(fds[0]);
                close(fds[1]);
                process(cmd2);
            }
            close(fds[0]);
            close(fds[1]);
            wait(NULL);
        }

        void pipecommand(char** cmd1, char** cmd2)
        {
            cout << endl << endl;
            for (int i = 0; i < arg_cnt[0]; i++)
                cout << cmd1[i] << " ";
            cout << endl;
            for (int i = 0; i < arg_cnt[1]; i++)
                cout << cmd2[i] << " ";

            int fds[2]; // file descriptors
            pipe(fds);

            // child process #1
            if (fork() == 0) {
                // Reassign stdin to fds[0] end of pipe.
                dup2(fds[0], STDIN_FILENO);
                close(fds[1]);
                close(fds[0]);
                process(cmd2);

                // child process #2
                if (fork() == 0) {
                    // Reassign stdout to fds[1] end of pipe.
                    dup2(fds[1], STDOUT_FILENO);
                    close(fds[0]);
                    close(fds[1]);
                    // Execute the first command.
                    process(cmd1);
                }
                wait(NULL);
            }
            close(fds[1]);
            close(fds[0]);
            wait(NULL);
        }

        void splitcommand1()
        {
            tok++;
            int k, done = 0, no = 0;
            arg_count = 0;
            for (int i = count; input[i] != '\0'; i++) {
                k = 0;
                while (1) {
                    count++;
                    if (input[i] == ' ')
                        break;
                    if (input[i] == '\0') {
                        done = 1;
                        break;
                    }
                    if (input[i] == '|') {
                        pip = 1;
                        done = 1;
                        break;
                    }
                    temp1[arg_count][k++] = input[i++];
                }
                temp1[arg_count][k++] = '\0';
                arg_count++;
                if (done == 1)
                    break;
            }
            for (int i = 0; i < arg_count; i++)
                cmdptr1[i] = temp1[i];
            arg_cnt[tok] = arg_count;
        }

        void splitcommand2()
        {
            tok++;
            cout << "count is :" << count << endl;
            int k, done = 0, no = 0;
            arg_count = 0;
            for (int i = count; input[i] != '\0'; i++) {
                k = 0;
                while (1) {
                    count++;
                    if (input[i] == ' ')
                        break;
                    if (input[i] == '\0') {
                        done = 1;
                        break;
                    }
                    if (input[i] == '|') {
                        pip = 1;
                        done = 1;
                        cout << "PIP";
                        break;
                    }
                    temp2[arg_count][k++] = input[i++];
                }
                temp2[arg_count][k++] = '\0';
                arg_count++;
                if (done == 1)
                    break;
            }
            for (int i = 0; i < arg_count; i++)
                cmdptr2[i] = temp2[i];
            arg_cnt[tok] = arg_count;
        }

        int main()
        {
            cout << endl << endl << "Welcome to unique shell !!!!!!!!!!!" << endl;
            tok = -1;
            while (1) {
                cout << endl << "***********UNIQUE**********" << endl;
                cin.getline(input, 81);
                count = 0, pip = 0;
                splitcommand1();
                if (pip == 1) {
                    count++;
                    splitcommand2();
                }
                cout << endl << endl;
                if (strcmp(cmdptr1[0], "exit") == 0) {
                    cout << endl << "EXITING UNIQUE SHELL" << endl;
                    exit(0);
                }
                //cout << endl << "Arg count is :" << arg_count << endl;
                if (pip == 1) {
                    cout << endl << endl << "in main :";
                    for (int i = 0; i < arg_cnt[0]; i++)
                        cout << cmdptr1[i] << " ";
                    cout << endl;
                    for (int i = 0; i < arg_cnt[1]; i++)
                        cout << cmdptr2[i] << " ";
                    pipe(cmdptr1, cmdptr2);
                }
                else {
                    process(cmdptr1); //, arg_count);
                }
            }
        }

    I know it is not well coded, but try to help me :(
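
    One likely culprit in the code above: process() forks again and waits, so inside pipe() the command actually runs in a grandchild, while the direct child returns from process() and keeps executing the rest of pipe() itself, forking yet more children with the pipe ends in inconsistent states. A pipeline is usually easier to get right if each child execs directly. A minimal, self-contained sketch of a two-command pipeline, independent of the parsing code (the argv arrays are placeholders):

        #include <sys/wait.h>
        #include <unistd.h>

        void run_pipeline(char* const cmd1[], char* const cmd2[])
        {
            int fds[2];
            pipe(fds);
            if (fork() == 0) {               // left side: writes into the pipe
                dup2(fds[1], STDOUT_FILENO);
                close(fds[0]); close(fds[1]);
                execvp(cmd1[0], cmd1);
                _exit(127);                  // exec failed
            }
            if (fork() == 0) {               // right side: reads from the pipe
                dup2(fds[0], STDIN_FILENO);
                close(fds[0]); close(fds[1]);
                execvp(cmd2[0], cmd2);
                _exit(127);
            }
            close(fds[0]);                   // parent must close both ends,
            close(fds[1]);                   // or the reader never sees EOF
            wait(nullptr);
            wait(nullptr);
        }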

  • (libgdx) Button doesn't work

    - by StercoreCode
    In the game I switch to a StopScreen. This screen displays a button, but if I click the button, nothing happens. What I expect: when I press the button, it should restart the game; at this stage it should at least log a message that the button was pressed. I tried creating a new, clean project whose main class implements ApplicationListener, put the same code in the appropriate methods, and it works! But if I create this button in my game, it doesn't work: when I go to the StopScreen I see the button, but clicking or touching it does nothing. I think the problem is with the InputListener, although I set the stage as the InputProcessor:

        Gdx.input.setInputProcessor(stage);

    I also tried adding a ClickListener to the button, but that gave no results. Or maybe the problem is that StopScreen implements Screen, not ApplicationListener or Game? But if StopScreen implemented ApplicationListener, I couldn't call setScreen from the main game. So the question is: why does the button display, but nothing happen when I press it? Here is the code of StopScreen, if it helps to find my mistake:

        public class StopScreen implements Screen {
            private OrthographicCamera camera;
            private SpriteBatch batch;
            public Stage stage;                 //** stage holds the Button **//
            private BitmapFont font;            //** same as that used in Tut 7 **//
            private TextureAtlas buttonsAtlas;  //** image of buttons **//
            private Skin buttonSkin;            //** images are used as skins of the button **//
            public TextButton button;           //** the button - the only actor in program **//

            public StopScreen(CurrusGame currusGame) {
                camera = new OrthographicCamera();
                camera.setToOrtho(false, 800, 480);
                batch = new SpriteBatch();
                buttonsAtlas = new TextureAtlas("button.pack"); //** button atlas image **//
                buttonSkin = new Skin();
                buttonSkin.addRegions(buttonsAtlas); //** skins for on and off **//
                font = AssetLoader.font; //** font **//

                stage = new Stage();
                stage.clear();
                Gdx.input.setInputProcessor(stage);

                TextButton.TextButtonStyle style = new TextButton.TextButtonStyle();
                style.up = buttonSkin.getDrawable("ButtonOff");
                style.down = buttonSkin.getDrawable("ButtonOn");
                style.font = font;

                button = new TextButton("PRESS ME", style); //** Button text and style **//
                button.setPosition(100, 100); //** Button location **//
                button.setHeight(100);        //** Button Height **//
                button.setWidth(100);         //** Button Width **//
                button.addListener(new InputListener() {
                    public boolean touchDown(InputEvent event, float x, float y, int pointer, int button) {
                        Gdx.app.log("my app", "Pressed");
                        return true;
                    }
                    public void touchUp(InputEvent event, float x, float y, int pointer, int button) {
                        Gdx.app.log("my app", "Released");
                    }
                });
                stage.addActor(button);
            }

            @Override
            public void render(float delta) {
                Gdx.gl.glClearColor(0, 1, 0, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                stage.act();
                batch.setProjectionMatrix(camera.combined);
                batch.begin();
                stage.draw();
                batch.end();
            }

  • HTML5 ping pong game side collision problem

    - by Gurjit
    I am making a simple ping pong game, and I am facing a side-collision problem, i.e. when the ball collides with either side of the paddle. I have written code that is meant to handle it, but something is failing... I would like suggestions on how to fix or avoid it: trying to hit the ball with the side face of the paddle causes the problem! Here is the main part of the code causing it:

        function checkCollision() {
            // collision detection for the upper part
            if (cy + radius >= paddleTop &&
                cx + radius > paddleLeft &&
                cy + radius >= paddleTop + 5 &&
                cx - radius <= paddleLeft + paddleWidth) {
                dy = -dy;
                ++hits;      // on collision we increase the score
                playSound();
            }
            // side collision
            else if (cy + radius >= paddleTop &&
                     cy + radius <= paddleTop + paddleHeight &&
                     cx + radius >= paddleLeft &&
                     cy - radius <= paddleLeft - (radius + 1)) {
                dx = -dx;
            }
        }

    Here is a working fiddle for it: http://jsfiddle.net/gurjitmehta/orzpzf69/

  • SFML - Moving a sprite on mouseclick

    - by Mike
    I want to be able to move a sprite from its current location to another based upon where the user clicks in the window. This is the code that I have:

        #include <SFML/Graphics.hpp>

        int main()
        {
            // Create the main window
            sf::RenderWindow App(sf::VideoMode(800, 600), "SFML window");

            // Load a sprite to display
            sf::Texture Image;
            if (!Image.LoadFromFile("cb.bmp"))
                return EXIT_FAILURE;
            sf::Sprite Sprite(Image);

            // Define the speed of the sprite
            float spriteSpeed = 200.f;

            // Start the game loop
            while (App.IsOpened())
            {
                if (sf::Keyboard::IsKeyPressed(sf::Keyboard::Escape))
                    App.Close();

                if (sf::Mouse::IsButtonPressed(sf::Mouse::Right))
                {
                    Sprite.SetPosition(sf::Mouse::GetPosition(App).x,
                                       sf::Mouse::GetPosition(App).y);
                }

                // Clear screen
                App.Clear();

                // Draw the sprite
                App.Draw(Sprite);

                // Update the window
                App.Display();
            }
            return EXIT_SUCCESS;
        }

    But instead of just setting the position, I want to use Sprite.Move() and gradually move the sprite from one position to another. The question is how? Later I plan on adding a node system to each map so I can use Dijkstra's algorithm, but I'll still need this for moving between nodes.
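
    A sketch of the usual approach, using the same API style as the snippet above: store the click position as a target, then every frame move the sprite along the normalised direction by speed * dt, snapping to the target when the remaining distance is smaller than one step. dt is the frame time in seconds, obtained however your SFML version exposes it:

        #include <SFML/Graphics.hpp>
        #include <cmath>

        void stepToward(sf::Sprite& sprite, const sf::Vector2f& target,
                        float speed, float dt)
        {
            sf::Vector2f pos = sprite.GetPosition();
            float dx = target.x - pos.x, dy = target.y - pos.y;
            float dist = std::sqrt(dx * dx + dy * dy);
            float step = speed * dt;                     // pixels to cover this frame
            if (dist <= step)
                sprite.SetPosition(target.x, target.y);  // arrived: snap to target
            else
                sprite.Move(dx / dist * step, dy / dist * step);
        }

    In the loop, the right-click handler would then only update the target, and stepToward(Sprite, target, spriteSpeed, dt) would run once per frame before drawing.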

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me either to perform the world update on the Google servers, or on an authoritative server that syncs with the datastore. The main goal here is to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require: reading 5 private entity variables (fetched from the datastore), fetching as many as 20 static variables (from the datastore or persisted in server memory), and writing 5 entity variables. Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute; this would update all of the entities and save the results to the datastore, and would be more CPU-intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore, which would be more bandwidth-intensive for the datastore.

  • Fixed timestep with interpolation in AS3

    - by Jim Sreven
    I'm trying to implement, in Flash, Glenn Fiedler's popular fixed-timestep system as documented here: http://gafferongames.com/game-physics/fix-your-timestep/. I'm fairly sure that I've got it set up correctly, along with state interpolation. The result is that if my character is supposed to move at 6 pixels per frame, 35 frames per second = 210 pixels a second, it does exactly that, even if the framerate climbs or falls. The problem is it looks awful: the movement is very stuttery and just doesn't look good. I find that the time between ENTER_FRAME events, which I add to my accumulator, averages out to 28.5 ms (1000/35), just as it should, but individual frame times vary wildly; sometimes an ENTER_FRAME event comes 16 ms after the last, sometimes 42 ms. This means that at each redraw the character graphic moves by a different amount, because a different amount of time has passed since the last draw. In theory it should look smooth, but it doesn't at all. In contrast, if I just use the ultra-simple system of moving the character 6 px every frame, it looks completely smooth, even with these large variances in frame times. How can this be possible? I'm using getTimer() to measure these time differences; is it even reliable?
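
    For reference, here is the shape of the loop the article describes, compressed into a sketch (C++ style, with placeholder clock and draw functions): the simulation advances only in whole fixed steps, and what is drawn is never the raw current state but the blend of the previous and current states by the leftover fraction of the accumulator. If the drawn position comes from the current state directly, jitter like that described above is exactly what appears:

        #include <cstdint>

        extern uint64_t getTicksMs();   // placeholder: any monotonic ms clock
        extern void draw(double x);     // placeholder: render at position x

        const double dt = 1000.0 / 35.0;     // fixed step, in ms
        double accumulator = 0.0;
        double prevX = 0.0, currX = 0.0;     // one state variable for brevity
        uint64_t last = getTicksMs();

        void frame()                         // called once per ENTER_FRAME
        {
            uint64_t now = getTicksMs();
            accumulator += double(now - last);
            last = now;

            while (accumulator >= dt) {      // simulate in whole fixed steps
                prevX = currX;
                currX += 6.0;                // 6 px per fixed step
                accumulator -= dt;
            }

            double alpha = accumulator / dt; // leftover fraction, 0..1
            draw(currX * alpha + prevX * (1.0 - alpha));
        }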
