Search Results

Search found 26774 results on 1071 pages for 'distributed development'.

Page 476/1071

  • Creating an interactive grid for a puzzle game

    - by Noupoi
    I am trying to make a Slitherlink game, and am not too sure how to approach creating it, more specifically the grid structure on which the puzzle will be played. This is what an empty and a completed Slitherlink grid would look like: the numbers in the squares act as clues, and the areas between the dots need to be clickable. I would like to create the game in VB .NET. What data structures should I try to use, and would it be beneficial to use a framework such as XNA?
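
    A minimal sketch (not from the original question) of one way to model the board: a clue per cell plus separate arrays for horizontal and vertical edges. It is written in C# for brevity, but it translates line-for-line to VB.NET.

        public enum EdgeState { Unknown, Line, Cross }

        public class SlitherlinkBoard
        {
            public readonly int Rows, Cols;
            public readonly int?[,] Clues;        // null = no clue printed in that cell
            public readonly EdgeState[,] HEdges;  // (Rows + 1) x Cols horizontal edges
            public readonly EdgeState[,] VEdges;  // Rows x (Cols + 1) vertical edges

            public SlitherlinkBoard(int rows, int cols)
            {
                Rows = rows; Cols = cols;
                Clues  = new int?[rows, cols];
                HEdges = new EdgeState[rows + 1, cols];
                VEdges = new EdgeState[rows, cols + 1];
            }

            // Number of committed line segments around a cell, for checking clues.
            public int LinesAround(int r, int c)
            {
                int n = 0;
                if (HEdges[r, c]     == EdgeState.Line) n++;  // top
                if (HEdges[r + 1, c] == EdgeState.Line) n++;  // bottom
                if (VEdges[r, c]     == EdgeState.Line) n++;  // left
                if (VEdges[r, c + 1] == EdgeState.Line) n++;  // right
                return n;
            }
        }

    Hit-testing a click then reduces to deciding whether the cursor is nearest a horizontal or a vertical edge and toggling that entry. XNA is not required for a grid like this, although it does make the drawing and input loop convenient.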

    Read the article

  • How can I get a rotation vector from a Matrix4x4 in XNA?

    - by mr.Smyle
    I want to get a rotation vector from a matrix to implement a parent-child system for models. Matrix bonePos = link.Bone.Transform * World; Matrix m = Matrix.CreateTranslation(link.Offset) * Matrix.CreateScale(link.gameObj.Scale.X, link.gameObj.Scale.Y, link.gameObj.Scale.Z) * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(link.gameObj.Rotation.Y), MathHelper.ToRadians(link.gameObj.Rotation.X), MathHelper.ToRadians(link.gameObj.Rotation.Z)) //need rotation vector from bone matrix here (now it's global model rotation vector) * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(Rotation.Y), MathHelper.ToRadians(Rotation.X), MathHelper.ToRadians(Rotation.Z)) * Matrix.CreateTranslation(bonePos.Translation); link.gameObj.World = m; where link is a struct with the child model's settings (position, rotation, etc.) and link.Bone is the parent bone.
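
    One way to sidestep extracting Euler angles at all (a sketch, not from the original post) is to pull the bone's rotation out as a quaternion with XNA's Matrix.Decompose and feed it straight back in with Matrix.CreateFromQuaternion:

        using Microsoft.Xna.Framework;

        static class BoneMath
        {
            // Returns only the rotation part of a bone transform, as a matrix.
            public static Matrix RotationOf(Matrix boneTransform)
            {
                Vector3 scale, translation;
                Quaternion rotation;
                if (!boneTransform.Decompose(out scale, out rotation, out translation))
                    rotation = Quaternion.Identity;  // degenerate matrix: fall back to no rotation
                return Matrix.CreateFromQuaternion(rotation);
            }
        }

    In the chain above, BoneMath.RotationOf(link.Bone.Transform) could replace the marked spot; if actual yaw/pitch/roll values are really needed (for example for editing), the quaternion can still be converted afterwards, but the axis convention has to match CreateFromYawPitchRoll.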

    Read the article

  • Examples of good Javascript/HTML5 based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.), are there any good examples of web-based games built entirely on open standards (meaning JavaScript, HTML and CSS)? I see a lot of pure HTML5 implementations of what was once only possible in Flash (like the demos at http://www.html5rocks.com/) but not many games, a domain which still seems dominated by Flash. I'm curious what's possible and what the limitations are.

    Read the article

  • Building View Matrix in Direct3D11

    - by Balls
    Am I doing it right? I converted this. m_ViewMatrix = XMMatrixLookAtLH(XMLoadFloat3(&m_Position), lookAtVector, upVector); to this one. XMVECTOR vz = XMVector3Normalize( lookAtVector - XMLoadFloat3(&m_Position) ); XMVECTOR vx = XMVector3Normalize( XMVector3Cross( upVector, vz ) ); XMVECTOR vy = XMVector3Cross( vz, vx ); m_ViewMatrix.r[0] = vx; m_ViewMatrix.r[1] = vy; m_ViewMatrix.r[2] = vz; m_ViewMatrix.r[3] = XMLoadFloat3(&m_Position); m_ViewMatrix.r[0].m128_f32[3] = 0.0f; m_ViewMatrix.r[1].m128_f32[3] = 0.0f; m_ViewMatrix.r[2].m128_f32[3] = 0.0f; m_ViewMatrix.r[3].m128_f32[3] = 1.0f; m_ViewMatrix = XMMatrixInverse( &XMMatrixDeterminant(m_ViewMatrix), m_ViewMatrix ); Everything looks fine when I run it. Another question is, I saw on this site(http://webglfactory.blogspot.com/2011/06/how-to-create-view-matrix.html) that he subtracted lookat from position in his vector vz. I tried it but gave me wrong view matrix. Can anyone check my code. I'm studying linear algebra right now. Sucks my course doesn't have one. Thank you, Balls
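
    For reference, a worked form of what the hand-rolled code builds (an addition, not from the original post), with p = position, t = lookAt target and u = up:

        \mathbf{z} = \frac{\mathbf{t}-\mathbf{p}}{\lVert\mathbf{t}-\mathbf{p}\rVert}, \qquad
        \mathbf{x} = \frac{\mathbf{u}\times\mathbf{z}}{\lVert\mathbf{u}\times\mathbf{z}\rVert}, \qquad
        \mathbf{y} = \mathbf{z}\times\mathbf{x}, \qquad
        V = \begin{pmatrix} x_x & x_y & x_z & 0 \\ y_x & y_y & y_z & 0 \\ z_x & z_y & z_z & 0 \\ p_x & p_y & p_z & 1 \end{pmatrix}^{-1}

    The linked article subtracts lookAt from position because it assumes a right-handed convention; for the left-handed XMMatrixLookAtLH being matched here, lookAt minus position is the correct forward vector, which is consistent with the result observed.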

    Read the article

  • GLSL: Strange light reflections

    - by Tom
    According to this tutorial I'm trying to make a normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1 in this image is a plane with two triangles and each of it is different illuminated (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors same for each vertice: normal vector = 0, 1, 0 (red lines on image) tangent vector = 0, 0,-1 (green lines on image) bitangent vector = -1, 0, 0 (blue lines on image) here I have one question: The two identical vertices does need to have the same tangent and bitangent? I have tried to make other values to the tangents but the effect was still similar. Here are my shaders Vertex shader: #version 130 // Input vertex data, different for all executions of this shader. in vec3 vertexPosition_modelspace; in vec2 vertexUV; in vec3 vertexNormal_modelspace; in vec3 vertexTangent_modelspace; in vec3 vertexBitangent_modelspace; // Output data ; will be interpolated for each fragment. out vec2 UV; out vec3 Position_worldspace; out vec3 EyeDirection_cameraspace; out vec3 LightDirection_cameraspace; out vec3 LightDirection_tangentspace; out vec3 EyeDirection_tangentspace; // Values that stay constant for the whole mesh. uniform mat4 MVP; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Output position of the vertex, in clip space : MVP * position gl_Position = MVP * vec4(vertexPosition_modelspace,1); // Position of the vertex, in worldspace : M * position Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz; // Vector that goes from the vertex to the camera, in camera space. // In camera space, the camera is at the origin (0,0,0). vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz; EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace; // Vector that goes from the vertex to the light, in camera space. M is ommited because it's identity. vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz; LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace; // UV of the vertex. No special space for this one. UV = vertexUV; // model to camera = ModelView vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace; vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace; vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace; mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); // You can use dot products instead of building this matrix and transposing it. See References for details. LightDirection_tangentspace = TBN * LightDirection_cameraspace; EyeDirection_tangentspace = TBN * EyeDirection_cameraspace; } Fragment shader: #version 130 // Interpolated values from the vertex shaders in vec2 UV; in vec3 Position_worldspace; in vec3 EyeDirection_cameraspace; in vec3 LightDirection_cameraspace; in vec3 LightDirection_tangentspace; in vec3 EyeDirection_tangentspace; // Ouput data out vec3 color; // Values that stay constant for the whole mesh. 
uniform sampler2D DiffuseTextureSampler; uniform sampler2D NormalTextureSampler; uniform sampler2D SpecularTextureSampler; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Light emission properties // You probably want to put them as uniforms vec3 LightColor = vec3(1,1,1); float LightPower = 40.0; // Material properties vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb; vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor; //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3; vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5); // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0); // Distance to the light float distance = length( LightPosition_worldspace - Position_worldspace ); // Normal of the computed fragment, in camera space vec3 n = TextureNormal_tangentspace; // Direction of the light (from the fragment to the light) vec3 l = normalize(LightDirection_tangentspace); // Cosine of the angle between the normal and the light direction, // clamped above 0 // - light is at the vertical of the triangle -> 1 // - light is perpendicular to the triangle -> 0 // - light is behind the triangle -> 0 float cosTheta = clamp( dot( n,l ), 0,1 ); // Eye vector (towards the camera) vec3 E = normalize(EyeDirection_tangentspace); // Direction in which the triangle reflects the light vec3 R = reflect(-l,n); // Cosine of the angle between the Eye vector and the Reflect vector, // clamped to 0 // - Looking into the reflection -> 1 // - Looking elsewhere -> < 1 float cosAlpha = clamp( dot( E,R ), 0,1 ); color = // Ambient : simulates indirect lighting MaterialAmbientColor + // Diffuse : "color" of the object MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) + // Specular : reflective highlight, like a mirror MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance); //color.xyz = E; //color.xyz = LightDirection_tangentspace; //color.xyz = EyeDirection_tangentspace; } I have replaced the original color value by EyeDirection_tangentspace vector and then I got other strange effect but I can not link the image (not eunogh reputation) Is it possible that with this shaders is something wrong, or maybe in other place in my code e.g with my matrices? SOLVED Solved... 3 days needed for changing one letter from this: glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexAttribPointer ( 4, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? sizeof(VboVertex), // stride (void*)(12*sizeof(float)) // array buffer offset ); to this: glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexAttribPointer ( 4, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? sizeof(VboVertex), // stride (void*)(11*sizeof(float)) // array buffer offset ); see difference? :)

    Read the article

  • Kinect Click counter function

    - by Sweta Dwivedi
    So i have the following kinect click function which will check if the hand is within the bounds then it will click with a counter . . however there is a slight problem . .the first few button clicks work fine.. but after it clicks one of the buttons it changes the game state and immediately clicks the other button without the counter reaching 200. . . Kinect click is a method in the button class. . .and each button inside a list can access the Kinect click method. . . public bool KinectClick(int x,int y) { if ((x >= position.X && x <= position.X + position.Width) && (y >= position.Y && y <= position.Y + position.Height)) { counter++; if (counter > 200) { counter = 0; return true; } } else { counter = 0; } return false; } I call to check if this property is true in the Game update method to act as a button click. . foreach(Button g_t in Game_theme) { if ((g_t.KinectClick(x_c, y_c) == true || g_t.ButtonClicked() == true) && g_t.name == "animoe") { Selected_anim = true; currentGameState = GameState.InGame; } if ((g_t.KinectClick(x_c, y_c) == true || g_t.ButtonClicked() == true) && g_t.name == "planet") { Selected_planet = true; currentGameState = GameState.InGame; }
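
    One likely culprit (a guess, not from the original post) is a hand that is still hovering over a button which occupies the same screen area after the state change. A common fix for dwell-style clicks is to latch the button once it fires and only re-arm it after the hand has left its bounds; the sketch below reuses the original position and counter fields and adds a latched flag:

        private int counter = 0;
        private bool latched = false;   // true once a click has fired for the current hover

        public bool KinectClick(int x, int y)
        {
            bool inside = (x >= position.X && x <= position.X + position.Width)
                       && (y >= position.Y && y <= position.Y + position.Height);

            if (!inside)
            {
                counter = 0;
                latched = false;        // hand left the button: allow a future click
                return false;
            }

            if (latched) return false;  // already clicked during this hover

            counter++;
            if (counter > 200)
            {
                counter = 0;
                latched = true;
                return true;            // fires exactly once per hover
            }
            return false;
        }

    A short global cooldown after any state change (ignore clicks for, say, half a second) achieves much the same thing with less bookkeeping.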

    Read the article

  • What are the pros/cons of Unity3D as a choice to make games?

    - by jokoon
    We are doing our school project with Unity3D, since the school was using Shiva the previous year (which seems horrible to me), and I wanted to know your point of view on this tool. Pros: multi-platform (I even heard Google is going to implement it in Chrome); everything you need is there; scripting languages make it a good choice for people who are not programming gurus. Cons: multiplayer?; proprietary, so you are totally dependent on Unity and its limits and can't extend it; it's less "making a game from scratch" (C++ would have been a cool thing). I really think this kind of tool is interesting, but is it worth using at school for a project that involves more than 3 programmers? What do we really learn in terms of programming from using this kind of tool (I'm OK with Python and JS, but I hate C#)? We could have used Ogre instead, even though we were only starting to learn DirectX in January...

    Read the article

  • GUI for DirectX

    - by DeadMG
    I'm looking for a GUI library built on top of DirectX, preferably 9, but I can also do 11. I've looked at stuff like DXUT, but it's way too much for me: I only need some UI controls which I would rather not write (and debug) myself, and their need to keep a C-compatible API is definitely a big downside. I'd rather look at UI libs that are designed to be integrated into an existing DirectX-based system, rather than forming the basis of a system. Any recommendations?

    Read the article

  • How to scroll hex tiles?

    - by Chris Evans
    I don't seem to be able to find an answer to this one. I have a map of hex tiles. I wish to implement scrolling. Code at present: drawTilemap = function() { actualX = Math.floor(viewportX / hexWidth); actualY = Math.floor(viewportY / hexHeight); offsetX = -(viewportX - (actualX * hexWidth)); offsetY = -(viewportY - (actualY * hexHeight)); for(i = 0; i < (10); i++) { for(j = 0; j < 10; j++) { if(i % 2 == 0) { x = (hexOffsetX * i) + offsetX; y = j * sourceHeight; } else { x = (hexOffsetX * i) + offsetX; y = hexOffsetY + (j * sourceHeight); } var tileselected = mapone[actualX + i][j]; drawTile(x, y, tileselected); } } } The code I've written so far only handles X movement. It doesn't yet work the way it should do. If you look at my example on jsfiddle.net below you will see that when moving to the right, when you get to the next hex tile along, there is a problem with the X position and calculations that have taken place. It seems it is a simple bit of maths that is missing. Unfortunately I've been unable to find an example that includes scrolling yet. http://jsfiddle.net/hd87E/1/ Make sure there is no horizontal scroll bar then trying moving right using the - right arrow on the keyboard. You will see the problem as you reach the end of the first tile. Apologies for the horrid code, I'm learning! Cheers

    Read the article

  • Optimized algorithm for line-sphere intersection in GLSL

    - by fernacolo
    Well, hello then! I need to find intersection between line and sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way: // The line passes through p1 and p2: vec3 p1 = (...); vec3 p2 = (...); // Sphere center is p3, radius is r: vec3 p3 = (...); float r = ...; float x1 = p1.x; float y1 = p1.y; float z1 = p1.z; float x2 = p2.x; float y2 = p2.y; float z2 = p2.z; float x3 = p3.x; float y3 = p3.y; float z3 = p3.z; float dx = x2 - x1; float dy = y2 - y1; float dz = z2 - z1; float a = dx*dx + dy*dy + dz*dz; float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3)); float c = x3*x3 + y3*y3 + z3*z3 + x1*x1 + y1*y1 + z1*z1 - 2.0 * (x3*x1 + y3*y1 + z3*z1) - r*r; float test = b*b - 4.0*a*c; if (test >= 0.0) { // Hit (according to Treebeard, "a fine hit"). float u = (-b - sqrt(test)) / (2.0 * a); vec3 hitp = p1 + u * (p2 - p1); // Now use hitp. } It works perfectly! But it seems slow... I'm new at GLSL. You can answer this questions in two ways: Tell me there is no solution, showing some proof or strong evidence. Tell me about GLSL features (vector APIs, primitive operations) that makes the above algorithm faster, showing some example. Thanks a lot!

    Read the article

  • Index Check and Correct Character Display in a Console Hangman Game for Java

    - by Jen
    I have this problem wherein, I can not display the correct characters given by the character. Here's what I meant: String words, in; String replaced_words; Scanner s = new Scanner (System.in); System.out.println("Enter a line of words basing on an event, verse, place or a name of a person."); words = s.nextLine(); System.out.println("The words you just placed are now accepted."); //using char array method, we tried to place the words into a characters array. char [] c = words.toCharArray(); // we need to replace the replaced_words = words.replace(' ', '_').replaceAll("[^\\-]", "-"); for (int i = 0; i < replaced_words.length(); i++) { System.out.print(replaced_words.charAt(i) + " "); } System.out.println("Now, please input a character, guessing the words you just placed."); in = s.nextLine(); in that code, want that the user, when types a word (or should it be character?), any of the correct character the user inputs will be displayed, and changes the hyphen to it...(more like the hangman series of games). How can I achieve this?
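
    The reveal step itself (a sketch, not from the question; shown in C# for brevity, but the loop ports directly to Java) is just a pass over the secret string that copies matching characters into the masked buffer:

        static class Hangman
        {
            // words:    the secret phrase as entered
            // revealed: starts as all '-' (with spaces shown as '_'), updated on each guess
            public static bool ApplyGuess(string words, char[] revealed, char guess)
            {
                bool hit = false;
                for (int i = 0; i < words.Length; i++)
                {
                    if (char.ToLower(words[i]) == char.ToLower(guess))
                    {
                        revealed[i] = words[i];   // replace the hyphen with the real letter
                        hit = true;
                    }
                }
                return hit;   // false means the guess should cost a life
            }
        }

    Keeping the revealed state in a char array and updating it in place avoids losing earlier correct guesses, which is what happens when the masked string is rebuilt from scratch with replace-all.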

    Read the article

  • Path tables or real-time searching for AI?

    - by SirYakalot
    What is the more common practice in commercial games: path lookup tables or real-time searches? I've read that in many games path lookup tables are pre-calculated and baked into each map, so to speak, and steering behaviour is then used to handle dynamic obstacles. Or is it better practice to use optimised hierarchical A* searches? I understand the pros and cons of each; I'm just curious as to what is most often used in the industry.
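
    For a sense of what a baked table looks like in practice (a sketch, not from the question): run a search from every node at build time (BFS here, so unweighted edges; Dijkstra or Floyd-Warshall for weighted graphs) and store only the next hop, so a runtime query is a single array lookup.

        using System.Collections.Generic;

        public sealed class PathTable
        {
            private readonly int[,] next;  // next[from, to] = next node on the path, -1 if unreachable

            public PathTable(IList<int>[] adjacency)   // adjacency[i] = neighbours of node i
            {
                int n = adjacency.Length;
                next = new int[n, n];
                for (int src = 0; src < n; src++)
                {
                    for (int t = 0; t < n; t++) next[src, t] = -1;

                    // Breadth-first search outward from src, remembering the first step
                    // taken to reach each node.
                    var firstStep = new int[n];
                    var queue = new Queue<int>();
                    queue.Enqueue(src);
                    next[src, src] = src;
                    while (queue.Count > 0)
                    {
                        int cur = queue.Dequeue();
                        foreach (int nb in adjacency[cur])
                        {
                            if (next[src, nb] != -1) continue;              // already reached
                            firstStep[nb] = (cur == src) ? nb : firstStep[cur];
                            next[src, nb] = firstStep[nb];
                            queue.Enqueue(nb);
                        }
                    }
                }
            }

            // O(1) per step at runtime; walk next hops until the target is reached.
            public int NextHop(int from, int to) { return next[from, to]; }
        }

    The trade-off is memory: the table is O(n^2) in the number of nodes, which is why it is typically baked per map offline, with local steering layered on top for dynamic obstacles exactly as described.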

    Read the article

  • Parenting OpenGL with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object child of a Group, but this object has a draw method that calls opengl to draw in the screen. Its class its this public class OpenGLSquare extends Actor { private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10(); private static Matrix4 matrix = null; private static Vector2 temp = new Vector2(); public static void setMatrix4(Matrix4 mat) { matrix = mat; } @Override public void draw(SpriteBatch batch, float arg1) { // TODO Auto-generated method stub renderer.begin(matrix, GL10.GL_TRIANGLES); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y0, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y0, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y0, 0f); renderer.end(); } } In my screen class I have this, i call it in the constructor MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab); OpenGLSquare square = new OpenGLSquare(); square.setX0(100); square.setY0(200); square.setX1(400); square.setY1(280); square.color.set(Color.BLUE); square.setSize(); //spriteLab.addActorAt(0, clock); spriteLab.addActor(square); stage.addActor(spriteLab); And the render in the screen I have @Override public void render(float arg0) { this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT |GL10.GL_DEPTH_BUFFER_BIT); stage.draw(); stage.act(Gdx.graphics.getDeltaTime()); } The problem its that when i use opengl with parent, it resets all the other chldren to position 0,0 and the opengl renderer paints the square in the exact position of the screen and not relative to the parent. I tried using batch.enableBlending() and batch.disableBlending() that fixes the position problem of the other children, but not the relative position of the opengl drawing and it also puts alpha to the glDrawing. What am i doing wrong?:/

    Read the article

  • Procedural object generation and unique identification

    - by 2080
    My question relates to procedural content generation and the data management of the resulting objects in a database. I assume a networked game with a server-client model. Unspecified objects in the game world are generated while the game is running, using procedural algorithms (for example Perlin noise). The players (/clients) can modify the properties of these objects, but have to notify the server of these changes. How could this communication address unique objects, so that both the server and the client know which object they are talking about? Not only can the objects' internal properties differ, but also visible ones such as position. When the player wants to select one of these objects, the game has to find out the ID. Does anyone know which methods or algorithms can accomplish that?
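
    One widely used approach (a sketch, not from the question) is to derive the ID deterministically from the generation inputs, i.e. the world seed plus the object's generation coordinates, so the server and every client compute the same ID independently and never have to negotiate it:

        public static class ProceduralId
        {
            // Deterministic 64-bit ID from the generation inputs. Any stable hash works;
            // this FNV-style mixing is illustrative, not canonical.
            public static ulong Of(ulong worldSeed, int chunkX, int chunkY, int indexInChunk)
            {
                unchecked
                {
                    ulong h = worldSeed;
                    h = h * 0x100000001B3UL ^ (ulong)(uint)chunkX;
                    h = h * 0x100000001B3UL ^ (ulong)(uint)chunkY;
                    h = h * 0x100000001B3UL ^ (ulong)(uint)indexInChunk;
                    return h;
                }
            }
        }

    When a player modifies an object, the client sends this ID plus the change, and the server stores the delta against the ID while regenerating the base object from the same inputs. Selection is then no extra problem: whatever picking mechanism finds the object on screen already has the generated instance, and the instance carries its ID.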

    Read the article

  • Depth buffer values reset on shader change?

    - by bobobobo
    I have 2 different shaders, and when I change the shader (glUseProgram) it seems that the depth information is lost: everything drawn with the 2nd shader appears completely on top of anything drawn by the first shader. If I switch the order of shader use/drawing, it's the same; the last drawn object always appears on top of the first drawn object if there is a shader change between the 2 objects, even if the last drawn object is further away.

    Read the article

  • How to use OpenGL functions from multiple threads?

    - by Robert
    I'm writing a small game using OpenGL. I'm implementing basic networking in this game and I'm facing a problem. I have a thread in my client socket class that checks for available data; when there is data I raise an event like this: immutable int len = this.m_socket.receive(data); if(len > 0) { this.m_onDataEvent(data); } Then in my game class I have a function that handles and parses the data, like this: switch(msgId) { case ProtocolID.CharacterData: // Load terrain with opengl, character model.... I'm not able to call OpenGL functions because my OpenGL context was created from a different thread. I really don't know how I can solve this problem; I tried Google but it's really hard to find a solution. I'm using the D programming language, if that helps.
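
    The usual pattern (a sketch, not from the original post; shown in C#, but the same shape works in D) is to never touch OpenGL from the socket thread: the network handler only enqueues work, and the thread that owns the GL context drains the queue once per frame.

        using System;
        using System.Collections.Concurrent;

        public class RenderThreadQueue
        {
            private readonly ConcurrentQueue<Action> pending = new ConcurrentQueue<Action>();

            // Called from the network thread: just remember what to do, do not do it yet.
            public void Post(Action glWork) { pending.Enqueue(glWork); }

            // Called at the top of each frame, on the thread that created the GL context.
            public void Drain()
            {
                Action work;
                while (pending.TryDequeue(out work))
                    work();
            }
        }

    So the CharacterData branch would parse the message on the network thread and then call something like queue.Post(() => LoadTerrain(parsedData)), where LoadTerrain is the existing GL-touching code; the render loop calls queue.Drain() before drawing.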

    Read the article

  • 2D Animation Smoothness - Delta time vs. Kinematics

    - by viperld002
    I'm animating a sprite in 2D with key frames of rotation and xy-positions. I've recently had a discussion with someone saying that when the device (happens to be an iPad using cocos2D) hits a performance bump due to whatever else the user may be doing, lag will arise and that the best way to fight it is to not use actual positions, but velocities, accelerations and torques with kinematics. His message is to evaluate the positions and rotations from these speeds at the current point in time. I've never experienced a situation where I've heard of using kinematics to stem lag in 2D animations and am not sure of how effective it could be. Also, it seems to be overkill. The application is not networked so it's all running on a local device. The desired effect is that the animation always plays as closely as it can to the target frame rate. Wouldn't the technique suffer the same problems as just using the time since the last frame or a fixed time step since the kinematics would also require some time value to perform the calculation? What techniques could you suggest to best achieve the desired effect? EDIT 1 Thank you for your responses, they are very illuminating. I want to clarify my question before choosing an answer however, to make sure that this post really serves it's purpose. I have a sprite of a ball, and a text file with 3 arrays worth of information (rotation,translations x, translations y) with each unit of information existing as a key frame to be stepped through (0 to 49 and back to 0 to replay it again). I have this playing by interpolating from the current key frame to the next, every n-units of time. The animation is visibly correct when compared to a video I was given of it, and it is smooth because of the interpolations between the key frames. This is the existing state of the project. There are no physics simulated, only a static animation of a ball moving in a way an artist specifically designed. Should I, instead of rotation in degrees and translations by positions in space, derive velocities, accelerations and torques to express this static animation as a function of time? As in, position now = foo(time now), where foo uses kinematics.
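
    For what it is worth, the standard middle ground (a sketch, not from the question) is a fixed simulation step driven by an accumulator, with rendering interpolating between the last two simulated states. It keeps motion smooth through small hiccups without recasting the whole animation as kinematics. Simulate below is a placeholder for the existing key-frame stepping.

        using System;

        struct BallState { public float X, Y, Rotation; }

        class FixedStepAnimator
        {
            const float Step = 1f / 60f;   // fixed simulation step, in seconds
            float accumulator;
            BallState previous, current;

            // Advance the key-framed animation by exactly one fixed step (placeholder).
            BallState Simulate(BallState s) { return s; }

            static BallState Lerp(BallState a, BallState b, float t)
            {
                return new BallState
                {
                    X = a.X + (b.X - a.X) * t,
                    Y = a.Y + (b.Y - a.Y) * t,
                    Rotation = a.Rotation + (b.Rotation - a.Rotation) * t
                };
            }

            // frameDt = wall-clock seconds since the last rendered frame.
            public BallState Update(float frameDt)
            {
                accumulator += Math.Min(frameDt, 0.25f);   // clamp huge hitches
                while (accumulator >= Step)
                {
                    previous = current;          // keep the last state for blending
                    current = Simulate(current);
                    accumulator -= Step;
                }
                return Lerp(previous, current, accumulator / Step);  // state to draw this frame
            }
        }

    The simulation stays deterministic and frame-rate independent, while the renderer never shows a raw step boundary, which is usually what the smoothness argument for kinematics is really after.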

    Read the article

  • Behaviour tree code example?

    - by jokoon
    http://altdevblogaday.org/2011/02/24/introduction-to-behavior-trees/ Obviously the most interesting article I found on this website. What do you think about it? It lacks code examples; do you know of any? I also read that state machines are not very flexible compared to behaviour trees... On top of that, I'm not sure if there is a true link between state machines and the state pattern... is there?
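
    Since the question asks for code, here is a minimal sketch of the core idea (not from the article): every node returns Success, Failure or Running, and composite nodes just combine their children's results.

        using System;

        public enum Status { Success, Failure, Running }

        public abstract class Node
        {
            public abstract Status Tick();
        }

        // Runs children in order; stops as soon as one does not succeed ("do A, then B, then C").
        public class Sequence : Node
        {
            private readonly Node[] children;
            public Sequence(params Node[] children) { this.children = children; }

            public override Status Tick()
            {
                foreach (var child in children)
                {
                    Status s = child.Tick();
                    if (s != Status.Success) return s;   // Failure or Running bubbles up
                }
                return Status.Success;
            }
        }

        // Tries children in order; stops as soon as one does not fail ("first thing that works").
        public class Selector : Node
        {
            private readonly Node[] children;
            public Selector(params Node[] children) { this.children = children; }

            public override Status Tick()
            {
                foreach (var child in children)
                {
                    Status s = child.Tick();
                    if (s != Status.Failure) return s;   // Success or Running bubbles up
                }
                return Status.Failure;
            }
        }

        // Leaves wrap game-specific checks and actions.
        public class ActionNode : Node
        {
            private readonly Func<Status> action;
            public ActionNode(Func<Status> action) { this.action = action; }
            public override Status Tick() { return action(); }
        }

    Unlike a state machine, nothing here stores "which state am I in"; the tree is simply re-evaluated from the root each tick (Running nodes aside), which is where the extra flexibility comes from. The state pattern, by contrast, is just one object-oriented way of implementing a state machine, so the two are related but not the same thing.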

    Read the article

  • Moving a 2D camera in the y direction

    - by Alex
    I'm developing a simple game for the iphone and am struggling to work out the best way for the camera to follow the main character. The following picture hightlights the three main components: There are 3 components to this: Circle - the main character Green line - terrain Black background The terrain is simply made from an array of points (approx 20 points per screen width). The terrain is moved in the x direction relative to the black background in order to keep the circle in its position shown. The distance to move the terrain is simply: movex = circle.position.x - terrain.position.x with a constant to fix the circle at some distance from the left of the screen. I am struggling to determine the best way to position the terrain in the y plane keep the focus in the character. I want to move the terrain in the y direction smoothly and not fix it to the position of the circle, so the circle can move in the y plane. If I take the same approach as the x positioning, the character is fixed at a point on the screen and the terrain moves. I could sample some terrain points either side of the character and produce an average, but in my implementation this was not smooth. I thought another approach might be to create a camera 'line' that is a smooth version of the terrain line and make the camerea follow this, but I'm not sure if this is the optimum solution. Any advice is much appreciated!
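
    A simple way to get the smooth y-follow (a sketch, not from the question) is to keep a separate camera y that chases a target value, the circle's y or a sampled terrain height, with exponential smoothing, so the camera eases toward it instead of copying it:

        using System;

        class CameraFollowY
        {
            public float CameraY { get; private set; }   // current camera offset in y
            private const float Stiffness = 4f;          // higher = snappier follow

            // Call once per frame; targetY can be the circle's y or a smoothed terrain sample.
            public void Update(float targetY, float dt)
            {
                float t = 1f - (float)Math.Exp(-Stiffness * dt);   // frame-rate independent blend
                CameraY += (targetY - CameraY) * t;
            }
        }

    The resulting CameraY is applied to the terrain's y exactly the way movex is applied to x. Averaging a few terrain points ahead of the circle as the target, and then smoothing that average like this, is usually enough to remove the jumpiness described.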

    Read the article

  • Calculating up-vector to avoid gimbal lock using euler angles

    - by jessejuicer
    I wish to orbit a camera around a sphere, yet the problem is that when the camera rotates so that it is at the north pole (and pointing down) or the south pole (and pointing up) of the sphere the camera doesn't handle itself very well. It spins rapidly until arriving 180 degrees in the opposite direction. I believe this is known as gimbal lock. I understand you can avoid this problem using quaternions. But I also read in another forum that it's possible to avoid this easily using euler angles as well. Which I would prefer to do. It was said that all you need to do is "calculate a proper up-vector every frame, and that avoids the problem entirely." Well, I tried aligning the up-vector with the vertical axis of the camera whenever the camera changed orientation, but this didn't seem to work. Meaning that the up-vector followed exactly the orientation of the camera's y-axis (or it's up vector), instead of using a constant up-vector aligned to the up-vector of the world (0, 1, 0). How exactly do I go about calculating a proper up-vector as my camera orientation changes to avoid the gimbal lock problem mentioned above?
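
    A sketch of the "recompute the up-vector every frame" idea (not from the question, and only one of several reasonable fixes): derive right and up from the current forward vector, and fall back to the previous right when forward becomes parallel to the world up at the poles.

        using Microsoft.Xna.Framework;   // Vector3; any vector-math library works the same way

        class OrbitBasis
        {
            private Vector3 lastRight = Vector3.Right;   // remembered so the basis survives the poles

            // forward = normalize(target - cameraPosition). Cross-product order follows a
            // right-handed convention; swap the operands for a left-handed setup.
            public void Compute(Vector3 forward, out Vector3 right, out Vector3 up)
            {
                forward.Normalize();

                right = Vector3.Cross(forward, Vector3.Up);
                if (right.LengthSquared() < 1e-6f)   // looking straight up or down: cross degenerates
                    right = lastRight;               // keep the previous right instead of flipping
                right.Normalize();

                up = Vector3.Cross(right, forward);  // orthogonal to both; not forced back to world up
                lastRight = right;
            }
        }

    Feeding this up into the look-at call each frame avoids the sudden 180-degree flip, because the up-vector is never required to stay glued to the world's y axis. The alternative, as noted, is to store the orientation as a quaternion and never convert through Euler angles at all.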

    Read the article

  • Undeclared Scope in Rock Paper Scissors Simple Game

    - by Rianelle
    #include <iostream> #include <string> #include <cstdlib> #include <ctime> using namespace std; bool win; int winnings; int draws; int loses; string comChoice; string playerChoice; void winGame () { cout << "You won! Play again?" <<endl; cout << "Type y/n" <<endl; char x; cin >> x; if (x == 'y') { beginGame(); } else if ('n'){ cout << "Game Stopped." <<endl; cout << "Number of Draws: " <<draws << endl; cout << "Number of Loses: " <<loses << endl; cout << "Number of Wins: " << winnings << endl; win = true; } } void drawGame (){ ++draws; cout << "Draw! Try again" << endl; return; } void lose () { cout << "You lose! Try again?" <<endl; cout << "Type y/n" <<endl; char feedback; cin >> feedback; if (feedback == 'y') { beginGame(); } else if ('n'){ cout << "Game Stopped." <<endl; cout << "Number of Draws: " <<draws << endl; cout << "Number of Loses: " <<loses << endl; cout << "Number of Wins: " << winnings << endl; } } void beginGame() { cout << "Welcome to the Rock, Paper and Scissors Game!" <<endl; cout << "Let's begin. Type <rock, paper, scissors> for your choice!" <<endl; cin >> playerChoice; srand(time(0)); int randomizer = 1+(rand()%3); if (randomizer == 1) comChoice = "rock"; if (randomizer == 2) comChoice = "paper"; if (randomizer == 3) comChoice = "scissors"; do { if (playerChoice == comChoice) { drawGame(); } if (playerChoice == "rock" && comChoice == "paper") ++loses; lose(); if (playerChoice == "rock" && comChoice == "scissors") ++winnings; winGame(); if (playerChoice == "paper" && comChoice == "rock") ++winnings; winGame(); if (playerChoice == "paper" && comChoice == "scissors") ++loses; lose(); if (playerChoice == "scissors" && comChoice == "rock") ++loses; lose(); if (playerChoice == "scissors" && comChoice == "paper") ++winnings; winGame(); }while (win != true); } int main () { beginGame(); return 0; }

    Read the article

  • Beginner C# image loading woes - NullReferenceException

    - by Seth Taddiken
    I keep getting a "NullReferenceException was unhandled" with "Object reference not set to an instance of an object." written under it. I have all of the images (PNG) correctly named and added to references. protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); backGround = Content.Load<Texture2D>("Cracked"); player1.playerBlock = Content.Load<Texture2D>("square"); player2.playerBlock = Content.Load<Texture2D>("square2"); }
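
    A hedged guess at the cause (not from the original post): if player1 or player2 is never constructed, the exception comes from dereferencing the player object, not from loading the textures. The Player type below is only illustrative of whatever class those fields have.

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            backGround = Content.Load<Texture2D>("Cracked");

            // If these fields were only declared (e.g. Player player1;) they are still null here.
            if (player1 == null) player1 = new Player();   // hypothetical Player class
            if (player2 == null) player2 = new Player();

            player1.playerBlock = Content.Load<Texture2D>("square");
            player2.playerBlock = Content.Load<Texture2D>("square2");
        }

    Setting a breakpoint on the first player1 line and checking which reference is actually null will confirm or rule this out.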

    Read the article

  • Game has noticeable frame drops but when run through a profiler it always runs smoothly

    - by felipedrl
    I'm trying to optimize my PC game but I can't find the bottleneck, since every time I run it through a profiler (gDEBugger) it runs smoothly. When running outside gDEBugger I get these annoying hiccups. It's not just the graphics; the sound also gets choppy. The drops are inconsistent across runs, i.e., sometimes I run the same scenario and get no drops at all, sometimes I get a few drops, and other times the game is consistently slow. The only constant is: when running through gDEBugger I ALWAYS get a smooth run. I'm suspecting something outside my game is interfering and causing these drops, but what in the hell does gDEBugger do that nullifies these drops? A higher process priority? Any ideas? Thanks in advance.

    Read the article

  • Problem animating in Unity/Orthello 2D. Can't move gameObject

    - by Nelson Gregório
    I have a enemy npc that moves left and right in a corridor. It's animated with 2 sprites using Orthello 2D Framework. If I untick the animation's play on start and looping, the npc moves correctly. If I turn it on, the npc tries to move but is pulled back to his starting position again and again because of the animation loop. If I turn looping off during runtime, the npc moves correctly again. What did I do wrong? Here's the npc code if needed. using UnityEngine; using System.Collections; public class Enemies : MonoBehaviour { private Vector2 movement; public float moveSpeed = 200; public bool started = true; public bool blockedRight = false; public bool blockedLeft = false; public GameObject BorderL; public GameObject BorderR; void Update () { if (gameObject.transform.position.x < BorderL.transform.position.x) { started = false; blockedRight = false; blockedLeft = true; } if (gameObject.transform.position.x > BorderR.transform.position.x) { started = false; blockedLeft = false; blockedRight = true; } if(started) { movement = new Vector2(1, 0f); movement *= Time.deltaTime*moveSpeed; gameObject.transform.Translate(movement.x,movement.y, 0f); } if(!blockedRight && !started && blockedLeft) { movement = new Vector2(1, 0f); movement *= Time.deltaTime*moveSpeed; gameObject.transform.Translate(movement.x,movement.y, 0f); } if(!blockedLeft && !started && blockedRight) { movement = new Vector2(-1, 0f); movement *= Time.deltaTime*moveSpeed; gameObject.transform.Translate(movement.x,movement.y, 0f); } } }

    Read the article

  • OpenGL VertexBuffer won't render in GLFW3

    - by sm81095
    So I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it yet and how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f()), and such, I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

    Read the article
