Search Results

Search found 25377 results on 1016 pages for 'development'.


  • How can I do Mouse Selection In OpenGL 3.0?

    - by NoobScratcher
    Hello, I'm a fairly experienced programmer: I've made my own 2D games in SDL and built a GUI in 3D using both old and modern OpenGL, but I'm having problems trying to click on 3D models in OpenGL. To be honest, I have no idea what to do. Do I read back the area that I've clicked, or is there some other approach? I'm sure this has been asked before, but I just don't know where to start. Using: OpenGL 3.0, Win32 API, C++.
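
    One approach often used for this (not necessarily what the asker will end up with) is colour picking: render each selectable model in a flat, unique colour, read back the single pixel under the cursor, and decode it into an object id. A minimal C++ sketch follows; the drawScene(pickingPass) helper is a hypothetical name, and the platform headers are assumed to be set up (on Win32, <windows.h> before <GL/gl.h>).

      #include <GL/gl.h>
      #include <cstdint>

      // Encode an object id into an RGB colour (supports up to 2^24 objects);
      // used when drawing the flat-coloured picking pass.
      static void idToColor(uint32_t id, unsigned char rgb[3]) {
          rgb[0] = (id >> 16) & 0xFF;
          rgb[1] = (id >> 8)  & 0xFF;
          rgb[2] =  id        & 0xFF;
      }

      // Call on mouse click with window-space coordinates.
      uint32_t pickObject(int mouseX, int mouseY, int windowHeight) {
          // 1. Render the scene with flat, unique per-object colours and no lighting.
          //    (drawScene(true) is assumed to do exactly that.)
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
          // drawScene(true);

          // 2. Read the single pixel under the cursor; OpenGL's origin is bottom-left.
          unsigned char pixel[3] = {0, 0, 0};
          glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);

          // 3. Decode the colour back into an object id (0 = background / nothing hit).
          return (uint32_t(pixel[0]) << 16) | (uint32_t(pixel[1]) << 8) | uint32_t(pixel[2]);
      }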

    Read the article

  • Server-side memory efficiency and threading for a turn-based game

    - by SkeletorFromEterenia
    I've been programming a turn-based war game (along with its engine) for some years now, and I'm having quite a hard time figuring out what the game's server architecture should look like. Most game-server architecture articles I've found focus on either FPS or MMO games, which doesn't really fit, since I want many matches with 1-16 players each on my server, with every match played in turn-based mode. My chief concern is memory usage: the most basic approach of loading every game in progress completely into RAM seems quite inefficient, so is there a suitable strategy for loading only the parts that are actually needed? My other question is how to design the threading on the server; I suspect a single thread could become a problem, because the game, or part of it, may have to be loaded from the database. I would be very happy if you could share your knowledge or point me to material on this topic.
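
    One way to make "load only the needed bits" concrete is a small cache that keeps recently active matches resident in RAM and pulls everything else from the database on demand. The C++ sketch below is illustrative only; Match, loadMatchFromDatabase and the eviction policy are assumptions, not part of the question.

      #include <memory>
      #include <mutex>
      #include <unordered_map>

      struct Match;                                        // full in-RAM state of one 1-16 player game
      std::shared_ptr<Match> loadMatchFromDatabase(int matchId);   // assumed blocking DB call

      class MatchCache {
      public:
          explicit MatchCache(std::size_t maxResident) : maxResident_(maxResident) {}

          // Returns the match, loading it from the database only when it is not resident.
          std::shared_ptr<Match> acquire(int matchId) {
              std::lock_guard<std::mutex> lock(mutex_);
              auto it = resident_.find(matchId);
              if (it != resident_.end()) return it->second;
              if (resident_.size() >= maxResident_) evictOne();   // e.g. least recently used
              auto match = loadMatchFromDatabase(matchId);
              resident_[matchId] = match;
              return match;
          }

      private:
          void evictOne() { /* persist the least recently used entry and erase it */ }

          std::size_t maxResident_;
          std::mutex mutex_;
          std::unordered_map<int, std::shared_ptr<Match>> resident_;
      };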

    Read the article

  • Server-side Architecture for Online Game

    - by Draiken
    Hi, basically I have a game client that has to communicate with a server for almost every action it takes. The game is in Java (using LWJGL), and I am about to start writing the server. Normally a single client communicates with the server on its own, but later on I will need several clients to work together for some functionality. I've already read that the authentication server should be separated, and I intend to do that. The problem is that I am completely inexperienced in this kind of server-side programming; all I've ever written are JSF web applications. I imagine I'll use socket connections for pretty much all game communication, since HTTP is comparatively slow, but I still don't really know where to start on my server. I would appreciate reading material or guidelines on where to start, what architecture the game server should have, and maybe some suggestions on frameworks that could help with the client-server communication. I've looked into JNAG, but I have no experience with this kind of thing, so I can't really tell whether it is a solid messaging layer. Any help is appreciated. Thanks!

    Read the article

  • Using orientation to calculate position on Windows Phone 7

    - by Lavinski
    I'm using the Motion API and I'm trying to figure out a control scheme for the game I'm currently developing. What I'm trying to achieve is for the orientation of the device to correlate directly to a position, such that tilting the phone forward and to the left represents the top-left position, and back and to the right represents the bottom-right position. Photos to make it clearer (the red dot would be the calculated position): Forward and Left; Back and Right. Now for the tricky bit: I also have to make sure that the values take into account the left-landscape and right-landscape device orientations (portrait is the default, so no calculation is needed for it). Has anyone done anything like this? Notes: I've tried using the yaw, pitch, roll and Quaternion readings. Sample:

      // Get device facing vector
      public static Vector3 GetState()
      {
          lock (lockable)
          {
              var down = Vector3.Forward;
              var direction = Vector3.Transform(down, state);
              switch (Orientation)
              {
                  case Orientation.LandscapeLeft:
                      return Vector3.TransformNormal(direction, Matrix.CreateRotationZ(-rightAngle));
                  case Orientation.LandscapeRight:
                      return Vector3.TransformNormal(direction, Matrix.CreateRotationZ(rightAngle));
              }
              return direction;
          }
      }
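
    For illustration only, here is a language-agnostic sketch (written in C++) of the kind of mapping described above: clamp pitch and roll into a fixed range and scale them to screen coordinates. The angle range and axis conventions are assumptions, not taken from the question.

      #include <algorithm>

      // Map device tilt (radians) to a 2D screen position.
      // maxTilt is the tilt at which the dot reaches the screen edge (assumed value).
      void tiltToScreen(float pitch, float roll, float screenW, float screenH,
                        float maxTilt, float& outX, float& outY) {
          // Clamp to [-maxTilt, +maxTilt] so extreme tilts stay on screen.
          float p = std::max(-maxTilt, std::min(maxTilt, pitch));
          float r = std::max(-maxTilt, std::min(maxTilt, roll));

          // Normalise to [0, 1]: full forward/left maps to 0, full back/right to 1.
          float nx = (r / maxTilt + 1.0f) * 0.5f;
          float ny = (p / maxTilt + 1.0f) * 0.5f;

          outX = nx * screenW;   // left .. right
          outY = ny * screenH;   // top .. bottom
      }

    For the landscape orientations, the pitch and roll inputs would be swapped or negated before the mapping, mirroring the LandscapeLeft/LandscapeRight branches in the sample above.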

    Read the article

  • 3D Camera Problem

    - by Chris
    I allow the user to look around the scene by holding down the left mouse button and moving the mouse. The problem is this: facing one direction, when I move the mouse up the view tilts up, and when I move it down the view tilts down. If I spin around 180 degrees, left and right still work fine, but now moving the mouse up tilts the view down, and moving the mouse down tilts the view up. This is the code I am using; can anyone see what the problem with the logic is?

      var viewDir = g_math.subVector(target, g_eye);
      var rotatedViewDir = [];
      rotatedViewDir[0] = (Math.cos(g_mouseXDelta * g_rotationDelta) * viewDir[0]) - (Math.sin(g_mouseXDelta * g_rotationDelta) * viewDir[2]);
      rotatedViewDir[1] = viewDir[1];
      rotatedViewDir[2] = (Math.cos(g_mouseXDelta * g_rotationDelta) * viewDir[2]) + (Math.sin(g_mouseXDelta * g_rotationDelta) * viewDir[0]);
      viewDir = rotatedViewDir;
      rotatedViewDir[0] = viewDir[0];
      rotatedViewDir[1] = (Math.cos(g_mouseYDelta * g_rotationDelta * -1) * viewDir[1]) - (Math.sin(g_mouseYDelta * g_rotationDelta * -1) * viewDir[2]);
      rotatedViewDir[2] = (Math.cos(g_mouseYDelta * g_rotationDelta * -1) * viewDir[2]) + (Math.sin(g_mouseYDelta * g_rotationDelta * -1) * viewDir[1]);
      g_lookingDir = rotatedViewDir;
      var newtarget = g_math.addVector(rotatedViewDir, g_eye);

    Read the article

  • Game thread, render thread, animation/inverse kinematics, and synchronization

    - by user782220
    In a multithreaded setup with a game logic thread and a render thread, with some kind of skinned mesh animation plus inverse kinematics and so on, how does animation work? Does the game logic thread just update a number saying the animation is at time T, with the render thread inferring the rest? Who owns the skinned mesh animation, the game logic thread or the render thread? How is it stored in the scene graph, if it is stored there at all? When the game logic updates, does it compute the skinned mesh animation and the inverse kinematics and store the result directly in the scene graph, or is it stored indirectly, with the render thread doing the computation?
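
    One common arrangement (among several) is for the game logic thread to own and evaluate the animation/IK state, publishing an immutable pose snapshot that the render thread copies out each frame. Below is a minimal C++ sketch of that hand-off; the Pose type and the surrounding engine are assumptions, not taken from the question.

      #include <mutex>
      #include <vector>

      struct Pose { std::vector<float> boneMatrices; };   // flattened 4x4 bone matrices

      class PoseExchange {
      public:
          // Game thread: publish the freshly evaluated pose (animation + IK result).
          void publish(Pose p) {
              std::lock_guard<std::mutex> lock(mutex_);
              pending_ = std::move(p);
              hasNew_ = true;
          }
          // Render thread: copy out the newest pose if one arrived since last frame.
          bool consume(Pose& out) {
              std::lock_guard<std::mutex> lock(mutex_);
              if (!hasNew_) return false;
              out = pending_;
              hasNew_ = false;
              return true;
          }
      private:
          std::mutex mutex_;
          Pose pending_;
          bool hasNew_ = false;
      };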

    Read the article

  • How to design good & continuous tiles

    - by Mikalichov
    I have trouble designing tiles so that, when assembled, they don't look like tiles but like one homogeneous surface. For example, in the first image below, even though the main part of the grass is only one tile, you don't "see" the grid; you can tell where it is if you look carefully, but it is not obvious. Whereas when I design tiles, all you can see is "oh jeez, the same tile 64 times", a bit like the second image (taken from a gamedev.stackexchange question; no criticism of that game intended, but it proves my point, and its tile design is actually better than what I manage). I think the main problem is that I design the tiles to be independent: there is no junction between two tiles placed next to each other. Making the tiles more "continuous" would presumably have a smoother effect, but I can't manage to do it, and it seems overly complex to me. It is probably simpler than I think once you know how, but I couldn't find a tutorial on this specific point. Is there a known method for designing continuous/homogeneous tiles? (My terminology might be completely wrong; don't hesitate to correct me.)

    Read the article

  • problem adding bumpmap to textured gluSphere in JOGL

    - by ChocoMan
    I currently have one texture on a gluSphere representing the Earth, and it displays perfectly, but I'm having trouble figuring out how to add a bumpmap as well. The bumpmap resides in "res/planet/earth/earthbump1k.jpg". Here is the code I have for the regular texture:

      gl.glTranslatef(xPath, 0, yPath + zPos);
      gl.glColor3f(1.0f, 1.0f, 1.0f); // base color for earth
      earthGluSphere = glu.gluNewQuadric();
      colorTexture.enable(); // enable texture
      colorTexture.bind();   // bind texture
      // draw sphere...
      glu.gluDeleteQuadric(earthGluSphere);
      colorTexture.disable(); // texturing

      public void loadPlanetTexture(GL2 gl)
      {
          InputStream colorMap = null;
          try
          {
              colorMap = new FileInputStream("res/planet/earth/earthmap1k.jpg");
              TextureData data = TextureIO.newTextureData(colorMap, false, null);
              colorTexture = TextureIO.newTexture(data);
              colorTexture.getImageTexCoords();
              colorTexture.setTexParameteri(GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
              colorTexture.setTexParameteri(GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
              colorMap.close();
          }
          catch(IOException e)
          {
              e.printStackTrace();
              System.exit(1);
          }
          // Set material properties
          gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
          gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
          colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_S);
          colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_T);
      }

    How would I add the bumpmap to the same gluSphere as well?

    Read the article

  • Component-wise GLSL vector branching

    - by Gustavo Maciel
    I'm aware that it is usually a BAD idea to operate on a GLSL vec's components separately. For example:

      // Use intrinsic functions; they do the calculation on 4 components at a time.
      float dot = v1.x*v2.x + v1.y*v2.y + v1.z*v2.z; // NEVER
      float dot = dot(v1, v2);                       // YES

      // Multiplying one by one is not good either, since the ALU can do all 4 components at a time.
      vec3 mul = vec3(v1.x * v2.x, v1.y * v2.y, v1.z * v2.z); // NEVER
      vec3 mul = v1 * v2;

    I've been struggling to work out whether there are equivalent operations for branching. For example:

      vec4 Overlay(vec4 v1, vec4 v2, vec4 opacity)
      {
          bvec4 less = lessThan(v1, vec4(0.5));
          vec4 blend;
          for(int i = 0; i < 4; ++i)
          {
              if(less[i])
                  blend[i] = 2.0 * v1[i]*v2[i];
              else
                  blend[i] = 1.0 - 2.0 * (1.0 - v1[i])*(1.0 - v2[i]);
          }
          return v1 + (blend-v1)*opacity;
      }

    This is an Overlay operator that works component-wise. I'm not sure this is the best way to do it, since I'm afraid the for and the if could become a bottleneck later. Tl;dr: can I branch component-wise? If so, how can I optimize that Overlay function with it?

    Read the article

  • How is this lighting effect done?

    - by Mike
    This is the most beautiful 2d lighting I have ever seen. Does anyone know how he went about doing it? http://www.youtube.com/watch?v=BIQRhOFkvQY http://www.youtube.com/watch?v=tnTYXPuecMs http://www.youtube.com/watch?v=rhC_jVM8IYU http://www.youtube.com/watch?v=_Aw5BdjWqqU Or download it here: http://grantkot.com/PollutedPlanet/publish.htm edit: I am not asking how the particles are simulated; I don't care about the physics.

    Read the article

  • Seeking a C/C++ OBJ geometry reader/writer that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation: read the geometry, immediately write it, and a diff of the source OBJ against the one just written should be identical. Every OBJ writing utility I've been able to find online fails this test. I am writing small command-line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web, and many of the popular libraries also modify the geometry representation. For example, Nate Robins's GLUT library includes the GLM library, which converts quads to triangles and also reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And, god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay cache-friendly on our weird little low-memory mobile devices.
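
    A minimal sketch of the round-trip requirement described above, assuming it is acceptable for the tools to keep every line verbatim and edit those strings in place; this is illustrative only, not a full OBJ parser.

      #include <fstream>
      #include <string>
      #include <vector>

      struct ObjFile {
          std::vector<std::string> lines;   // every line, verbatim, in original order
      };

      bool readObj(const std::string& path, ObjFile& out) {
          std::ifstream in(path);
          if (!in) return false;
          std::string line;
          while (std::getline(in, line))
              out.lines.push_back(line);    // tools then edit these strings in place
          return true;
      }

      bool writeObj(const std::string& path, const ObjFile& obj) {
          // Note: writes '\n' line endings; a byte-identical round trip also
          // requires preserving the original line endings and trailing newline.
          std::ofstream outFile(path);
          if (!outFile) return false;
          for (const std::string& line : obj.lines)
              outFile << line << '\n';
          return true;
      }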

    Read the article

  • Collision resolution - Character walking on ascending ground

    - by marcg11
    I don't know whether the solution to this problem is straightforward, but I really don't know how to handle collision resolution in a game where the player walks on rising ground that is not flat. How should the player's position on the y axis depend on the ground at the player's x and z (OpenGL coordinates)? And what if the floor's slope is too steep for the player to go up; how do you handle that? I don't need any code, just a simple explanation would be great.

    Read the article

  • Time calculation between OpenGL update calls

    - by Vijayendra
    In XNA, the system calls the update and draw functions with timing information, such as how much time has passed since update was last called. This makes it easy to integrate over time and do animation calculations accordingly. But I don't see any such mechanism in OpenGL; it seems OpenGL requires programmers to roll their own implementation, which could be buggy or inefficient. Is there any standard (and efficient) code that demonstrates this practice with OpenGL?
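
    OpenGL itself indeed has no timing facility; a common pattern is to measure the elapsed time per frame in the application, for example with std::chrono in C++ (libraries such as GLFW or SDL expose equivalent timers). A minimal sketch, with the actual update/render calls left as comments:

      #include <chrono>

      int main() {
          using clock = std::chrono::steady_clock;
          auto previous = clock::now();

          for (int frame = 0; frame < 3; ++frame) {    // stand-in for the real render loop
              auto now = clock::now();
              float dt = std::chrono::duration<float>(now - previous).count();  // seconds since last frame
              previous = now;

              // update(dt);   // advance animations by dt, much like XNA's GameTime
              // render();
              (void)dt;
          }
          return 0;
      }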

    Read the article

  • Bullet Physics: Transform a body after adding it

    - by Mathias Hölzl
    I would like to transform a rigid body after adding it to the btDiscreteDynamicsWorld. When I use the CF_KINEMATIC_OBJECT flag I am able to transform it, but it behaves as static (no collision response/gravity). When I don't use the CF_KINEMATIC_OBJECT flag, the transform doesn't get applied. So how do I transform non-static objects in Bullet? Demo code:

      btBoxShape* colShape = new btBoxShape(btVector3(SCALING*1, SCALING*1, SCALING*1));

      /// Create Dynamic Objects
      btTransform startTransform;
      startTransform.setIdentity();
      btScalar mass(1.f);

      // rigidbody is dynamic if and only if mass is non zero, otherwise static
      bool isDynamic = (mass != 0.f);
      btVector3 localInertia(0,0,0);
      if (isDynamic)
          colShape->calculateLocalInertia(mass, localInertia);

      btDefaultMotionState* myMotionState = new btDefaultMotionState();
      btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, colShape, localInertia);
      btRigidBody* body = new btRigidBody(rbInfo);
      body->setCollisionFlags(body->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
      body->setActivationState(DISABLE_DEACTIVATION);
      m_dynamicsWorld->addRigidBody(body);

      startTransform.setOrigin(SCALING*btVector3(btScalar(0), btScalar(20), btScalar(0)));
      body->getMotionState()->setWorldTransform(startTransform);
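
    For reference, a hedged sketch of one way a dynamic (non-kinematic) body already in the world can be repositioned in Bullet: move the body's own transform rather than only its motion state, and reset its velocities so it does not keep the old momentum. Whether this fits depends on why the body is being moved (a one-off teleport vs. continuous motion).

      #include <btBulletDynamicsCommon.h>

      void teleportRigidBody(btRigidBody* body, const btVector3& newOrigin) {
          btTransform t;
          t.setIdentity();
          t.setOrigin(newOrigin);

          body->setCenterOfMassTransform(t);                 // moves the simulated body
          if (body->getMotionState())
              body->getMotionState()->setWorldTransform(t);  // keep the graphics transform in sync

          body->setLinearVelocity(btVector3(0, 0, 0));
          body->setAngularVelocity(btVector3(0, 0, 0));
          body->activate(true);                              // wake it so gravity applies again
      }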

    Read the article

  • GUI device for throwing a ball

    - by Fredrik Johansson
    The hero has a ball, which must be thrown with accuracy onto a court on iPhone/iPad. The player is seen from above, in a 2D view. In gameplay, the player's reach is between 1/15 and 1/6 of the height of the iPhone screen. The player will run, try to outmaneuver his opponent, and then throw the ball at a specific location, which is guarded by the opponent (also shown on screen). The player is controlled by a joystick, and that works OK, but how should I control the throw? Maybe someone can propose a third control method? I've tried the following two approaches.

    Joystick: The hero has a reach of 1 meter, marked with a semi-opaque circle around the player. The ball can be moved by a joystick. When the joystick is moved south, the ball is moved south within the reach circle; there is a direct coupling between the joystick and the position of the ball, i.e. when the joystick is moved fully south, the ball is at the southernmost point of the player's reach. At each touch update the speed is calculated, and the Box2D ball position and speed are updated. Note that the ball will never be moved outside the reach as long as the player pushes the joystick. The ball is thrown by swiping the joystick to make the ball move and then releasing the joystick; at release, the ball is given a smoothed version of the joystick's speed. Joystick problem: the throwing accuracy gets bad, because the joystick cannot be very big, so a small movement results in quite a large movement of the ball. And if the user does not release before reaching the joystick's maximum end point, the ball stops, and when the user then releases the joystick the speed of the ball is zero. Bad...

    Touch pad: A force is applied to the ball by a swipe on a touchpad. The ball is released when the swipe ends, or when the ball is moved outside the player's reach. As there is no one-to-one mapping between the swipe and the ball position, the precision can be improved: a large swipe can result in a small ball movement. Touch pad problem: a touchpad is less intuitive. Users do not seem to know what to do with it. Some tap the touchpad, and then the ball just falls to the ground. As there is no one-to-one mapping, the ball can be moved outside the reach and then simply falls to the ground. It's a bit hard to control the ball, especially if the player is also moving.

    Read the article

  • Objective C - Aggro with Images

    - by Will
    I have three UIImageViews: enemy1, enemy1AggroBox and mainSprite. What I want is that when mainSprite and enemy1AggroBox intersect, enemy1 starts moving towards mainSprite; basically, aggro for a game. if (CGRectIntersectsRect(mainSprite.frame, enemy1AggroBox.frame)) { // Code here } My plan would be to call this method in viewDidLoad. I'm not using any sort of framework like cocos2d or OpenGL ES. If you need to see any more code, just ask.
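
    The chase itself usually reduces to normalising the vector from the enemy to the player and stepping along it each frame. A small, framework-free sketch of that idea in C++ follows; the Vec2 struct and the speed parameter are illustrative assumptions, not from the question.

      #include <cmath>

      struct Vec2 { float x, y; };

      // Move 'enemy' toward 'target' by at most speed*dt, stopping when it arrives.
      Vec2 moveTowards(Vec2 enemy, Vec2 target, float speed, float dt) {
          float dx = target.x - enemy.x;
          float dy = target.y - enemy.y;
          float dist = std::sqrt(dx * dx + dy * dy);
          float step = speed * dt;
          if (dist <= step || dist == 0.0f)
              return target;                   // close enough: snap onto the target
          enemy.x += dx / dist * step;         // normalised direction * step length
          enemy.y += dy / dist * step;
          return enemy;
      }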

    Read the article

  • Board Game Design in Cocos2d

    - by object2.0
    Hi folks, I am going to start a chess-like board game, and for that I have reviewed a number of available options. One is http://www.mapeditor.org/, with which you can create grid-based games. Another option is GeekGameBoard for iPhone, available at http://mooseyard.lighthouseapp.com/projects/23201-geekgameboard. Now I would like your expert opinion: would it be better to make the game in cocos2d using the first option or the second? Both look promising to me and give good control over board design. PS: sorry for the duplicate; I only found out about http://gamedev.stackexchange.com/ after posting this on Stack Exchange, so I am posting it here again as I feel it is a more relevant board.

    Read the article

  • Looking for a simple web interface with Subversion support and ticket/issue tracker [closed]

    - by Stefan Andre Brannfjell
    I am working on a small project with a few programmers on the job. We are using Subversion to commit updates and keep all developers up to date on their workstations. However, we have yet to find a suitable web interface for it. I have tried Redmine, but the installation process was extremely bothersome and involved, and once I got it working I found it slow; it did not meet my expectations and seems a bit complex for our needs. I would prefer a solution that supports the lighttpd web server, but that seems very hard to come by; those I have found appear to support only Apache. Functionality I wish for the website: log in to an SVN account, view SVN logs, view and create issues and todo lists, and view SVN diffs. Do you have any open-source recommendations that I can try out? I will appreciate any kind of reply. :) Edit: I want to host the website on our own servers.

    Read the article

  • RGB values from an image into a one-dimensional array in C#

    - by velocityxyz
    I was wondering whether there is a way to read RGB values from an image into a one-dimensional array in C#. If that doesn't make sense: in Java I would do something like this.

      int[] pixels;
      BufferedImage image = getClass().getResourceAsStream("asdfghjkl.png");
      int w = image.getWidth();
      int h = image.getHeight();
      pixels = new int[w * h];
      image.getRGB(0, 0, w, h, pixels, 0, w);

    So any help, or a pointer in the right direction, would be great.

    Read the article

  • libgdx - removing the circle outline rendered on Box2d CircleShape

    - by Brett
    How can I remove the outline on the CircleShape below? CircleShape circle = new CircleShape(); circle.setRadius(1f); ... which I draw using ... batch.draw(textureRegion, position.x - 1, position.y - 1, 1f, 1f, 2, 2, 1, 1, angle); I use this to set the body for a Box2D collision, but I get a silly circle drawn around my texture in libGDX, i.e. my textured sprite (ball) has a circle over the top of it with a line running from the center along the radius. Any ideas on how to remove the overlying circle lines?

    Read the article

  • Lighting-Reflectance Models & Licensing Issues

    - by codey
    Generally, or specifically, is there any licensing issue with using any of the well known lighting/reflectance models (i.e. the BRDFs or other distribution or approximation functions): Phong, Blinn–Phong, Cook–Torrance, Blinn-Torrance-Sparrow, Lambert, Minnaert, Oren–Nayar, Ward, Strauss, Ashikhmin-Shirley and common modifications where applicable, such as: Beckmann distribution, Blinn distribution, Schlick's approximation, etc. in your shader code utilised in a commercial product? Or is it a non-issue?

    Read the article

  • Nothing drawing on screen OpenGL with GLSL

    - by codemonkey
    I hate to be asking this kind of question here, but I am at a complete loss as to what is going wrong, so please bear with me. I am trying to render a single cube (voxel) in the center of the screen, through OpenGL with GLSL on Mac I begin by setting up everything using glut glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH); glutInitWindowSize(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); glutCreateWindow("Cubez-OSX"); glutReshapeFunc(reshape); glutDisplayFunc(render); glutIdleFunc(idle); _electricSheepEngine=new ElectricSheepEngine(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); _electricSheepEngine->initWorld(); glutMainLoop(); Then inside the engine init camera & projection matrices: cameraPosition=glm::vec3(2,2,2); cameraTarget=glm::vec3(0,0,0); cameraUp=glm::vec3(0,0,1); glm::vec3 cameraDirection=glm::normalize(cameraPosition-cameraTarget); cameraRight=glm::cross(cameraDirection, cameraUp); cameraRight.z=0; view=glm::lookAt(cameraPosition, cameraTarget, cameraUp); lensAngle=45.0f; aspectRatio=1.0*(windowWidth/windowHeight); nearClippingPlane=0.1f; farClippingPlane=100.0f; projection=glm::perspective(lensAngle, aspectRatio, nearClippingPlane, farClippingPlane); then init shaders and check compilation and bound attributes & uniforms to be correctly bound (my previous question) These are my two shaders, vertex: #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } and fragment: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } init the cube: setPosition(glm::vec3(0,0,0)); struct voxelData data[]={ //front face {{-1.0, -1.0, 1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, 1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, 1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, 1.0}, {0.0, 1.0, 1.0}}, //back face {{-1.0, -1.0, -1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, -1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, -1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, -1.0}, {0.0, 1.0, 1.0}} }; glGenBuffers(1, &modelVerticesBufferObject); glBindBuffer(GL_ARRAY_BUFFER, modelVerticesBufferObject); glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); const GLubyte indices[] = { // Front 0, 1, 2, 2, 3, 0, // Back 4, 6, 5, 4, 7, 6, // Left 2, 7, 3, 7, 6, 2, // Right 0, 4, 1, 4, 1, 5, // Top 6, 2, 1, 1, 6, 5, // Bottom 0, 3, 7, 0, 7, 4 }; glGenBuffers(1, &modelFacesBufferObject); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelFacesBufferObject); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); and then the render call: glClearColor(0.52, 0.8, 0.97, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); //use the shader glUseProgram(shaderProgram); //enable attributes in program glEnableVertexAttribArray(shaderAttribute_position); glEnableVertexAttribArray(shaderAttribute_color); //model matrix using model position vector glm::mat4 mvp=projection*view*voxel->getModelMatrix(); glUniformMatrix4fv(shaderAttribute_mvp, 1, GL_FALSE, glm::value_ptr(mvp)); glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_position, // attribute 3, // number of elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements 0 // offset of first element ); glBindBuffer(GL_ARRAY_BUFFER, 
voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_color, // attribute 3, // number of colour elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements (GLvoid *)(offsetof(struct voxelData, color3D)) // offset of colour data ); //draw the model by going through its elements array glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject); int bufferSize; glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize); glDrawElements(GL_TRIANGLES, bufferSize/sizeof(GLushort), GL_UNSIGNED_SHORT, 0); //close up the attribute in program, no more need glDisableVertexAttribArray(shaderAttribute_position); glDisableVertexAttribArray(shaderAttribute_color); but on screen all I get is the clear color :$ I generate my model matrix using: modelMatrix=glm::translate(glm::mat4(1.0), position); which in debug turns out to be for the position of (0,0,0): |1, 0, 0, 0| |0, 1, 0, 0| |0, 0, 1, 0| |0, 0, 0, 1| Sorry for such a question, I know it is annoying to look at someone's code, but I promise I have tried to debug around and figure it out as much as I can, and can't come to a solution Help a noob please? EDIT: Full source here, if anyone wants

    Read the article

  • How can I perform 2D side-scroller collision checks in a tile-based map?

    - by bill
    I am trying to create a game where you have a player that can move horizontally and jump, a bit like Mario, although it isn't a side-scroller. I'm using a 2D array to implement a tile map. My problem is that I don't understand how to check for collisions with this implementation. After spending about two weeks thinking about it, I've got two possible solutions, but both of them have problems. Let's say that my map is defined by the following tiles: 0 = sky, 1 = player, 2 = ground. The data for the map itself might look like:

      00000
      10002
      22022

    For solution 1, I'd move the player (the 1) a complete tile at a time and update the map directly. This makes collision easy, because you can check whether the player is standing on the ground simply by looking at the tile directly below the player:

      // x and y are the tile coordinates of the player. The tile origin is the upper-left.
      if (grid[x][y+1] == 2) {
          // The player is standing on top of a ground tile.
      }

    The problem with this approach is that the player moves in discrete tile steps, so the animation isn't smooth. For solution 2, I thought about moving the player by pixel coordinates and not updating the tile map. This makes the animation much smoother, because I have a smaller movement unit per frame. However, it means I can't accurately store the player in the tile map, because sometimes he would logically be between two tiles. The bigger problem is that I think the only way to check for collision is to use Java's intersection method, which means the player would need to be at least one pixel "into" the ground to register a collision, and that won't look good. How can I solve this problem?
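
    For what it's worth, a common middle ground is to keep pixel-precision movement but derive the colliding tiles from the player's bounding box by integer division with the tile size, so no "one pixel into the ground" intersection test is needed. A small illustrative sketch in C++ follows; the tile size, map layout and Player struct are assumptions.

      #include <vector>

      const int TILE = 16;                              // assumed tile size in pixels
      const int GROUND = 2;                             // tile id for solid ground

      // grid[row][col]; row 0 is the top of the map.
      bool isSolid(const std::vector<std::vector<int>>& grid, int col, int row) {
          if (row < 0 || row >= (int)grid.size()) return false;
          if (col < 0 || col >= (int)grid[row].size()) return false;
          return grid[row][col] == GROUND;
      }

      struct Player { float x, y, w, h, vy; bool onGround; };   // pixel-space bounding box

      // Apply vertical movement, then snap to the top of any ground tile the feet entered.
      void moveVertically(Player& p, const std::vector<std::vector<int>>& grid, float dy) {
          p.y += dy;
          p.onGround = false;
          if (dy > 0) {                                 // falling: check the tiles under the feet
              int footRow  = (int)((p.y + p.h) / TILE);
              int leftCol  = (int)(p.x / TILE);
              int rightCol = (int)((p.x + p.w - 1) / TILE);
              for (int col = leftCol; col <= rightCol; ++col) {
                  if (isSolid(grid, col, footRow)) {
                      p.y = footRow * TILE - p.h;       // snap the feet onto the tile top
                      p.vy = 0;
                      p.onGround = true;
                      break;
                  }
              }
          }
      }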

    Read the article

  • Strange rendering in XNA/Monogame

    - by Gerhman
    I am trying to render G-Code generated for a 3d-printer as the printed product by reading the file as line segments and the drawing cylinders with the diameter of the filament around the segment. I think I have managed to do this part right because the vertex I am sending to the graphics device appear to have been processed correctly. My problem I think lies somewhere in the rendering. What basically happens is that when I start rotating my model in the X or Y axis then it renders perfectly for half of the rotation but then for the other half it has this weird effect where you start seeing through the outer filament into some of the shapes inside. This effect is the strongest with X rotations though. Here is a picture of the part of the rotation that looks correct: And here is one that looks horrible: I am still quite new to XNA and/Monogame and 3d programming as a whole. I have no idea what could possibly be causing this and even less of an idea of what this type of behavior is called. I am guessing this has something to do with rendering so have added the code for that part: protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Black); basicEffect.World = world; basicEffect.View = view; basicEffect.Projection = projection; basicEffect.VertexColorEnabled = true; basicEffect.EnableDefaultLighting(); GraphicsDevice.SetVertexBuffer(vertexBuffer); RasterizerState rasterizerState = new RasterizerState(); rasterizerState.CullMode = CullMode.CullClockwiseFace; rasterizerState.ScissorTestEnable = true; GraphicsDevice.RasterizerState = rasterizerState; foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes) { pass.Apply(); GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexBuffer.VertexCount); } base.Draw(gameTime); } I don't know if it could be because I am shading something that does not really have a texture. I am using this custom vertex declaration I found on some tutorial that allows me to store a vertex with a position, color and normal: public struct VertexPositionColorNormal { public Vector3 Position; public Color Color; public Vector3 Normal; public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration ( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0), new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0) ); } If any of you have ever seen this type of thing please help. Also, if you think that the problem might lay somewhere else in my code then please just request what part you would like to see in the comments section.

    Read the article

  • Rendering Unity across multiple monitors

    - by N0xus
    At the moment I am trying to get Unity to run across two monitors. I've done some research and know that this is, strictly speaking, possible; there is a workaround where you basically have to fudge your window size to get Unity to render across both monitors. What I've done is create a new custom screen resolution that spans the width of both of my monitors (3840 x 1080, as seen in the first image). However, when I go to run my Unity game's exe, that size isn't available: my custom size should be at the very bottom of the resolution list, but isn't. Is there something I haven't done, or have missed, that will get Unity to pick up my custom screen size when running the game through its exe? Oddly enough, inside the Unity editor my custom screen size is picked up, and I can set the game window to it. Is there something I have forgotten to do when I build and run the game from the File menu? Has anyone beaten this issue before?

    Read the article
