Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.

Page 562/1022

  • RPG Equipped Item System

    - by Jimmt
    I'm making a 2D RPG with libgdx and Java. I have an Inventory class with an Array of Items, and now I want to be able to equip items onto the player. Which would be more manageable:

    - giving every Item an "equipped" boolean flag,
    - keeping an "equipped" array in the Player class, or
    - keeping individual equipped fields in the Player class, e.g.

        private Item equippedWeapon;
        private Item equippedArmor;

        public void equipWeapon(Item weapon) {
            equippedWeapon = weapon;
        }

    Or is there another way entirely? Thanks.
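    For what it's worth, here is a minimal sketch of a slot-based alternative (the enum, class and method names are invented for illustration, not taken from the question): the player keeps a map from slot to item, so adding new slot types later doesn't require new fields, or flags on every item.

        import java.util.EnumMap;
        import java.util.Map;

        enum Slot { WEAPON, ARMOR, ACCESSORY }

        class Item {
            final String name;
            final Slot slot;
            Item(String name, Slot slot) { this.name = name; this.slot = slot; }
        }

        class Player {
            // One equipped item per slot; EnumMap keeps lookups cheap.
            private final Map<Slot, Item> equipped = new EnumMap<>(Slot.class);

            // Equip an item into its slot and return whatever it replaced (or null).
            Item equip(Item item) { return equipped.put(item.slot, item); }

            Item getEquipped(Slot slot) { return equipped.get(slot); }
        }

    A usage note: unequipping is then just equipped.remove(slot), and "is anything equipped in this slot?" is a containsKey check, which keeps the Player class small as the item list grows.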

    Read the article

  • glutPostRedisplay() does not update display

    - by A D
    I am currently drawing a rectangle to the screen and would like to move it by using the arrow keys. However, when I press an arrow key the vertex data changes, but the display does not refresh to reflect these changes, even though I am calling glutPostRedisplay(). Is there something else that I must do? My code:

        #include <GL/glew.h>
        #include <GL/freeglut.h>
        #include <GL/freeglut_ext.h>
        #include <iostream>
        #include "Shaders.h"

        using namespace std;

        const int NUM_VERTICES = 6;
        const GLfloat POS_Y = -0.1;
        const GLfloat NEG_Y = -0.01;

        struct Vertex
        {
            GLfloat x;
            GLfloat y;
            Vertex() : x(0), y(0) {}
            Vertex(GLfloat givenX, GLfloat givenY) : x(givenX), y(givenY) {}
        };

        Vertex left_paddle[NUM_VERTICES];

        void init()
        {
            glClearColor(1.0f, 1.0f, 1.0f, 0.0f);

            left_paddle[0] = Vertex(-0.95f, 0.95f);
            left_paddle[1] = Vertex(-0.95f, 0.0f);
            left_paddle[2] = Vertex(-0.85f, 0.95f);
            left_paddle[3] = Vertex(-0.85f, 0.95f);
            left_paddle[4] = Vertex(-0.95f, 0.0f);
            left_paddle[5] = Vertex(-0.85f, 0.0f);

            GLuint vao;
            glGenVertexArrays( 1, &vao );
            glBindVertexArray( vao );

            GLuint buffer;
            glGenBuffers(1, &buffer);
            glBindBuffer(GL_ARRAY_BUFFER, buffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(left_paddle), NULL, GL_STATIC_DRAW);

            GLuint program = init_shaders( "vshader.glsl", "fshader.glsl" );
            glUseProgram( program );

            GLuint loc = glGetAttribLocation( program, "vPosition" );
            glEnableVertexAttribArray( loc );
            glVertexAttribPointer( loc, 2, GL_FLOAT, GL_FALSE, 0, 0);

            glBindVertexArray(vao);
        }

        void movePaddle(Vertex* array, GLfloat change)
        {
            for(int i = 0; i < NUM_VERTICES; i++)
            {
                array[i].y = array[i].y + change;
            }
            glutPostRedisplay();
        }

        void special( int key, int x, int y )
        {
            switch ( key )
            {
            case GLUT_KEY_DOWN:
                movePaddle(left_paddle, NEG_Y);
                break;
            }
        }

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glDrawArrays(GL_TRIANGLES, 0, 6);
            glutSwapBuffers();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutInitWindowSize(500,500);
            glutCreateWindow("Rectangle");
            glewInit();
            init();
            glutDisplayFunc(display);
            glutSpecialFunc(special);
            glutMainLoop();
            return 0;
        }
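    For reference, a hedged sketch of the usual pattern when CPU-side vertex data changes after the initial upload: keep the buffer handle around and push the new contents to the GPU with glBufferSubData() before asking for a redraw. The global g_buffer name is an assumption added for the sketch; it is not in the question's code, which currently passes NULL to glBufferData and so never copies left_paddle into the buffer at all.

        // Assumed global, saved from init() instead of the local 'buffer' variable.
        GLuint g_buffer = 0;

        void uploadPaddle()
        {
            // Re-upload the CPU-side vertices so the GPU copy matches them.
            glBindBuffer(GL_ARRAY_BUFFER, g_buffer);
            glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(left_paddle), left_paddle);
        }

        void movePaddle(Vertex* array, GLfloat change)
        {
            for (int i = 0; i < NUM_VERTICES; i++)
                array[i].y += change;
            uploadPaddle();        // push the new data to the GPU...
            glutPostRedisplay();   // ...then ask GLUT to schedule a redraw
        }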

    Read the article

  • CreateDXGIFactory Doesn't Let Program Exit

    - by smoth190
    I'm using CreateDXGIFactory to get the graphics adapters and display modes. When I call it, it works fine and I get all the data. However, when I exit my program, the main Win32 thread exits, but something stays alive, because the debugger keeps running. Does CreateDXGIFactory create an extra thread that I'm not closing? I don't understand. The only thing I would suspect is that the documentation says it doesn't work if it's called from DllMain. It is in a DLL, but it's not called from DllMain, and it doesn't fail, either. I'm using DirectX 11.
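    For context, a minimal sketch of the enumeration pattern the question describes, written against the standard dxgi.h API, with an explicit Release() on every interface DXGI hands back. Whether an un-released interface (or an extra internal thread) explains the symptom here is only a guess; the sketch just shows the clean-up side of the call.

        #include <dxgi.h>
        #pragma comment(lib, "dxgi.lib")

        void ListAdapters()
        {
            IDXGIFactory* factory = nullptr;
            if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
                return;

            IDXGIAdapter* adapter = nullptr;
            for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
            {
                DXGI_ADAPTER_DESC desc;
                adapter->GetDesc(&desc);   // desc.Description holds the adapter name
                adapter->Release();        // release each adapter after reading it
            }
            factory->Release();            // and finally the factory itself
        }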

    Read the article

  • Player position triggering teleports

    - by jSherz
    I'm developing a Minecraft plugin (Bukkit) in which a server admin can create 'portals' - small regions that teleport any player who enters them. I have the teleportation sorted, and I know how I could define areas that the player's position could be tested against. This would involve an ArrayList containing the zones, and then hooking the PlayerMoveEvent so that the ArrayList is searched each time for a matching portal region. Although this method would work, I doubt it would be very efficient when 100+ players are all moving around at the same time. Is there a better way of checking a player's position against a set of 'zones' / regions?
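    One common refinement, sketched below with invented names (plain Java, no Bukkit types): index the portal blocks in a HashMap keyed by packed block coordinates, so the handler for PlayerMoveEvent does a single hash lookup for the block the player stepped into instead of scanning every region in a list.

        import java.util.HashMap;
        import java.util.Map;

        class Portal { /* destination, owner, name, ... */ }

        class PortalIndex {
            private final Map<Long, Portal> byBlock = new HashMap<>();

            // Pack block coordinates into one long to use as the map key
            // (26 bits for x, 26 for z, 12 for y).
            private static long key(int x, int y, int z) {
                return ((long) x & 0x3FFFFFF) << 38 | ((long) z & 0x3FFFFFF) << 12 | (y & 0xFFF);
            }

            void addBlock(int x, int y, int z, Portal portal) {
                byBlock.put(key(x, y, z), portal);
            }

            // Call this from the move handler with the block the player entered;
            // returns null when the player is not standing in any portal.
            Portal lookup(int x, int y, int z) {
                return byBlock.get(key(x, y, z));
            }
        }

    A further cheap win is to skip the lookup entirely when the event's from and to locations are in the same block, since most move events don't cross a block boundary.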

    Read the article

  • Reading the memory of a N64

    - by toazron1
    I'm looking for a way to read the memory of an N64, while the game is running, in real time. I have a C# program which hooks into the memory of a running emulator and tracks SSB64 stats, and I want to do the same thing with the physical N64. Currently it is possible to read the memory with a GameShark Pro, however it's extremely slow, buggy, and not practical for what I am trying to accomplish. Would it be possible to tap into the GameShark, or the N64 directly, to access the memory in real time? Thanks!

    Read the article

  • Existing JS libs for tileset / map loading and rendering?

    - by ylluminate
    I'm building an RTS-style overhead tileset game with JavaScript (particularly using the Ember.js framework as a base). The map is so large that I'd very much like to be able to load and render the board and layered items in a Google Maps-esque fashion. I'm curious as to whether there are existing libs that would be helpful and already well thought out in these regards, versus trying to reinvent the wheel. Are there any such libraries or code examples that would be useful in this area of board / map management?

    Read the article

  • Translating multiple objects in GUI based on average position?

    - by user1423893
    I use this method to move a single object in 3D space; it accounts for a local offset based on where the cursor ray hits the widget and the center of the widget.

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All:
                goalPosition = position;
                break;
            case AxisSelected.None:
                break;
            case AxisSelected.X:
                goalPosition.X = position.X;
                break;
            case AxisSelected.Y:
                goalPosition.Y = position.Y;
                break;
            case AxisSelected.Z:
                goalPosition.Z = position.Z;
                break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;
        objectSelected.Position = p;

    I would like to move multiple objects based on the same principle, using a widget which is located at the average position of all the objects currently selected. I thought that I would have to translate each object based on its offset from the average point and then include the local offset.

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All:
                goalPosition = position;
                break;
            case AxisSelected.None:
                break;
            case AxisSelected.X:
                goalPosition.X = position.X;
                break;
            case AxisSelected.Y:
                goalPosition.Y = position.Y;
                break;
            case AxisSelected.Z:
                goalPosition.Z = position.Z;
                break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;

        int numSelectedObjects = objectSelectedList.Count;
        for (int i = 0; i < numSelectedObjects; ++i)
        {
            objectSelectedList[i].Position = (objectSelectedList[i].Position - translationWidget.Position) + p;
        }

    This doesn't work (the objects start shaking), which I think is because I haven't accounted for the new offset correctly. Where have I gone wrong?
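    A hedged sketch of one way to remove the feedback loop (the method names and the grabOffsets field are invented): record each selected object's offset from the widget once, when the drag starts, and reapply that fixed offset every frame, instead of re-deriving the offset from positions that have already been moved.

        // Captured once when the drag begins.
        private List<Vector3> grabOffsets = new List<Vector3>();

        private void BeginDrag()
        {
            grabOffsets.Clear();
            foreach (var obj in objectSelectedList)
                grabOffsets.Add(obj.Position - translationWidget.Position);
        }

        // Called every frame while dragging, with p = goalPosition - translationWidget.LocalOffset.
        private void UpdateDrag(Vector3 p)
        {
            for (int i = 0; i < objectSelectedList.Count; ++i)
                objectSelectedList[i].Position = p + grabOffsets[i];
        }

    The shaking in the posted version likely comes from using objectSelectedList[i].Position (already updated last frame) on the right-hand side, so each frame's error feeds into the next.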

    Read the article

  • How can a pygame image be colored?

    - by Juicy
    I'm writing a 2d particle system for a game in Pygame[1]. For the particles, I have an image surface loaded from a file -- basically a white primitive drawn over a transparent background. I'd like the particle engine to emit variously colored particles, but I'm not sure how to tell Pygame to color the surface. I've looked through what passes for documentation, but I'm having trouble finding anything. [1] Yeah, I don't really like Pygame, but my course insists I write this project in Python.
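    For what it's worth, a minimal sketch of one common way to tint a white-on-transparent sprite in pygame: make a copy and multiply-blend the colour onto it, which scales the white pixels toward the target colour while leaving the alpha channel alone. The function name and colours are invented for the example.

        import pygame

        def tint(surface, color):
            """Return a tinted copy of a white-on-transparent particle image."""
            tinted = surface.copy()
            # BLEND_RGBA_MULT multiplies per channel; passing 255 for alpha
            # leaves the existing transparency untouched.
            tinted.fill(color + (255,), special_flags=pygame.BLEND_RGBA_MULT)
            return tinted

        # usage sketch:
        # particle_image = pygame.image.load("particle.png").convert_alpha()
        # red_particle = tint(particle_image, (255, 64, 64))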

    Read the article

  • Possible to map mouse coordinates to isometric tiles with this coordinate system?

    - by plukich
    I'm trying to implement mouse interaction in a 2D isometric game, but I'm not sure if it's possible given the coordinate system used for the tile maps in the game. I've read some helpful articles like this one: How to convert mouse coordinates to isometric indexes? However, this game's coordinate system is "jagged", for lack of a better word, and looks like this: Is it even possible to map mouse coordinates to this successfully, since the y-axis can't be drawn on this tile map as a straight line? I've thought about doing odd-y-value translations and even-y-value translations with two different matrices, but that only makes sense going from tile space to screen space, not from screen space back to a tile.
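    A hedged sketch for this kind of staggered ("jagged") layout, where odd rows are assumed to be shifted right by half a tile (adjust the two formulas to your actual layout): rather than inverting the projection analytically, compute a rough tile from the mouse position and then pick the candidate whose diamond actually contains the point, using the scaled-Manhattan test that matches 2:1 diamond tiles. Tile size and the struct are assumptions for the example.

        #include <cmath>

        struct Tile { int col; int row; };

        const float TILE_W = 64.0f;   // assumed tile footprint in pixels
        const float TILE_H = 32.0f;

        // Tile -> screen position of the diamond's centre (staggered layout).
        void tileToScreen(int col, int row, float& sx, float& sy)
        {
            sx = col * TILE_W + ((row & 1) ? TILE_W * 0.5f : 0.0f);
            sy = row * (TILE_H * 0.5f);
        }

        // Screen -> tile: test the few tiles around a rough guess and keep the
        // one whose centre is "closest" under the diamond metric.
        Tile screenToTile(float mx, float my)
        {
            int roughRow = (int)(my / (TILE_H * 0.5f));
            int roughCol = (int)(mx / TILE_W);
            Tile best = { roughCol, roughRow };
            float bestD = 1e30f;
            for (int r = roughRow - 2; r <= roughRow + 2; ++r)
                for (int c = roughCol - 1; c <= roughCol + 1; ++c)
                {
                    float sx, sy;
                    tileToScreen(c, r, sx, sy);
                    // The point is inside a tile's diamond exactly when this sum is <= 1.
                    float d = std::fabs(mx - sx) / (TILE_W * 0.5f)
                            + std::fabs(my - sy) / (TILE_H * 0.5f);
                    if (d < bestD) { bestD = d; best.col = c; best.row = r; }
                }
            return best;
        }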

    Read the article

  • Unity, Unrealistic Sphere On Inclined Plane

    - by user1086516
    So I am trying to model a ball rolling down an inclined surface in Unity, based on what I am observing in real life, but it is still quite off:

    - In Unity it takes the ball about 3 seconds to travel from one place to another specified place, where in real life it only takes 1 second.
    - The ball isn't as quick to react to the incline as in real life (even though I have tried giving the ball and surface low or zero friction values).
    - The ball does not accelerate nearly as fast as it does in real life.

    What do I do to give the ball more realistic behavior? I have tried messing around with mass, physics materials, drag, and angular drag on the ball and surface, but it doesn't seem to be helping.

    Read the article

  • background animation algorithm for single screen

    - by becool_max
    I'm writing a simple strategy game (in XNA) and would like to have an animated background. In my game all the action happens on a single screen, so the standard parallax effect does not look appropriate. However, I found a video of a game with a suitable background animation for my game: http://www.youtube.com/watch?v=Vcxdbjulf90&feature=share&list=PLEEF9ABAB913946E6 (from 3 s to 6 s, while the main character stays in the same place). What is the algorithm for doing this kind of effect? It would be nice if someone could provide a reference for a similar example (the language is not important).

    Read the article

  • How to render a texture partly transparent?

    - by megamoustache
    Good morning StackOverflow, I'm having a bit of a problem right now, as I can't seem to find a way to render part of a texture transparently with OpenGL. Here is my setup: I have a quad, representing a wall, covered with this texture (converted to PNG for uploading purposes). Obviously, I want the wall to be opaque, except for the panes of glass. There is another plane behind the wall which is supposed to show a landscape, and I want to see that landscape through the window. Each texture is a TGA with an alpha channel. The "landscape" is rendered first, then the wall. I thought that would be sufficient to achieve this effect, but apparently it's not the case: the part of the window that is supposed to be transparent is black, and the landscape only appears when I move past the wall. I tried to fiddle with glBlendFunc() after enabling blending, but it doesn't seem to do the trick. Am I forgetting an important step? Thank you :)
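    For reference, a hedged sketch of the usual fixed-function setup for this situation; the drawLandscape/drawWall helpers are invented placeholders for the question's own drawing code, and it assumes the textures really do arrive on the GPU with their alpha channel intact (e.g. uploaded as GL_RGBA).

        #include <GL/gl.h>

        void drawLandscape();   // hypothetical: renders the opaque background plane
        void drawWall();        // hypothetical: renders the textured wall quad

        void renderScene()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            drawLandscape();                        // opaque geometry first

            glEnable(GL_BLEND);                     // then the wall with blending on
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);                  // don't write depth for the glass
            drawWall();
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
        }

    If the panes still come out black, a common culprit is the texture being uploaded without its alpha channel in the first place (e.g. an internal format of GL_RGB instead of GL_RGBA).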

    Read the article

  • Jumping Physics

    - by CogWheelz
    With simplicity, how can I make a basic jump without the weird bouncing? Right now the player jumps about 2 pixels and drops straight back. Here's what I use:

        y += velY;
        x += velX;

    then the keypresses:

        MAX_SPEED = 180;
        falling = true;

        if(Gdx.input.isKeyPressed(Keys.W)) { //&& !jumped && !p.falling) {
            p.y += 20;
        }
        if(!Gdx.input.isKeyPressed(Keys.W))
            p.velY = 0;

        if(Gdx.input.isKeyPressed(Keys.D))
            p.velX = 5;
        if(!Gdx.input.isKeyPressed(Keys.D) && !(Gdx.input.isKeyPressed(Keys.A)))
            p.velX = 0;

        if(Gdx.input.isKeyPressed(Keys.A))
            p.velX = -5;
        if(!Gdx.input.isKeyPressed(Keys.A) && !(Gdx.input.isKeyPressed(Keys.D)))
            p.velX = 0;

        if(p.falling == true || p.jumping == true) {
            p.velY -= 2;
        }

        if(p.velY > MAX_SPEED)
            p.velY = MAX_SPEED;
        if(p.velX > MAX_SPEED)
            p.velX = MAX_SPEED;
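    A hedged sketch of an impulse-style jump (field names like onGround and groundY are invented for the example): give the player an upward velocity once, on the frame the key is pressed while standing on the ground, and let gravity pull it back down, instead of adding to y while W is held and zeroing velY the moment the key is released.

        static final float JUMP_SPEED = 300f;   // assumed units: pixels per second
        static final float GRAVITY    = 900f;

        void update(float dt, boolean jumpPressed) {
            if (jumpPressed && p.onGround) {
                p.velY = JUMP_SPEED;            // one impulse at take-off
                p.onGround = false;
            }

            p.velY -= GRAVITY * dt;             // gravity every frame
            p.y    += p.velY * dt;

            if (p.y <= groundY) {               // crude ground check for the sketch
                p.y = groundY;
                p.velY = 0;
                p.onGround = true;
            }
        }

    In libGDX, dt would be Gdx.graphics.getDeltaTime(); the key point is that the W key only triggers the impulse and never drives p.y directly.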

    Read the article

  • (Phaser) Preload Future States in Create?

    - by Brian
    I'm a first-time user of Phaser and have been trying to make a simple point-and-click game. I'm trying to keep things very modular, so I'm defining a list of levels (states) in a JSON file, and every level then has its own JSON containing the objects within that level. However, I'm encountering an issue: when changing states, I get a black flash while the assets for the next state load (this happens whether I iterate through the JSON list or define everything manually). From what I've read, all sprites should be loaded in the preload stage; however, doing this causes that tiny but noticeable black pause. I know one option would be to simply load every asset at the start of the game, but that seems incredibly inefficient (wouldn't that fill up the memory immensely?). I would rather load a state's assets from the "parent" state. However, in my quick test (which maybe I did wrong) it seems that game.load doesn't work properly if done within the create stage? What is the best approach to doing this?
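    For what it's worth, a hedged sketch of the pattern that usually works in Phaser 2-style code (the asset keys, paths and the nextLevelReady flag are invented; check the loader API of your Phaser version): the loader can be used outside preload, but it has to be started explicitly with game.load.start(), and the state switch should only happen once its completion signal has fired.

        // Inside the current state, after it has finished setting itself up.
        function create() {
            // ...build the current level here...

            // Queue the next level's assets while the player is still in this one.
            game.load.image('level2-bg', 'assets/level2/bg.png');
            game.load.image('level2-door', 'assets/level2/door.png');

            game.load.onLoadComplete.addOnce(function () {
                nextLevelReady = true;   // safe to call game.state.start('Level2') now
            });
            game.load.start();           // required when loading outside the preload stage
        }

    The trade-off versus loading everything up front is memory against a possible wait if the player reaches the exit before the background load finishes, so it helps to fall back to a small loading screen in that case.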

    Read the article

  • Size of an image imported with FreeImage

    - by KaiserJohaan
    I'm having a bit of a brain fart and I can't quite grasp what I'm doing wrong. It's quite simple: I am importing an image with FreeImage (http://freeimage.sourceforge.net/), which has a method FreeImage_GetBits that returns a pointer to the first byte of the image data. I then try to load all the data into memory using (bitsPerPixel / 8) * pixelWidth * pixelHeight, like this:

        uint32_t bitsPerPixel = FreeImage_GetBPP(bitmap);       // resolves to 24
        uint32_t widthInPixels = FreeImage_GetWidth(bitmap);    // resolves to 1024
        uint32_t heightInPixels = FreeImage_GetHeight(bitmap);  // resolves to 1024

        // container is a std::vector<uint8_t>
        pkgMaterial.mTextureData.insert(pkgMaterial.mTextureData.begin(),
                                        FreeImage_GetBits(bitmap),
                                        FreeImage_GetBits(bitmap) + ((bitsPerPixel / 8) * widthInPixels * heightInPixels));

    I have a JPG which is 31 kilobytes in size on disk. Yet when I load it using the above formula, I see the vector is then filled with 3145728 bytes, which is approx. 3145 kilobytes. What am I doing wrong?
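    For reference, the number the vector ends up with matches the size of the decoded image rather than the compressed file, so the arithmetic itself is consistent:

        1024 pixels × 1024 pixels × (24 bits / 8) = 1024 × 1024 × 3 bytes = 3,145,728 bytes (about 3 MB)

    The 31 KB figure is the JPEG file on disk; FreeImage_GetBits hands back the decompressed pixel data, which is what ends up in the vector.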

    Read the article

  • How do I connect the seams between my terrain?

    - by gnomgrol
    I'm using C++ and D3D11 and I'm trying to create a (pretty) large terrain, let's say 4096x4096, maybe larger. I've got the basics of terrain creation working and have already split the terrain up into chunks. But when I render them (every chunk has its own vertex and index buffer, as well as its own heightmap), there are still little pieces missing between them. I read a lot about LOD (level of detail) and geometry mipmapping (GMM), but I can't really implement the theory I read. At the moment, it looks like this: I could really use some help, everything is welcome. If you have some good tutorials on any of this, please share them.
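    One hedged sketch of the usual first step, before any LOD scheme (the names and chunk size are assumptions): make neighbouring chunks sample the same rows and columns of a single global heightmap on their shared edge, so the border vertices of adjacent chunks coincide exactly. A chunk of N x N quads then owns (N+1) x (N+1) vertices, and its last row/column is the first row/column of the next chunk.

        #include <vector>

        const int CHUNK_QUADS = 64;                  // assumed chunk size in quads

        float sampleHeight(int worldX, int worldZ);  // assumed global heightmap lookup

        void buildChunkVertices(int chunkX, int chunkZ, std::vector<float>& outPositions)
        {
            for (int z = 0; z <= CHUNK_QUADS; ++z)       // note <=, not <
                for (int x = 0; x <= CHUNK_QUADS; ++x)
                {
                    int wx = chunkX * CHUNK_QUADS + x;   // world-space sample index,
                    int wz = chunkZ * CHUNK_QUADS + z;   // shared with the neighbour
                    outPositions.push_back((float)wx);
                    outPositions.push_back(sampleHeight(wx, wz));
                    outPositions.push_back((float)wz);
                }
        }

    Once neighbouring chunks use different LOD levels, the shared edge additionally needs stitching (skirts or matched edge tessellation), but identical border heights already remove the gaps for the uniform-detail case described in the question.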

    Read the article

  • Confusing Callbacks

    - by SullY
    I'm now trying to program a "game", and started with the EmptyProject that's provided by the DirectX SDK. The problem is that the callbacks are confusing me. Can someone please explain them? Edit:

        DXUTSetCallbackD3D9DeviceAcceptable( IsD3D9DeviceAcceptable ); // not sure but I think that's the caps?
        DXUTSetCallbackD3D9DeviceLost( OnD3D9LostDevice );
        DXUTSetCallbackDeviceChanging( ModifyDeviceSettings );
        DXUTSetCallbackFrameMove( OnFrameMove );

    Read the article

  • How do I build a matrix to translate one set of points to another?

    - by dotminic
    I've got 3 points in space that define a triangle. I've also got a vertex buffer made up of three vertices that also represent a triangle, which I will refer to as the "model". How can I find the matrix M that will transform the vertices in my buffer onto those 3 points in space? For example, let's say my three points A, B, C are at locations:

        A.x = 10, A.y = 16, A.z = 8
        B.x = 12, B.y = 11, B.z = 1
        C.x = 19, C.y = 12, C.z = 3

    Given these coordinates, how can I build a matrix that will translate and rotate my model such that both triangles occupy the exact same world-space positions? That is, I want the first vertex in my triangle model to have the same coordinates as A, the second to have the same coordinates as B, and the same goes for C. NB: I'm using instanced rendering, so I can't just give each vertex the same position as my 3 points. I have a set of three points defining a triangle, and only three vertices in my vertex buffer.
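    A hedged outline of one standard construction (a sketch, not the only way to do it): treat the triangles as defining two frames. With model vertices P0, P1, P2 and target points A, B, C, build 3x3 matrices whose columns are two edge vectors plus their cross product, then solve for the linear part and translation of the per-instance matrix:

        E = [ P1 - P0 | P2 - P0 | (P1 - P0) x (P2 - P0) ]   // columns, from the model triangle
        F = [ B  - A  | C  - A  | (B  - A ) x (C  - A ) ]   // columns, from the target triangle

        L = F * inverse(E)     // 3x3 linear part (rotation, plus scale/skew if the triangles differ in shape)
        t = A - L * P0         // translation
        M = [ L  t ]           // assemble the 4x4 instance matrix
            [ 0  1 ]

    E is invertible as long as the model triangle is not degenerate; M then maps P0 to A, P1 to B and P2 to C, and it also maps one cross product onto the other, so the triangle's facing is preserved rather than mirrored.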

    Read the article

  • Designing for visually impaired gamers

    - by Aku
    Globally the number of people of all ages visually impaired is estimated to be 285 million, of whom 39 million are blind. — World Health Organisation, 2010. (That's 4.2% and 0.6% of the world population.) Most videogames put a strong emphasis on visuals in their content delivery. Visually impaired gamers are largely left out. How do I design a game to be accessible to visually impaired gamers?

    Read the article

  • I need advice on creating 3D walk cycles in XNA

    - by Zetar
    I want to purchase a number of 3D models from TurboSquid and animate them in an XNA game. I wrote a lot of games from 1985-1999 and have recently become involved with XNA. Now I would like to port one of my old games to the XBOX. I do have a background in 3D animation; but that was years ago. What is the current method for animating a walk cycle with a 3D model and using it inside XNA? Is there a book, software or a tutorial that you can recommend? Thanks in advance and sorry for such a broad and currently naive question.

    Read the article

  • C++ Parallel Asynchronous task

    - by Doodlemeat
    I am currently building a randomly generated terrain game where terrain is created automatically around the player. I am experiencing lag when the generation process is active, as I am running quite heavy tasks with post-processing and creating physics bodies. Then it came to mind to use a parallel asynchronous task to do the post-processing for me, but I have no idea how I am going to do that. I have searched for C++ std::async, but I believe that is not what I want. In the examples I found, a task returned something; I want the task to change objects in the main program. This is what I want:

        // Main program
        // Chunks that need to be processed.
        // NOTE! These chunks are already generated, but need post-processing only!
        std::vector<Chunk*> unprocessedChunks;

    And then my task could look something like this, running like a loop constantly checking if there are chunks to process:

        // Asynced task
        if(unprocessedChunks.size() > 0)
        {
            processChunk(unprocessedChunks.pop());
        }

    I know it's not as easy as I wrote it, but it would be a huge help for me if you could push me in the right direction. In Java, I could type something like this:

        asynced_task = startAsyncTask(new PostProcessTask());

    And that task would run until I do this:

        asynced_task.cancel();
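    A hedged sketch of one way to set this up with the standard library (Chunk, processChunk and all class/member names here are placeholders): a single worker thread drains a mutex-protected queue of already-generated chunks and post-processes them, while the main thread keeps pushing new work into the queue.

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <thread>

        struct Chunk { /* heightmap, mesh data, ... */ };
        void processChunk(Chunk* chunk);        // the heavy post-processing step

        class ChunkWorker {
        public:
            ChunkWorker() : running(true), worker([this] { run(); }) {}

            ~ChunkWorker() {
                { std::lock_guard<std::mutex> lock(mtx); running = false; }
                wake.notify_all();
                worker.join();
            }

            // Called from the main thread whenever a chunk finishes generating.
            void push(Chunk* chunk) {
                { std::lock_guard<std::mutex> lock(mtx); pending.push(chunk); }
                wake.notify_one();
            }

        private:
            void run() {
                for (;;) {
                    Chunk* chunk = nullptr;
                    {
                        std::unique_lock<std::mutex> lock(mtx);
                        wake.wait(lock, [this] { return !pending.empty() || !running; });
                        if (!running) return;
                        chunk = pending.front();
                        pending.pop();
                    }
                    processChunk(chunk);        // runs off the main thread
                }
            }

            std::mutex mtx;
            std::condition_variable wake;
            std::queue<Chunk*> pending;
            bool running;
            std::thread worker;
        };

    One caution: if processChunk touches shared game state, or creates physics bodies in a library that isn't thread-safe, have the worker hand its finished result back through a second queue and let the main thread apply it, rather than mutating shared objects from the worker.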

    Read the article

  • Windows Phone 7 Networked Game

    - by Craig
    I'm creating a multiplayer Asteroids-type game for the Windows Phone 7, where 2 players can challenge each other over who will get the highest score. On each player's phone the opponent is displayed, and both go about shooting asteroids and enemies. In an assignment I have due I would like to talk about the packet design: what would be the least amount of information that I can send over the connection, instead of constantly having to send each player's position, asteroid positions, bullet positions, enemy positions and so on? Or would all that data constantly need to be sent?
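    As a hedged sketch of one low-bandwidth design (all names and field sizes are invented): if both phones seed the same pseudo-random generator, asteroid and enemy spawns can be recomputed locally on each device, so the recurring packet only needs to carry the remote player's input plus an occasional score update, rather than every object position.

        // C# sketch of a compact per-frame payload.
        public struct InputPacket
        {
            public ushort Frame;    // frame counter, used to order/deduplicate packets
            public byte   Buttons;  // bit flags: thrust, rotate left, rotate right, fire
            public ushort Score;    // piggybacked so no separate score message is needed
        }
        // 5 bytes of payload per packet before any transport overhead; positions are
        // reconstructed on each phone by running the same simulation from the inputs.

    The trade-off is that both simulations must stay deterministic; if that's too strict, the fallback is to send quantized positions for the remote ship only and keep the asteroids seed-driven.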

    Read the article

  • How to move the rigidbody to the position of the mouse on release

    - by Edvin
    I'm making a "Can Knockdown" game and I need the rigidbody to move where the player released the mouse(OnMouseUp). Momentarily the Ball moves OnMouseUp because of rigidbody.AddForce(force * factor); and It moves toward the mousePosition but doesn't end up where the mousePosition is. Here's what I have so far in the script. var factor = 20.0; var minSwipeDistY : float; private var startTime : float; private var startPos : Vector3; function OnMouseDown(){ startTime = Time.time; startPos = Input.mousePosition; startPos.z = transform.position.z - Camera.main.transform.position.z; startPos = Camera.main.ScreenToWorldPoint(startPos); } function OnMouseUp(){ var endPos = Input.mousePosition; endPos.z = transform.position.z - Camera.main.transform.position.z; endPos = Camera.main.ScreenToWorldPoint(endPos); var force = endPos - startPos; force.z = force.magnitude; force /= (Time.time - startTime); rigidbody.AddForce(force * factor); }

    Read the article

  • Damageable ground similar to pocket tanks or archanists [closed]

    - by XenElement
    Possible Duplicate: Implementing a 2D destructible landscape (like Worms) A really cool feature in both the iPhone game pocket tanks and the online jagex game archanists is ground which can be blown up. When a projectile collides with the ground, an area equal to the blast radius which overlaps the ground is removed. It's strictly two dimensional, but it makes the experience that much more dynamic since you can dig a hole under your opponents or yourself. How is this implemented?

    Read the article

  • Rotating object around moving object/player in 2D

    - by Boston
    I am trying to implement a camera which rotates the world around the player. I have found many solutions online to the task of rotating an object about the origin, or about an arbitrary point. The procedure seems to be to translate the point to be rotated about to the origin, perform the rotation, translate back, then draw. I have gotten this working for rotation around the origin as well as for a fixed point. Rotation of objects around the player works as well, provided the player does not move. However, if the objects are rotated around the player by some non-zero angle and the player then moves, the rotated objects move as well. I have probably done a poor job explaining this, so here's an image: http://i.imgur.com/1n63iWR.gif And here's the code for the behavior:

        renderx = (Ox - Px)*cos(camAngle) - (Oy - Py)*sin(camAngle) + Px;
        rendery = (Ox - Px)*sin(camAngle) + (Oy - Py)*cos(camAngle) + Py;

    where (Ox, Oy) is the actual position of the object to be rotated and (Px, Py) is the actual position of the player. Any ideas? I am using C++ with SDL 2.0.
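    A hedged sketch of one way to structure it (C++, SDL2-style, all names assumed): keep Ox/Oy as immutable world coordinates and derive a draw position every frame from the current player position and camera angle, never writing the rotated result back into the object. With the player pinned to the screen centre, moving the player then shifts the whole rotated world consistently.

        #include <cmath>

        struct Vec2 { float x, y; };

        // World -> screen for a camera centred on the player and rotated by camAngle.
        Vec2 worldToScreen(Vec2 obj, Vec2 player, float camAngle,
                           float screenCX, float screenCY)
        {
            float dx = obj.x - player.x;           // offset from the player, in world space
            float dy = obj.y - player.y;
            float rx = dx * std::cos(camAngle) - dy * std::sin(camAngle);
            float ry = dx * std::sin(camAngle) + dy * std::cos(camAngle);
            return { rx + screenCX, ry + screenCY };   // player sits at the screen centre
        }

    The key difference from the posted formula is that the result is only ever used for rendering that frame and is recomputed from the live player position each frame, so rotating and then walking cannot displace the objects' stored positions.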

    Read the article
