Search Results

Search found 26043 results on 1042 pages for 'development trunk'.


  • Android - Multiplayer Game - Client / Server - Java etc

    - by user1405328
    I have to write a multiplayer Pong game for school. There are thousands of rooms, and two players can enter a room, play together and collect points. I have already programmed the Pong game itself in Java (libGDX). How do I do the networking part? I searched the web and came across KryoNet. Is there something better? What should I Google? There are a lot of questions like this on the internet, but no good answers, so I hope most of them can be answered here. If someone has links to open-source networked games, tutorials, books, networking guides and so on, that would help everyone. Thank you.
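
    Since KryoNet comes up in the question, here is a minimal server-side sketch against KryoNet's documented API. The PaddleMove message class and the port numbers are made up for illustration; the client side mirrors this with Client.connect() and the same Kryo registrations, in the same order.

    import com.esotericsoftware.kryonet.Connection;
    import com.esotericsoftware.kryonet.Listener;
    import com.esotericsoftware.kryonet.Server;

    public class PongServer {
        // Hypothetical message class; register the same classes, in the same order, on client and server.
        public static class PaddleMove { public int room; public float y; }

        public static void main(String[] args) throws Exception {
            final Server server = new Server();
            server.getKryo().register(PaddleMove.class);
            server.start();
            server.bind(54555, 54777);                    // TCP and UDP ports, arbitrary here
            server.addListener(new Listener() {
                @Override
                public void received(Connection c, Object o) {
                    if (o instanceof PaddleMove) {
                        // Relay the paddle update to everyone else (room filtering omitted).
                        server.sendToAllExceptTCP(c.getID(), o);
                    }
                }
            });
        }
    }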

    Read the article

  • How can I render multiple windows with DirectX 9 in C++?

    - by Friso1990
    I'm trying to render multiple windows, using DirectX 9 and swap chains, but even though I create 2 windows, I only see the first one that I've created. My RendererDX9 header is this: #include <d3d9.h> #include <Windows.h> #include <vector> #include "RAT_Renderer.h" namespace RAT_ENGINE { class RAT_RendererDX9 : public RAT_Renderer { public: RAT_RendererDX9(); ~RAT_RendererDX9(); void Init(RAT_WindowManager* argWMan); void CleanUp(); void ShowWin(); private: LPDIRECT3D9 renderInterface; // Used to create the D3DDevice LPDIRECT3DDEVICE9 renderDevice; // Our rendering device LPDIRECT3DSWAPCHAIN9* swapChain; // Swapchain to make multi-window rendering possible WNDCLASSEX wc; std::vector<HWND> hwindows; void Render(int argI); }; } And my .cpp file is this: #include "RAT_RendererDX9.h" static LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ); namespace RAT_ENGINE { RAT_RendererDX9::RAT_RendererDX9() : renderInterface(NULL), renderDevice(NULL) { } RAT_RendererDX9::~RAT_RendererDX9() { } void RAT_RendererDX9::Init(RAT_WindowManager* argWMan) { wMan = argWMan; // Register the window class WNDCLASSEX windowClass = { sizeof( WNDCLASSEX ), CS_CLASSDC, MsgProc, 0, 0, GetModuleHandle( NULL ), NULL, NULL, NULL, NULL, "foo", NULL }; wc = windowClass; RegisterClassEx( &wc ); for (int i = 0; i< wMan->getWindows().size(); ++i) { HWND hWnd = CreateWindow( "foo", argWMan->getWindow(i)->getName().c_str(), WS_OVERLAPPEDWINDOW, argWMan->getWindow(i)->getX(), argWMan->getWindow(i)->getY(), argWMan->getWindow(i)->getWidth(), argWMan->getWindow(i)->getHeight(), NULL, NULL, wc.hInstance, NULL ); hwindows.push_back(hWnd); } // Create the D3D object, which is needed to create the D3DDevice. renderInterface = (LPDIRECT3D9)Direct3DCreate9( D3D_SDK_VERSION ); // Set up the structure used to create the D3DDevice. Most parameters are // zeroed out. We set Windowed to TRUE, since we want to do D3D in a // window, and then set the SwapEffect to "discard", which is the most // efficient method of presenting the back buffer to the display. And // we request a back buffer format that matches the current desktop display // format. D3DPRESENT_PARAMETERS deviceConfig; ZeroMemory( &deviceConfig, sizeof( deviceConfig ) ); deviceConfig.Windowed = TRUE; deviceConfig.SwapEffect = D3DSWAPEFFECT_DISCARD; deviceConfig.BackBufferFormat = D3DFMT_UNKNOWN; deviceConfig.BackBufferHeight = 1024; deviceConfig.BackBufferWidth = 768; deviceConfig.EnableAutoDepthStencil = TRUE; deviceConfig.AutoDepthStencilFormat = D3DFMT_D16; // Create the Direct3D device. Here we are using the default adapter (most // systems only have one, unless they have multiple graphics hardware cards // installed) and requesting the HAL (which is saying we want the hardware // device rather than a software one). Software vertex processing is // specified since we know it will work on all cards. On cards that support // hardware vertex processing, though, we would see a big performance gain // by specifying hardware vertex processing. 
renderInterface->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwindows[0], D3DCREATE_SOFTWARE_VERTEXPROCESSING, &deviceConfig, &renderDevice ); this->swapChain = new LPDIRECT3DSWAPCHAIN9[wMan->getWindows().size()]; this->renderDevice->GetSwapChain(0, &swapChain[0]); for (int i = 0; i < wMan->getWindows().size(); ++i) { renderDevice->CreateAdditionalSwapChain(&deviceConfig, &swapChain[i]); } renderDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW); // Set cullmode to counterclockwise culling to save resources renderDevice->SetRenderState(D3DRS_AMBIENT, 0xffffffff); // Turn on ambient lighting renderDevice->SetRenderState(D3DRS_ZENABLE, TRUE); // Turn on the zbuffer } void RAT_RendererDX9::CleanUp() { renderDevice->Release(); renderInterface->Release(); } void RAT_RendererDX9::Render(int argI) { // Clear the backbuffer to a blue color renderDevice->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 255 ), 1.0f, 0 ); LPDIRECT3DSURFACE9 backBuffer = NULL; // Set draw target this->swapChain[argI]->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer); this->renderDevice->SetRenderTarget(0, backBuffer); // Begin the scene renderDevice->BeginScene(); // End the scene renderDevice->EndScene(); swapChain[argI]->Present(NULL, NULL, hwindows[argI], NULL, 0); } void RAT_RendererDX9::ShowWin() { for (int i = 0; i < wMan->getWindows().size(); ++i) { ShowWindow( hwindows[i], SW_SHOWDEFAULT ); UpdateWindow( hwindows[i] ); // Enter the message loop MSG msg; while( GetMessage( &msg, NULL, 0, 0 ) ) { if (PeekMessage( &msg, NULL, 0U, 0U, PM_REMOVE ) ) { TranslateMessage( &msg ); DispatchMessage( &msg ); } else { Render(i); } } } } } LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ) { switch( msg ) { case WM_DESTROY: //CleanUp(); PostQuitMessage( 0 ); return 0; case WM_PAINT: //Render(); ValidateRect( hWnd, NULL ); return 0; } return DefWindowProc( hWnd, msg, wParam, lParam ); } I've made a sample function to make multiple windows: void RunSample1() { //Create the window manager. RAT_ENGINE::RAT_WindowManager* wMan = new RAT_ENGINE::RAT_WindowManager(); //Create the render manager. RAT_ENGINE::RAT_RenderManager* rMan = new RAT_ENGINE::RAT_RenderManager(); //Create a window. //This is currently needed to initialize the render manager and create a renderer. wMan->CreateRATWindow("Sample 1 - 1", 10, 20, 640, 480); wMan->CreateRATWindow("Sample 1 - 2", 150, 100, 480, 640); //Initialize the render manager. rMan->Init(wMan); //Show the window. rMan->getRenderer()->ShowWin(); } How do I get the multiple windows to work?
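
    One thing worth checking in the code above: every additional swap chain needs its own D3DPRESENT_PARAMETERS with hDeviceWindow pointing at the window it belongs to, and the per-window GetMessage loops in ShowWin() block on the first window, so the others never render. A rough sketch of how the swap-chain loop in Init() could look, reusing the question's own variables (not a complete fix):

    // Sketch of the swap-chain creation only. Index 0 keeps the implicit swap chain
    // obtained from GetSwapChain(0, ...); every extra window gets its own one.
    for (size_t i = 1; i < hwindows.size(); ++i)
    {
        D3DPRESENT_PARAMETERS pp = deviceConfig;   // copy the settings used for the device
        pp.hDeviceWindow = hwindows[i];            // this swap chain presents into window i
        pp.BackBufferWidth  = 0;                   // 0 lets D3D size it to the window (windowed mode)
        pp.BackBufferHeight = 0;
        renderDevice->CreateAdditionalSwapChain(&pp, &swapChain[i]);
    }
    // Then drive ALL windows from one message loop (PeekMessage per frame) and render
    // every swap chain each frame, instead of blocking in GetMessage per window.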

    Read the article

  • How to attach an object to a rotating circle in box2d cocos2d?

    - by armands
    I am trying to attach an object to a rotating circle at the point of collision, using a fixed anchor point on the player. For example, the player moves back and forth, and when the user touches the screen the player jumps up; when the player collides with the circle, it should attach its legs to it and keep rotating with the circle. How can I create this kind of collision joint in cocos2d with Box2D?
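
    One common approach (a sketch, not a drop-in solution) is to record the contact point in the Box2D contact listener and create a revolute joint after the physics step, since Box2D does not allow creating joints while the world is locked. The pending* variable names below are illustrative members of your own listener/layer class.

    // In the contact listener: only remember where and what collided; joints cannot
    // be created here because the world is locked during the step.
    void ContactListener::BeginContact(b2Contact* contact)
    {
        b2WorldManifold manifold;
        contact->GetWorldManifold(&manifold);
        pendingAnchor = manifold.points[0];               // world-space contact point
        pendingA = contact->GetFixtureA()->GetBody();
        pendingB = contact->GetFixtureB()->GetBody();
        hasPendingJoint = true;
    }

    // Later, after world->Step(...) in the update loop:
    if (hasPendingJoint)
    {
        b2RevoluteJointDef jd;
        jd.Initialize(pendingA, pendingB, pendingAnchor); // pin the player's feet to the circle
        world->CreateJoint(&jd);
        hasPendingJoint = false;
    }

    If the legs should not pivot around the contact point, b2WeldJointDef is initialized the same way and locks the relative rotation instead.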

    Read the article

  • Re-sizing the form without scaling the GUI

    - by Bmoore
    I am writing a turn-based strategy game in C#. My GUI consists of a class that extends Form and contains a class that extends Panel; when I render the GUI, I draw in the panel's Paint handler. I am trying to figure out the best way to handle form resize events. I want a minimum window size, but I would prefer not to have a maximum or a fixed size. Ideally the GUI would reveal more or less of the map as the user changes the window size, and I would like to avoid scaling the graphics if at all possible. What is the best way to handle resize events?
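
    A minimal WinForms sketch of one way to do this, assuming a tile-based map (the tile size and the drawing code are placeholders): let the panel fill the form, repaint on resize, and draw only as many tiles as fit in the current client area, so resizing reveals more or less of the map without scaling anything.

    using System.Drawing;
    using System.Windows.Forms;

    public class MapPanel : Panel
    {
        private const int TileSize = 32;            // illustrative tile size

        public MapPanel()
        {
            DoubleBuffered = true;                  // avoid flicker while resizing
            ResizeRedraw = true;                    // repaint automatically on resize
            Dock = DockStyle.Fill;                  // always cover the form's client area
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            // Draw only the tiles that fit the current client size; a real game would
            // add its own camera offset here.
            int cols = ClientSize.Width / TileSize + 1;
            int rows = ClientSize.Height / TileSize + 1;
            for (int y = 0; y < rows; y++)
                for (int x = 0; x < cols; x++)
                    e.Graphics.DrawRectangle(Pens.Gray, x * TileSize, y * TileSize, TileSize, TileSize);
        }
    }
    // On the form itself: this.MinimumSize = new Size(800, 600);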

    Read the article

  • How to texture voxel terrain without triplanar texturing?

    - by Thelvyn
    How can a voxel terrain (marching cubes) be textured without triplanar mapping? The goal is to have more artistic freedom. I think I could unwrap the mesh while extracting the isosurface and then use projective painting, but I do not know how to handle terrain modifications without breaking the texture. I also suspect that virtual texturing could help here. Links on any of these topics would be appreciated.

    Read the article

  • 2-component color model

    - by Cyan
    RGB is the natural color model for OpenGL, but plenty of other color models exist: CMY(K) for printers, YUV for JPEG, its little cousins YCbCr and YCoCg, HSL and HSV from the 70s, and so on. All of these models share a common property: they are based on 3 components. Hence my question: is there a 2-component color model? I'm surprised not to find any; I was expecting something along the lines of hue + lightness to exist. I guess it cannot be as complete as a true 3-component model, but a good-enough approximation would be fine for my use case. The end goal is to store the 2 components in a single BC5 texture (GL_COMPRESSED_RED_GREEN_RGTC2 in OpenGL); a 3rd component would require a second fetch from a second texture, which hurts performance.
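
    For what it's worth, a two-channel hue + value encoding (saturation assumed to be 1) is one possible compromise: it can only represent fully saturated colors plus black, so greys and pastels are lost. A reference decoder in C, which is just the standard HSV-to-RGB conversion with S fixed at 1:

    /* Decode a two-channel (hue, value) pair back to RGB, assuming full saturation.
       hue and value are in [0, 1]; this is HSV->RGB with S = 1. */
    void hue_value_to_rgb(float hue, float value, float *r, float *g, float *b)
    {
        float h = hue * 6.0f;
        int   i = (int)h % 6;
        float f = h - (float)(int)h;
        float p = 0.0f;                 /* value * (1 - s), with s = 1 */
        float q = value * (1.0f - f);
        float t = value * f;
        switch (i) {
            case 0:  *r = value; *g = t;     *b = p;     break;
            case 1:  *r = q;     *g = value; *b = p;     break;
            case 2:  *r = p;     *g = value; *b = t;     break;
            case 3:  *r = p;     *g = q;     *b = value; break;
            case 4:  *r = t;     *g = p;     *b = value; break;
            default: *r = value; *g = p;     *b = q;     break;
        }
    }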

    Read the article

  • How can I keep track of a battle log on a web game?

    - by Jay W
    I recently started working on a web-based, turn-based PvP RPG. I'm now working on the battle system and have run into a question: how can I keep track of everything that happens in a battle? It needs to track the characters on the field, inventory, damage done, and so on. My first thought was to simply put it all in the (MySQL) database, but I think that will be too much, especially if several people are in a battle at once. I also thought of putting it in sessions or cookies, but I don't think that's reliable. Does anyone have an idea how I can do this?

    Read the article

  • How to synchronize two animations without delays

    - by GeKi
    I have a character idle animation running in a loop inside a game, over and over again. At a certain time I trigger another animation to be played for the same character. The second animation can't just play immediately, because there would be a discontinuity in the character's motion, so first I wait for the idle animation to finish and then play the second one. Now the animation is smooth and continuous, BUT I have introduced a delay between my action and the character animation. If I play the second animation right away when it is triggered, the character animation isn't continuous and smooth. I was thinking of breaking the idle animation into small pieces and authoring a matching action animation for the last frame of each piece, but that only minimizes the delay rather than removing it. Is there a magic formula for getting rid of this delay? Thanks.

    Read the article

  • AndEngine player, background and camera

    - by valdemar593
    I'm developing a 2D shooter using AndEngine. At the moment I'm trying to make the camera follow the player. As I understand it, the common approach is to use SmoothCamera, zoom it and set the chased entity. The problem is that when the camera follows the player, the background (a RepeatingSpriteBackground) moves along with it, so it looks like the player doesn't move at all even though the actual position changes. I don't really get how to make the camera follow the player while keeping the background from moving with it. Thanks in advance.

    Read the article

  • Do I lose/gain performance for discarding pixels even if I don't use depth testing?

    - by Gajoo
    When I first searched for the discard instruction, I found experts saying that using discard causes a performance drain: discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to run the fragment shader for both objects first to check whether the one nearer to the camera is discarded or not. For the 2D game I'm currently working on, I've disabled both depth testing and depth writes; I draw all objects sorted by their depth and that's all, there's no need for the GPU to do anything fancy. Is it still bad if I discard pixels in my fragment shader?

    Read the article

  • Deep Cloning C++ class that inherits CCNode in Cocos2dx

    - by A Devanney
    I stuck with something in Cocos2dx ... I'm trying to deep clone one of my classes that inherits CCNode. Basically i have.... GameItem* pTemp = new GameItem(*_actualItem); // loops through all the blocks in gameitem and updates their position pTemp->moveDown(); // if in boundary or collision etc... if (_gameBoard->isValidMove(pTemp)) { _actualItem = pTemp; // display the position CCLog("pos (1) --- (X : %d,Y : %d)", _actualItem->getGridX(),_actualItem->getGridY()); } Then doesn't work, because the gameitem inherits CCNode and has the collection of another class that also inherits CCNode. its just creating a shallow copy and when you look at children of the gameitem node in the copy, just point to the original? class GameItem : public CCNode { // maps to the actual grid position of the shape CCPoint* _rawPosition; // tracks the current grid position int _gridX, _gridY; // tracks the change if the item has moved CCPoint _offset; public: //constructors GameItem& operator=(const GameItem& item); GameItem(Shape shape); ... } then in the implementation.... GameItem& GameItem::operator=(const GameItem& item) { _gridX = item.getGridX(); _gridY = item.getGridY(); _offset = item.getOffSet(); _rawPosition = item.getRawPosition(); // how do i copy the node? return *this; } // shape contains an array of position for the game character GameItem::GameItem(Shape shape) { _rawPosition = shape.getShapePositions(); //loop through all blocks in position for (int i = 0; i < 7; i++) { // get the position of the first block in the shape and add to the position of the first block int x = (int) (getRawPosition()[i].x + getGridX()); int y = (int) (getRawPosition()[i].y + getGridY()); //instantiate a block with the position and type Block* block = Block::blockWithFile(x,y,(i+1), shape); // add the block to the this node this->addChild(block); } } And for clarity here is the block class class Block : public CCNode{ private: // using composition over inheritance CCSprite* _sprite; // tracks the current grid position int _gridX, _gridY; // used to store actual image number int _blockNo; public: Block(void); Block(int gridX, int gridY, int blockNo); Block& operator=(const Block& block); // static constructor for the creation of a block static Block* blockWithFile(int gridX, int gridY,int blockNo, Shape shape); ... } The blocks implementation..... Block& Block::operator=(const Block& block) { _sprite = new CCSprite(*block._sprite); _gridX = block._gridX; _gridY = block._gridY; _blockNo = block._blockNo; //again how to clone CCNode? return *this; } Block* Block::blockWithFile(int gridX, int gridY,int blockNo, Shape shape) { Block* block = new Block(); if (block && block->initBlockWithFile(gridX, gridY,blockNo, shape)) { block->autorelease(); return block; } CC_SAFE_DELETE(block); return NULL; } bool Block::initBlockWithFile(int gridX, int gridY,int blockNo, Shape shape) { setGridX(gridX); setGridY(gridY); setBlockNo(blockNo); const char* characterImg = helperFunctions::Format(shape.getFileName(),blockNo); // add to the spritesheet CCTexture2D* gameArtTexture = CCTextureCache::sharedTextureCache()->addImage("Character.pvr.ccz"); CCSpriteBatchNode::createWithTexture(gameArtTexture); // block settings _sprite = CCSprite::createWithSpriteFrameName(characterImg); // set the position of the block and add it to the layer this->setPosition(CONVERTGRIDTOACTUALPOS_X_Y(gridX,gridY)); this->addChild(_sprite); return true; } Any ideas are welcome at this point!! thanks

    Read the article

  • What are some ways of making manageable complex AI?

    - by Tetrad
    In the past I've used simple systems like finite state machines (FSMs) or hierarchical FSMs to control AI behavior. For any complex system, this pattern falls apart very quickly. I've heard about behavior trees and it seems like that's the next obvious step, but haven't seen a working implementation or really tried going down that route yet. Are there any other patterns to making manageable yet complex AI behaviors?
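
    For reference, a stripped-down sketch of the behavior-tree idea mentioned above (in Java for concreteness): every node returns SUCCESS, FAILURE or RUNNING, and composite nodes such as Sequence and Selector combine their children. Real implementations add decorators, blackboards and per-agent state, but the core really is this small.

    // A minimal behavior-tree core: every node ticks to SUCCESS, FAILURE or RUNNING.
    enum Status { SUCCESS, FAILURE, RUNNING }

    interface Node { Status tick(); }

    // Sequence: run children in order, fail fast, resume at the running child next tick.
    class Sequence implements Node {
        private final Node[] children;
        private int current = 0;
        Sequence(Node... children) { this.children = children; }
        public Status tick() {
            while (current < children.length) {
                Status s = children[current].tick();
                if (s == Status.RUNNING) return Status.RUNNING;
                if (s == Status.FAILURE) { current = 0; return Status.FAILURE; }
                current++;
            }
            current = 0;
            return Status.SUCCESS;
        }
    }

    // Selector: try children until one of them does not fail.
    class Selector implements Node {
        private final Node[] children;
        Selector(Node... children) { this.children = children; }
        public Status tick() {
            for (Node c : children) {
                Status s = c.tick();
                if (s != Status.FAILURE) return s;
            }
            return Status.FAILURE;
        }
    }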

    Read the article

  • Which is better for performance: a disposable local variable or a reused global one?

    - by petervaz
    This is for an Android game. Suppose I have a function that is called several times per second and does some calculations involving an ArrayList (or any other complex object, for that matter). Which approach would be preferred?

    Local:

    private void doStuff() {
        ArrayList<Type> XList = new ArrayList<Type>();
        // do stuff with the list
    }

    Global:

    private ArrayList<Type> XList = new ArrayList<Type>();

    private void doStuff() {
        XList.clear();
        // do stuff with the list
    }

    Read the article

  • Syncing properties across a game server

    - by Vaughan Hilts
    I'm beginning to implement a simple scripting system in my networked server, and I've hit a snag. Until now I've been wrapping my calls in functions on objects that manipulate other objects, but lately this has become a pain for simple things. For example, if I set player.HP = 1, it works server-side, but the client never sees the change unless I explicitly send a packet informing it. For more complicated changes that need several steps, like map swapping (change X, Y and Map, then do this...), I have a function, and that's fine. But what about all these small properties I want to sync?
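
    One pattern that addresses exactly this (a sketch only; the language and names are just for illustration) is to route property writes through a setter that marks the value dirty, and have the server's sync pass flush dirty values to the owning client once per tick:

    // Sketch of a "self-replicating" property: the setter records the change so the
    // network layer can flush it to the client at the end of the tick.
    class SyncedInt {
        private final String name;
        private int value;
        private boolean dirty;

        SyncedInt(String name, int initial) { this.name = name; this.value = initial; }

        void set(int v) {
            if (v != value) { value = v; dirty = true; }
        }
        int get() { return value; }

        // Called once per tick by the server's sync pass.
        void flush(java.util.List<String> outgoing) {
            if (dirty) { outgoing.add(name + "=" + value); dirty = false; }
        }
    }
    // player.hp.set(1);  // later, the sync pass sends "hp=1" to the owning client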

    Read the article

  • Tips on how to notify a user of new features in your game (Android)

    - by brent777
    I have noticed a problem when releasing new features for a game that I wrote for Android and published on the Google Play Store. Because my game is stage-based, and not a game like Hay Day, for example, which users open every day since it can't really be finished, my users are not aware of new features that I release. For example, if I publish a new version containing a couple of new stages, most devices just auto-update the game; users don't even notice and never think to check what's new. That's why an approach like popping up a dialog showcasing the new features the first time they open the game after the update isn't sufficient on its own. I am looking for tips on an approach that will draw my users back into the game, where they could then read more detail about the new features in such a dialog. I was thinking of something like a notification telling them to check out the new features after an update, but I'm not sure if that is a good idea. Any suggestions to help me solve this problem would be awesome.
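
    For the in-game half of the problem, a common pattern is to remember the last version the player has seen and show a what's-new dialog once per update. A hedged Android sketch (the preference key and dialog text are placeholders); note this only helps once the player opens the game, so it complements rather than replaces whatever is used to pull them back in:

    import android.app.Activity;
    import android.app.AlertDialog;
    import android.content.Context;
    import android.content.SharedPreferences;
    import android.content.pm.PackageManager;

    public final class WhatsNew {
        // Show a "what's new" dialog once per update by remembering the last version
        // code the player has seen. Key name and dialog text are placeholders.
        public static void maybeShow(Activity activity) {
            try {
                SharedPreferences prefs = activity.getSharedPreferences("game", Context.MODE_PRIVATE);
                int current = activity.getPackageManager()
                                       .getPackageInfo(activity.getPackageName(), 0).versionCode;
                if (current > prefs.getInt("lastSeenVersion", 0)) {
                    new AlertDialog.Builder(activity)
                            .setTitle("New stages available!")
                            .setMessage("Have a look at what was added in this update.")
                            .setPositiveButton("OK", null)
                            .show();
                    prefs.edit().putInt("lastSeenVersion", current).apply();
                }
            } catch (PackageManager.NameNotFoundException e) {
                // Worst case: the dialog is simply not shown.
            }
        }
    }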

    Read the article

  • How to offset particles from point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out beyond a certain radius from the point of origin. Right now every particle is emitted at the point of origin itself; what I want is for each particle to be offset from the origin by some amount, i.e. to start outside a circle around it. What is the best way to achieve this? At the moment I have the point of origin, the position of each particle and its rotation angle. Sorry for the poor illustrations. Edit: I was mistaken - when a particle is created, I only have the point of origin. I can calculate the particle's rotation angle in the update method, after it has moved to a new location, using atan2(). This is how I create and manage particles: a new particle is created at the enemy ship's death location, and for every particle added to the list I call Update and Draw to update its position, calculate its new angle and draw it.
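
    A sketch of the usual approach: pick the particle's angle when it is spawned, place it on a circle of the desired radius around the origin, and reuse the same angle as its travel direction. The surrounding variables (originX/originY, spawnRadius, speed, random) are assumed to exist in your particle system.

    // Choose the angle up front instead of deriving it later with atan2().
    double angle = random.nextDouble() * Math.PI * 2.0;   // or a limited spread around a base angle
    float startX = originX + spawnRadius * (float) Math.cos(angle);
    float startY = originY + spawnRadius * (float) Math.sin(angle);
    float velX = speed * (float) Math.cos(angle);         // same angle drives the movement
    float velY = speed * (float) Math.sin(angle);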

    Read the article

  • AndEngine: put bullet into the pool when it leaves the screen

    - by Ashot
    i'm creating a bullet with physics body. Bullet class (extends Sprite class) has die() method, which unregister physics connector, hide sprite and put it in pool public void die() { Log.d("bulletDie", "See you in hell!"); if (this.isVisible()) { this.setVisible(false); mPhysicsWorld.unregisterPhysicsConnector(physicsConnector); physicsConnector.setUpdatePosition(false); body.setActive(false); this.setIgnoreUpdate(true); bulletsPool.recyclePoolItem(this); } } in onUpdate method of PhysicsConnector i executes die method, when sprite leaves screen physicsConnector = new PhysicsConnector(this,body,true,false) { @Override public void onUpdate(final float pSecondsElapsed) { super.onUpdate(pSecondsElapsed); if (!camera.isRectangularShapeVisible(_bullet)) { Log.d("bulletDie","Dead?"); _bullet.die(); } } }; it works as i expected, but _bullet.die() executes TWICE. what i`m doing wrong and is it right way to hide sprites? here is full code of Bullet class (it is inner class of class that represents player) private class Bullet extends Sprite implements PhysicsConstants { private final Body body; private final PhysicsConnector physicsConnector; private final Bullet _bullet; private int id; public Bullet(float x, float y, ITextureRegion texture, VertexBufferObjectManager vertexBufferObjectManager) { super(x,y,texture,vertexBufferObjectManager); _bullet = this; id = bulletId++; body = PhysicsFactory.createCircleBody(mPhysicsWorld, this, BodyDef.BodyType.DynamicBody, bulletFixture); physicsConnector = new PhysicsConnector(this,body,true,false) { @Override public void onUpdate(final float pSecondsElapsed) { super.onUpdate(pSecondsElapsed); if (!camera.isRectangularShapeVisible(_bullet)) { Log.d("bulletDie","Dead?"); Log.d("bulletDie",id+""); _bullet.die(); } } }; mPhysicsWorld.registerPhysicsConnector(physicsConnector); $this.getParent().attachChild(this); } public void reset() { final float angle = canon.getRotation(); final float x = (float) ((Math.cos(MathUtils.degToRad(angle))*radius) + centerX) / PIXEL_TO_METER_RATIO_DEFAULT; final float y = (float) ((Math.sin(MathUtils.degToRad(angle))*radius) + centerY) / PIXEL_TO_METER_RATIO_DEFAULT; this.setVisible(true); this.setIgnoreUpdate(false); body.setActive(true); mPhysicsWorld.registerPhysicsConnector(physicsConnector); body.setTransform(new Vector2(x,y),0); } public Body getBody() { return body; } public void setLinearVelocity(Vector2 velocity) { body.setLinearVelocity(velocity); } public void die() { Log.d("bulletDie", "See you in hell!"); if (this.isVisible()) { this.setVisible(false); mPhysicsWorld.unregisterPhysicsConnector(physicsConnector); physicsConnector.setUpdatePosition(false); body.setActive(false); this.setIgnoreUpdate(true); bulletsPool.recyclePoolItem(this); } } }

    Read the article

  • XNA 3D coordinates seem off

    - by Peteyslatts
    I'm going through a book, and the example it gives seems like it should work, but when I try to implement it, it falls short. My Camera class takes three vectors to generate the View and Projection matrices. I'm giving it a position vector of (0, 0, 5), a target vector of Vector3.Zero and an up vector of Vector3.Up. My three vertices are placed at (0, 1, 0), (-1, -1, 0) and (1, -1, 0). It seems like this should work, because the vertices are centered around the origin and that's where I'm telling the camera to look, but when I run the game the only way to get the camera to see the vertices is to set its position to (0, 0, -5), and even then the triangle is skewed. Not sure what's wrong here; any suggestions would be helpful. Just to make sure I've given you everything (I don't think these are important, as the problem seems to be related to the coordinates, not the ability of the game to draw them): I'm using a VertexBuffer and a BasicEffect. My render code is as follows:

    effect.World = Matrix.Identity;
    effect.View = camera.view;
    effect.Projection = camera.projection;
    effect.VertexColorEnabled = true;

    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleStrip, verts, 0, 1);
    }
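
    A likely culprit: with the camera on +Z, the vertex order (0,1,0), (-1,-1,0), (1,-1,0) winds counter-clockwise on screen, and XNA's default rasterizer state culls counter-clockwise faces, which would explain why the triangle only shows up from (0,0,-5). A quick check using standard XNA 4.0 calls (the camera field names are illustrative):

    // Standard XNA 4.0 setup; 'view'/'projection' stand in for the camera's fields.
    view = Matrix.CreateLookAt(new Vector3(0, 0, 5), Vector3.Zero, Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4,
        GraphicsDevice.Viewport.AspectRatio,
        0.1f, 100f);

    // Either reorder the vertices so they wind clockwise when seen from the camera,
    // or disable culling while debugging:
    GraphicsDevice.RasterizerState = RasterizerState.CullNone;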

    Read the article

  • Visitor-pattern vs inheritance for rendering

    - by akaltar
    I have a game engine that currently uses inheritance to provide a generic interface for rendering:

    class renderable {
    public:
        void render();
    };

    Each class calls the gl_* functions itself, which makes the code hard to optimize and makes features like a render-quality setting hard to implement:

    class sphere : public renderable {
    public:
        void render() {
            glDrawElements(...);
        }
    };

    I was thinking of implementing a system where a Renderer class renders my objects:

    class sphere {
        void render( renderer* r ) {
            r->renderme( *this );
        }
    };

    class renderer {
        void renderme( sphere& sphere ) {
            // magically get render resources here
            // magically render a sphere here
        }
    };

    My main problem is: where should I store the VBOs, and where should I create them, when using this method? Should I even use this approach, stick to the current one, or do something else? PS: I already asked this question on SO but got no proper answers.
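
    One workable split, sketched below and not tied to any particular engine: the renderer owns every GL resource and caches it per mesh, creating it lazily the first time that mesh is rendered. The MeshData struct and the id-based cache key are assumptions for illustration.

    #include <unordered_map>
    // GL types/functions come from however GL is loaded in the engine (GLEW, glad, ...).

    struct MeshData { int id; /* vertices, indices, ... */ };

    class Renderer {
    public:
        void render(const MeshData& mesh) {
            GLuint vbo = getOrCreateVBO(mesh);
            // bind vbo, set state for the current quality level, glDrawElements(...)
        }
    private:
        GLuint getOrCreateVBO(const MeshData& mesh) {
            auto it = vbos.find(mesh.id);
            if (it != vbos.end()) return it->second;
            GLuint vbo = 0;
            glGenBuffers(1, &vbo);            // upload the mesh data with glBufferData here
            vbos[mesh.id] = vbo;
            return vbo;
        }
        std::unordered_map<int, GLuint> vbos; // owned (and eventually freed) by the renderer
    };

    The objects themselves then carry only data (geometry, material parameters), which also makes sorting draw calls and swapping quality levels a renderer-local concern.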

    Read the article

  • Particles are not moving correctly [closed]

    - by cr33p
    I want to make a particle explosion after something gets destroyed, but somehow all I get is one line of mixed colors on the screen. Here's the header: http://pastebin.com/JW5bPLj2 Here's the source: http://pastebin.com/KHmFqytD I don't get what's wrong, as it's nearly the same as in "Programming Linux Games". Can somebody help me fix that? PS: "Uint32 delta" is needed to update the pixels based on time. PPS: Maybe I should add that it's programmed in C and uses SDL. EDIT: Found the problem - it was the drawParticles function. I was passing doubles to "offset" (particles[i].x and so on are all doubles), so I ended up with values like ~MAX_INT because I didn't properly cast the doubles to ints.

    Read the article

  • Sprite rotation

    - by Kipras
    I'm using OpenGL, and people suggest using glRotate for sprite rotation, but I find that strange. My problem with it is that it rotates the whole matrix, which sort of screws up all my collision detection and so on. Imagine I had a sprite at position (100, 100), an obstacle at (100, 200), and the sprite facing it. If I rotate the sprite away from the obstacle and then move upwards along my y axis, the sprite will still intersect the obstacle, even though the projection makes it look like it's moving away from it. So I don't see a way of rotating a sprite without screwing up all the collision detection, other than doing the mathematical operations on the image itself. Am I right, or am I missing something?
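
    For what it's worth, glRotate between glPushMatrix/glPopMatrix only affects what is drawn in between, not your collision data; the intersection described above happens because the movement still steps along the world y axis regardless of facing. A small C sketch of both halves (the actual quad drawing is left as a comment):

    #include <GL/gl.h>     /* or however GL is loaded in your project */
    #include <math.h>

    /* Drawing: the rotation only lives between push/pop, so nothing else is affected. */
    void draw_sprite(float x, float y, float angle_degrees)
    {
        glPushMatrix();
        glTranslatef(x, y, 0.0f);
        glRotatef(angle_degrees, 0.0f, 0.0f, 1.0f);
        /* draw the textured quad centered at the origin here */
        glPopMatrix();
    }

    /* Logic: move along the facing direction instead of straight up the world y axis,
       and feed the same angle into the collision test. */
    void step(float *x, float *y, float angle_degrees, float speed, float dt)
    {
        float rad = angle_degrees * 3.14159265f / 180.0f;
        *x += cosf(rad) * speed * dt;
        *y += sinf(rad) * speed * dt;
    }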

    Read the article

  • 3d Collision Handling

    - by TobSpr
    I'm having trouble handling collisions in my 3D game. I have set up rays to detect collisions (screenshot), and my main routine already analyzes them. But now the question is what to do with that information. One possibility would be to move the player back to the last position, but that's dirty and doesn't work if the player can move in multiple directions at once (e.g. running along a wall). My question is: what should I do with the collision data, or in which direction and by what amount should I move the player? I'm sure there is an algorithm for this (as there is for almost everything).
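
    The standard answer is to slide: remove the component of the movement that points into the surface and keep the rest, using the hit normal from the ray. A minimal sketch with a home-made vector type (your engine's math types would replace it); pushing the player back out along the same normal by the penetration depth handles the case where they are already inside the wall.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Remove the part of the movement that points into the wall and keep the rest,
    // so the player slides along the surface instead of being snapped back.
    // 'normal' is the unit surface normal reported by the ray hit.
    Vec3 slide(const Vec3& move, const Vec3& normal)
    {
        float into = dot(move, normal);
        if (into >= 0.0f) return move;               // moving away from the wall: no change
        return { move.x - normal.x * into,
                 move.y - normal.y * into,
                 move.z - normal.z * into };
    }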

    Read the article

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels; it will be 75 pixels wide and 20 pixels tall and driven by a Pi (over the Ethernet bus). As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they push their output to a screen via pygame.display. I can drive the matrix through a memory-mapped block, so is there any way to get the pixel data out of pygame that way instead of out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial
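
    In principle yes: render to an off-screen pygame.Surface, pull the raw pixel bytes out with pygame.image.tostring(), and write them into the memory-mapped block instead of flipping a display. A sketch under stated assumptions - the file path, byte layout and the dummy SDL video driver trick are illustrative and would need to match the real matrix driver:

    import mmap
    import os

    os.environ.setdefault("SDL_VIDEODRIVER", "dummy")   # run pygame headless
    import pygame

    WIDTH, HEIGHT = 75, 20
    pygame.init()
    pygame.display.set_mode((1, 1))                     # dummy display, never shown
    frame = pygame.Surface((WIDTH, HEIGHT))

    # Illustrative target: a file-backed memory map standing in for the LED matrix driver.
    fd = os.open("/tmp/matrix.mmap", os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, WIDTH * HEIGHT * 3)
    shared = mmap.mmap(fd, WIDTH * HEIGHT * 3)

    def push_frame(surface):
        """Copy the surface's raw RGB bytes into the memory-mapped block."""
        shared.seek(0)
        shared.write(pygame.image.tostring(surface, "RGB"))

    # Game loop drawing straight onto the off-screen surface:
    frame.fill((0, 0, 0))
    pygame.draw.rect(frame, (255, 255, 255), pygame.Rect(0, 8, 2, 4))   # a paddle
    push_frame(frame)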

    Read the article

  • Is it possible to programmatically prevent a game from pausing when its window loses focus?

    - by user836045
    I'm playing Skyrim in windowed mode and am trying to create a bot for this game for personal use. I would like the bot to play the game in the background while I do other things; the only problem is that the game window pauses when it loses focus. Is there a way to make the Skyrim process think it still has focus, so it keeps running while I do something else in another window? I'm not a Windows programming expert, but would it be possible to intercept the message that tells the process it has been unfocused or minimized, so the process keeps thinking it is focused? I think Skyrim uses DirectX, so could a solution come from that end?

    Read the article

  • Good overview tool / board for visualizing Subversion branch activity?

    - by Sam
    Our team sometimes finds it confusing and time-consuming to figure out which Subversion operations have been performed on our different branches. For example: when was the Development branch last merged into the trunk? When was this particular tag created, and based on what branch? All of this information can of course be extracted from the Subversion log, but that is always a manual, time-consuming and error-prone process. The simplest solution seems to be a whiteboard with a visualization of the different branches/tags/trunk, with people drawing on it whenever something significant happens, but we're not averse to a digital solution stored centrally either. Obviously both approaches depend on people actually maintaining the model, but you'll always have that to some degree. What do you use as best practice for keeping a clear view of all Subversion operations in the current sprint (or beyond)?

    Read the article
