Search Results

Search found 26043 results on 1042 pages for 'development trunk'.

Page 501/1042 | < Previous Page | 497 498 499 500 501 502 503 504 505 506 507 508  | Next Page >

  • Is it possible to use 3G internet for a TCP/IP game server?

    - by Amit Ofer
    I'm working on a turn-based multiplayer Android game with a friend. I started writing the game server and client using socket programming. I found a few tutorials on implementing a basic chat on Android and began extending that example to suit my needs. The game is really simple: the communication only involves sending a few strings from each client to the server every turn, and sending the calculated scores back to all clients after the turn. The idea is that one of the players creates the game and thereby initializes the server, and the other players connect to that host by IP address. I tried this and it works great when all the players are on the same Wi-Fi network, or over the internet with router port forwarding. The problem is when the hosting player is on 3G: I believe the 3G IP address isn't publicly reachable and port forwarding isn't possible there (correct me if I'm wrong). Is there a way to overcome this, or is the only solution to limit my game to Wi-Fi, or to move away from the plain socket approach entirely, e.g. to a web server? What do you think would be the best approach here? Thanks.
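
    If it helps to make the question concrete, the workaround I've been considering is a small relay: both phones open outbound connections to a server on a machine with a public IP, and the relay just forwards lines between them, so neither player needs port forwarding. This is only a rough sketch in plain Java, not code from my project, and the port number is arbitrary:

        import java.io.*;
        import java.net.*;

        // Minimal relay: two clients connect, and every line received from one
        // is forwarded to the other. Assumes a machine with a publicly reachable IP.
        public class RelayServer {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(5000)) {
                    Socket a = server.accept();   // first player
                    Socket b = server.accept();   // second player
                    Thread t1 = pump(a, b);
                    Thread t2 = pump(b, a);
                    t1.join();
                    t2.join();
                } catch (InterruptedException ignored) {
                }
            }

            // Reads lines from 'from' and writes them to 'to' until the stream closes.
            private static Thread pump(Socket from, Socket to) throws IOException {
                BufferedReader in = new BufferedReader(new InputStreamReader(from.getInputStream()));
                PrintWriter out = new PrintWriter(to.getOutputStream(), true);
                Thread t = new Thread(() -> {
                    try {
                        String line;
                        while ((line = in.readLine()) != null) {
                            out.println(line);
                        }
                    } catch (IOException ignored) {
                    }
                });
                t.start();
                return t;
            }
        }

    Would a relay (or a full dedicated game server) like this be the usual answer here, or is there a lighter-weight option I'm missing?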

    Read the article

  • Confusion with floats converted into ints during collision detection

    - by TheBroodian
    So in designing a 2D platformer, I decided to use a Vector2 to track the world location of my world objects, to retain some sub-pixel precision for slow-moving objects and other subtle nuances, while representing their bodies with Rectangles, because as far as collision detection and resolution are concerned, I don't need sub-pixel precision. I thought that the following line of thought would work smoothly:

        Vector2 wrldLocation;
        Point WorldLocation;
        Rectangle collisionRectangle;

        public void Update(GameTime gameTime)
        {
            Vector2 moveAmount = velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            wrldLocation += moveAmount;
            WorldLocation = new Point((int)wrldLocation.X, (int)wrldLocation.Y);
            collisionRectangle = new Rectangle(WorldLocation.X, WorldLocation.Y, genericWidth, genericHeight);
        }

    And I guess in theory it sort of works, until I try to use it with my collision detection, which uses Rectangle.Offset() to project where collisionRectangle would end up after applying moveAmount to it. If a collision is found, I find the intersection and subtract the overlap along the intersecting sides from the given moveAmount, which should produce a corrected moveAmount that keeps the object from passing through walls and such. The issue is that Rectangle.Offset() only accepts ints, so I'm not really getting an accurate adjustment to moveAmount for a Vector2. If I leave wrldLocation out of the example above and just use WorldLocation to track my object's location, everything works smoothly, but then if my object is given a velocity of less than 1 pixel per update, that velocity may as well be 0, which I suspect I will regret further down the line. Does anybody have any suggestions about how I might go about resolving this?
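
    One direction I have been toying with, sketched here in plain Java rather than XNA (the class is made up for the example), is to keep the swept collision test in float space and only floor to ints when building the rectangle that actually gets drawn:

        // Sketch: a float-based AABB so the collision response works in the same
        // units as the position; pixels only appear at draw time.
        class FloatRect {
            float x, y, w, h;

            FloatRect(float x, float y, float w, float h) {
                this.x = x; this.y = y; this.w = w; this.h = h;
            }

            // Same idea as Rectangle.Offset, but without rounding to ints.
            FloatRect offset(float dx, float dy) {
                return new FloatRect(x + dx, y + dy, w, h);
            }

            boolean intersects(FloatRect o) {
                return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
            }
        }

    The rendering rectangle would then be built from (int) casts of these values at the very end, so nothing downstream of the collision step ever sees the fractional part. Is that a sane split, or am I just reinventing a worse Rectangle?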

    Read the article

  • Could someone explain in detail simplex and/or Perlin noise?

    - by Ryan Szemplinski
    I am really interested in Perlin/simplex noise, but I am having a difficult time understanding it. I am not very good at math, but I am willing to learn because the topic interests me greatly. If someone is willing to dedicate their time to this I would be immensely appreciative. To be more specific, an explanation of the functions involved, and of some of the calculations inside them, would help me understand. Thanks in advance!
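
    For concreteness, here is the level I can follow so far: a tiny 1D value-noise sketch in Java (value noise is a simpler relative of Perlin noise) showing the lattice-and-interpolation idea, with random values on integer points blended by a fade curve. What I am really after is an explanation of how gradient (Perlin) and simplex noise build on this:

        import java.util.Random;

        // 1D value noise: random values on integer lattice points, smoothly blended.
        // Perlin noise replaces the stored values with gradients, but the structure is similar.
        public class ValueNoise1D {
            private final float[] lattice = new float[256];

            public ValueNoise1D(long seed) {
                Random rng = new Random(seed);
                for (int i = 0; i < lattice.length; i++) {
                    lattice[i] = rng.nextFloat();        // value in [0, 1) at each lattice point
                }
            }

            public float sample(float x) {
                int i0 = (int) Math.floor(x);
                int i1 = i0 + 1;
                float t = x - i0;                        // fractional position between the two points
                float f = t * t * (3 - 2 * t);           // smoothstep fade curve
                float a = lattice[Math.floorMod(i0, lattice.length)];
                float b = lattice[Math.floorMod(i1, lattice.length)];
                return a + f * (b - a);                  // interpolate with the fade applied
            }

            public static void main(String[] args) {
                ValueNoise1D noise = new ValueNoise1D(42);
                for (float x = 0; x < 3; x += 0.25f) {
                    System.out.printf("noise(%.2f) = %.3f%n", x, noise.sample(x));
                }
            }
        }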

    Read the article

  • How do I separate model positions from view positions in MVC?

    - by tieTYT
    Using MVC in games (as opposed to web apps) always confuses me when it comes to the view. How am I supposed to keep the model agnostic of how the view presents things? I always end up giving the model a position that holds x and y, but invariably these values end up being in units of pixels, and that feels wrong. I can see the advantage* of avoiding that, but how am I supposed to do it? This idea was suggested: don't think in units of pixels, think in arbitrary distance units that just happen to map to pixels at a 1:1 ratio. The resolution is half of what it was? Then we take the x/y coordinates at 50% of their value for screen display, and your spell's casting range is still 300 units long, which is now 150 pixels. But those numbers conveniently work out. What do I do if the numbers divide in such a way that I get decimal places? Floating point feels unsafe; I think allowing decimal places would eventually cause really weird bugs in my game. *It'd let me write the model once and write different views depending on the device.
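
    To make the question concrete, this is roughly the separation I am picturing, sketched in Java with invented names: the model only ever speaks world units, and a single conversion lives in the view.

        // Model: knows nothing about pixels or screens.
        class Unit {
            float worldX, worldY;          // position in world units
            float castRange = 300f;        // also world units
        }

        // View: owns the world-unit -> pixel conversion for one particular screen.
        class ScreenView {
            private final float pixelsPerUnit;

            ScreenView(float pixelsPerUnit) {
                this.pixelsPerUnit = pixelsPerUnit;
            }

            int toScreenX(Unit u) { return Math.round(u.worldX * pixelsPerUnit); }
            int toScreenY(Unit u) { return Math.round(u.worldY * pixelsPerUnit); }
        }

    A full-resolution view and a half-resolution view would just be two ScreenView instances with different pixelsPerUnit values, while the model stays untouched. My worry is whether the rounding at that boundary causes the "weird bugs" I'm afraid of, or whether it is harmless because the model never reads the rounded values back.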

    Read the article

  • Do I need "cube subclasses" to represent the blocks in a Minecraft-like world?

    - by stighy
    I would like to try to develop a very simple game like Minecraft for my own education. My main problem at the moment is figuring out how to model the classes that represent the world, which will be made of blocks of various types (such as dirt, stone and sand). I am thinking of creating the following class structure: a Cube base class (with properties like color, strength, flammable, gravity) and subclasses such as Dirt, Stone, Sand, et cetera. My question is: do I need the Cube subclasses, or is a single Cube class sufficient?
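
    The alternative I keep circling around is a single Cube class whose behaviour comes from a shared, data-only block type, so adding a new block means adding data rather than a subclass. A sketch in Java (the property values are just examples):

        // One shared description per block type (a flyweight), instead of one subclass per type.
        enum BlockType {
            DIRT (0x8B4513, 1.0f, false, true),
            STONE(0x808080, 5.0f, false, false),
            SAND (0xC2B280, 0.5f, false, true);

            final int color;
            final float strength;
            final boolean flammable;
            final boolean affectedByGravity;

            BlockType(int color, float strength, boolean flammable, boolean affectedByGravity) {
                this.color = color;
                this.strength = strength;
                this.flammable = flammable;
                this.affectedByGravity = affectedByGravity;
            }
        }

        // Every cube in the world just holds its position and a reference to its type.
        class Cube {
            final int x, y, z;
            BlockType type;

            Cube(int x, int y, int z, BlockType type) {
                this.x = x; this.y = y; this.z = z; this.type = type;
            }
        }

    Subclasses would then only be needed for blocks with genuinely different behaviour (code rather than data), which is the part I can't judge yet.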

    Read the article

  • Detecting tile with height in isometric game

    - by Carlos Navarro
    I'm trying to create an isometric tile-based game (for iPhone) and I'm having trouble with tile heights. What I currently do (without heights) is apply some mathematical transformations to my 2D matrix (which represents the tiles) so that I know at which screen position (x, y) each isometric tile should be placed. Then, when the user taps somewhere on the screen, I take those values and pass them through an inverse function (a kind of f^-1) to work out which tile the tap belongs to. This works perfectly. My problem is: imagine that I want some tiles to be taller than others. Drawing the tile itself is pretty simple, since the z-coordinate has no transformation in the isometric projection used in games (z' = z). BUT what if I want to calculate the tile coordinates (X-tile, Y-tile) from the touch coordinates (x, y) when tiles can have different heights? Any guess?
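
    The only idea I have so far is brute force: since a tile raised by h pixels is drawn exactly h pixels higher on screen, shift the touch point back down by each possible height, run the ordinary flat inverse transform, and accept the first tile whose stored height matches. A rough sketch in Java (screenToFlatTile stands in for the f^-1 I already have, and heights are in pixels):

        // Brute-force picking with heights: undo each candidate height offset, reuse the
        // existing flat (height-less) inverse transform, and keep the first match.
        int[] pickTile(float touchX, float touchY, int[][] heightMap, int maxHeight) {
            for (int h = maxHeight; h >= 0; h--) {
                // A tile raised by h pixels is drawn h pixels higher on screen,
                // so push the touch point back down before inverting.
                int[] tile = screenToFlatTile(touchX, touchY + h);
                if (tile == null) continue;               // outside the map
                if (heightMap[tile[0]][tile[1]] == h) {
                    return tile;                          // front-most tile that explains the touch
                }
            }
            return null;                                  // nothing hit
        }

    Testing from maxHeight downwards means a tall tile in front wins over a low tile behind it, which I think is what I want, but I don't know whether there is a cleaner closed-form solution.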

    Read the article

  • Android 2D terrain scrolling

    - by Nikola Ninkovic
    I want to make infinite 2D terrain based on my algorithm. Then I want to move it along the Y axis (to the left). This is how I did it:

        public class Terrain {
            Queue<Integer> _bottom;
            Paint _paint;
            Bitmap _texture;
            Point _screen;
            int _numberOfColumns = 100;
            int _columnWidth = 20;

            public Terrain(int screenWidth, int screenHeight, Bitmap texture) {
                _bottom = new LinkedList<Integer>();
                _screen = new Point(screenWidth, screenHeight);
                _numberOfColumns = screenWidth / 6;
                _columnWidth = screenWidth / _numberOfColumns;
                for (int i = 0; i <= _numberOfColumns; i++) {
                    // Generate terrain point and put it into _bottom queue
                }
                _paint = new Paint();
                _paint.setStyle(Paint.Style.FILL);
                _paint.setShader(new BitmapShader(texture, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT));
            }

            public void update() {
                _bottom.remove();
                // Algorithm calculates next point
                _bottom.add(nextPoint);
            }

            public void draw(Canvas canvas) {
                Iterator<Integer> i = _bottom.iterator();
                int counter = 0;
                Path path = new Path();
                path.moveTo(0, _screen.y);
                while (i.hasNext()) {
                    path.lineTo(counter, _screen.y - i.next());
                    counter += _columnWidth;
                }
                path.lineTo(_screen.x, _screen.y);
                path.lineTo(0, _screen.y);
                canvas.drawPath(path, _paint);
            }
        }

    The problem is that the game is too 'fast', so I tried pausing the thread with Thread.sleep(50); in the run() method of my game thread, but then the motion looks too torn. Is there any way to slow down the drawing of my terrain?
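
    What I am now wondering is whether the right fix is to decouple the scroll speed from the frame rate instead of sleeping. Something along these lines is what I have in mind (only a sketch: the speed constant is made up, and it assumes a getColumnWidth() accessor on Terrain):

        // Advance the terrain by elapsed wall-clock time, not by "once per frame",
        // so faster devices do not scroll faster.
        class ScrollTimer {
            private long lastFrameNanos = System.nanoTime();
            private float scrolledPixels = 0f;
            private static final float SCROLL_SPEED = 120f;   // pixels per second, tune to taste

            // Call once per frame from run().
            void tick(Terrain terrain) {
                long now = System.nanoTime();
                float dt = (now - lastFrameNanos) / 1_000_000_000f;   // seconds since last frame
                lastFrameNanos = now;

                scrolledPixels += SCROLL_SPEED * dt;
                // Only consume a column from the queue once a full column width has scrolled past.
                while (scrolledPixels >= terrain.getColumnWidth()) {
                    terrain.update();                              // drop one point, add one new one
                    scrolledPixels -= terrain.getColumnWidth();
                }
            }
        }

    Is that the usual approach, or is sleeping with a smaller step the accepted way on Android?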

    Read the article

  • Android - Efficient way to draw tiles in OpenGL ES

    - by Maecky
    Hi, I am trying to write efficient code to render a tile-based map on Android. For each tile I load the corresponding bitmap (just once) and then create the tiles. I have designed a class to do this:

        public class VertexQuad {
            private float[] mCoordArr;
            private float[] mColArr;
            private float[] mTexCoordArr;
            private int mTextureName;
            private static short mCounter = 0;
            private short mIndex;

    As you can see, each tile has its x, y location, a color array, texture coordinates and a texture name. Now I want to render all the created tiles. To reduce the number of OpenGL API calls (I read somewhere that state changes are costly, so I want to keep them to a minimum), I first hand ALL the coordinate arrays, color arrays and texture coordinates over to OpenGL. After that I run two for loops. The first one iterates over the textures and binds the texture. The second for loop iterates over all tiles and puts every tile with the corresponding texture into an index buffer. After the second for loop has finished, I call gl.glDrawElements() with the corresponding index buffer to draw all tiles with that texture. For the next texture I do the same again. Now I run into some problems: allocating and filling the FloatBuffers at the start of each rendering cycle takes a lot of time. I just ran a test where I put 400 coordinates into a FloatBuffer, and that alone took about 200 ms. My questions are: is there a better way of handling the coordinate and color structures? How is this done correctly? This is obviously not the optimal way. ;) Thanks in advance, regards Markus
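
    The main change I am considering is allocating the buffers once, outside the render loop, and only rewriting their contents each frame, roughly like this (a sketch; MAX_QUADS is a made-up capacity):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        // Allocate the vertex buffer once, at load time; per frame only its contents change.
        // Direct-buffer allocation is the expensive part, refilling existing memory is cheap.
        class QuadBatch {
            private static final int MAX_QUADS = 512;                 // capacity guess
            private static final int FLOATS_PER_QUAD = 4 * 3;         // 4 vertices, x/y/z each

            private final FloatBuffer vertexBuffer = ByteBuffer
                    .allocateDirect(MAX_QUADS * FLOATS_PER_QUAD * 4)  // 4 bytes per float
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer();

            // Per frame: rewind and overwrite, reusing the same native memory.
            void fill(float[] coordsForThisFrame) {
                vertexBuffer.clear();                 // resets position/limit, keeps the allocation
                vertexBuffer.put(coordsForThisFrame);
                vertexBuffer.position(0);             // so the GL pointer call reads from the start
            }

            FloatBuffer buffer() {
                return vertexBuffer;
            }
        }

    Is reusing buffers like this the expected pattern, or should I be moving to VBOs so the data does not have to cross from Java to native memory every frame at all?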

    Read the article

  • Techniques for displaying vehicle damage

    - by norca
    I wonder how I can display vehicle damage. I am talking about a good way to show damage on screen: which kinds of approaches are common in games, and what are their benefits? What is the state of the art? One way I can imagine is to save a set of textures (normal/color/lightmaps, etc.) for each state of the car (normal, damaged, burnt out) and switch or blend between them. But does this look good without changing the model? Another way I was thinking about is preparing animations for different locations on the car, something like damage on the front, on the left/right side or on the back, and blending in the specific animation. But does that work well with good textures? What about physics engines, is it useful to use one for deforming vertex data? I think losing parts of the car (doors, sirens, weapons) could look really nice. My game is a kind of RTS in a top-down view; vehicles are not the most important units (it's no racing game), but I have quite a lot of them in. Thanks for the help.

    Read the article

  • Problem with alleg42.dll / program crashes / Allegro & Codeblocks

    - by user24152
    I'm having a serious problem with Allegro. The program should display random pixels on the screen, but when I build and run it I get the error message shown below. Here is the full code of my program:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>
        #include "allegro.h"

        #define Text_Color_Red makecol(255,0,0)

        int main()
        {
            int ret;
            int color_depth = 32;
            int x;
            int y;
            int red;
            int green;
            int blue;
            int color;

            //init allegro
            allegro_init();

            //install keyboard
            install_keyboard();

            //set color depth to 32 bits
            set_color_depth(color_depth);

            //init random seed
            srand(time(NULL));

            //init video mode to 640 x 480
            ret = set_gfx_mode(GFX_AUTODETECT_WINDOWED,640,480,0,0);
            if(ret != 0)
            {
                allegro_message(allegro_error);
                return 1;
            }

            //Display string
            textprintf(screen,font,0,0,10,0,Text_Color_Red,"Screen Resolution is: %dx%d -- Press ESC to quit !",SCREEN_W,SCREEN_H);

            //display pixels until ESC key is pressed
            //wait for keypress
            while(!key[KEY_ESC])
            {
                //set a random location
                x = 10 + rand() % (SCREEN_W-20);
                y = 10 + rand() % (SCREEN_H-20);

                //set a random color
                red = rand() % 255;
                green = rand() % 255;
                blue = rand() % 255;
                color = makecol(red,green,blue);

                //draw the pixel
                putpixel(screen, x, y, color);
            }

            //quit allegro
            allegro_exit();
        }
        END_OF_MAIN()

    Error message:

        AllegroPixels1.exe has encountered a problem and needs to close. We are sorry for the inconvenience.
        Error signature:
        AppName: allegropixels1.exe   AppVer: 0.0.0.0   ModName: alleg42.dll
        ModVer: 4.2.3.0   Offset: 0006c05c

    I am using Windows XP inside a virtual machine under Parallels 7.0.

    Read the article

  • OpenGL polygons not displaying

    - by Darestium
    I have tried to follow NeHe's OpenGL tutorial, lesson 2. I use SFML for my window creation. The problem I have is that both the triangle and the quad don't show up on the screen:

        #include <SFML/System.hpp>
        #include <SFML/Window.hpp>
        #include <iostream>

        void processEvents(sf::Window *app);
        void processInput(sf::Window *app, const sf::Input &input);
        void renderCube(sf::Window *app, sf::Clock *clock);
        void renderGlScene(sf::Window *app);
        void init();

        int main()
        {
            sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 2");
            app.UseVerticalSync(false);
            init();
            while (app.IsOpened())
            {
                processEvents(&app);
                renderGlScene(&app);
                app.Display();
            }
            return EXIT_SUCCESS;
        }

        void init()
        {
            glClearDepth(1.f);
            glClearColor(0.f, 0.f, 0.f, 0.f);

            // Enable z-buffer and read and write
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);

            // Setup a perpective projection
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluPerspective(45.f, 1.f, 1.f, 500.f);
            glShadeModel(GL_SMOOTH);
        }

        void processEvents(sf::Window *app)
        {
            sf::Event event;
            while (app->GetEvent(event))
            {
                if (event.Type == sf::Event::Closed)
                {
                    app->Close();
                }
                if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape)
                {
                    app->Close();
                }
            }
        }

        void renderGlScene(sf::Window *app)
        {
            app->SetActive();

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the screen and the depth buffer
            glLoadIdentity();                                   // Reset the view
            glTranslatef(-1.5f, 0.0f, -6.0f);                   // Move Left 1.5 units and into the screen 6.0

            glBegin(GL_TRIANGLES);
                glVertex3f( 0.0f, 1.0f, 0.0f);  // Top
                glVertex3f(-1.0,-1.0f, 0.0f);   // Bottom Left
                glVertex3f( 1.0f,-1.0f, 0.0f);  // Bottom Right
            glEnd();

            glTranslatef(3.0f, 0.0f, 0.0f);

            glBegin(GL_QUADS);                  // Draw a quad
                glVertex3f(-1.0f, 1.0f, 0.0f);
                glVertex3f( 1.0f, 1.0f, 0.0f);
                glVertex3f( 1.0f,-1.0f, 0.0f);
                glVertex3f(-1.0f,-1.0f, 0.0f);
            glEnd();
        }

    I would greatly appreciate it if someone could help me resolve my issue.

    Read the article

  • Why does Unity in 2d mode employ scaling and the default orthographic size the way it does?

    - by Neophyte
    I previously used SFML, XNA, MonoGame, etc. to create 2D games, where if I display a 100px sprite on the screen, it takes up 100px. If I use 128px tiles to create a background, the first tile will be at (0,0) while the second will be at (128,0). Unity, on the other hand, has its own odd unit system: scaling on all transforms, pixels-to-units, orthographic size, etc. So my question is two-fold: Why does Unity have this system by default for 2D? Is it for mobile dev? Is there a big benefit I'm not seeing? And how can I set up my environment so that if I have a 128x128 sprite in Photoshop, it displays as a 128x128 sprite in Unity when I run my game? Note that I am targeting desktop exclusively.

    Read the article

  • XNA - Am I screwing up the LoadContent for Texture2D?

    - by Bombcode
    I've read forum threads and questions and I've done just about everything. I need to know what I am screwing up here. Here's the code in the constructor:

        Content.RootDirectory = "GameStateContent";
        //Content.RootDirectory = "Content";

    And this is in the LoadContent method:

        menu = this.Content.Load<Texture2D>("mainmenu");

    And here's a screenshot of the folder structure: http://i.imgur.com/HnndE.png Any help on this? Thanks.

    Read the article

  • Separate collision mesh model?

    - by Menno Gouw
    I want to have another go at 3D within XNA. What I have seen from some other games is that they just have a separate, very low-poly "cage" model around the environment model. However, I cannot find any reference to this, and I don't have that much experience with XNA 3D either. Is it possible to include this cage within each of my environment models? Let's say I call the meshes within the .FBX wall and col_wall. How would I address these different meshes within XNA? The player would just have a tight collision cube around them. To make it a bit more efficient I will divide the map up into cubes and only calculate collisions for the cube the player is in. Question two: I can't find anything on cube vs. mesh collision. Is there a method for this? Or perhaps it is possible to build my collision cage out of cubes in the 3D app, and on loading the models in XNA replace them directly with cubes? Then I could just do box-to-box collision, which should be very cheap and still give the player the ability to move over ledges on the static models.

    Read the article

  • OOP implementation of BUFFS and Stats. Suggestion

    - by Mattia Manzo Manzati
    I am developing an MMORPG server using NodeJS. I am not sure how to implement buffs. I mean, equipped objects or used skills have effects on the Player(), which has many Stats(), some of which have a max cap. Effects can change a stat's value, increasing or decreasing it by a flat amount, by a percentage, or completely rewriting the value of the stat. After a while I decided to create a base class for buffs, which can be hidden (if they are cast from an equipped object) or shown if they come from an ability (a spell). Anyway, I need suggestions on how to implement it: keep an array of all active buffs for a stat and have a function calculate the buffed value each time I need the value of the stat, or...? Are there other, more OOP ways to do it? I have read "What's a way to implement a flexible buff/debuff system?", but that only implements a percentage system, where buffs can only say "+10%, +20%, etc.". I would love to have a hybrid system, which can have percentage values or static values (like WoW does), and with plain modifiers that is hard to implement, because a modifier refers to the current value of the stat. :/ Thanks for suggestions :)
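
    To show the hybrid I am after, this is roughly how I imagine evaluating a stat, sketched in Java-style code for clarity (the evaluation order, flat first, then percentage, then override and cap, is just my own assumption):

        import java.util.ArrayList;
        import java.util.List;

        // One stat with a base value, a max cap, and a list of active modifiers (buffs).
        class Stat {
            enum ModType { FLAT, PERCENT, OVERRIDE }

            static class Modifier {
                final ModType type;
                final float amount;      // +10 flat, 0.20f for +20%, or the override value
                Modifier(ModType type, float amount) { this.type = type; this.amount = amount; }
            }

            private final float base;
            private final float maxCap;
            private final List<Modifier> modifiers = new ArrayList<>();

            Stat(float base, float maxCap) { this.base = base; this.maxCap = maxCap; }

            void add(Modifier m)    { modifiers.add(m); }
            void remove(Modifier m) { modifiers.remove(m); }

            // Recomputed from the base every time, so removing a buff can never
            // leave the stat in a drifted state.
            float value() {
                float flat = 0f, percent = 0f;
                Float override = null;
                for (Modifier m : modifiers) {
                    switch (m.type) {
                        case FLAT:     flat += m.amount; break;
                        case PERCENT:  percent += m.amount; break;
                        case OVERRIDE: override = m.amount; break;
                    }
                }
                float result = (override != null) ? override : (base + flat) * (1f + percent);
                return Math.min(result, maxCap);
            }
        }

    Because value() always starts from the base, percentage modifiers never compound with earlier buffs by accident, which I believe is the problem I was running into when modifiers referred to the current value of the stat. Does this look like a reasonable structure, or is recomputing on every read too expensive for an MMORPG server?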

    Read the article

  • Drag camera/view in a 3D world

    - by Dono
    I'm trying to make a draggable view in a 3D world. Currently I've made it using the mouse position on the screen, but when I drag, the distance travelled by my mouse is not equal to the distance travelled in the 3D world. So I've tried the following: compute a ray from the mouse position into the 3D world, calculate its intersection with the ground, compare the new intersection point with the old one, and translate the camera by the difference. The problem with this method is that the ray is computed from the current camera position; then I move the camera, and the next ray is computed from the new camera position, so the difference between the old and new intersection points is no longer valid. Graphically, my camera keeps jumping back and forth between the previous and new positions. How can I make a draggable camera, perhaps with another approach? Thanks!
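
    The variant I am currently testing keeps the ground point that was under the cursor when the drag started as a fixed anchor, and every frame moves the camera so that the current ray hits that same anchor again. A sketch in Java (Vec3 and Ray are minimal stand-ins for whatever math types the engine provides; the ground is the plane y = 0):

        // Minimal math types just for this sketch.
        class Vec3 {
            final float x, y, z;
            Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
            Vec3 add(Vec3 o)      { return new Vec3(x + o.x, y + o.y, z + o.z); }
            Vec3 subtract(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
            Vec3 scale(float s)   { return new Vec3(x * s, y * s, z * s); }
        }

        class Ray {
            final Vec3 origin, direction;
            Ray(Vec3 origin, Vec3 direction) { this.origin = origin; this.direction = direction; }
        }

        // Drag by anchoring the ground point grabbed on mouse-down, not by per-frame deltas.
        class DragController {
            private Vec3 anchor;        // ground point under the cursor when the drag began

            void onDragStart(Ray pickRay) {
                anchor = intersectGround(pickRay);
            }

            // Returns the corrected camera position; the caller applies it to the real camera.
            Vec3 onDragMove(Ray pickRay, Vec3 cameraPosition) {
                Vec3 current = intersectGround(pickRay);
                if (anchor == null || current == null) return cameraPosition;
                // Shift the camera by the error so the anchor slides back under the cursor.
                // 'anchor' itself is never updated mid-drag; that is what stops the oscillation.
                return cameraPosition.add(anchor.subtract(current));
            }

            // Intersection of a ray with the plane y = 0; returns null if there is none.
            private Vec3 intersectGround(Ray ray) {
                if (Math.abs(ray.direction.y) < 1e-6f) return null;
                float t = -ray.origin.y / ray.direction.y;
                return (t < 0) ? null : ray.origin.add(ray.direction.scale(t));
            }
        }

    Since the anchor never changes during the drag, the correction converges instead of ping-ponging between the old and new positions, at least for a camera that only translates. Is this a reasonable way to do it, or is there a more standard technique?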

    Read the article

  • Is it possible to fulfill the social needs of a human being through a 3D social game like IMVU?

    - by Totty
    (I'm not advertising or promoting this game; it's just an example from my own experience, and I would like your opinion on the matter if possible.) I've started researching "things" about games, and I decided to start playing IMVU because a friend of mine said it's cool. At first it seemed like just another 3D social game, not so cool. But I "tried to like it", and after one day I can say I'm addicted to it! Yes; let me explain.

    About the game: You can go into chat rooms and move between positions. Some positions are things like sitting on a sofa or on the floor, dancing alone or with a partner, kissing, and more along those lines. In the free version of the game there is no nudity. You can even listen to music or watch YouTube. The 3D graphics are quite low-end, so it's not as realistic as today's paid PC games.

    About my experience: At first I was going into chat rooms with my friend, and they seemed very nice. There were people talking about general stuff, much like in real life. Well, I began to get to know some girls (yes, virtual girls controlled by a real girl, I hope!). Things happened. Some girls are just crazy, not like in real life; they make out before even talking. With other girls you can talk a little, then they add you to their friend list, and sometimes they invite you to their virtual places. Some girls have IMVU-only boyfriends (not in reality), and most of them don't even make out in the game, so there is a real level of commitment involved here! Although from what my friend told me, they last, for him at least, about 3 days... Some others have a real boyfriend and an IMVU boyfriend who are the same person. So far I haven't found a girl whose IMVU boyfriend is different from her real one, nor one with multiple boyfriends. There are rooms where the same people find each other every day and talk about general stuff, relationships and so on. They are nice to you, they "feel" you and show that they care. This is what amazes me: they treat you like a real human being and as their friend in the real world (of course it's not always like this). There are jealous girls too, and competitiveness between the girls, lol, I know you loled! This is kind of social.

    So today I closed the door to my room and played it all day long, and guess what, I didn't feel the need to be with a real person at all. Normally, if I stayed alone for a full day I would go quite crazy... So the question is: is it just me who seems able to fulfill my social needs this way, or is there something more to it? Thanks for your precious time reading my full question.

    Read the article

  • Path Finding for an Arena based map in 3D using NavMesh

    - by Happybirthday
    I have a 3D arena map (think of a small island surrounded by water on all sides) for a multiplayer tank fight game. The movable areas are marked using a navigation mesh made by the arena designer. My question is: what would be the best way to handle navigation in such an environment? Especially considering the case where there is a bridge at the center of the arena and you can walk both under it and over it. Suppose the enemy is standing on top of the bridge and my AI is at one of the edges of the map. How can it know whether the enemy is above or below the bridge, and how can it navigate to it?

    Read the article

  • Sensor based vs. AABB based collision

    - by Hillel
    I'm trying to write a simple collision system, which will probably be primarily used for 2D platformers, and I've been planning out an AABB system for a few weeks now, which will work seamlessly with my grid data structure optimization. I picked AABB because I want a simple system, but I also want it to be perfect. Now, I've been hearing a lot lately about a different method to handle collision, using sensors, which are placed in the important parts of the entity. I understand it's a good way to handle slopes, better than AABB collision. The thing is, I can't find a basic explanation of how it works, let alone a comparison of it and the AABB method. If someone could explain it to me, or point me to a good tutorial, I'd very much appreciate it, and also a comparison of the advantages and disadvantages of the two techniques would be nice.
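
    The picture I have pieced together so far, which I would love to have confirmed or corrected, is that a "sensor" is just a sampled point attached to the entity, for example two points under the feet that read the terrain height directly instead of resolving a box overlap. A rough sketch of that idea in Java (heightAt() stands in for whatever terrain lookup the game uses):

        // Sensor-style ground handling: sample the terrain under two foot points and
        // snap the entity to the higher of the two, which follows slopes naturally.
        interface Terrain {
            float heightAt(float worldX);   // y of the ground surface at a given x
        }

        class Player {
            float x, y;                     // y grows downward, as in most 2D screen spaces
            float halfWidth = 6f;
            float velocityY = 0f;

            void resolveGround(Terrain terrain) {
                float leftFoot  = terrain.heightAt(x - halfWidth);   // ground under the left sensor
                float rightFoot = terrain.heightAt(x + halfWidth);   // ground under the right sensor
                float ground = Math.min(leftFoot, rightFoot);        // smaller y = higher ground

                if (y >= ground) {          // at or below the surface: snap up and stop falling
                    y = ground;
                    velocityY = 0f;
                }
            }
        }

    If that is right, the trade-off against AABB seems to be that slopes "just work" because the sensors trace the surface, at the cost of placing and tuning sensor points per entity, but I would still like a proper comparison from someone who has used both.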

    Read the article

  • Possible to draw a select portion of a render target? (in XNA)

    - by TheBroodian
    I'm going to do this in reverse fashion and skip straight to the punch line, then give the back story afterward: after drawing a scene to a RenderTarget2D, is it possible to draw only a select portion of that RenderTarget2D, if I don't want the entire thing? I'm using xTile to manage world data in my game (it's a great piece of work; Colinvella, xTile's author, has made an amazing product), and for the most part it works great. xTile supports parallax effects in its layers to add some wonderful depth to 2D scenes, which was great, until I implemented a dynamic split-screen system into my game. I wanted to make a co-op game that wouldn't require players to stay in close proximity to each other, so I made it so that if the players separate too far, the single full-screen viewport 'snaps apart' and is replaced by two split-screen viewports, which then smoothly transition to their respective player targets. The effect is pretty smooth, aside from the part where the parallax backgrounds become skewed once the viewports split, because xTile's ratio for handling parallax effects depends on viewport size. This is unfortunate, because the effect would otherwise be really snazzy, but the backgrounds become pretty heavily affected when the game goes from a single viewport to multiple viewports. So Colinvella suggests using render targets to record the scene at full viewport size and then only drawing a portion of it. But as far as I can tell, that isn't even possible? That being said, I've never used render targets before, so I'm still learning, hence the question here.

    Read the article

  • LibGDX drawing map using tiles without space

    - by Enayat Muradi
    I am making a board game. To draw the map on the board I use different tiles. On some screens the map looks good, but on other screens there is a gap between the tiles. How can I make sure there is never any space between the tiles? I am designing my game at a size of 480x800 and stretch it to fit other screens. I draw the map using a for loop that draws each tile at a different (x, y) position on screen. Here is what I mean by space between tiles: on a 240x400 screen the gaps appear, while on a 360x600 screen there is no spacing between the tiles. I use a camera and the screen to draw; I don't use a Stage. I have also tried to use a Viewport but I get the same results.

        cam = new OrthographicCamera();
        cam.setToOrtho(true, gameWidth, gameHeight);
        batcher = new SpriteBatch();
        batcher.setProjectionMatrix(cam.combined);
        shapeRenderer = new ShapeRenderer();
        shapeRenderer.setProjectionMatrix(cam.combined);

    What can I do to solve the problem?
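
    Two things I am experimenting with while waiting for an answer, in case they help narrow down the question: rounding every draw position to whole pixels and forcing nearest-neighbour filtering on the tile texture, roughly like this (only a sketch; tileTexture, tileSize and map are placeholders for my own fields):

        // Avoid sub-pixel tile positions and linear-filter bleeding, which I suspect
        // cause the hairline gaps when 480x800 is stretched to odd screen sizes.
        tileTexture.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);

        batcher.setProjectionMatrix(cam.combined);
        batcher.begin();
        for (int row = 0; row < map.length; row++) {
            for (int col = 0; col < map[row].length; col++) {
                // Compute the ideal position, then snap to whole pixels so neighbouring
                // tiles can never end up a fraction of a pixel apart after scaling.
                float x = MathUtils.floor(col * tileSize);
                float y = MathUtils.floor(row * tileSize);
                batcher.draw(tileTexture, x, y, tileSize, tileSize);
            }
        }
        batcher.end();

    I have also seen padding/bleed in the texture atlas mentioned as the other half of the fix, but I would like to understand whether the gaps in my case come from rounding, from filtering, or from something else entirely.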

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3DS Max 2013? I have two vertices (actually several) which I wish to connect with a line to create an edge. I have tried everything I can think of and done several Google searches, but they only turn up methods for older versions, which say to use the "Connect" button... but I can't find the Connect button in my version (see below). This is what my menu looks like: [screenshot] These are the vertices I'm trying to connect: [screenshot] Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • Read an object from compressed file generated from ActionScript 3

    - by Last Chance
    I have made a simple game map editor and I want to save an array that contains the map tile info to a file, as below:

        var arr:Array = [.....2d tile info in it...];
        var ba:ByteArray = new ByteArray();
        ba.writeObject(arr);
        ba.compress();
        var file:File = new File();
        file.save(ba);

    I have successfully saved a compressed object to a file. Now the problem is that my server side needs to read this file, decompress the array out of it, and then convert it to a Python list. Is that possible?

    Read the article

  • How can I make a 32 bit render target with a 16 bit alpha channel in DirectX?

    - by J Junker
    I want to create a render target that is 32-bit, with 16 bits each for alpha and luminance. The closest surface formats I can find in the DirectX SDK are:

        D3DFMT_A8L8     // 16-bit, using 8 bits each for alpha and luminance.
        D3DFMT_G16R16F  // 32-bit float format, using 16 bits for the red channel and 16 bits for the green channel.

    But I don't think either of these will work, since D3DFMT_A8L8 doesn't have the precision and D3DFMT_G16R16F doesn't have an alpha channel (I need a separate blend state for alpha). How can I create a render target that allows a separate blend state for luminance and alpha, with 16-bit precision on each channel, that doesn't exceed 32 bits per pixel?

    Read the article

< Previous Page | 497 498 499 500 501 502 503 504 505 506 507 508  | Next Page >