Search Results

Search found 25377 results on 1016 pages for 'development'.


  • How to prioritize related game entity components?

    - by Paul Manta
    I want to make a game where you have to run over a bunch of zombies with your car. When moving around, the zombies have a few things to take into consideration: When there's no player around they might just roam about randomly. And even when some other component dictates a specific direction, they should wobble to the left and right randomly (like drunk people). This implies a small, random deviation in their movement. They should avoid static obstacles. When they see they are headed towards a wall, they should reorient themselves. They should avoid the car. They should try to predict where the car will be based on its velocity and try to move out of the way. When they can, they should try to get near the player. All these types of decisions seem like they should be implemented as different components. But how should I manage them? How can I give different components different weights that reflect the importance of each decision (in a given situation)? I would need some other component that acts as a manager, but do you have any tips on how I should implement it? Or maybe there's a better solution?...
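
    One common pattern here is steering-behaviour blending: each component proposes a direction and a weight, and a small arbiter sums and normalises the proposals. A minimal C++ sketch of that idea (all type and function names below are illustrative, not taken from the question):

      #include <cmath>
      #include <vector>

      struct Vec2 { float x = 0.f, y = 0.f; };

      struct Steering {
          Vec2  direction;   // desired movement direction for this behaviour
          float weight;      // how important this proposal is right now
      };

      struct Behaviour {
          virtual ~Behaviour() = default;
          // Each component (wander, avoid walls, dodge car, chase player)
          // computes its own proposal and chooses its own weight, e.g. the
          // car-avoidance weight grows as the car gets closer.
          virtual Steering propose() = 0;
      };

      // The "manager" is just this blend: a weighted average of proposals,
      // normalised into the final movement direction for the frame.
      Vec2 blend(const std::vector<Behaviour*>& behaviours)
      {
          Vec2 sum;
          float total = 0.f;
          for (Behaviour* b : behaviours) {
              Steering s = b->propose();
              sum.x += s.direction.x * s.weight;
              sum.y += s.direction.y * s.weight;
              total += s.weight;
          }
          if (total > 0.f) { sum.x /= total; sum.y /= total; }
          float len = std::sqrt(sum.x * sum.x + sum.y * sum.y);
          if (len > 0.f) { sum.x /= len; sum.y /= len; }
          return sum;
      }

    Situational priorities then live entirely in how each behaviour picks its weight, rather than in a hard-coded ordering.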

    Read the article

  • starting rails in test environment

    - by Brian D.
    I'm trying to load up rails in the test environment using a ruby script. I've tried googling a bit and found this recommendation: require "../../config/environment" ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'test' This seems to load up my environment alright, but my development database is still being used. Am I doing something wrong? Here is my database.yml file... however I don't think it is the issue development: adapter: mysql encoding: utf8 reconnect: false database: BrianSite_development pool: 5 username: root password: dev host: localhost # Warning: The database defined as "test" will be erased and # re-generated from your development database when you run "rake". # Do not set this db to the same as development or production. test: adapter: mysql encoding: utf8 reconnect: false database: BrianSite_test pool: 5 username: root password: dev host: localhost production: adapter: mysql encoding: utf8 reconnect: false database: BrianSite_production pool: 5 username: root password: dev host: localhost I can't use ruby script/server -e test because I'm trying to run ruby code after I load rails. More specifically what I'm trying to do is: run a .sql database script, load up rails and then run automated tests. Everything seems to be working fine, but for whatever reason rails seems to be loading in the development environment instead of the test environment. Here is a shortened version of the code I am trying to run: system "execute mysql script here" require "../../config/environment" ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'test' describe Blog do it "should be initialized successfully" do blog = Blog.new end end I don't need to start a server, I just need to load my rails code base (models, controllers, etc..) so I can run tests against my code. Thanks for any help.

    Read the article

  • Using OpenCl to jiggle the Pipe

    - by TOAOGG
    I've got the idea to use OpenCL to program a simple renderer. A clear drawback is that this approach won't benefit from the dedicated hardware the way the built-in pipeline functions do (I think). Would it be useful to do this in OpenCL? Let's say we want to cull as early as possible, so we won't have many per-vertex operations. Is it correct that culling is done after the vertex shader? For static vertices that won't get affected by the shader, it could be interesting to cull them beforehand. Another idea would be a deferred renderer. So the main question is: would it make sense to program a renderer in OpenCL (aside from the effort)? The resulting picture would be drawn in OpenGL.
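
    For the early-culling part of the question: whether it runs on the CPU or in an OpenCL kernel, pre-culling static geometry before it ever reaches the vertex shader usually boils down to a frustum test per object or batch. A minimal C++ sketch of that test, assuming world-space frustum planes and per-object bounding spheres (names are illustrative):

      struct Plane  { float nx, ny, nz, d; };      // plane: n.p + d = 0
      struct Sphere { float cx, cy, cz, radius; }; // bounding sphere

      bool insideFrustum(const Sphere& s, const Plane frustum[6])
      {
          for (int i = 0; i < 6; ++i) {
              const Plane& p = frustum[i];
              float dist = p.nx * s.cx + p.ny * s.cy + p.nz * s.cz + p.d;
              if (dist < -s.radius)
                  return false;   // completely outside one plane: cull it
          }
          return true;            // inside or intersecting the frustum
      }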

    Read the article

  • What file formats and conventions should I support to make my game engine artist-friendly?

    - by Avi
    I'm writing a game engine, and I want to know what I should do to make it more artist-friendly. I don't want to be too limiting in terms of what file formats I support, etc. Some specific questions: Are there specific formats artists like to model in? Does it not matter because the 3D modeler abstracts the data storage away? Is it okay if I don't support per-vertex coloration in my game engine? If I have to store a diffuse, specular, ambient, and emissive color value for each vertex, it doubles the size of vertices in the buffer. Is it reasonable to ask artists to do all these things in textures / maps? Any other tips you have about making it so that artists have to adapt their style to my specific engine as little as possible would be nice.

    Read the article

  • Frame rate on one of two machines running same code seems to be capped at 60 for no reason

    - by dennmat
    ISSUE I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean by this is that on the laptop it will display 300 fps (for example), whereas on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop I'll see my frame rate drop accordingly; the same test on the desktop results in no change and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, then it will descend. It seems as though its "theoretical" frames would be 400 or 500, but it will never actually get to that and only does 60 until there's too much to handle at 60. This 60 frame cap is coming from nowhere. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this. SETUPS Desktop: Visual Studio Express 2012 Windows 7 Ultimate 64-bit Laptop: Visual Studio Express 2010 Windows 7 Ultimate 64-bit The libraries (Allegro, Box2D) are the same versions on both setups. CODE Main Loop: while(!abort) { frameTime = al_get_time(); if (frameTime - lastTime >= 1.0) { lastFps = fps/(frameTime - lastTime); lastTime = frameTime; avgMspf = cumMspf/fps; cumMspf = 0.0; fps = 0; } /** DRAWING/UPDATE CODE **/ fps++; cumMspf += al_get_time() - frameTime; } Note: There is no blocking code in the loop at any point. Where I'm at: My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, the double is represented as [seconds].[finer-resolution], and seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same. And I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.
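
    A cap at exactly 60 fps on one machine but not the other usually points at vsync (in the driver or in how the display was created) rather than at the timing code. A hedged Allegro 5 sketch of explicitly requesting vsync off before the display is created; the GPU driver's control panel can still override this, which would be worth checking on the desktop:

      #include <allegro5/allegro.h>

      int main()
      {
          if (!al_init())
              return 1;

          // ALLEGRO_VSYNC: 1 forces vsync on, 2 forces it off. ALLEGRO_SUGGEST
          // lets the driver have the final say, so a "force on" setting in the
          // GPU control panel would still produce a 60 fps cap.
          al_set_new_display_option(ALLEGRO_VSYNC, 2, ALLEGRO_SUGGEST);

          ALLEGRO_DISPLAY *display = al_create_display(800, 600);
          if (!display)
              return 1;

          // ... main loop with the al_get_time()-based FPS counter goes here ...

          al_destroy_display(display);
          return 0;
      }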

    Read the article

  • How can I render multiple windows with DirectX 9 in C++?

    - by Friso1990
    I'm trying to render multiple windows, using DirectX 9 and swap chains, but even though I create 2 windows, I only see the first one that I've created. My RendererDX9 header is this: #include <d3d9.h> #include <Windows.h> #include <vector> #include "RAT_Renderer.h" namespace RAT_ENGINE { class RAT_RendererDX9 : public RAT_Renderer { public: RAT_RendererDX9(); ~RAT_RendererDX9(); void Init(RAT_WindowManager* argWMan); void CleanUp(); void ShowWin(); private: LPDIRECT3D9 renderInterface; // Used to create the D3DDevice LPDIRECT3DDEVICE9 renderDevice; // Our rendering device LPDIRECT3DSWAPCHAIN9* swapChain; // Swapchain to make multi-window rendering possible WNDCLASSEX wc; std::vector<HWND> hwindows; void Render(int argI); }; } And my .cpp file is this: #include "RAT_RendererDX9.h" static LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ); namespace RAT_ENGINE { RAT_RendererDX9::RAT_RendererDX9() : renderInterface(NULL), renderDevice(NULL) { } RAT_RendererDX9::~RAT_RendererDX9() { } void RAT_RendererDX9::Init(RAT_WindowManager* argWMan) { wMan = argWMan; // Register the window class WNDCLASSEX windowClass = { sizeof( WNDCLASSEX ), CS_CLASSDC, MsgProc, 0, 0, GetModuleHandle( NULL ), NULL, NULL, NULL, NULL, "foo", NULL }; wc = windowClass; RegisterClassEx( &wc ); for (int i = 0; i< wMan->getWindows().size(); ++i) { HWND hWnd = CreateWindow( "foo", argWMan->getWindow(i)->getName().c_str(), WS_OVERLAPPEDWINDOW, argWMan->getWindow(i)->getX(), argWMan->getWindow(i)->getY(), argWMan->getWindow(i)->getWidth(), argWMan->getWindow(i)->getHeight(), NULL, NULL, wc.hInstance, NULL ); hwindows.push_back(hWnd); } // Create the D3D object, which is needed to create the D3DDevice. renderInterface = (LPDIRECT3D9)Direct3DCreate9( D3D_SDK_VERSION ); // Set up the structure used to create the D3DDevice. Most parameters are // zeroed out. We set Windowed to TRUE, since we want to do D3D in a // window, and then set the SwapEffect to "discard", which is the most // efficient method of presenting the back buffer to the display. And // we request a back buffer format that matches the current desktop display // format. D3DPRESENT_PARAMETERS deviceConfig; ZeroMemory( &deviceConfig, sizeof( deviceConfig ) ); deviceConfig.Windowed = TRUE; deviceConfig.SwapEffect = D3DSWAPEFFECT_DISCARD; deviceConfig.BackBufferFormat = D3DFMT_UNKNOWN; deviceConfig.BackBufferHeight = 1024; deviceConfig.BackBufferWidth = 768; deviceConfig.EnableAutoDepthStencil = TRUE; deviceConfig.AutoDepthStencilFormat = D3DFMT_D16; // Create the Direct3D device. Here we are using the default adapter (most // systems only have one, unless they have multiple graphics hardware cards // installed) and requesting the HAL (which is saying we want the hardware // device rather than a software one). Software vertex processing is // specified since we know it will work on all cards. On cards that support // hardware vertex processing, though, we would see a big performance gain // by specifying hardware vertex processing. 
renderInterface->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwindows[0], D3DCREATE_SOFTWARE_VERTEXPROCESSING, &deviceConfig, &renderDevice ); this->swapChain = new LPDIRECT3DSWAPCHAIN9[wMan->getWindows().size()]; this->renderDevice->GetSwapChain(0, &swapChain[0]); for (int i = 0; i < wMan->getWindows().size(); ++i) { renderDevice->CreateAdditionalSwapChain(&deviceConfig, &swapChain[i]); } renderDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW); // Set cullmode to counterclockwise culling to save resources renderDevice->SetRenderState(D3DRS_AMBIENT, 0xffffffff); // Turn on ambient lighting renderDevice->SetRenderState(D3DRS_ZENABLE, TRUE); // Turn on the zbuffer } void RAT_RendererDX9::CleanUp() { renderDevice->Release(); renderInterface->Release(); } void RAT_RendererDX9::Render(int argI) { // Clear the backbuffer to a blue color renderDevice->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 255 ), 1.0f, 0 ); LPDIRECT3DSURFACE9 backBuffer = NULL; // Set draw target this->swapChain[argI]->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer); this->renderDevice->SetRenderTarget(0, backBuffer); // Begin the scene renderDevice->BeginScene(); // End the scene renderDevice->EndScene(); swapChain[argI]->Present(NULL, NULL, hwindows[argI], NULL, 0); } void RAT_RendererDX9::ShowWin() { for (int i = 0; i < wMan->getWindows().size(); ++i) { ShowWindow( hwindows[i], SW_SHOWDEFAULT ); UpdateWindow( hwindows[i] ); // Enter the message loop MSG msg; while( GetMessage( &msg, NULL, 0, 0 ) ) { if (PeekMessage( &msg, NULL, 0U, 0U, PM_REMOVE ) ) { TranslateMessage( &msg ); DispatchMessage( &msg ); } else { Render(i); } } } } } LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ) { switch( msg ) { case WM_DESTROY: //CleanUp(); PostQuitMessage( 0 ); return 0; case WM_PAINT: //Render(); ValidateRect( hWnd, NULL ); return 0; } return DefWindowProc( hWnd, msg, wParam, lParam ); } I've made a sample function to make multiple windows: void RunSample1() { //Create the window manager. RAT_ENGINE::RAT_WindowManager* wMan = new RAT_ENGINE::RAT_WindowManager(); //Create the render manager. RAT_ENGINE::RAT_RenderManager* rMan = new RAT_ENGINE::RAT_RenderManager(); //Create a window. //This is currently needed to initialize the render manager and create a renderer. wMan->CreateRATWindow("Sample 1 - 1", 10, 20, 640, 480); wMan->CreateRATWindow("Sample 1 - 2", 150, 100, 480, 640); //Initialize the render manager. rMan->Init(wMan); //Show the window. rMan->getRenderer()->ShowWin(); } How do I get the multiple windows to work?
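
    One thing that stands out in the code above is the message loop: ShowWin enters a blocking GetMessage loop for the first window and only moves on to the next window when that loop ends, so the second window never gets shown or rendered. A hedged sketch of an alternative, with one non-blocking PeekMessage pump that shows every window first and then renders every swap chain each iteration (member names follow the question, but this is a sketch rather than drop-in code):

      void RAT_RendererDX9::ShowWin()
      {
          // Show all windows up front instead of one at a time.
          for (size_t i = 0; i < hwindows.size(); ++i) {
              ShowWindow(hwindows[i], SW_SHOWDEFAULT);
              UpdateWindow(hwindows[i]);
          }

          // One message pump for the whole application: drain pending
          // messages, then render one frame per swap chain.
          bool running = true;
          while (running) {
              MSG msg;
              while (PeekMessage(&msg, NULL, 0U, 0U, PM_REMOVE)) {
                  if (msg.message == WM_QUIT) { running = false; break; }
                  TranslateMessage(&msg);
                  DispatchMessage(&msg);
              }
              for (size_t i = 0; running && i < hwindows.size(); ++i)
                  Render(static_cast<int>(i));
          }
      }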

    Read the article

  • Spawning bullets on command in Box2D

    - by recharge330
    I'm making a simple bullet hell game, but I can't figure out how to get my character to shoot. Let's say I have bulletBody and shipBody; how would I continually spawn bulletBodies using the shipBody coordinates? I've tried a function that uses an array of b2Bodies and just assigns them the body def and fixture, but that causes the game to crash. C++ sample code would be best, but any help is appreciated. EDIT: It looks like any reference to my b2World in a function will cause the game to crash. How do I declare the bodies without using a b2World as an argument in the function?
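
    On the EDIT: a crash whenever a b2World is used as a function argument is often a sign the world is being passed by value; copying a b2World breaks its internal state, so it should be passed by reference or pointer. A hedged Box2D sketch of a spawn function along those lines (the constants and helper name are made up for illustration):

      #include <Box2D/Box2D.h>

      b2Body* spawnBullet(b2World& world, const b2Body& ship)
      {
          b2BodyDef bodyDef;
          bodyDef.type = b2_dynamicBody;
          bodyDef.position = ship.GetPosition();  // spawn at the ship
          bodyDef.bullet = true;                  // better CCD for fast bodies

          b2Body* bullet = world.CreateBody(&bodyDef);

          b2CircleShape shape;
          shape.m_radius = 0.05f;

          b2FixtureDef fixtureDef;
          fixtureDef.shape = &shape;
          fixtureDef.density = 1.0f;
          bullet->CreateFixture(&fixtureDef);

          // Fire straight "up"; use the ship's angle to aim in practice.
          bullet->SetLinearVelocity(b2Vec2(0.0f, 30.0f));
          return bullet;
      }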

    Read the article

  • pointers to member functions in an event dispatcher

    - by derivative
    For the past few days I've been trying to come up with a robust event handling system for the game (using a component based entity system, C++, OpenGL) I've been toying with. class EventDispatcher { typedef void (*CallbackFunction)(Event* event); typedef std::unordered_map<TypeInfo, std::list<CallbackFunction>, hash_TypeInfo > TypeCallbacksMap; EventQueue* global_queue_; TypeCallbacksMap callbacks_; ... } global_queue_ is a pointer to a wrapper EventQueue of std::queue<Event*> where Event is a pure virtual class. For every type of event I want to handle, I create a new derived class of Event, e.g. SetPositionEvent. TypeInfo is a wrapper on type_info. When I initialize my data, I bind functions to events in an unordered_map using TypeInfo(typeid(Event)) as the key that corresponds to a std::list of function pointers. When an event is dispatched, I iterate over the list calling the functions on that event. Those functions then static_cast the event pointer to the actual event type, so the event dispatcher needs to know very little. The actual functions that are being bound are functions for my component managers. For instance, SetPositionEvent would be handled by void PositionManager::HandleSetPositionEvent(Event* event) { SetPositionEvent* s_p_event = static_cast<SetPositionEvent*>(event); ... } The problem I'm running into is that to store a pointer to this function, it has to be static (or so everything leads me to believe.) In a perfect world, I want to store pointers member functions of a component manager that is defined in a script or whatever. It looks like I can store the instance of the component manager as well, but the typedef for this function is no longer simple and I can't find an example of how to do it. Is there a way to store a pointer to a member function of a class (along with a class instance, or, I guess a pointer to a class instance)? Is there an easier way to address this problem?
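
    One common way around the raw-function-pointer restriction is to store std::function<void(Event*)> and bind the member function to an instance with a lambda; std::type_index can then stand in for a hand-rolled TypeInfo wrapper. A minimal sketch under those assumptions (it substitutes standard-library types for the question's own wrappers):

      #include <functional>
      #include <list>
      #include <typeindex>
      #include <unordered_map>

      struct Event { virtual ~Event() = default; };
      struct SetPositionEvent : Event { float x = 0.f, y = 0.f; };

      class EventDispatcher {
      public:
          using Callback = std::function<void(Event*)>;

          // Store a bound member function: the lambda captures the instance,
          // so any object (a component manager, a scripted object, ...) works.
          template <typename E, typename T>
          void subscribe(T* instance, void (T::*method)(Event*))
          {
              callbacks_[std::type_index(typeid(E))].push_back(
                  [instance, method](Event* e) { (instance->*method)(e); });
          }

          void dispatch(Event* e)
          {
              auto it = callbacks_.find(std::type_index(typeid(*e)));
              if (it == callbacks_.end()) return;
              for (Callback& cb : it->second) cb(e);
          }

      private:
          std::unordered_map<std::type_index, std::list<Callback>> callbacks_;
      };

      // Usage, assuming the question's PositionManager:
      //   dispatcher.subscribe<SetPositionEvent>(&positionManager,
      //       &PositionManager::HandleSetPositionEvent);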

    Read the article

  • Help with timebased scoring algorithm

    - by Dave
    I'm trying to devise an appropriate scoring system for my game. The game in essence has a finite number of tasks to complete (say 20), and the quicker you complete these tasks, the more points you get. I had devised a basic way of doing this using bands of time, multiplied by a score for that band, multiplied by the number of tasks solved within that time band, i.e. (Time Band) = (Points): 1-5 secs = 15, 5-10 secs = 10, 10-20 secs = 5, 20-30 secs = 3, 40 secs onwards = 1. So for example, if I did 3 tasks in the 1-5 sec band I'd get 15*3 = 45 points; if I found 10 in the 20-30 sec band I'd get 3*10 = 30 points. I'm sure there is a more mathematical way of doing this using powers of some kind, but I just can't think how, and I'm hoping someone has already done something similar. Many thanks in advance.
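
    One "powers of some kind" option is exponential decay: full points at time zero, halving every fixed number of seconds, clamped to a minimum of 1. A small C++ sketch; the constants are only examples to tune:

      #include <algorithm>
      #include <cmath>

      int scoreForTask(double seconds)
      {
          const double maxPoints = 15.0;  // matches the old 1-5 secs band
          const double halfLife  = 8.0;   // points halve every 8 seconds
          double points = maxPoints * std::pow(0.5, seconds / halfLife);
          return std::max(1, static_cast<int>(std::lround(points)));
      }

      // scoreForTask(0) == 15, scoreForTask(8) == 8 (rounded from 7.5),
      // and very slow solves are clamped to the minimum of 1 point.

    The total score is then just the sum of scoreForTask over the (up to 20) solved tasks, with no bands to maintain.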

    Read the article

  • What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits?

    - by user12365
    I have server objects that have corresponding client objects. The data to be kept in sync is inside the server object's key/value dictionary. To keep the client objects in sync with the server objects, I want the server to send the key/value dictionary every frame for each object. What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits? Bonus constraint 1: For each type of object, the values of some keys change more often than others. Bonus constraint 2: Memory usage on the server side is relatively expensive.
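
    Given bonus constraint 1, a common starting point is delta encoding: both sides agree on small numeric ids for the keys, and each frame the server sends a change bitmask followed by only the values that actually changed, so rarely-changing keys cost nothing most frames. A hedged C++ sketch of the server-side packing, assuming at most 16 fixed 32-bit fields per object (the layout is invented for illustration):

      #include <cstdint>
      #include <cstring>
      #include <vector>

      struct Snapshot {
          static const int kFieldCount = 16;   // <= 16 fields -> 16-bit mask
          uint32_t fields[kFieldCount];
      };

      void writeDelta(const Snapshot& previous, const Snapshot& current,
                      std::vector<uint8_t>& out)
      {
          uint16_t mask = 0;
          for (int i = 0; i < Snapshot::kFieldCount; ++i)
              if (current.fields[i] != previous.fields[i])
                  mask |= static_cast<uint16_t>(1u << i);

          // 2-byte mask, then only the values that changed this frame.
          out.push_back(static_cast<uint8_t>(mask & 0xFF));
          out.push_back(static_cast<uint8_t>(mask >> 8));
          for (int i = 0; i < Snapshot::kFieldCount; ++i) {
              if (mask & (1u << i)) {
                  uint8_t bytes[4];
                  std::memcpy(bytes, &current.fields[i], 4);
                  out.insert(out.end(), bytes, bytes + 4);
              }
          }
      }

    The receiver reads the same mask to know which fields to overwrite; the server only needs to keep one previous snapshot per object, which keeps bonus constraint 2 in mind.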

    Read the article

  • What is the standard way of delivering HTML5 games to portals and such?

    - by Bane
    Let me explain what I mean by "standard way of delivering"... Think about Flash game sites. Flash games can be delivered as a single file, either hosted by the site, or, I guess, provided by someone else. HTML5 games, on the other hand, don't have something so standard. Usually, they have their own page, and portals just link to that page. I think that it greatly hinders the purpose of that portal, because, well, you want people to stay on your site and look for other games. Now, I think that some kind of iframe-based way of delivering games would help solve this problem greatly. I saw some games doing that, and they were often included on tutorial sites to show a live example, which is obviously a great thing. So, is there a standard at all? Any suggestions? Can you create a game that just preloads itself in an iframe (I heard something about a "single document" or something)?

    Read the article

  • Why am I seeing streak artifacts on the cube map I'm rendering?

    - by BobDole
    I'm getting strange streaks on my cube map when rendering to it. Here is my code that is being called each frame: void drawCubeMap(void) { int face; glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glBindFramebuffer(GL_FRAMEBUFFER, fbo); //glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTexture); //glClearColor(1.0f, 1.0f, 1.0f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glViewport(0,0,sizeT, sizeT); for (face = 0; face < 6; face++) { glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubeMapTexture, 0); drawSpheres(); } glBindFramebuffer(GL_FRAMEBUFFER, 0); glBindTexture(GL_TEXTURE_2D, 0); glViewport(0,0,900, 900); } Any idea what it might be? The streaking occurs when I'm rotating the spheres around the main sphere.
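
    One plausible cause: in the code above, glClear runs before the loop, so it only clears whichever face happens to be attached at that moment; the other five faces keep last frame's contents, which shows up as streaking once the spheres move. A hedged reordering that clears after each face is attached (it reuses the question's fbo, sizeT, cubeMapTexture and drawSpheres, so it assumes the same setup):

      void drawCubeMap(void)
      {
          glBindFramebuffer(GL_FRAMEBUFFER, fbo);
          glViewport(0, 0, sizeT, sizeT);

          for (int face = 0; face < 6; face++) {
              glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                                     cubeMapTexture, 0);
              // Clear this face (and the shared depth buffer) before drawing.
              glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
              drawSpheres();
          }

          glBindFramebuffer(GL_FRAMEBUFFER, 0);
          glViewport(0, 0, 900, 900);
      }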

    Read the article

  • How to use GetActiveUniform (in SharpGL)?

    - by frankie
    Basically, the question is in the title: I cannot understand how to use the GetActiveUniform function. public void GetActiveUniform(uint program, uint index, int bufSize, int[] length, int[] size, uint[] type, string name); My attempt looks like this (everything is compiled and linked): var uniformSize = new int[1]; var unifromLength = new int[1]; var uniformType = new uint[1]; var uniformName = ""; Gl.GetActiveUniform(Id, index, uniformNameMaxLength[0], unifromLength, uniformSize, uniformType, uniformName); After the call I get the proper uniformSize, length and type, but not the name.

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    I am in the process of creating a 3D C++ game and I was wondering what would be more beneficial when dealing with game assets with regards to storage. I have seen some games use a single compressed asset file with everything in it, and others with lots of little compressed files. If I had lots of individual files I would not need to load a large file at once and use up memory, but the code would have to go about file seeking when the level loads to find all the correct files needed. There is no file seeking needed when dealing with one large file, but again, what about all the assets not currently needed that would get loaded with the one file? I could also have an asset file for each level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me what other advantages and disadvantages there are to either way of doing things.
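
    Whichever way the assets are compressed, the single-file approach usually comes down to an index of name -> (offset, size) entries at the front of the archive, so individual assets can still be loaded lazily and shared assets can live in one pack that several levels reference. A minimal C++ sketch of reading such an index; the on-disk layout here is invented purely for illustration:

      #include <cstdint>
      #include <fstream>
      #include <string>
      #include <unordered_map>
      #include <vector>

      class AssetPack {
      public:
          // Read the table of contents: count, then (name, offset, size) entries.
          bool open(const std::string& path)
          {
              file_.open(path, std::ios::binary);
              if (!file_) return false;
              uint32_t count = 0;
              file_.read(reinterpret_cast<char*>(&count), sizeof(count));
              for (uint32_t i = 0; i < count; ++i) {
                  uint16_t nameLen = 0;
                  file_.read(reinterpret_cast<char*>(&nameLen), sizeof(nameLen));
                  std::string name(nameLen, '\0');
                  file_.read(&name[0], nameLen);
                  Entry e;
                  file_.read(reinterpret_cast<char*>(&e.offset), sizeof(e.offset));
                  file_.read(reinterpret_cast<char*>(&e.size), sizeof(e.size));
                  index_[name] = e;
              }
              return static_cast<bool>(file_);
          }

          // Load one asset on demand, so memory is only spent on what the
          // current level actually asks for.
          bool load(const std::string& name, std::vector<char>& out)
          {
              auto it = index_.find(name);
              if (it == index_.end()) return false;
              out.resize(it->second.size);
              file_.seekg(static_cast<std::streamoff>(it->second.offset));
              file_.read(out.data(), static_cast<std::streamsize>(out.size()));
              return static_cast<bool>(file_);
          }

      private:
          struct Entry { uint64_t offset = 0; uint32_t size = 0; };
          std::ifstream file_;
          std::unordered_map<std::string, Entry> index_;
      };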

    Read the article

  • Android Activity access Unity Classes

    - by Anomaly
    I have made my own C# classes in Unity; is there any way I can access these classes from the Android Activity that starts the UnityPlayer? Example: I have a C# class called testClass in Unity: class testClass{ public static string myString="test string"; } From the Android activity in Java I want to access that class: String str = testClass.myString; Is this possible? If so, how? Or is there some other way to do this? In the end I basically want to communicate between my Android activity and the UnityPlayer object. Thanks in advance. EDIT: OK, so I looked at building Android plugins for Unity, but this wasn't satisfactory to me. I ended up building a socket client-server interface in Unity with C# and another one in Java for the Android app: Unity listens on port X and broadcasts on port Y; the Android activity listens on port Y and broadcasts on port X. This is necessary as both interfaces are running on the same host. So that's how I solved my problem, but I'm open to any suggestions if anyone knows a better way of communicating between the UnityPlayer and your app.

    Read the article

  • LibGDX - Textures rendering at wrong position

    - by ACluelessGuy
    Update 2: Let me further explain my problem, since I think I didn't make it clear enough: the Y-coordinate at the bottom of my screen should be 0. Instead it is the height of my screen. That means the "higher" I touch/click the screen, the lower my y-coordinate gets. On top of that, the origin is not inside my screen, at least not the 0 y-coordinate. Original post: I'm currently developing a tower defence game for fun using LibGDX. There are places on my map where the player is or is not allowed to put towers. So I created different ArrayLists holding rectangles representing the tiles on my map (towerPositions): for(int i = 0; i < map.getLayers().getCount(); i++) { curLay = (TiledMapTileLayer) map.getLayers().get(i); //For all Cells of current Layer for(int k = 0; k < curLay.getWidth(); k++) { for(int j = 0; j < curLay.getHeight(); j++) { curCell = curLay.getCell(k, j); //If there is a actual cell if(curCell != null) { tileWidth = curLay.getTileWidth(); tileHeight = curLay.getTileHeight(); xTileKoord = tileWidth*k; yTileKoord = tileHeight*j; switch(curLay.getName()) { //If layer named "TowersAllowed" picked case "TowersAllowed": towerPositions.add(new Rectangle(xTileKoord, yTileKoord, tileWidth, tileHeight)); // ... AND SO ON If the player clicks on an "allowed" field later on, he has the opportunity to build a tower of his choice via a menu. Now here is the problem: the towers render, but they render at the wrong position. (They appear at seemingly random places on the map; I can't see a pattern.) for(Rectangle curRect : towerPositions) { if(curRect.contains(xCoord, yCoord)) { //Using a certain tower in this example (left the menu out if(gameControl.createTower("towerXY")) { //RenderObject is just a class holding the Texture and x/y coordinates renderList.add(new RenderObject(new Texture(Gdx.files.internal("TowerXY.png")), curRect.x, curRect.y)); } } } Later on I render it: game.batch.begin(); for(int i = 0; i < renderList.size() ; i++) { game.batch.draw(renderList.get(i).myTexture, renderList.get(i).x, renderList.get(i).y); } game.batch.end(); regards

    Read the article

  • What is the most efficient way to add and remove Slick2D sprites?

    - by kirchhoff
    I'm making a game in Java with Slick2D and I want to create planes which shoot: int maxBullets = 40; static int bullet = 0; Missile missile[] = new Missile[maxBullets]; I want to create/move my missiles in the most efficient way, and I would appreciate your advice: public void shoot() throws SlickException{ if(bullet<maxBullets){ if(missile[bullet] != null){ missile[bullet].resetLocation(plane.getCenterX(), plane.getCenterY(), plane.image.getRotation()); }else{ missile[bullet] = new Missile("resources/missile.png", plane.getCenterX(), plane.getCenterY(), plane.image.getRotation()); } }else{ bullet = 0; missile[bullet].resetLocation(plane.getCenterX(), plane.getCenterY(), plane.image.getRotation()); } bullet++; } I created the method resetLocation in my Missile class in order to avoid loading the resource again. Is that correct? In the update method I've got this to move all the missiles: if(bullet > 0 && bullet < maxBullets){ float hyp = 0.4f * delta; if(bullet == 1){ missile[0].move(hyp); }else{ for(int x = 0; x<bullet; x++){ missile[x].move(hyp); } } }

    Read the article

  • Which Kinect package for PC takes care of motion tracking too?

    - by Extrakun
    I am aware that there are open-source drivers for interfacing Kinect with the PC. However, the drivers at OpenKinect seem to provide only the images and depth data (from reading their wiki and API), so it seems that you need to provide your own imaging solution. My question is: is there any all-in-one package, with samples/sources, that not only grabs images from Kinect but also does the imaging/motion detection for you?

    Read the article

  • Tutorial on OpenGL texture formats

    - by Cyan
    Looking at the documentation for glGetTexImage(), one can see that there are plenty of available texture formats: GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, GL_TEXTURE_1D_ARRAY, GL_TEXTURE_2D_ARRAY, GL_TEXTURE_RECTANGLE, GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. I've only used GL_TEXTURE_2D for the time being. Is there any place / documentation where one can learn about these other formats? PS: and yes, of course, I've googled for it; the results are pretty poor.

    Read the article

  • What is better for the overall performance and feel of the game: one setInterval performing all the work, or many of them doing individual tasks?

    - by Bane
    This question is, I suppose, not limited to JavaScript, but it is the language I use to create my game, so I'll use it as an example. For now, I have structured my HTML5 game like this: var fps = 60; var game = new Game(); setInterval(game.update, 1000/fps); And game.update looks like this: this.update = function() { this.parseInput(); this.logic(); this.physics(); this.draw(); } This seems a bit inefficient; maybe I don't need to do all of those things at once. An obvious alternative would be to have more intervals performing individual tasks, but is it worth it? var fps = 60; var game = new Game(); setInterval(game.draw, 1000/fps); setInterval(game.physics, 1000/a); //where "a" is some constant, performing the same function as "fps" ... Which approach should I go with, and why? Is there a better alternative? Also, in case the second approach is the best, how frequently should I perform the tasks?

    Read the article

  • Is it a good idea to make a game for one aspect ratio and arbitrary screen resolution?

    - by Mimars
    After several very small games I have decided to make something more standalone (2D) and playable. However, I have met the problem every game faces when it is going to be played at multiple screen resolutions. Basically, after some research I see that there are several solutions. This seems to be the simplest one: let's say I define a constant aspect ratio for the game (16:10) and the whole game will be created for a resolution of 1680 x 1050. The game will be rendered in this resolution and then I will be able to scale the render to match the player's display resolution. Therefore the game might be playable on almost any resolution, while it would keep the aspect ratio. So, if the game was run on a 4:3 display, the top and the bottom of the display would be filled with black color. It seems easy, but my question is: is this a good approach for a simple game? The game will be simple, but I want to maintain high quality.
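
    The scaling step itself is a few lines of arithmetic: pick one uniform scale factor, centre the result, and let the leftover space become the black bars. A small C++ sketch of that calculation, using the question's virtual resolution:

      #include <algorithm>

      struct Viewport { int x, y, width, height; };

      Viewport fitVirtualResolution(int screenW, int screenH)
      {
          const float virtualW = 1680.0f;   // design resolution from the question
          const float virtualH = 1050.0f;
          float scale = std::min(screenW / virtualW, screenH / virtualH);

          Viewport vp;
          vp.width  = static_cast<int>(virtualW * scale);
          vp.height = static_cast<int>(virtualH * scale);
          vp.x = (screenW - vp.width) / 2;    // leftover space on either axis
          vp.y = (screenH - vp.height) / 2;   // becomes the black bars
          return vp;
      }

    On a 1024x768 (4:3) display this yields a 1024x640 viewport with 64-pixel bars at the top and bottom, which matches the behaviour described.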

    Read the article

  • How to rotate a group of objects around a common center?

    - by user1662292
    I've made a model in 3D Studio Max 9. It consists of a variety of cubes, cylinders, etc. In XNA I've imported the model okay and it shows correctly. However, when I apply rotation, each component in the model rotates around its own centre. I want the model to rotate as a single unit. I've linked the components in 3D Max and they rotate as I want in Max. protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); model = Content.Load<Model>("Models/Alien1"); } protected override void Update(GameTime gameTime) { camera.Update(1f, new Vector3(), graphics.GraphicsDevice.Viewport.AspectRatio); rotation += 0.1f; base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); Matrix[] transforms = new Matrix[model.Bones.Count]; model.CopyAbsoluteBoneTransformsTo(transforms); Matrix worldMatrix = Matrix.Identity; Matrix rotationYMatrix = Matrix.CreateRotationY(rotation); Matrix translateMatrix = Matrix.CreateTranslation(location); worldMatrix = rotationYMatrix * translateMatrix; foreach (ModelMesh mesh in model.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.World = worldMatrix * transforms[mesh.ParentBone.Index]; effect.View = camera.viewMatrix; effect.Projection = camera.projectionMatrix; effect.EnableDefaultLighting(); effect.PreferPerPixelLighting = true; } mesh.Draw(); } base.Draw(gameTime); } More Info: Rotating the object via its properties works fine, so I'm guessing there's something up with the code rather than with the object itself. Translating the object also causes the objects to get moved independently of each other rather than as a single model, and each piece becomes spread around the scene. The model is in .X format.

    Read the article

  • How to pause and resume a game in XNA using the same key?

    - by user13095
    I'm attempting to implement a really simple game state system; this is my first game - I'm trying to make a Tetris clone. I'd consider myself a novice programmer at best. I've been testing it out by drawing different textures to the screen depending on the current state. The 'Not Playing' state seems to work fine: I press Space and it changes to 'Playing', but when I press 'P' to pause or resume the game nothing happens. I tried checking current and previous keyboard states, thinking it was happening too fast for me to see, but again nothing seemed to happen. If I change either the pause or the resume key, so they're different, it works as intended. I'm clearly missing something obvious, or completely lacking some know-how in regards to how Update and/or the keyboard states work. Here's what I have in my Update method at the moment: protected override void Update(GameTime gameTime) { KeyboardState CurrentKeyboardState = Keyboard.GetState(); // Allows the game to exit if (CurrentKeyboardState.IsKeyDown(Keys.Escape)) this.Exit(); // TODO: Add your update logic here if (CurrentGameState == GameStates.NotPlaying) { if (CurrentKeyboardState.IsKeyDown(Keys.Space)) CurrentGameState = GameStates.Playing; } if (CurrentGameState == GameStates.Playing) { if (CurrentKeyboardState.IsKeyDown(Keys.P)) CurrentGameState = GameStates.Paused; } if (CurrentGameState == GameStates.Paused) { if (CurrentKeyboardState.IsKeyDown(Keys.P)) CurrentGameState = GameStates.Playing; } base.Update(gameTime); }

    Read the article

  • Should iOS games use a Timer?

    - by ????
    No matter what frameworks we use -- Core Graphics, Cocos2D, OpenGL ES -- to write games, should a timer be used (for games that have animation even when the user doesn't give any input, such as after firing a missile and waiting to see if the UFO is hit)? I read that NSTimer might not get fired until after the scheduled time (interval), and that CADisplayLink can be delayed and fired at a later time as well, only that it tells you how late it is so you can move the object further, which can make the object look like it skipped a frame. Must we use a timer? And if so, what is the best one to use?

    Read the article

  • How do I draw a texture to a MSTerrain object?

    - by Brad
    I'm using Farseer to make a game in XNA and I can't seem to figure this out. I've decided to use MSTerrain for making my game's terrain because I wanted destructible terrain and MSTerrain seemed like the best bet. Unfortunately, I'm stumped on how to actually show the terrain. When I generate the terrain it's visible in debug view, but MSTerrain does not have a Draw method, so I'm wondering how it is supposed to be drawn to the screen? Is it worth pursuing? I'm starting to think that MSTerrain is more trouble than it's worth, is there another better way to do this with bodies?

    Read the article
