Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.


  • Frame rate on one of two machines running same code seems to be capped at 60 for no reason

    - by dennmat
    ISSUE: I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the FPS (and ms/frame) correctly; on my desktop it does not. On the laptop it will display, say, 300 FPS, whereas the desktop never shows more than 60. If I add 100 objects to the game on the laptop, the frame rate drops accordingly; the same test on the desktop shows no change and the frames stay at 60. It takes a lot (~300) of entities before the desktop drops a frame, and only then does the number descend. It seems as though its "theoretical" frame rate would be 400 or 500, but it never gets there and sits at 60 until there is too much to handle at 60. This 60-frame cap is coming from nowhere; I'm not doing any frame limiting myself. Something external seems to be limiting my loop iterations on the desktop, and for the last couple of days I've been scratching my head trying to figure out how to debug this.
    SETUPS
    Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
    Laptop: Visual Studio Express 2010, Windows 7 Ultimate 64-bit
    The libraries (Allegro, Box2D) are the same versions on both setups.
    CODE (main loop):
        while (!abort) {
            frameTime = al_get_time();
            if (frameTime - lastTime >= 1.0) {
                lastFps = fps / (frameTime - lastTime);
                lastTime = frameTime;
                avgMspf = cumMspf / fps;
                cumMspf = 0.0;
                fps = 0;
            }
            /** DRAWING/UPDATE CODE **/
            fps++;
            cumMspf += al_get_time() - frameTime;
        }
    Note: there is no blocking code in the loop at any point.
    Where I'm at: My understanding of al_get_time() is that its resolution can differ between systems, but it is never worse than seconds, and the double is represented as [seconds].[finer-resolution]; since I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same, and I promise it's the same code on both machines. Googling didn't help much, and although it's technically not a big deal, I'd really like to figure this out or have it explained, whichever comes first. Even just an idea of how to go about narrowing down possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.

    Read the article

  • Unable to create suitable graphics device?

    - by kraze
    I've been following the Eye of the Dragon tutorial, which is basically a guide to making a 2D RPG game. I recently finished the part about making pop-up screens in the menu and changed the game to run full screen. Now whenever I load the game the screen just goes black with my mouse cursor sitting there, and I cannot get out of it other than with Ctrl+Alt+Del. Once I do that, it says "No suitable graphics card found, unable to create graphics device." I read somewhere that XNA doesn't allow more than one screen when any of them is full screen, but it wasn't very informative. Does anyone have any idea what's going on and/or how to fix it? In case it helps, this is the code for the graphics device:
        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 900;
            graphics.PreferredBackBufferHeight = 768;
            graphics.IsFullScreen = true;
            this.Window.Title = "Eyes of the Dragon";
            Content.RootDirectory = "Content";
        }
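
    One possibility worth ruling out (an assumption, not a confirmed diagnosis): 900x768 may simply not be a display mode the adapter supports in full screen, which can make device creation fail. A small XNA 4.0 sketch for checking that:
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical check: list the full-screen modes the default adapter supports,
        // instead of assuming 900x768 is one of them.
        foreach (DisplayMode mode in GraphicsAdapter.DefaultAdapter.SupportedDisplayModes)
        {
            System.Diagnostics.Debug.WriteLine(mode.Width + "x" + mode.Height);
        }

        // Or simply match the desktop resolution when going full screen:
        graphics.PreferredBackBufferWidth = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Width;
        graphics.PreferredBackBufferHeight = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Height;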

    Read the article

  • Why isn't one of the constant buffers being loaded inside the shader?

    - by Paul Ske
    I got the model to load under tessellation; the only problem is that one of the constant buffers isn't actually updating the tessellation factor inside the hull shader. I put a message box at the rendering point, so I know for sure the tessellation factor is assigned to the dynamic constant buffer. Inside the shader code, where it says .Edges[1] = tessellationAmount;, the tessellationAmount is supposed to be sent from the dynamic buffer to the shader; otherwise it's just a plain box. To explain better: there is a matrixBuffer, a cameraBuffer and a TessellationBuffer for constants, and a multiBuffer array that holds the matrix, camera and tessellation buffers. So when I set the hull shader, pixel shader, vertex shader and domain shader, they get their constants from the multiBuffer, e.g. devcon->HSSetConstantBuffers(0, 3, multiBuffer);. The only workaround I can see would be to go into the shader and hard-code how much the edges tessellate, and the inside as well with the same number. My question is: why wouldn't the TessellationBuffer work in the shader?

    Read the article

  • SDL_DisplayFormat works, but not SDL_DisplayFormatAlpha

    - by Bounderby
    The following code is intended to display a green square on a black background. It executes, but the green square does not show up. However, if I change SDL_DisplayFormatAlpha to SDL_DisplayFormat, the square is rendered correctly. So what don't I understand? It seems to me that I am creating *surface with an alpha mask, and I am using SDL_MapRGBA to map my green color, so it would be consistent to use SDL_DisplayFormatAlpha as well. (I removed error checking for clarity, but none of the SDL API calls fail in this example.)
        #include <SDL.h>

        int main(int argc, const char *argv[])
        {
            SDL_Init( SDL_INIT_EVERYTHING );
            SDL_Surface *screen = SDL_SetVideoMode( 640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF );
            SDL_Surface *temp = SDL_CreateRGBSurface( SDL_HWSURFACE, 100, 100, 32, 0, 0, 0,
                ( SDL_BYTEORDER == SDL_BIG_ENDIAN ? 0x000000ff : 0xff000000 ) );
            SDL_Surface *surface = SDL_DisplayFormatAlpha( temp );
            SDL_FreeSurface( temp );
            SDL_FillRect( surface, &surface->clip_rect, SDL_MapRGBA( screen->format, 0x00, 0xff, 0x00, 0xff ) );
            SDL_Rect r;
            r.x = 50;
            r.y = 50;
            SDL_BlitSurface( surface, NULL, screen, &r );
            SDL_Flip( screen );
            SDL_Delay( 1000 );
            SDL_Quit();
            return 0;
        }

    Read the article

  • Black Screen on pressing back button in libgdx

    - by user26384
    In my game, when I touch an advertisement and then press the back button to return to the game, I get a black screen. I referred to this link and tried to change IosGraphics.java, but the change is not reflected in the MonoTouch project. I did the following:
    1. Extracted nightly.zip and opened gdx-backend-iosmonotouch-sources.
    2. Changed IosGraphics.java there.
    3. Made a new jar file, gdx-backend-iosmonotouch.jar, and replaced the original jar in the nightly folder.
    4. Compressed all the files from the nightly folder into a .zip file.
    5. Used this .zip file to make a new project through gdx-setup-ui.jar.
    When I open my project in MonoTouch, I can see from com-gdx-backendios.dll that the changes in IosGraphics are not being reflected. Am I missing something? How do I solve this? I even tried to open gdx-backend-iosmonotouch-sources.jar with WinRAR, edit IosGraphics.java and save it; even that didn't work.

    Read the article

  • How to prioritize related game entity components?

    - by Paul Manta
    I want to make a game where you have to run over a bunch of zombies with your car. When moving around, the zombies have a few things to take into consideration:
    • When there's no player around they might just roam about randomly. Even when some other component dictates a specific direction, they should wobble to the left and right randomly (like drunk people); this implies a small, random deviation in their movement.
    • They should avoid static obstacles. When they see they are headed towards a wall, they should reorient themselves.
    • They should avoid the car. They should try to predict where the car will be based on its velocity and try to move out of the way.
    • When they can, they should try to get near the player.
    All these types of decisions seem like they should be implemented in different components, but how should I manage them? How can I give different components different weights that reflect the importance of each decision in a given situation? I would need some other component that acts as a manager, but do you have any tips on how I should implement it? Or maybe there's a better solution?
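
    A common way to manage this kind of mix is weighted steering: each behaviour proposes a direction, and a small manager blends the proposals with situation-dependent weights. A minimal sketch of that idea in C#/XNA terms (the ISteeringBehavior and Zombie names are made up for illustration, not taken from the question):
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        // Each behaviour (wander, avoid walls, avoid car, seek player) proposes a
        // direction and a weight for the current situation.
        public interface ISteeringBehavior
        {
            Vector2 GetDirection(Zombie zombie);   // roughly unit-length desired direction
            float GetWeight(Zombie zombie);        // importance right now (0 = ignore)
        }

        public class SteeringManager
        {
            private readonly List<ISteeringBehavior> behaviors = new List<ISteeringBehavior>();

            public void Add(ISteeringBehavior behavior) { behaviors.Add(behavior); }

            // Blend all proposals into one movement direction for this zombie.
            public Vector2 Steer(Zombie zombie)
            {
                Vector2 result = Vector2.Zero;
                foreach (ISteeringBehavior behavior in behaviors)
                    result += behavior.GetDirection(zombie) * behavior.GetWeight(zombie);

                if (result != Vector2.Zero)
                    result.Normalize();
                return result;
            }
        }
    With this layout the drunken wobble can be just one more behaviour with a small constant weight, rather than a special case inside the others.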

    Read the article

  • What's a good way to check that a player has clicked on an object in a 3D game?

    - by imja
    I'm programming a 3D game (using C++ and OpenGL), and I have a few 3D objects in the scene; for this example we can say they are boxes. I want to let the player click on those boxes to select them (i.e. they might change color), with the typical restriction that if more than one box is located where the user clicked, only the one closest to the camera gets selected. What would be the best way to do this? The fact that these objects go through several transforms before reaching window coordinates is what makes this a bit tricky. One approach I thought about: when the player clicks on the screen, I could normalize the x, y coordinates of the mouse click and then transform the bounding-box coordinates of the objects into clip space so that I could compare them to the normalized mouse coordinates. I guess I could then do some sort of ray-box intersection test to see if any objects lie in the path of the mouse click. I'm afraid I might be overcomplicating it. Any better methods out there?
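
    For what it's worth, the unproject-then-ray-test idea sketched in the question is the standard approach. Here is a compact illustration in C#/XNA terms (a different API from the asker's OpenGL setup, shown only to illustrate the pattern; in OpenGL the equivalent step is gluUnProject or multiplying by the inverse of projection * view):
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class Picking
        {
            // Unproject the click at the near and far planes, then form a ray between them.
            public static Ray GetPickRay(Viewport viewport, int mouseX, int mouseY, Matrix view, Matrix projection)
            {
                Vector3 near = viewport.Unproject(new Vector3(mouseX, mouseY, 0f), projection, view, Matrix.Identity);
                Vector3 far  = viewport.Unproject(new Vector3(mouseX, mouseY, 1f), projection, view, Matrix.Identity);
                return new Ray(near, Vector3.Normalize(far - near));
            }

            // Test the ray against every box and keep the nearest hit.
            public static int PickClosest(Ray ray, BoundingBox[] boxes)
            {
                int picked = -1;
                float closest = float.MaxValue;
                for (int i = 0; i < boxes.Length; i++)
                {
                    float? hit = ray.Intersects(boxes[i]);   // distance along the ray, or null for a miss
                    if (hit.HasValue && hit.Value < closest) { closest = hit.Value; picked = i; }
                }
                return picked;   // index of the box nearest the camera, or -1
            }
        }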

    Read the article

  • Game getting progressively laggier?

    - by Valentin Krummenacher
    I have a small game in HTML5 that uses socket.io to communicate with a node.js server. My problem is that ever since my last update, something seems to "chunk up" in the background, making the game laggier and laggier the longer it runs. The update defined a few temporary local variables with var (you know, variables that are only used during one function and then not needed anymore), alongside a lot of other changes, and I'm not even sure whether this update or something else is causing it. Might the var declarations have caused it? Or what other reasons might this strange complication have?

    Read the article

  • How to use GetActiveUniform (in SharpGL)?

    - by frankie
    Basically, the question is in the title: I cannot understand how to use the GetActiveUniform function.
        public void GetActiveUniform(uint program, uint index, int bufSize, int[] length, int[] size, uint[] type, string name);
    My attempt looks like this (everything is compiled and linked):
        var uniformSize = new int[1];
        var unifromLength = new int[1];
        var uniformType = new uint[1];
        var uniformName = "";
        Gl.GetActiveUniform(Id, index, uniformNameMaxLength[0], unifromLength, uniformSize, uniformType, uniformName);
    After the call I get the proper uniform size, length and type, but not the name.

    Read the article

  • Swapping axis labels between 2D and 3D coordinates

    - by Will
    My game world is 3D, but the map is only 2D. It is natural to think of the map as having an X and Y axis, and it is natural to think of the world as having X, Y and Z axes, where Y is upwards. That is to say, X, Y in 2D map coordinates is X, Z in 3D coordinates. What conventions and approaches do you use to keep things straight at the code level and make mapping between them natural? (Is Y usually upwards in 3D? Or do you use X and Z in map coordinates, or something else?)
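
    One convention that keeps the swap in exactly one place is a pair of conversion helpers, so the rest of the code never juggles axes by hand. A minimal sketch in C#/XNA terms (the Vector2/Vector3 types and the MapSpace name are assumptions for illustration, not from the question):
        using Microsoft.Xna.Framework;

        static class MapSpace
        {
            // Map (X, Y) -> world (X, elevation, Y), with Y up in the 3D world.
            public static Vector3 ToWorld(Vector2 map, float elevation)
            {
                return new Vector3(map.X, elevation, map.Y);
            }

            // World (X, Y, Z) -> map (X, Z), dropping the height.
            public static Vector2 ToMap(Vector3 world)
            {
                return new Vector2(world.X, world.Z);
            }
        }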

    Read the article

  • Where can I find good (well organized) examples of game code?

    - by smasher
    Where can I find good (well-organized) examples of game code? I'm hoping to pick up some organizational tips. Most examples in books are too short and leave out lots of detail for the sake of brevity. I'm particularly interested in how to group your variables and methods so that another programmer would know where to look in the code: for example, initializers at the top, then methods that take input, then methods that update views. I don't care about a particular language, as long as it's OOP. I looked at the Quake 2 and 3 sources, but they're straight C and not much help for tips on organizing your objects. So, have you seen some good source? Any pointers to code that makes you say "wow, that's well organized" would be great.

    Read the article

  • How do I draw a 2D plane and rotate the camera (to be a board) in a 3D XNA game?

    - by Mech0z
    I am trying to create a simple board game, but the 3D part is really killing me. From what I can gather I have created a plane, but it never moves even though I turn the camera. That partially makes sense, since I only apply the camera to the 3D model, but in my head it makes no sense at all: if I turn the camera, shouldn't it affect ALL my models? With this code the camera only "cares" about the 3D cylinder; the plane stays completely still.
        private void OnDraw(object sender, GameTimerEventArgs e)
        {
            SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue);

            foreach (ModelMesh mesh in cylinderModel.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    //effect.World = Matrix.CreateRotationX((float)e.TotalTime.TotalSeconds * 2);
                    effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
                    effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }

            //cameraPosition.Z -= 5.0f;
            _effect.World = Matrix.CreateRotationZ((MathHelper.ToRadians(((float)e.TotalTime.Milliseconds / 2) % 360)));
            foreach (EffectPass pass in _effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                SharedGraphicsDeviceManager.Current.GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, _vertices, 0, 1, VertexPositionColor.VertexDeclaration);
            }
        }
    Is there a way to get the camera to affect all models?
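
    One detail worth noting (an assumption, since the declaration of _effect isn't shown): if _effect is a BasicEffect, only its World matrix is ever set, so it keeps its default View and Projection and never sees the camera. A hedged sketch of what setting it just before the pass loop might look like:
        // Assumed: _effect is a BasicEffect. Give the plane the same camera matrices
        // the cylinder's meshes receive, so turning the camera moves it as well.
        _effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
        _effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
        _effect.VertexColorEnabled = true;   // so the VertexPositionColor vertices keep their colors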

    Read the article

  • Vertex data split into separate buffers or in one structure?

    - by kiba2
    Is it better to have all vertex data in one structure, like this:
        class MyVertex
        {
            int x, y, z;
            int u, v;
            int normalx, normaly, normalz;
        };
    or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because the values are the same for each instance of a shared vertex, and that seems to be true for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One case where this doesn't seem to work is other kinds of meshes, say a cube, where the texture coordinates for each face may be the same but end up different where the vertices are shared. Does everybody normally keep them separate? Won't this make them less space-efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) vs non-indexed buffers in the same VBO?
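
    For comparison only, the interleaved ("one structure") option looks like this in XNA/C# terms (a cross-API illustration, since the question is about OpenGL; the offsets 0/12/24 are the byte positions of each field in the 32-byte vertex):
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Interleaved vertex: position, normal and texture coordinate share one struct,
        // and the declaration describes the byte layout of a single vertex.
        public struct MyVertex : IVertexType
        {
            public Vector3 Position;
            public Vector3 Normal;
            public Vector2 TextureCoordinate;

            public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
                new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0));

            VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
        }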

    Read the article

  • Multiplayer approach for tablets on wi-fi (FPS/TPS)? Server authority, etc

    - by Fraggle
    Looking for some guidance on what has worked well for others in implementing a multiplayer FPS/TPS-type game on tablets (probably just 2-6 players at a time). The main issue is that tablets/phones are typically "less" connected than, say, a console or PC might be, and therefore my thought is that complete server authority over everything is not going to work. But maybe I'm off base on that. So I guess I'm struggling with what (if anything) should happen on a central server and what should happen locally. Or is a centralized approach even needed? Some approaches I might take:
    • Player movement: my thought is to control this locally (player-owner) and update the server with the position (which it then sends out to the other clients). Use client-side prediction for opponent players so that a connection loss will not, for example, show a plane stop in mid-air. The server will send updates and try to smoothly correct an opponent player's position to the server's version, but won't update the owner's position on the owner's device.
    • Powerups (health kit/ammo/coins/etc.): need to see them disappear immediately, so do it locally. Add the health locally, but perhaps allow for server correction: if the server doesn't see the player near that powerup, reject the powerup and adjust the server-side health for the player.
    • Firing weapons: have to see it happen right away, so fire locally, create the local bullet and send it on its way. Send an RPC to the server so that this player also fires on the other clients.
    • Hit detection: gets trickier. Make the bullet/projectile disappear locally, and perhaps perform local hit animations (shaking, whatever). Non-authoritative approach: take the damage locally and send an RPC to the server or the others to update health and report the hit. Authoritative approach: don't take the damage or adjust health; the server will do that if it detects a hit.
    Anyway, that's my current thought stream. Let me know what you think of the above or what has worked for you.

    Read the article

  • Make a basic running sprite effect

    - by PhaDaPhunk
    I'm building my very first game with XNA and I'm trying to get my sprite to run. Everything is working fine for the first sprite: if I press D the sprite looks right, if I press A the sprite looks left, and if I don't touch anything the sprite is the default one. Now, when the sprite moves right, I want to alternate the running sprites (left leg, right leg, left leg, etc.). xCurrent is the sprite currently drawn, xRunRight is the first running sprite and xRunRight1 is the one that has to alternate with xRunRight while running right. This is what I have now:
        protected override void Update(GameTime gameTime)
        {
            float timer = 0f;
            float interval = 50f;
            bool frame1 = false;
            bool frame2 = false;
            bool running = false;
            KeyboardState FaKeyboard = Keyboard.GetState();

            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            if ((FaKeyboard.IsKeyUp(Keys.A)) || (FaKeyboard.IsKeyUp(Keys.D)))
            {
                xCurrent = xDefault;
            }
            if (FaKeyboard.IsKeyDown(Keys.D))
            {
                timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
                if (timer > interval)
                {
                    if (frame1)
                    {
                        xCurrent = xRunRight;
                        frame1 = false;
                    }
                    else
                    {
                        xCurrent = xRunRight1;
                        frame1 = true;
                    }
                }
                xPosition += xDeplacement;
            }
        }
    Any ideas? I've been stuck on this for a while. Thanks in advance, and let me know if you need any other part of the code.
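
    One thing the snippet suggests (an observation, not a confirmed diagnosis): timer and frame1 are locals, so they reset to 0 and false on every Update and the interval never accumulates. A hedged sketch of keeping them as fields and resetting the timer when the frame swaps (the field names are made up for illustration):
        // Assumed fields on the Game class instead of locals in Update:
        float runTimer = 0f;
        const float RunFrameInterval = 50f;   // milliseconds per animation frame
        bool onFirstRunFrame = true;

        // Inside Update, while D is held:
        runTimer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
        if (runTimer > RunFrameInterval)
        {
            xCurrent = onFirstRunFrame ? xRunRight : xRunRight1;   // alternate the two run sprites
            onFirstRunFrame = !onFirstRunFrame;
            runTimer = 0f;                                          // restart the interval
        }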

    Read the article

  • Do you think that in the future it'll be possible to develop games on OS X by using Python and the latest library "Sprite kit" made by Apple? [on hold]

    - by Cesco
    I don't understand a lot about game engines and modules for Python, even though I'm aware of the existence of PyGame and Pyglet, so please don't bash me too hard if I write something wrong in this question :-) When I upgraded my Mac to the latest version of OS X, I noticed for the first time that Apple is providing a library named Sprite Kit for developing games on both iOS and OS X. It looks fairly complete to me, and the fact that it is maintained by a big company gives me the impression it will be well supported for the time being; in summary, it looks... cool. Currently, in order to take advantage of Sprite Kit you need to code in Objective-C. Since I don't know Objective-C, only a little bit of Python, do you think there's a chance that sooner or later someone will make a Python wrapper for it? Thank you very much and best regards.

    Read the article

  • Help Decide between C#/XNA client or Java

    - by Sparkky
    The game runs on a client/server architecture currently set up for TCP, and the client code was built in AS3 to be web-based. We're running into three problems with the client:
    1. AS3 has no hardware acceleration, so we are having some slowdown when implementing certain features.
    2. TCP is really frustrating for a sidescroller when you're talking with a server. I'm having a heck of a time with the interpolation/extrapolation needed to make everyone else look smooth while minimizing lag. I would much rather use UDP and throw in something similar to the age-old Quake interpolation/extrapolation.
    3. No right click.
    I work professionally with C#, and I did all my university work (almost 2 years ago) in Java. Java appeals to me because of its compatibility, while C# appeals to me because I've heard so much good about XNA and I love Visual Studio. For a client/server-based MMO-ish sidescroller, in your opinion, should I stick with AS3 and the TCP protocol, should I abandon some of my audience, ramp up the graphics and hit C#, or should I journey back to the land of Java? Thanks :D

    Read the article

  • Which Kinect package for PC takes care of motion tracking too?

    - by Extrakun
    I am aware that there are open-source drivers for interfacing the Kinect with a PC. However, the drivers at OpenKinect seem to provide only the image and depth data (from my reading of their wiki and API); it seems you need to provide your own imaging solution. My question is: is there any all-in-one package, with samples/sources, that not only grabs images from the Kinect but also does the imaging/motion detection for you?

    Read the article

  • Common way to store model transformations

    - by redreggae
    I'm wondering what the best way is to store transformations in a model class. What I came up with is to store the translation and scaling in a Vector3 and the rotation in a Matrix4. On each update (frame) I multiply the three matrices (first building a translation and a scaling matrix) to get the world matrix; this way I have no accumulated error.
        world = translation * scaling * rotation
    Another way would be to store the rotation in a quaternion, but then I would pay a high cost to convert it to a matrix every time step. If I lerp the model, I convert the rotation matrix to a quaternion and then back to a matrix. For speed optimization I have a dirty flag for each transformation, so that I only do a matrix multiplication if necessary:
        world = translation
        if (isScaled) { world *= scaling }
        if (isRotated) { world *= rotation }
    Is this a common way to do it, or is it more common to have only one Matrix4 for all transformations? And is it better to store the rotation only as a quaternion? For info: I'm currently building a CSS3D engine in JavaScript, but these questions are relevant for every 3D engine.
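
    As a cross-language illustration only (C#/XNA, while the asker's engine is JavaScript), here is one common shape for this: keep position/scale/quaternion as the source of truth and rebuild the cached world matrix lazily behind a single dirty flag. The Transform name and the row-vector scale*rotation*translation order are assumptions of this sketch:
        using Microsoft.Xna.Framework;

        public class Transform
        {
            public Vector3 Position = Vector3.Zero;
            public Vector3 Scale = Vector3.One;
            public Quaternion Rotation = Quaternion.Identity;

            private Matrix world = Matrix.Identity;
            private bool dirty = true;              // set whenever Position/Scale/Rotation change

            public void MarkDirty() { dirty = true; }

            public Matrix World
            {
                get
                {
                    if (dirty)
                    {
                        // Row-vector convention: scale, then rotate, then translate.
                        world = Matrix.CreateScale(Scale)
                              * Matrix.CreateFromQuaternion(Rotation)
                              * Matrix.CreateTranslation(Position);
                        dirty = false;
                    }
                    return world;
                }
            }
        }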

    Read the article

  • How to pause and resume a game in XNA using the same key?

    - by user13095
    I'm attempting to implement a really simple game-state system; this is my first game (I'm trying to make a Tetris clone) and I'd consider myself a novice programmer at best. I've been testing the states by drawing different textures to the screen depending on the current state. The 'Not Playing' state seems to work fine: I press Space and it changes to 'Playing'. But when I press P to pause or resume the game, nothing happens. I tried checking current and previous keyboard states, thinking it was happening too fast for me to see, but again nothing seemed to happen. If I change either the pause or the resume key so that they're different, it works as intended. I'm clearly missing something obvious, or completely lacking some know-how about how Update and/or the keyboard states work. Here's what I have in my Update method at the moment:
        protected override void Update(GameTime gameTime)
        {
            KeyboardState CurrentKeyboardState = Keyboard.GetState();

            // Allows the game to exit
            if (CurrentKeyboardState.IsKeyDown(Keys.Escape))
                this.Exit();

            // TODO: Add your update logic here
            if (CurrentGameState == GameStates.NotPlaying)
            {
                if (CurrentKeyboardState.IsKeyDown(Keys.Space))
                    CurrentGameState = GameStates.Playing;
            }
            if (CurrentGameState == GameStates.Playing)
            {
                if (CurrentKeyboardState.IsKeyDown(Keys.P))
                    CurrentGameState = GameStates.Paused;
            }
            if (CurrentGameState == GameStates.Paused)
            {
                if (CurrentKeyboardState.IsKeyDown(Keys.P))
                    CurrentGameState = GameStates.Playing;
            }
            base.Update(gameTime);
        }
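
    For reference, a sketch of the usual pattern (not necessarily the only fix): with two consecutive IsKeyDown checks in the same Update, a press of P that flips the state to Paused is immediately seen again by the next if and flips it straight back, so nothing appears to change. Keeping the previous KeyboardState and reacting only to the up-to-down transition, combined with else-if, avoids that:
        // Assumed field on the Game class:
        KeyboardState previousKeyboardState;

        // Inside Update:
        KeyboardState currentKeyboardState = Keyboard.GetState();
        bool pPressed = currentKeyboardState.IsKeyDown(Keys.P)
                        && previousKeyboardState.IsKeyUp(Keys.P);   // true only on the frame P goes down

        if (CurrentGameState == GameStates.Playing && pPressed)
            CurrentGameState = GameStates.Paused;
        else if (CurrentGameState == GameStates.Paused && pPressed)
            CurrentGameState = GameStates.Playing;

        previousKeyboardState = currentKeyboardState;               // remember for next frame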

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure entity system, where entities are simply an ID number. Components are stored in a series of vectors, one for each component type. However, I didn't want to have to add boilerplate code for every new component type I added to the game, nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!).
    All components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived component class, you must initialise the base with the name of your new class as a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew.
    The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper around an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array, so we can find components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as:
        template <typename ComponentType>
        void addProperty (ComponentType& component, int componentTypeId, int entityId)
    and:
        template <typename ComponentType>
        TypedComponentList<ComponentType>* getComponentList (int componentTypeId)
    which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of component you call:
        TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent"));
    which I'll admit looks pretty ugly.
    Bad points of the design:
    • If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail.
    • Each time a new component is instantiated, we must check a hashed string to see if that component type has been instantiated before.
    • It will probably generate a lot of assembly because of the extensive use of templates; I don't know how well the compiler will be able to minimise this.
    • You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.
    Good points of the design:
    • Components are stored in typed vectors, but they can also be found by using their entity owner id as a hash. This means we can iterate them fast and minimise cache misses, but also skip straight to the component we need if necessary.
    • We can freely add components of different types to the system without having to add and manage new component vectors by hand.
    What do you think? Do the good points outweigh the bad?
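
    Purely as an illustration of the first "bad point" (and in C#, not the asker's C++): keying the registry on the runtime type itself, rather than on a hand-typed string, removes the failure mode where a derived class passes the wrong name. The same idea exists in C++ via typeid/std::type_index. The names below are made up for this sketch:
        using System;
        using System.Collections.Generic;

        // Hypothetical component-type registry keyed by type, so a mistyped
        // name cannot desynchronise the id map.
        public static class ComponentTypeRegistry
        {
            private static readonly Dictionary<Type, int> ids = new Dictionary<Type, int>();

            public static int GetTypeId<TComponent>()
            {
                Type key = typeof(TComponent);
                int id;
                if (!ids.TryGetValue(key, out id))
                {
                    id = ids.Count;        // assign the next free id on first sight
                    ids[key] = id;
                }
                return id;
            }
        }

        // Usage (SpriteComponent is hypothetical):
        // int spriteTypeId = ComponentTypeRegistry.GetTypeId<SpriteComponent>();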

    Read the article

  • Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics, and I'd like to know the right way to render some 2D figures at different points on one large face of a 3D object. My 3D object is just a cube representing a poker table. I have a 2D PNG for the players' placeholders, and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with one big picture containing all the placeholder figures, but that would waste memory and thus be less efficient. What do you suggest?

    Read the article

  • Tutorial on OpenGL texture formats

    - by Cyan
    Looking at the documentation for glGetTexImage(), one can see that there are plenty of available texture formats: GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, GL_TEXTURE_1D_ARRAY, GL_TEXTURE_2D_ARRAY, GL_TEXTURE_RECTANGLE, GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. I've only used GL_TEXTURE_2D so far. Is there any place/documentation where one can learn about these other formats? PS: and yes, of course, I've googled for it; the results are pretty poor.

    Read the article

  • How are dependent quests generated in Guild Wars 2?

    - by Aufziehvogel
    I recently read that Guild Wars 2 uses a system where the creation of quests depends on which actions players took when they were presented with an earlier quest. An example was: there might be a quest to protect a person; if players do not take this action, the person might be kidnapped and later there is a quest to rescue them. Is there any information on whether the creation of these quests is somehow automatic? From the article it sounded automatic, but from the specific example you could also guess that the designers just created a task set with conditions (Task 1 taken: OK; Task 1 not taken: show Task 2). From what I've heard about AI, might they also have implemented some sort of huge neural network to make these decisions?

    Read the article

  • Presenting a Game Center leaderboard

    - by Coder404
    I have the following code:
        -(void)showLeaderboard
        {
            GKLeaderboardViewController *leaderboardController = [[GKLeaderboardViewController alloc] init];
            if (leaderboardController != NULL)
            {
                leaderboardController.timeScope = GKLeaderboardTimeScopeAllTime;
                leaderboardController.leaderboardDelegate = self;
                [self presentModalViewController: leaderboardController animated: YES];
            }
            [leaderboardController release];
        }
    I'm trying to use it to make my leaderboard pop up. The issue is that cocos2d will not let me use the line:
        [self presentModalViewController: leaderboardController animated: YES];
    How do I fix this? Thanks. PS: I am using cocos2d.

    Read the article
