Search Results

Search found 26124 results on 1045 pages for 'unreal development kit'.

Page 564/1045 | < Previous Page | 560 561 562 563 564 565 566 567 568 569 570 571  | Next Page >

  • Which tile size to choose for a 16-bit game [on hold]

    - by billy
    Before I make my 16-bit game, I want to clear some things up so I don't run into problems later. First question: when making a 16-bit game, is all I need to do to use 16-bit sprite images (.png or .jpg), and for 8-bit, .gif? Second question: which tile size is good for 16-bit, or does it not matter? Right now I am using 30x30 pixels for the map tile set and 40x40 pixels for the player, enemies, etc. Third question: what is the screen size for 16-bit in most games? I am using 640x480 pixels.

    Read the article

  • Knockback enemy based on the direction the sprite is facing

    - by pengume
    Hey everyone, today I am trying to make it so that if I hit the enemy, the enemy will be knocked backwards in the direction the sprite is facing. I am rotating the sprite through 360 degrees using an on-screen joystick and wanted to know the best practice or ways to accomplish this. I have come up with a few ideas, but none of them make use of the angle the sprite is facing, just a check to see if I hit the bottom, then move him upward, and so forth. I am stumped on how to apply the sprite's angle to the enemy's x and y coordinates and move him accordingly. Has anyone tried this and have suggestions or things to look for? Thanks in advance.
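
    One way to read this, as a minimal Java sketch (the angle convention, the array-based position, and the knockbackSpeed value are all illustrative assumptions, not taken from the question): convert the facing angle into a unit direction with cos/sin and push the enemy along it each frame.

        // Minimal sketch: knock an enemy back along the attacker's facing direction.
        // Assumes the angle is in degrees, 0 pointing along +x, increasing counter-clockwise.
        public final class Knockback {
            public static void apply(float[] enemyPos, float facingDegrees,
                                     float knockbackSpeed, float deltaTime) {
                double radians = Math.toRadians(facingDegrees);
                double dirX = Math.cos(radians);  // unit direction the attacker faces
                double dirY = Math.sin(radians);
                enemyPos[0] += dirX * knockbackSpeed * deltaTime;
                enemyPos[1] += dirY * knockbackSpeed * deltaTime;
            }
        }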

    Read the article

  • Physics Engine [Collision Response, 2-dimensional] experts, help!! My stack is unstable!

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver, as described by Erin Catto (mentioned in the reference of the paper as [Catt05]).

    So here's how it currently is: the simulation handles 2-dimensional rotating convex polygons. Detection uses the separating-axis test with a SKIN, meaning the closest points between two polygons are found and a contact is reported if their distance is less than SKIN. To resolve collisions, the simultaneous impulse-based method is used, solved with the iterative PGS solver as in Erin Catto's paper. Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this) via J V = (beta/dt) * overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time step taken by the engine, and overlap the overlap between the bodies (true overlap, so SKIN is ignored).

    However, it is still less stable than I expected. I tried to stack hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them they hardly stand still! Also note that I am not looking for a sleeping scheme, but I would settle for any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have: I would try adding a damping term (proportional to velocity) to the Baumgarte term. Is this a good idea in general? If not, I would not want to waste my time trying to tune the parameter hoping it magically works.

    Ideas I have tried: using simultaneous position-based error correction as described in section 5.3.2 of the paper turned out to be worse than the current scheme.

    If you want to know the parameters I used:
    hexagons, side 50 (pixels)
    gravity 2400 (pixels/sec^2)
    time step 1/60 (sec)
    beta 0.1
    restitution 0 to 0.2
    coefficient of friction 0.2
    PGS iterations 10
    initial separation 10 (pixels)
    mass 1 (unit is irrelevant for now; I modify velocity directly via the impulse method)
    inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys! :)
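
    For readers unfamiliar with the term, the Baumgarte expression above usually enters a PGS solver as a bias velocity on each contact's normal constraint. A minimal Java sketch of that term, under the usual Box2D-style sign convention (positive relative normal velocity = separating); the names, the omitted accumulated-impulse clamp, and the convention itself are assumptions, not the asker's code:

        // Sketch of the Baumgarte bias used during the velocity iterations.
        public final class ContactBias {
            /** Bias velocity pushing bodies apart: (beta/dt) * penetration, never negative. */
            public static float baumgarteBias(float beta, float dt, float overlap) {
                float penetration = Math.max(overlap, 0f); // only correct actual penetration
                return (beta / dt) * penetration;
            }

            /** One PGS row solve for the contact normal; vn > 0 means the bodies are separating. */
            public static float normalImpulse(float vn, float effectiveMass, float bias) {
                return effectiveMass * (bias - vn); // clamp of the accumulated impulse omitted
            }
        }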

    Read the article

  • Keep Getting Syntax Error C2199?

    - by DARK3ZOOZ
    Here's my problem: I'm trying to define something, but I keep getting a syntax error.

    Code:

        #define R_RegisterShader 0x50C8A0
        int (*trap_R_RegisterShader)( const char *name, int Arg_1 ) = (int (_cdecl *)(const char *, int ))R_RegisterShader;
                                                                            ^^^^^^^

    This last part is where I keep getting the error. If you need more lines of code, just let me know. Thanks. Screenshot of the error: http://gyazo.com/1a47ebc12cfbd6ea72feb72c686ae84d

    Read the article

  • implementing a high level function in a script to call a low level function in the game engine

    - by eat_a_lemon
    In my 2D game engine I have a function that does pathfinding, find_shortest_path. It executes for each time step in the game loop and calculates the next coordinate pair in the series of coordinates leading to the destination object. Now I want to call this function from a scripting language and have it return only the final coordinate pair. I want the game engine to go about the business of rendering the incremental steps, but I don't want the high-level script to care about the rendering; the high-level script is only for AI game logic. I know how to bind a method from C to Python, but how can I signal and coordinate the wait time between the incremental steps without the high-level function returning until it's time for the last step?
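
    One common shape for this, independent of the binding layer, is to have the scripting call return a handle that the engine advances once per frame; the script (or the binding glue) only reports the final coordinate once the handle says it is done. A minimal sketch of that handle pattern, written here in Java rather than the C/Python binding itself (every name below is hypothetical, not part of the asker's engine):

        // Hypothetical handle the engine advances once per game-loop step.
        final class PathRequest {
            private final int[] goal;
            private int[] current;
            private boolean finished;

            PathRequest(int[] start, int[] goal) { this.current = start; this.goal = goal; }

            /** Called by the engine each time step: advance one coordinate pair (and render it). */
            void step() {
                if (finished) return;
                current = nextCoordinateTowards(current, goal); // stands in for find_shortest_path's step
                finished = current[0] == goal[0] && current[1] == goal[1];
            }

            boolean isFinished() { return finished; }
            int[] lastCoordinate() { return current; }

            private static int[] nextCoordinateTowards(int[] from, int[] to) {
                // Placeholder for the real pathfinding step.
                return new int[] { from[0] + Integer.signum(to[0] - from[0]),
                                   from[1] + Integer.signum(to[1] - from[1]) };
            }
        }

    The scripting side then either polls isFinished() each AI tick, or the binding suspends the script (for example with a Python coroutine/greenlet) and resumes it when the engine marks the request finished.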

    Read the article

  • How do I retain previously drawn graphics?

    - by Cromanium
    I've created a simple program that draws lines from a fixed point to a random point each frame. I wanted to keep each line on the screen. However, it always seems to be cleared each time it draws on the spriteBatch, even without GraphicsDevice.Clear(color) being called. What seems to be the problem?

        protected override void Draw(GameTime gameTime)
        {
            spriteBatch.Begin();
            DrawLine(spriteBatch);
            spriteBatch.End();
            base.Draw(gameTime);
        }

        private void DrawLine(SpriteBatch spriteBatch)
        {
            Random r = new Random();
            Vector2 a = new Vector2(50, 100);
            Vector2 b = new Vector2(r.Next(0, 640), r.Next(0, 480));
            Texture2D filler = new Texture2D(GraphicsDevice, 1, 1, false, SurfaceFormat.Color);
            filler.SetData(new[] { Color.Black });
            float length = Vector2.Distance(a, b);
            float angle = (float)Math.Atan2(b.Y - a.Y, b.X - a.X);
            spriteBatch.Draw(filler, a, null, Color.Black, angle, Vector2.Zero,
                             new Vector2(length, 10.0f), SpriteEffects.None, 0f);
        }

    What am I doing wrong?

    Read the article

  • LWJGL camera causing movement to be mirrored

    - by pangaea
    I'm having a problem in that everything is rendered and the movement is fine; however, everything seems to be mirrored, in the sense that the TriangleMob should move towards me but instead mirrors my actions. I move forward, the TriangleMob moves backwards; I move left, it moves right; I move backwards, it moves forward. The code works if I do this:

        glPushMatrix();
        glTranslatef(-position.x, -position.y, -position.z);
        glCallList(objectDisplayList);
        glPopMatrix();

    However, I'm worried this will cause a problem later on. I suppose the code works; however, shouldn't the call be

        glPushMatrix();
        glTranslatef(position.x, position.y, position.z);
        glCallList(objectDisplayList);
        glPopMatrix();

    I think the problem could be caused by how I'm doing the camera, which is this:

        glLoadIdentity();
        glRotatef(player.getRotation().x, 1.0f, 0.0f, 0.0f);
        glRotatef(player.getRotation().y, 0.0f, 1.0f, 0.0f);
        glRotatef(player.getRotation().z, 0.0f, 0.0f, 1.0f);
        glTranslatef(player.getPosition().x, player.getPosition().y, player.getPosition().z);
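
    A detail worth keeping in mind when reading this: in the common fixed-function camera pattern, the view transform is the inverse of the camera's world transform, so the translation is applied with negated coordinates after the rotations (whether the rotation angles are also negated depends on how they are tracked). A small LWJGL-style sketch of that convention in Java; the plain-float parameters are illustrative, not the asker's Player API:

        import static org.lwjgl.opengl.GL11.*;

        final class CameraSketch {
            /** Fixed-function view transform for a camera at (x, y, z) with the given Euler angles. */
            static void applyCamera(float pitch, float yaw, float roll, float x, float y, float z) {
                glLoadIdentity();
                glRotatef(pitch, 1.0f, 0.0f, 0.0f);
                glRotatef(yaw,   0.0f, 1.0f, 0.0f);
                glRotatef(roll,  0.0f, 0.0f, 1.0f);
                // Negated: move the world opposite to the camera so the camera sits at the origin.
                glTranslatef(-x, -y, -z);
            }
        }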

    Read the article

  • Determining whether two fast moving objects should be submitted for a collision check

    - by dreta
    I have a basic 2D physics engine running. It's pretty much a particle engine; it just uses basic shapes like AABBs and circles, so no rotation is possible. I have CCD implemented that can give an accurate TOI for two fast-moving objects, and everything is working smoothly. My issue now is that I can't figure out how to determine whether two fast-moving objects should even be checked against each other in the first place. I'm using a quadtree for spatial partitioning, and for each fast-moving object I check it against objects in each cell that it passes. This works fine for determining collision with static geometry, but it means that any other fast-moving object that could collide with it, but isn't in any of the cells that are checked, is never considered. The only solutions I can think of are to either make the cells large enough and cross my fingers that this is enough, or to implement some sort of brute-force algorithm. Is there a proper way of dealing with this? Maybe somebody has solved this issue in an efficient manner, or maybe there's a better way of partitioning space that accounts for this.
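
    A common answer to this kind of question is to insert each fast mover into the broadphase using its swept AABB, the box covering both its start and end position for the frame, so that two fast movers whose swept boxes share cells get paired up and handed to the exact CCD/TOI test. A minimal sketch in Java (the Aabb type and its methods are hypothetical, for illustration only):

        // Hypothetical axis-aligned box with the swept-box expansion used for broadphase insertion.
        final class Aabb {
            float minX, minY, maxX, maxY;

            Aabb(float minX, float minY, float maxX, float maxY) {
                this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
            }

            /** Box covering both the current box and where it ends up after moving by (dx, dy). */
            Aabb sweptBy(float dx, float dy) {
                return new Aabb(Math.min(minX, minX + dx), Math.min(minY, minY + dy),
                                Math.max(maxX, maxX + dx), Math.max(maxY, maxY + dy));
            }

            boolean overlaps(Aabb o) {
                return minX <= o.maxX && maxX >= o.minX && minY <= o.maxY && maxY >= o.minY;
            }
        }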

    Read the article

  • Particle system lifetimes in OpenGL ES 2

    - by user16547
    I don't know how to work with my particles' lifetimes. My design is simple: each particle has a position, a speed and a lifetime. At each frame, each particle should update its position like this: position.y = position.y + INCREMENT * speed.y. However, I'm having difficulty choosing my INCREMENT. If I set it to some sort of FRAME_COUNT, it looks fine until FRAME_COUNT has to be set back to 0. The effect is that all particles start over at the same time, which I don't want to happen; I want my particles to live somewhat independently of each other. That's the reason I need a lifetime, but I don't know how to make use of it. I added a lifetime for each particle in the particle buffer, but I also need an individual increment that's updated on each frame, so that when PARTICLE_INCREMENT = PARTICLE_LIFETIME, each increment goes back to 0. How can I achieve something like that?
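
    A common pattern here, sketched below in plain Java (the field names and respawn rule are illustrative, not the asker's): give each particle its own age, advance it by the frame's delta time, and respawn that particle alone when its age passes its lifetime, which keeps particles out of phase with one another. In an ES 2 setup the same idea can be applied per vertex by uploading age and lifetime as attributes.

        import java.util.Random;

        // Minimal per-particle lifetime bookkeeping: each particle ages and respawns on its own.
        final class Particle {
            float x, y, speedY;
            float age;       // seconds since this particle (re)spawned
            float lifetime;  // seconds this particle lives before respawning

            void update(float deltaSeconds, Random rng) {
                age += deltaSeconds;
                if (age >= lifetime) {
                    // Respawn only this particle; the others keep their own phase.
                    age = 0f;
                    y = 0f;
                    lifetime = 1f + rng.nextFloat() * 2f; // e.g. 1 to 3 seconds
                }
                y += speedY * deltaSeconds;
            }
        }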

    Read the article

  • SDL: How would I add tile layers with my area class as a singleton?

    - by Tony
    I'm trying to wrap my head around how to get this done, if at all possible. So basically I have an Area class, a Map class and a Tile class. My Area class is a singleton, and this is causing some confusion. I'm trying to draw like this: Background / Tiles / Entities / Overlay Tiles / UI.

        void C_Application::OnRender()
        {
            // Fill the screen black
            SDL_FillRect( Surf_Screen, &Surf_Screen->clip_rect, SDL_MapRGB( Surf_Screen->format, 0x00, 0x00, 0x00 ) );

            // Draw background

            // Draw tiles
            C_Area::AreaControl.OnRender(Surf_Screen, -C_Camera::CameraControl.GetX(), -C_Camera::CameraControl.GetY());

            // Draw entities
            for(unsigned int i = 0; i < C_Entity::EntityList.size(); i++) {
                if( !C_Entity::EntityList[i] ) { continue; }
                C_Entity::EntityList[i]->OnRender( Surf_Screen );
            }

            // Draw overlay tiles

            // Draw UI

            // Update the Surf_Screen surface
            SDL_Flip( Surf_Screen );
        }

    Would be nice if someone could give a little input. Thanks.

    Read the article

  • Rendering projectiles with DirectX and C++

    - by Chris
    I'm working on a simple game that has the user control a spaceship that shoots small circular projectiles. However, I'm not sure how to render them. Right now I know how to make an LPDIRECT3DSURFACE9 for a sprite and render it onto an LPDIRECT3DDEVICE9, but that's only for a single sprite. I assume I don't need to constantly create new surfaces and devices. How should projectile generation/rendering be handled? Thanks in advance.
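
    The usual shape of an answer here is framework-agnostic: load the projectile image once, keep a list of live projectiles that only store position and velocity, and draw the one shared image at each position every frame. A minimal sketch of that structure, written in Java for illustration (the drawAt callback stands in for whatever draw call the rendering API provides; the 640x480 cull bounds are assumptions):

        import java.util.ArrayList;
        import java.util.List;

        // One shared image, many lightweight projectile records.
        final class ProjectileSystem {
            static final class Projectile { float x, y, vx, vy; }

            private final List<Projectile> live = new ArrayList<>();

            void spawn(float x, float y, float vx, float vy) {
                Projectile p = new Projectile();
                p.x = x; p.y = y; p.vx = vx; p.vy = vy;
                live.add(p);
            }

            void update(float dt) {
                for (Projectile p : live) { p.x += p.vx * dt; p.y += p.vy * dt; }
                live.removeIf(p -> p.x < 0 || p.x > 640 || p.y < 0 || p.y > 480); // cull off-screen
            }

            /** drawAt stands in for the real API call that draws the single shared projectile image. */
            void render(java.util.function.BiConsumer<Float, Float> drawAt) {
                for (Projectile p : live) drawAt.accept(p.x, p.y);
            }
        }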

    Read the article

  • What is the best way to manage large 3D worlds (e.g. Minecraft-style)?

    - by SomeXnaChump
    After playing Minecraft I was marvelling a bit at its large worlds, but at the same time finding it extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume it's fairly slow because: A) it's written in Java, and as most of the actual spatial partitioning and other memory-management activity happens there, it would be slower than a native C++ version; B) they are not partitioning their world very well. I could be wrong on both assumptions; however, it got me thinking about the best way to manage large worlds. As it is more of a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now I am assuming that to make this sort of world performant you would need to: a) use a tree of some variety (oct/kd/BSP) to split all the cubes out; it seems like an octree or kd-tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level; b) use some algorithm to work out whether the blocks within the scene can currently be seen, as blocks closer to the user can occlude the blocks behind them, making it pointless to render those; c) keep the block objects themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see people's opinions on the subject.
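
    For comparison, the storage layout most voxel engines converge on is a flat array of block type IDs per fixed-size chunk rather than one object per block, with only the faces bordering empty blocks turned into geometry. A minimal sketch of that storage in Java (the 16-block chunk size and the empty-equals-zero rule are illustrative choices, not a claim about any particular engine):

        // A 16x16x16 chunk stored as a flat byte array of block type IDs (0 = empty).
        final class Chunk {
            static final int SIZE = 16;
            private final byte[] blocks = new byte[SIZE * SIZE * SIZE];

            private static int index(int x, int y, int z) {
                return (y * SIZE + z) * SIZE + x;
            }

            byte get(int x, int y, int z) { return blocks[index(x, y, z)]; }

            void set(int x, int y, int z, byte type) { blocks[index(x, y, z)] = type; }

            /** A face is worth drawing only if the neighbouring block is empty (simple occlusion test). */
            boolean faceVisible(int x, int y, int z, int dx, int dy, int dz) {
                int nx = x + dx, ny = y + dy, nz = z + dz;
                if (nx < 0 || ny < 0 || nz < 0 || nx >= SIZE || ny >= SIZE || nz >= SIZE) return true;
                return get(nx, ny, nz) == 0;
            }
        }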

    Read the article

  • How can you put all images from a game into one file?

    - by ThePlan
    I've just finished a basic RPG written in C++ with SFML. I've put a lot of effort into the game and I'd like to distribute it; however, I've run into a small issue. The problem is that I have well over 200 images and map files (they're .txt files which hold map codes), all in the same folder as the executable. When I look in the folder, it makes me want to cry a little bit seeing so many resources. I've never seen a game which shows you all the resources directly; instead I believe they pack the resources into a single file. Well, that's what I'm trying to achieve: I'm hoping to pack all the images into one file (maybe the .txt files as well) and then be able to read from that file or easily add to it.
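
    The usual approach is a simple archive of your own (or an existing container such as zip): a header listing each entry's name and size, followed by the raw file bytes, so the game can locate and read any asset from the one file. The layout is language-agnostic; below is a minimal reader/writer sketch in Java purely to show the idea (the format itself is made up for illustration, not a standard):

        import java.io.*;
        import java.nio.file.*;
        import java.util.*;

        // Toy pack format: [count][for each entry: name, size][concatenated payloads].
        final class PackFile {
            static void write(Path out, Map<String, byte[]> assets) throws IOException {
                try (DataOutputStream dos = new DataOutputStream(
                        new BufferedOutputStream(Files.newOutputStream(out)))) {
                    dos.writeInt(assets.size());
                    for (Map.Entry<String, byte[]> e : assets.entrySet()) {
                        dos.writeUTF(e.getKey());          // asset name, e.g. "player.png"
                        dos.writeInt(e.getValue().length); // payload size in bytes
                    }
                    for (byte[] data : assets.values()) dos.write(data);
                }
            }

            static Map<String, byte[]> readAll(Path in) throws IOException {
                try (DataInputStream dis = new DataInputStream(
                        new BufferedInputStream(Files.newInputStream(in)))) {
                    int count = dis.readInt();
                    String[] names = new String[count];
                    int[] sizes = new int[count];
                    for (int i = 0; i < count; i++) { names[i] = dis.readUTF(); sizes[i] = dis.readInt(); }
                    Map<String, byte[]> assets = new LinkedHashMap<>();
                    for (int i = 0; i < count; i++) {
                        byte[] data = new byte[sizes[i]];
                        dis.readFully(data);
                        assets.put(names[i], data);
                    }
                    return assets;
                }
            }
        }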

    Read the article

  • What's a good way to store a series of interconnected pipe and tank objects?

    - by mars
    I am working on a puzzle game with a 6-by-6 grid of storage tanks that are connected to up to 4 adjacent tanks via pipes. The gameplay is concerned with combining what's in a tank with an adjacent tank via the pipe that interconnects them. Right now I store the tanks in a 6x6 array, vertical pipes in a 5x6 array, and horizontal pipes in a 6x5 array. Each tank has a reference to the object that contains both tanks and pipes, and when a pipe needs to be animated filling with liquid, the selected tank just calls a method on the container object telling it to animate the pipes it is connected to (subtracting 1 from the row or column to find the connected pipes). This feels like the wrong way of doing it, as I've also considered just giving each tank references to the pipes connected to it so it can access them directly.
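
    Either layout can work; one small refinement is to keep the index arithmetic the question describes in a single container so tanks never do it themselves. A small Java sketch of such a lookup, using the same array shapes as the question (6x6 tanks, 5x6 vertical pipes between rows, 6x5 horizontal pipes between columns; the Object element type and method names are placeholders):

        // Container that owns tanks and the pipes between them; adjacency math lives here only.
        final class TankGrid {
            static final int ROWS = 6, COLS = 6;
            final Object[][] tanks = new Object[ROWS][COLS];
            final Object[][] verticalPipes = new Object[ROWS - 1][COLS];   // pipe below tank (r, c)
            final Object[][] horizontalPipes = new Object[ROWS][COLS - 1]; // pipe right of tank (r, c)

            /** Pipe between (r, c) and its neighbour below, or null at the bottom edge. */
            Object pipeBelow(int r, int c) { return r < ROWS - 1 ? verticalPipes[r][c] : null; }

            /** Pipe between (r, c) and its neighbour above, or null at the top edge. */
            Object pipeAbove(int r, int c) { return r > 0 ? verticalPipes[r - 1][c] : null; }

            Object pipeRight(int r, int c) { return c < COLS - 1 ? horizontalPipes[r][c] : null; }

            Object pipeLeft(int r, int c)  { return c > 0 ? horizontalPipes[r][c - 1] : null; }
        }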

    Read the article

  • How can I efficiently create/store/implement animations as I add to my game?

    - by nickbadal
    My game's characters are made up of different parts (head/body/legs/etc.) plus whatever items they have equipped. As I'm creating the animation system for my game, I want to try to anticipate a large number of combinations of different pieces for each character. Originally, I had planned on having a frame-by-frame animation for each piece in each animation and then layering them to combine them into a character, but this seems like it would be a lot of work for my artist, and the memory/disk size would start to add up as well, since we would need a sprite for every frame of every customization of every piece, in every animation, for every character. What efficient ways are there to create/implement these animations as we add more and more configurations to our game?

    Read the article

  • (Android) How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];

        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer
        gl.glGenTextures(1, renderTex, 0);           // generate texture

        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);

        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);

        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);

        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. When rendering this texture full-screen, with a light mask (size 200x200) placed at (0, 0) and at (texW/2, texH/2), you can see that the coordinate system doesn't seem to start at (0,0), as that light overlaps the screen edge, and the images are not drawn as squares (my light-cone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
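
    One thing that often explains "the coordinates are off" in this situation: the framebuffer has its own pixel dimensions, so the viewport and projection need to match texW x texH while it is bound and be restored afterwards; otherwise drawing uses the screen's projection against a differently sized target and everything appears shifted and stretched. A hedged sketch of that setup in Java for GL ES 1.x, assuming the same gl/gl11ep interfaces as in the question (the method names and the orthographic ranges are illustrative):

        import javax.microedition.khronos.opengles.GL10;
        import javax.microedition.khronos.opengles.GL11ExtensionPack;

        // Assumed sketch: match viewport + projection to the offscreen texture size while the FBO
        // is bound, then restore the on-screen values afterwards.
        final class OffscreenSetup {
            static void beginOffscreen(GL10 gl, GL11ExtensionPack gl11ep, int fbo, int texW, int texH) {
                gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fbo);
                gl.glViewport(0, 0, texW, texH);        // pixel area of the texture, not the screen
                gl.glMatrixMode(GL10.GL_PROJECTION);
                gl.glLoadIdentity();
                gl.glOrthof(0, texW, 0, texH, -1, 1);   // 2D coordinates in texture pixels
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
            }

            static void endOffscreen(GL10 gl, GL11ExtensionPack gl11ep, int screenW, int screenH) {
                gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
                gl.glViewport(0, 0, screenW, screenH);  // back to the on-screen values
                gl.glMatrixMode(GL10.GL_PROJECTION);
                gl.glLoadIdentity();
                gl.glOrthof(0, screenW, 0, screenH, -1, 1);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
            }
        }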

    Read the article

  • Triangles in a C++ STL vector as an Objective-C member sometimes draw incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When it fails, a vertex is dislocated. The polygon is consistently drawn with the wrong vertex. I checked that the vector is correct during initialization, even when it's wrongly drawn. I'm using Cocos2d. The class member:

        @interface Polygon : CCSprite {
            std::vector<float> triangleVertices;
        }

    The draw function called in [Polygon draw]:

        + (void)drawTrianglesWithVertices:(const std::vector<float> &)v
        {
            //glEnableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, v.size());

            //glDisableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }

    Any ideas?

    Read the article

  • what's wrong with my lookAt and move forward code?

    - by alaslipknot
    So I am still in the process of getting familiar with libGDX, and one of the fun things I love to do is make basic methods for reuse in future projects. For now I am stuck on getting a Sprite to rotate toward a target (Vector2) and then move forward based on that rotation. The code I am using is this:

        // set angle
        public void lookAt(Vector2 target) {
            float angle = (float) Math.atan2(target.y - this.position.y, target.x - this.position.x);
            angle = (float) (angle * (180 / Math.PI));
            setAngle(angle);
        }

        // move forward
        public void moveForward() {
            this.position.x += Math.cos(getAngle()) * this.speed;
            this.position.y += Math.sin(getAngle()) * this.speed;
        }

    and this is my render method:

        @Override
        public void render(float delta) {
            // TODO Auto-generated method stub
            Gdx.gl.glClearColor(0, 0, 0.0f, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            //
            groupUpdate();
            Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(mousePos);
            ball.lookAt(new Vector2(mousePos.x, mousePos.y));
            //
            if (Gdx.input.isTouched()) {
                ball.moveForward();
            }
            batch.begin();
            batch.draw(ball.getSprite(), ball.getPos().x, ball.getPos().y,
                    ball.getSprite().getOriginX(), ball.getSprite().getOriginY(),
                    ball.getSprite().getWidth(), ball.getSprite().getHeight(),
                    .5f, .5f, ball.getAngle());
            batch.end();
        }

    The goal is to make the ball always look at the mouse cursor, and then move forward when I click. I am also using this camera:

        // create the camera and the SpriteBatch
        camera = new OrthographicCamera();
        camera.setToOrtho(false, 800, 480);

    aaaand the result was so creepy lol. Thank you
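
    One detail worth flagging as a hedged aside rather than a definitive diagnosis: java.lang.Math.cos and Math.sin take radians, so when the stored angle is in degrees (as lookAt above converts it to), a forward-movement helper usually converts back before use. A small stand-alone sketch of that shape (the class and parameter names are illustrative):

        // Sketch of forward movement from an angle stored in degrees.
        final class ForwardMovement {
            static void moveForward(com.badlogic.gdx.math.Vector2 position, float angleDegrees, float speed) {
                double radians = Math.toRadians(angleDegrees); // Math.cos/Math.sin expect radians
                position.x += (float) (Math.cos(radians) * speed);
                position.y += (float) (Math.sin(radians) * speed);
            }
        }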

    Read the article

  • Problem texturing with OpenGL

    - by Killrazor
    Hello! I'm having problems making a simple sprite renderer. I load two different textures, then I bind these textures and draw two squares, one with each texture. But only the texture of the first rendered object is drawn on both squares. It's as if I'd only used one texture, or as if glBindTexture didn't work properly. I know that GL is a state machine, but I think that you only need to change the active texture with glBindTexture. I load the texture with this method:

        bool CTexture::generate( utils::CImageBuff* img )
        {
            assert(img);
            m_image = img;
            CHECKGL(glGenTextures(1,&m_textureID));
            CHECKGL(glBindTexture(GL_TEXTURE_2D,m_textureID));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR));
            //CHECKGL(glTexImage2D(GL_TEXTURE_2D,0,img->getBpp(),img->getWitdh(),img->getHeight(),0,img->getFormat(),GL_UNSIGNED_BYTE,img->getImgData()));
            CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->getWitdh(), img->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img->getImgData()));
            return true;
        }

    And I bind textures with this function:

        void CTexture::bind()
        {
            CHECKGL(glBindTexture(GL_TEXTURE_2D,m_textureID));
        }

    Also, I draw sprites with this method:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
            m_texture->bind();
            CHECKGL(glPushMatrix());
            CHECKGL(glBegin(GL_QUADS));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s,m_textureAreaStart.t)); // 0,0 by default
            CHECKGL(glVertex3i(m_position.x,m_position.y,0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s,m_textureAreaStart.t)); // 1,0 by default
            CHECKGL(glVertex3i( m_position.x + m_dimensions.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t)); // 1,1 by default
            CHECKGL(glVertex3i( m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t)); // 0,1 by default
            CHECKGL(glVertex3i( m_position.x, m_position.y + m_dimensions.y,0));
            CHECKGL(glPopMatrix());
            CHECKGL(glDisable(GL_BLEND));
        }

    Could you help me? All help is welcome. Thanks!

    Read the article

  • I finished "Beginning Android Games", should I use its framework?

    - by orod
    I've worked through Mario Zechner's "Beginning Android Games" and have made my own Pong and Asteroids games using the framework from the book. I have also downloaded the source code for Replica Island and am able to run it. I like Replica Island's framework better than the one I made from reading the book. Some differences are that Replica Island uses different activities for each screen instead of Zechner's Screen class, and that Replica Island can use a lot of textures and isn't limited to textures with dimensions that are powers of 2. If I'm serious about writing games and apps for Android, should I learn Replica Island's framework and use that instead of the one I made while reading Zechner's book?

    Read the article

  • How to design 2D collision callback methods?

    - by Ahmed Fakhry
    In a 2D game you have a lot of possible collision combinations between objects, such as: object A vs object B = object B vs A; object A vs object C = object C vs A; object A vs object D = object D vs A; and so on. Do we need to create callback methods for every single type of collision? And do we need to create the same method twice? Say a bullet hits a wall: now I need a method on the wall for being penetrated, and a method on the bullet for destroying itself. At the same time, a bullet can hit many objects in the game, and hence there are even more callback methods. Is there a design pattern for this?
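
    One pattern that often comes up for this is a single collision-handler registry keyed by an unordered pair of object types, so A-vs-B and B-vs-A resolve to the same handler and each combination is written exactly once, outside both classes. A small Java sketch of the idea (the types and registry shape are hypothetical):

        import java.util.HashMap;
        import java.util.Map;

        // Handlers registered once per unordered type pair; (A, B) and (B, A) hit the same entry.
        final class CollisionDispatcher {
            interface Handler { void onCollision(Object first, Object second); }

            private final Map<String, Handler> handlers = new HashMap<>();

            private static String key(Class<?> a, Class<?> b) {
                // Order the class names so the pair is unordered.
                return a.getName().compareTo(b.getName()) <= 0
                        ? a.getName() + "|" + b.getName()
                        : b.getName() + "|" + a.getName();
            }

            <A, B> void register(Class<A> a, Class<B> b, Handler handler) {
                handlers.put(key(a, b), handler);
            }

            void dispatch(Object x, Object y) {
                // The handler decides which argument is which (e.g. by instanceof checks).
                Handler h = handlers.get(key(x.getClass(), y.getClass()));
                if (h != null) h.onCollision(x, y);
            }
        }

    A Bullet/Wall handler registered this way can destroy the bullet and damage the wall in one place, so neither class needs a method that knows about the other.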

    Read the article

  • How to draw a texture to an MSTerrain object - Farseer

    - by Brad
    I'm using Farseer to make a game in XNA and I can't seem to figure this out. I've decided to use MSTerrain for my game's terrain because I wanted destructible terrain and MSTerrain seemed like the best bet. Unfortunately, I'm stumped on how to actually show the terrain. When I generate the terrain it's visible in the debug view, but MSTerrain does not have a Draw method, so I'm wondering how it is supposed to be drawn to the screen. Is it worth pursuing? I'm starting to think that MSTerrain is more trouble than it's worth; is there another, better way to do this with bodies? I appreciate any help you can give. Thanks.

    Read the article

  • Tile transitions - external vs internal

    - by omgnoseat
    I've been looking at a couple of games and noticed that the transitions between tiles are handled somewhat differently. I was wondering which methods should be used in which situations, and why. I'm currently using internal edges in a top-down game, and it's working out so far, but I don't want to run into problems later on and have to redo the whole tileset. I noticed that platforming games mostly use internal edges, and top-down games mostly use external and hybrid transitions. I can see how these tiles are used to create "depth" in top-down games, where the player appears to be standing in front of a wall, for example. But it seems unlikely that such a small feature decides the entire method for tile transitions; you could always alter the bounding box to create the same effect.

    Read the article

  • Why are prefab customizations being applied to only a single character? [on hold]

    - by m0rgul
    I've built my (networked) game to the point where I have a room, some characters and a character customization screen. In the character customization screen I get to choose a gender, choose from three different wardrobe options and assign custom colors to these wardrobe items. Then I use a custom object to store these options, serialize them, store them in PlayerPrefs, load the next scene and apply them to my chosen character in that scene. Then I go and spawn some more characters, customize them, etc. The problem is that my character is always customized, but all the other characters in the scene are default copies of the prefab. The prefabs themselves are a generic male and a generic female, both with three different wardrobes to choose from (included in the prefab). When I spawn my character, I read the saved options from PlayerPrefs, destroy the two discarded wardrobes, apply color to my chosen wardrobe and then spawn the character. How can I make the other characters also show up in their customized form, rather than as the generic prefab (which shows all three wardrobes at the same time)?

    Read the article

  • What sort of things can cause a whole system to appear to hang for 100s-1000s of milliseconds?

    - by Ogapo
    I am working on a Windows game, and while rendering, some computers experience intermittent pauses ("hitches" for lack of a better term). When profiled, they appear in seemingly random places in the code. Eventually I noticed that it wasn't just my process that was affected, but (seemingly) every process on the system. All of the threads in my application hitch at once. CPU utilization drops during these hitches, and it appears as if most processes make no progress. This leads me to believe it may be an operating-system or driver issue, but it only occurs while playing the game (and only on some systems). What sort of operations might the operating system be doing that would require the kernel to pause all user threads and block? Some kind of I/O? At first I thought of paging, but my impression is that would only affect a single process, no? Systems in use: Windows, DirectX (3D), NVIDIA cards (unknown whether it replicates on ATI), using overlapped I/O for streaming.

    Read the article
