Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • Uniform not being applied to proper mesh

    - by HaMMeReD
    Ok, I have some code in which you select blocks on a grid. The selection works: I can raise the selected block, and the correct one shows. I also set a color which I use in the shader. However, when I try to change the color before rendering the geometry, it is the last geometry rendered (in the sequence) that comes out light. To debug the logic I decided to both move the selected block up and make it white, in which case one block moves up and a different block becomes white. I checked all my logic: it knows which block is selected, shows it in the correct place, and renders it correctly. When there is only one entity it works properly. There is a video of the bug in action; note how the highlighted and the elevated blocks are not the same block. The code for the color and my renderer (for the items being drawn) is here:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }

    Read the article

  • Spherical to Cartesian Coordinates

    - by user1258455
    Well, I'm reading Frank Luna's DirectX 10 book and, while I'm trying to understand the first demo, I found something that's not very clear, at least to me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:

        // Convert Spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from -z.
        float x = 5.0f*sinf(mPhi)*sinf(mTheta);
        float z = -5.0f*sinf(mPhi)*cosf(mTheta);
        float y = 5.0f*cosf(mPhi);

    I mean, the comment explains what they do; it says they convert spherical coordinates to Cartesian coordinates. But, mathematically, why? Why is the x value calculated by the product of the sines of both angles? Why is z the product of a sine and a cosine? And why does y just use the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain (mathematically) why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain to me what exactly happens in those lines of code, or just give me a link that helps me understand the math part. Thanks in advance!
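    A short derivation may help here. This is standard spherical-coordinate geometry, using the convention in the comment: radius r = 5, mPhi measured down from the +y axis, mTheta measured in the xz-plane starting at the -z direction. Dropping a perpendicular from the point onto the y-axis gives a right triangle whose hypotenuse is r, so the height and the horizontal distance from the axis are:

        y = r \cos\phi, \qquad d = r \sin\phi

    The horizontal component d is then split between x and z by the azimuth angle theta:

        x = d \sin\theta = r \sin\phi \, \sin\theta, \qquad z = -d \cos\theta = -r \sin\phi \, \cos\theta

    So y depends only on phi because it is the projection onto the vertical axis, while x and z share the remaining horizontal length d, which is why both of them carry the sin(phi) factor. At theta = 0 the point sits on the -z axis, matching "measured counterclockwise from -z".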

    Read the article

  • Tunnels in pseudo 3D racing game

    - by Nicholas
    How would one go about doing tunnels in a pseudo-3D racing game? The main problem I have at the moment is perspective: I can't think of a way, beyond Z-sorting the sprites and tunnel coordinates, so that vehicles are displayed in front of the tunnel entrance and somehow blocked from display when out of sight. I would like my tunnels to work on flat roads, curves, hills and slopes. The tunnel entrance/exit is made up of 3 separate graphics (left, right and top), whilst inside the tunnel it is just one line graphic along the top (the idea being that it's supposed to be a set distance above the current vertical road position). As you can see from the picture, the vehicles are still being rendered whilst in the tunnel. I've converted the Code Incomplete road system to GLBasic.

    Read the article

  • How to obtain window handle in SDL 2.0.3

    - by Diorthotis
    I need to obtain the handle of the window for SDL 2.0.3. I got the suggestion to use info.window after initializing SDL and filling the info variable with data by calling SDL_GetWindowWMInfo(), which is declared in the header file SDL_syswm.h. My compiler (Visual Studio 2008 Professional Edition) gives the following error:

        (226) : error C2039: 'window' : is not a member of 'SDL_SysWMinfo'
        include\sdl_syswm.h(173) : see declaration of 'SDL_SysWMinfo'

    Any help appreciated. Thanks. Nevermind, I needed to use "info.info.win.window". That seems a bit redundant, but whatever.

    Read the article

  • How does this circle collision detection math work?

    - by Griffin
    I'm going through the Wildbunny blog to learn about collision detection, and I'm confused about how the vectors he's talking about come into play. Here's the part that confuses me:

        p = ||A-B|| - (r1+r2)

    The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly, it is not just any vector that does this; it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little.

        N = (A-B) / ||A-B||
        P = N*p

    Here we have calculated the normalised vector N between the two centres, and the penetration vector P by multiplying our unit direction by the penetration distance. I understand that p is the distance by which the circles penetrate, but I don't get what exactly N and P are. It seems to me that N is just the coordinates of the third point of the right triangle formed by points A and B (A-B), which is then divided by the hypotenuse of that triangle, i.e. the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
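    To make the two quantities concrete (this is a standard reading of the formulas above, not specific to that blog):

        N = \frac{A - B}{\lVert A - B \rVert}

    A - B is the vector pointing from B's centre to A's centre. Dividing it by its own length keeps its direction but scales its length to exactly 1, so N is the unit direction along the line joining the two centres; it is not a point of a triangle, it is a pure direction.

        P = p \, N

    Multiplying that unit direction by the scalar penetration depth p gives a vector whose direction is "push A away from B along the line of centres" and whose length is exactly the overlap. Moving circle A by P (or splitting P between A and B) separates the circles by the minimum possible amount, which is why the blog calls it the penetration vector.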

    Read the article

  • Cocos2d: Adding a CCSequence to a CCArray

    - by Axort
    I have a problem with an action performed by a sprite. I have one CCSequence in a CCArray and I have an scheduled method (is called every 5 seconds) that make the sprite run the action. The action is performed correctly only the first time (the first 5 seconds), after that, the action do whatever it wants lol. Here is the code: In .h - @interface PowerUpLayer : CCLayer { PowerUp *powerUp; CCArray *trajectories; } @property (nonatomic, retain) CCArray *trajectories; In .mm - @implementation PowerUpLayer @synthesize trajectories; -(id)init { if((self = [super init])) { [self createTrajectories]; self.isTouchEnabled = YES; [self schedule:@selector(spawn:) interval:5]; } return self; } -(void)createTrajectories { self.trajectories = [CCArray arrayWithCapacity:1]; //Wave trajectory ccBezierConfig firstWave, secondWave; firstWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width + 30, [[CCDirector sharedDirector] winSize].height / 2);//powerUp.sprite.position.x, powerUp.sprite.position.y); firstWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width - ([[CCDirector sharedDirector] winSize].width / 4), 0); firstWave.endPosition = CGPointMake([[CCDirector sharedDirector] winSize].width / 2, [[CCDirector sharedDirector] winSize].height / 2); secondWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width / 2, [[CCDirector sharedDirector] winSize].height / 2); secondWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width / 4, [[CCDirector sharedDirector] winSize].height); secondWave.endPosition = CGPointMake(-30, [[CCDirector sharedDirector] winSize].height / 2); id bezierWave1 = [CCBezierTo actionWithDuration:1 bezier:firstWave]; id bezierWave2 = [CCBezierTo actionWithDuration:1 bezier:secondWave]; id waveTrajectory = [CCSequence actions:bezierWave1, bezierWave2, [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil]; [self.trajectories addObject:waveTrajectory]; //[powerUp.sprite runAction:bezierForward]; // [CCMoveBy actionWithDuration:3 position:CGPointMake(-[[CCDirector sharedDirector] winSize].width - powerUp.sprite.contentSize.width, 0)] //[powerUp.sprite runAction:[CCSequence actions:bezierWave1, bezierWave2, [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil]]; } -(void)setInvisible:(id)sender { if(powerUp != nil) { [self removeChild:sender cleanup:YES]; powerUp = nil; } } This is the scheduled method: -(void)spawn:(ccTime)dt { if(powerUp == nil) { powerUp = [[PowerUp alloc] initWithType:0]; powerUp.sprite.position = CGPointMake([[CCDirector sharedDirector] winSize].width + powerUp.sprite.contentSize.width, [[CCDirector sharedDirector] winSize].height / 2); [self addChild:powerUp.sprite z:-1]; [powerUp.sprite runAction:((CCSequence *)[self.trajectories objectAtIndex:0])]; } } I don't know what is happening; I never modify the content of the CCSequence after the first time. Thanks!

    Read the article

  • What is the best way to exploit multicores when making multithread games?

    - by Keeper
    Many people suggest writing a program first and then optimizing it. But I think that when it comes to multithreading on multicore hardware, a little thinking ahead is required. I've read about using threads, and experienced it myself during some courses at the university (still a student). The big question is simple, but a bit abstract: what thread-related steps in game design do I need to take before implementation? Now, trying to be more specific: let's say, as an example, that I'm making a small board game (like Monopoly) that I want to be multithreaded. My goal is that this multithreaded game will exploit the best of a multicore system, let's say 4-6 cores (like in i7 processors). My answer to this question at the moment is one thread for each of these four basic components:

    - GUI
    - User input / output
    - AI (computer rival)
    - Other game-related calculations (like shortest path from A to B, or level-up status changes)

    I'm not an expert (yet!), and I'm sure there are better answers out there. Any suggestion, answer or different approach will be helpful. Some thoughts: maybe splitting the main database is a good way... (or a total disaster...)
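    One alternative to a fixed thread-per-subsystem split is to express the per-turn work as tasks and let a scheduler spread them over however many cores exist. A minimal sketch using .NET's Task Parallel Library; the GameState, Move, ComputeAiMove and FindShortestPath names are hypothetical stand-ins for your own game logic:

        using System.Collections.Generic;
        using System.Threading.Tasks;

        // Placeholder types standing in for real game data (hypothetical).
        class GameState { public List<int> Board = new List<int>(); }
        class Move { public int From, To; }

        class TurnProcessor
        {
            // Hypothetical CPU-heavy workloads; replace with real game logic.
            static Move ComputeAiMove(GameState s) { return new Move { From = 0, To = 1 }; }
            static List<int> FindShortestPath(GameState s) { return new List<int> { 0, 1 }; }

            public static void RunTurn(GameState snapshot)
            {
                // Independent jobs run in parallel on the thread pool and scale to
                // however many cores the machine has, instead of one fixed thread each.
                Task<Move> aiTask = Task.Run(() => ComputeAiMove(snapshot));
                Task<List<int>> pathTask = Task.Run(() => FindShortestPath(snapshot));

                // GUI/input stay on the main thread; join the workers, then apply
                // their results back on this single thread to avoid shared-state races.
                Task.WaitAll(aiTask, pathTask);
                ApplyResults(aiTask.Result, pathTask.Result);
            }

            static void ApplyResults(Move move, List<int> path) { /* mutate state on one thread */ }
        }

    The design point is that the number of worker threads is no longer baked into the game's architecture: on a 2-core phone or a 6-core desktop, the same code saturates whatever is available.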

    Read the article

  • Grid Based Lighting in XNA/Monogame

    - by sm81095
    I know that questions like this have been asked many times, but I have not found one exactly like this yet. I have implemented a top-down, grid-based world in MonoGame, and am starting on the lighting system soon. The way I want to do lighting is to have a grid that is 4 times wider and higher, basically splitting each world tile into a 4x4 set of "subtiles". I would like to use a flow-like system to spread light across the tiles, reducing the light by a small amount each step. This is kind of the effect I was going for: http://i.imgur.com/rv8LCxZ.png The black grid lines are the light grid, the red lines are the actual tile grid, and the light drop-off is very exaggerated. I plan to render the world by drawing the unlit grid to a separate RenderTarget2D, then rendering the lighting grid to a separate target and overlaying the two. Basically, my questions are:

    - What would be the algorithm for a flow-style lighting system like this?
    - Would there be a more efficient way of rendering this?
    - How would I handle the darkening of the light with colors: reducing the RGB values in each grid cell, or reducing the alpha in each cell, assuming that I render the light map over the grid using blending?
    - Even assuming the former are possible, what BlendState would I use for that?
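    On the first question, one common approach is a breadth-first flood fill over the light grid: start at each light source with full intensity and push light into the four neighbouring subtiles, subtracting a fixed falloff per step and stopping when the value reaches zero or the cell is already at least as bright. A minimal sketch, assuming a float[,] light grid and a bool[,] opaque mask (both hypothetical names):

        using System.Collections.Generic;

        static class LightSpread
        {
            struct Node { public int X, Y; public float Value; }

            // Spread light outward from (srcX, srcY), losing 'falloff' per subtile step.
            public static void Flood(float[,] light, bool[,] opaque, int srcX, int srcY,
                                     float intensity, float falloff)
            {
                int w = light.GetLength(0), h = light.GetLength(1);
                var queue = new Queue<Node>();
                queue.Enqueue(new Node { X = srcX, Y = srcY, Value = intensity });

                while (queue.Count > 0)
                {
                    Node n = queue.Dequeue();
                    if (n.X < 0 || n.Y < 0 || n.X >= w || n.Y >= h) continue;  // off the grid
                    if (opaque[n.X, n.Y]) continue;                            // walls block light
                    if (n.Value <= 0f || n.Value <= light[n.X, n.Y]) continue; // nothing brighter to add

                    light[n.X, n.Y] = n.Value;                                 // keep the brightest value
                    float next = n.Value - falloff;
                    queue.Enqueue(new Node { X = n.X + 1, Y = n.Y, Value = next });
                    queue.Enqueue(new Node { X = n.X - 1, Y = n.Y, Value = next });
                    queue.Enqueue(new Node { X = n.X, Y = n.Y + 1, Value = next });
                    queue.Enqueue(new Node { X = n.X, Y = n.Y - 1, Value = next });
                }
            }
        }

    Each subtile ends up with a 0-1 brightness that can be drawn as a tinted quad (or point) on the light render target before the two targets are composited.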

    Read the article

  • Is it possible to generate Events and Hooks in Lua for any game without built-in support?

    - by pr0tocol
    Does a game have to have built-in functions to accept and run Lua scripts, or can I design events and hooks using Lua for any game I please, akin to the days when C code could be used to hook into the WinAPI using DLLs? The reason I ask is that I am trying to create a background application that will perform events and hooks on a particular game that does not currently support Lua in-game. Brief examples:

    - Events: an action executed by the PLAYER is detected. For instance, hitting the Q key will normally make my character use an ability, but with my Lua script running in the background it will also cause a sound to play on my computer (or something).
    - Hooks: an action within the GAME is detected. For instance, the game spawns an enemy every minute. When an enemy spawns, the script detects this and performs an action, for instance playing a sound locally on the computer.

    I would like to do both, but I know that games like Garry's Mod already have built-in support for running Lua scripts. Is there a way to do either events or hooks using Lua, similarly to how C/C++ can connect to a game using WinAPI DLLs?

    Read the article

  • Client side prediction/simulation Question

    - by Legendre
    I found a related question but it doesn't have what I needed. Client A sends input to move at T0. Server receives input at T1. All clients receive the change at T2. Question: With client-side prediction, client A would start moving at T0, client-side. All other clients receive the change at T2, so to them, client A only started moving at T2. If I understand correctly, client B will always see client A's past position and not his current position? How do I sync both client B and client A?

    Read the article

  • One-way platform collision

    - by TheBroodian
    I hate asking questions that are specific to my own code like this, but I've run into a pesky roadblock and could use some help getting around it. I'm coding floating platforms into my game that will allow a player to jump onto them from underneath, but will not allow players to fall through them once they are on top, which requires some custom collision detection code. The code I have written so far isn't working: the character passes through a platform on the way up, and on the way down he stops for a moment on the platform and then falls right through it. Here is the code to handle collisions with floating platforms:

        protected void HandleFloatingPlatforms(Vector2 moveAmount)
        {
            // if character is traveling downward.
            if (moveAmount.Y > 0)
            {
                Rectangle afterMoveRect = collisionRectangle;
                afterMoveRect.Offset((int)moveAmount.X, (int)moveAmount.Y);
                foreach (World_Objects.GameObject platform in gameplayScreen.Entities)
                {
                    if (platform is World_Objects.Inanimate_Objects.FloatingPlatform)
                    {
                        // wideProximityArea is just a rectangle surrounding the collision
                        // box of an entity to check for nearby entities.
                        if (wideProximityArea.Intersects(platform.CollisionRectangle) ||
                            wideProximityArea.Contains(platform.CollisionRectangle))
                        {
                            if (afterMoveRect.Intersects(platform.CollisionRectangle))
                            {
                                // This, in my mind, would denote that after the character is moved,
                                // its feet have fallen below the top of the platform, but before
                                // the move its feet were above it...
                                if (collisionRectangle.Bottom <= platform.CollisionRectangle.Top)
                                {
                                    if (afterMoveRect.Bottom > platform.CollisionRectangle.Top)
                                    {
                                        // And then after detecting that he has fallen through the
                                        // platform, reposition him on top of it...
                                        worldLocation.Y = platform.CollisionRectangle.Y - frameHeight;
                                        hasCollidedVertically = true;
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }

    In case you are curious, the parameter moveAmount is found through this code:

        elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        float totalX = 0;
        float totalY = 0;
        foreach (Vector2 vector in velocities)
        {
            totalX += vector.X;
            totalY += vector.Y;
        }
        velocities.Clear();
        velocity.X = totalX;
        velocity.Y = totalY;
        velocity.Y = Math.Min(velocity.Y, 1000);
        Vector2 moveAmount = velocity * elapsed;
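    For comparison, here is how the one-way check is often written: keep the vertical move as a float, test against the pre-move rectangle, and only land when the feet were at or above the platform top before the move and at or below it afterwards. This is only a sketch under those assumptions (XNA's Rectangle type is assumed, and the parameter names are made up), not a drop-in replacement for the code above:

        // Returns true and snaps the player if this downward move crosses the platform top.
        static bool TryLandOnPlatform(ref float worldY, float moveY,
                                      Rectangle feetBefore, Rectangle platform, int frameHeight)
        {
            if (moveY <= 0f) return false;                        // only when falling

            float bottomBefore = feetBefore.Bottom;
            float bottomAfter  = feetBefore.Bottom + moveY;       // keep the float move, no (int) cast

            bool horizontalOverlap = feetBefore.Right > platform.Left &&
                                     feetBefore.Left  < platform.Right;

            // Land only if the feet started above the platform top and would end on or below it.
            if (horizontalOverlap && bottomBefore <= platform.Top && bottomAfter >= platform.Top)
            {
                worldY = platform.Top - frameHeight;              // snap onto the platform
                return true;
            }
            return false;
        }

    One frequent culprit with code like the original is the (int) cast on a small per-frame move amount, which can round a sub-pixel fall down to zero and make the post-move rectangle never register the crossing; keeping the comparison in floats sidesteps that.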

    Read the article

  • Simple 2D Flight Physics with Box2D

    - by MarkPowell
    I'm trying to build a simple side-scroller with an airplane as the player. As such, I want to build simple flight controls with simple but realistic-feeling physics. I'm making use of cocos2d and Box2D. I have a basic system working, but just can't get the physics feeling correct. I am applying force to the plane (which is a b2CircleShape) based on the user's input. So, basically, if the user pushes up, body_->ApplyForce(b2Vec2(10,30), body_->GetPosition()) is called. Similarly, -30 is used for down. This works and the plane flies along, with up/down causing it to dive or climb. But it just doesn't feel right: there is no slowdown on climbs and no speed-up during dives. My solution is far too simple. How can I get a better feel for a plane climbing/diving? Thanks!
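    A cheap way to get that feel, independent of the physics library, is to treat the plane's speed as a scalar that gravity acts on along the flight direction, so climbing bleeds speed and diving gains it, with a drag term so it settles at a cruise speed. A library-agnostic C# sketch; the constants and field names are made-up tuning values, not anything taken from Box2D or cocos2d:

        using System;

        class PlaneModel
        {
            public float Speed = 30f;       // speed along the flight path
            public float PitchRadians;      // +up, -down, driven by player input

            const float Gravity = 9.8f;
            const float Thrust  = 12f;      // constant engine push
            const float Drag    = 0.35f;    // grows with speed, caps the top speed

            public void Update(float dt)
            {
                // Component of gravity along the flight direction:
                // climbing (positive pitch) slows the plane, diving speeds it up.
                float gravityAlongPath = -Gravity * (float)Math.Sin(PitchRadians);

                float acceleration = Thrust + gravityAlongPath - Drag * Speed;
                Speed = Math.Max(5f, Speed + acceleration * dt);   // keep a minimum airspeed
            }
        }

    The resulting Speed can then be fed back into the magnitude of the force (or velocity) given to the Box2D body each step, so the body still handles collisions while the "airplane feel" lives in this small model.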

    Read the article

  • Determine how to display a tile based on surrounding tiles

    - by Jsmith
    I have a game engine which generates maps randomly, set on a 2D grid composed of 34px-square graphical tiles. These tiles can be displayed in any of three ways (wall, corner, and floor, the latter existing in two states, passable and impassable) and in four directions: north, south, west and east. What I need to do is determine, based on the tiles around each individual tile, which state to display that tile in (e.g. north wall, northeast corner, floor), so that when a player alters the map, the tiles around the affected tile adjust themselves to suit (i.e. tunneling). In case it becomes important, all game objects are inherited from the same class, whether they are players, NPCs, walls, or items.
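    This is usually solved with neighbour bitmasking (often called autotiling): for each tile, build a small bitmask from which of its four neighbours are walls, then use that mask as an index into a 16-entry lookup table of sprite variants. A minimal C# sketch, assuming a hypothetical isWall(x, y) query over the map grid:

        using System;

        static class AutoTiler
        {
            const int North = 1, East = 2, South = 4, West = 8;

            // Returns a value 0-15; each of the 16 values maps to one tile variant
            // (straight wall, corner, T-junction, ...) in a 16-entry sprite table.
            public static int NeighbourMask(Func<int, int, bool> isWall, int x, int y)
            {
                int mask = 0;
                if (isWall(x, y - 1)) mask |= North;
                if (isWall(x + 1, y)) mask |= East;
                if (isWall(x, y + 1)) mask |= South;
                if (isWall(x - 1, y)) mask |= West;
                return mask;
            }
        }

    When the player digs out or builds a tile, only that tile and its four neighbours need their masks recomputed, which keeps the tunneling update cheap.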

    Read the article

  • Bending of track in a racing game

    - by caius
    I am trying to create a small racing game in which the track is modeled using a BSpline curve for the path's center line and directional vectors to define the 'bending' of the track at each point. My problem is that I don't know how to calculate the correct bending / slope of the curve in such a way that it would be optimal, or at least visually nice, for a car to 'bend in the corner'. My idea was to use the direction of the 2nd derivative of the curve; while this approach looks fine for most of the track, there are points at which the 2nd derivative makes sharp 'twists' / very quick 180-degree flips. I also read about 'knots' of BSplines, but I don't know whether such a 'twist' in the 2nd derivative is a knot, or whether knots are something else. Can you tell me, using a BSpline:

    1. How could I calculate a visually nice bending of the track for a racing game?
    2. Is it possible to do this using some simple calculation of centripetal force / gravity?
    3. Is it possible to do this using the 1st, 2nd and 3rd derivatives of the BSpline curve?

    I am not looking for the 'physically correct' bending angle for the track; I would just like to create something which is visually pleasing in a simple game. I am using a framework which has a built-in class for BSplines, including support for the 1st, 2nd and 3rd derivatives of the curve.
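    On question 2, the standard banked-turn formula gives a cheap answer. The curvature of the center line comes from the 1st and 2nd derivatives, and a single "design speed" v turns it into a bank angle; this is textbook differential geometry and mechanics, not specific to any framework:

        \kappa(t) = \frac{\lVert r'(t) \times r''(t) \rVert}{\lVert r'(t) \rVert^{3}}, \qquad
        \theta(t) = \arctan\!\left( \frac{v^{2}\,\kappa(t)}{g} \right)

    Here r(t) is the BSpline position, \kappa is its curvature (the inverse of the local turning radius), v is a constant design speed chosen for the track, and g is gravity; \theta is the roll angle applied around the tangent direction. Because \kappa is a smooth unsigned magnitude, it avoids the 180-degree flips of the raw 2nd-derivative direction: the sign of the bank can be taken from which side of the tangent the 2nd derivative lies on, and a little smoothing of \theta along the track hides any remaining jumps.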

    Read the article

  • Unity: Assigning String value in inspector

    - by Marc Pilgaard
    I've got an issue with Unity I can't seem to comprehend, and it is possibly very simple: I am trying to write a simple piece of code in JavaScript where a button toggles the activation of a shield, by loading a prefab with Resources.Load("ActivateShieldPreFab") and destroying it again (I haven't implemented that part yet). I wish to assign this button through the inspector, so I have created a string variable, which appears as intended in the inspector. However, it doesn't seem to register the inspector input, even though I changed the value through the inspector. It only produces the error: "Input Key named: is unknown". When the button name is assigned within the code, there are no issues. The code is as follows:

        var ShieldOn = false;
        var stringbutton : String;

        function Start() {
        }

        function Update () {
            if (Input.GetKey(stringbutton) && ShieldOn != true) {
                Instantiate(Resources.Load("ActivateShieldPreFab"), Vector3(0, 0, 0), Quaternion.identity);
                ShieldOn = true;
            }
        }

    Hope somebody can help. Thanks in advance.

    Read the article

  • Bounding Box Collision Glitching Problem (Pygame)

    - by Ericson Willians
    So far the "Bounding Box" method is the only one that I know. It's efficient enough to deal with simple games. Nevertheless, the game I'm developing is not that simple anymore and for that reason, I've made a simplified example of the problem. (It's worth noticing that I don't have rotating sprites on my game or anything like that. After showing the code, I'll explain better). Here's the whole code: from pygame import * DONE = False screen = display.set_mode((1024,768)) class Thing(): def __init__(self,x,y,w,h,s,c): self.x = x self.y = y self.w = w self.h = h self.s = s self.sur = Surface((64,48)) draw.rect(self.sur,c,(self.x,self.y,w,h),1) self.sur.fill(c) def draw(self): screen.blit(self.sur,(self.x,self.y)) def move(self,x): if key.get_pressed()[K_w] or key.get_pressed()[K_UP]: if x == 1: self.y -= self.s else: self.y += self.s if key.get_pressed()[K_s] or key.get_pressed()[K_DOWN]: if x == 1: self.y += self.s else: self.y -= self.s if key.get_pressed()[K_a] or key.get_pressed()[K_LEFT]: if x == 1: self.x -= self.s else: self.x += self.s if key.get_pressed()[K_d] or key.get_pressed()[K_RIGHT]: if x == 1: self.x += self.s else: self.x -= self.s def warp(self): if self.y < -48: self.y = 768 if self.y > 768 + 48: self.y = 0 if self.x < -64: self.x = 1024 + 64 if self.x > 1024 + 64: self.x = -64 r1 = Thing(0,0,64,48,1,(0,255,0)) r2 = Thing(6*64,6*48,64,48,1,(255,0,0)) while not DONE: screen.fill((0,0,0)) r2.draw() r1.draw() # If not intersecting, then moves, else, it moves in the opposite direction. if not ((((r1.x + r1.w) > (r2.x - r1.s)) and (r1.x < ((r2.x + r2.w) + r1.s))) and (((r1.y + r1.h) > (r2.y - r1.s)) and (r1.y < ((r2.y + r2.h) + r1.s)))): r1.move(1) else: r1.move(0) r1.warp() if key.get_pressed()[K_ESCAPE]: DONE = True for ev in event.get(): if ev.type == QUIT: DONE = True display.update() quit() The problem: In my actual game, the grid is fixed and each tile has 64 by 48 pixels. I know how to deal with collision perfectly if I moved by that size. Nevertheless, obviously, the player moves really fast. In the example, the collision is detected pretty well (Just as I see in many examples throughout the internet). The problem is that if I put the player to move WHEN IS NOT intersecting, then, when it touches the obstacle, it does not move anymore. Giving that problem, I began switching the directions, but then, when it touches and I press the opposite key, it "glitches through". My actual game has many walls, and the player will touch them many times, and I can't afford letting the player go through them. The code-problem illustrated: When the player goes towards the wall (Fine). When the player goes towards the wall and press the opposite direction. (It glitches through). Here is the logic I've designed before implementing it: I don't know any other method, and I really just want to have walls fixed in a grid, but move by 1 or 2 or 3 pixels (Slowly) and have perfect collision without glitching-possibilities. What do you suggest?

    Read the article

  • I'm looking to learn how to apply traditional animation techniques to my graphics engine - are there any tutorials or online-resources that can help?

    - by blueberryfields
    There are many traditional animation techniques, such as motion blur, motion along an elliptical curve rather than a straight line, and counter-motion before the beginning of a movement, which help create the appearance of a realistic 3D animated character. I'm looking to incorporate tools and shortcuts for some of these into my graphics engine, to make it easier for my end users to use these techniques in their animations. Is there a good resource listing the techniques and the principles behind them, especially how they might apply to a graphics engine or 3D animation?

    Read the article

  • How to check battery usage of an iPhone/Android app?

    - by Gajoo
    I think the title says enough. For example, Unity can generate a report of how much CPU/GPU power an app is using, or how fast it's going to drain the device's battery. But what about applications developed using cocos2d, or ones you develop directly with OpenGL? How should you profile them? In general, what should you profile? Or should I simply run the application and wait for its battery to run out?

    Read the article

  • How to play many sounds at once in OpenAL

    - by Krom Stern
    I'm developing an RTS game and I would like to add sounds to it. My choice has landed on OpenAL. I have plenty of units which make sounds from time to time: fSound.Play(sfx_shoot, location). Sounds often repeat, e.g. when a squad of archers shoots arrows, but they are not synced with each other. My questions are:

    - What is the common design pattern for playing multiple sounds in OpenAL when some of them are duplicates?
    - What are the hardware limitations on sound counts, and the tricks to overcome them?

    Read the article

  • How to implement the setup rules like Clash of Clans?

    - by user25959
    I have already implemented the setup (build) rules, in which a building moves by the grid's unit width and height. But the validation check has poor efficiency: the algorithm costs me 10-12 ms on average when I move a building. Here is my approach:

    1. Basic grid: a two-dimensional array, Grid[row][column], so that I can save info for each cell, such as whether it is occupied or excluded.
    2. Exclude space: an area which limits the number of the same building allowed inside it.
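    10-12 ms suggests the check is scanning far more cells than it needs to. If the grid already stores occupancy, validating a move only requires looking at the cells under the building's footprint, which is a few dozen lookups for a Clash-of-Clans-sized building. A minimal sketch in C#; the Cell struct and the field names are hypothetical:

        struct Cell
        {
            public bool Occupied;
            public bool Excluded;
        }

        static class Placement
        {
            // O(width * height) in the building footprint, independent of map size.
            public static bool CanPlace(Cell[,] grid, int row, int col, int height, int width)
            {
                if (row < 0 || col < 0 ||
                    row + height > grid.GetLength(0) || col + width > grid.GetLength(1))
                    return false;                                  // out of bounds

                for (int r = row; r < row + height; r++)
                    for (int c = col; c < col + width; c++)
                        if (grid[r, c].Occupied || grid[r, c].Excluded)
                            return false;                          // blocked cell

                return true;
            }
        }

    Marking and clearing cells when a building is placed or removed keeps the per-move cost bounded by the footprint, and a rule like "only N of this building inside an exclude space" can be a counter stored on the space itself instead of something rescanned on every move.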

    Read the article

  • SpriteBatch.Begin() making my model not render correctly

    - by manning18
    I was trying to output some debug information using DrawString when I noticed my model was suddenly being rendered as if it were inside-out (as though culling had been disabled or something) and the texture maps weren't applied. I commented out the DrawString method until I only had SpriteBatch.Begin() and .End(), and that was enough to cause the model rendering corruption; when I commented those calls out as well, the model rendered correctly. What could this be a symptom of? I've stripped it down to the barest code to isolate the problem, and this is what I noticed. Draw code below (as stripped down as possible):

        GraphicsDevice.Clear(Color.LightGray);

        foreach (ModelMesh mesh in TIEAdvanced.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                if (effect is BasicEffect)
                    ((BasicEffect)effect).EnableDefaultLighting();
                effect.CurrentTechnique.Passes[0].Apply();
            }
        }

        spriteBatch.Begin();
        spriteBatch.DrawString(spriteFont, "Camera Position: " + cameraPosition.ToString(), new Vector2(10, 10), Color.Blue);
        spriteBatch.End();

        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        TIEAdvanced.Draw(Matrix.CreateScale(0.025f), viewMatrix, projectionMatrix);
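    In XNA 4.0, SpriteBatch.Begin() is documented to change several GraphicsDevice states (BlendState, DepthStencilState, RasterizerState and SamplerStates[0]) and it does not restore them, so any 3D drawing that follows inherits sprite-friendly settings such as no depth buffering and clamped samplers. Restoring the 3D defaults after SpriteBatch.End() is the usual fix; a sketch of that reset, using the identifiers from the code above and to be adapted to whatever states the model actually needs:

        // After the 2D text, put the device back into a sensible 3D configuration
        // before drawing the model.
        spriteBatch.End();

        GraphicsDevice.BlendState = BlendState.Opaque;
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
        GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

        TIEAdvanced.Draw(Matrix.CreateScale(0.025f), viewMatrix, projectionMatrix);

    Drawing all 3D geometry first and the SpriteBatch text last is an equally common arrangement, since the sprite states then never touch the model pass.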

    Read the article

  • How can I teleport seamlessly, without using interpolation?

    - by modchan
    I've been implementing a Bukkit plugin for creating toggleable in-game warping areas that teleport any caught entity to another similar area. I was going to implement the concept of a non-Euclidean maze using this plugin, but, unfortunately, I've discovered that calling Entity.teleport() causes the client to interpolate movement while teleporting, so the player slides towards the target like an Enderman and receives screen updates, and for a split second all the underground stuff is visible. For "just teleport me where I want" usage this is fine, but it ruins the whole idea of seamless teleporting, as the player can clearly see when the transfer happened even without looking at the debug screen. Is there a possibility to disable interpolation while teleporting without modifying the client, or maybe to prevent the client from updating the screen while it's being teleported?

    Read the article

  • Viewing FBX files in Windows via XNA 4.0

    - by user17753
    I've made some models in Blender and exported them in Autodesk FBX format. I'm trying to view them using XNA 4.0 Refresh. Loading them isn't much of an issue, but I'm not familiar enough with XNA 4.0 to do the rest: basically, I want to load the model at, say, the world-coordinate origin (0,0,0), and then rotate and/or zoom the camera about that origin so that I can inspect the model. Typically the mouse, and maybe some arrow keys, for zooming/rotating the camera. Anyway, this seems like a simple task and I shouldn't have to re-invent it; isn't there skeleton code somewhere for this kind of thing in XNA 4.0? I couldn't find a solid example on the web. I found a couple that seemed like they might work on Xbox, but I'm trying to do this on Windows only. Anyway, I'm just looking to be pointed in the right direction on this one. Thanks.
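    A small orbit camera is usually enough for this kind of model viewer: keep a yaw, a pitch and a distance, drive them from the mouse, and rebuild the view matrix each frame. A sketch against the stock XNA 4.0 API (Microsoft.Xna.Framework and .Input namespaces); the field names, sensitivities and clamp ranges are placeholders to adapt inside your own Game class:

        // Orbit-camera state, updated from Game1.Update and used in Game1.Draw.
        float yaw, pitch;
        float distance = 10f;
        MouseState previousMouse;
        Matrix viewMatrix, projectionMatrix;

        void UpdateCamera()
        {
            MouseState mouse = Mouse.GetState();
            if (mouse.LeftButton == ButtonState.Pressed)
            {
                yaw   -= (mouse.X - previousMouse.X) * 0.01f;
                pitch -= (mouse.Y - previousMouse.Y) * 0.01f;
                pitch  = MathHelper.Clamp(pitch, -1.5f, 1.5f);   // keep away from the poles
            }
            distance -= (mouse.ScrollWheelValue - previousMouse.ScrollWheelValue) * 0.01f;
            distance  = MathHelper.Clamp(distance, 1f, 100f);
            previousMouse = mouse;

            // Place the camera on a sphere around the origin and look at the model.
            Vector3 offset = Vector3.Transform(new Vector3(0, 0, distance),
                                               Matrix.CreateFromYawPitchRoll(yaw, pitch, 0));
            viewMatrix = Matrix.CreateLookAt(offset, Vector3.Zero, Vector3.Up);
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 1000f);
        }

    Drawing the model at the origin with these two matrices (the usual World/View/Projection setup on BasicEffect) then gives mouse-drag rotation and scroll-wheel zoom around the model.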

    Read the article

  • Weird appearance for a 3D XNA ground

    - by Belos
    I wanted to add a ground so I can know the position of a helicopter in the world. But the ground appeared in a weird way: http://i.stack.imgur.com/yTSuW.jpg The ground had the following texture: http://i.stack.imgur.com/pdpxB.png EDIT: Sorry, I forgot to post the code: public class ImportModel { public Vector3 Position { get; set; } public Vector3 Rotation { get; set; } public Vector3 Scale { get; set; } Model Model; Matrix[] modeltransforms; GraphicsDevice GraphicDevice; ContentManager Content; BoundingSphere sphere; bool boundingimplemented = false; public ImportModel(string model, GraphicsDevice gd, ContentManager cm, Vector3 position, Vector3 rot, Vector3 sca) { GraphicDevice = gd; Content = cm; Position = position; Rotation = rot; Scale = sca; Model = Content.Load<Model>(model); modeltransforms = new Matrix[Model.Bones.Count]; Model.CopyAbsoluteBoneTransformsTo(modeltransforms); } public void Draw(Camera camera) { Matrix baseworld = Matrix.CreateScale(Scale) * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) * Matrix.CreateTranslation(Position); foreach (ModelMesh mesh in Model.Meshes) { Matrix localworld = modeltransforms[mesh.ParentBone.Index] * baseworld; foreach (ModelMeshPart meshpart in mesh.MeshParts) { BasicEffect effect = (BasicEffect)meshpart.Effect; effect.World = localworld; effect.View = camera.View; effect.Projection = camera.Projection; effect.EnableDefaultLighting(); } mesh.Draw(); } } public BoundingSphere BoundingSphere { get { if (!boundingimplemented) { foreach (ModelMesh mesh in Model.Meshes) { BoundingSphere transformed = mesh.BoundingSphere.Transform( modeltransforms[mesh.ParentBone.Index]); sphere = BoundingSphere.CreateMerged(sphere, transformed); } Matrix worldTransform = Matrix.CreateScale(Scale) * Matrix.CreateTranslation(Position); BoundingSphere transforme = sphere; transforme = transforme.Transform(worldTransform); return transforme; } else { Matrix worldTransform = Matrix.CreateScale(Scale) * Matrix.CreateTranslation(Position); BoundingSphere transformed = sphere; transformed = transformed.Transform(worldTransform); return transformed; } } } } Then I call the class from the Game1 class: ImportModel ground = new ImportModel("ground", GraphicsDevice, Content, Vector3.Zero, Vector3.Zero, new Vector3(20f)); EDIT2:This is how the scene looks from top: i.stack.imgur.com/Hs983.jpg

    Read the article

  • Collision with half of a semi-circle

    - by heitortsergent
    I am trying to port a game I made using Flash/AS3 to the Windows Phone using C#/XNA 4.0. You can see it here: http://goo.gl/gzFiE In the Flash version I used pixel-perfect collision between the meteors (each is a rectangle, and usually rotated) that spawn outside the screen and move towards the center, and a shield in the center of the screen (which is half of a semi-circle, also rotated by the player); the collision made the meteor bounce back in the opposite direction it came from. My goal now is to make the meteors bounce at different angles, depending on where they hit the shield (much like Pong, where hitting the borders causes a change in the ball's angle). So, these are the 3 options I thought of:

    1. Pixel-perfect collision (Microsoft has a sample), but then I wouldn't know how to change the meteor's angle after the collision.
    2. Three BoundingCircles to represent the half semi-circle shield, but then I would have to somehow move them as I rotate the shield.
    3. Farseer Physics. I could make a shape composed of 3 lines and use that as the collision object for the shield.

    Is there any other way besides those? Which would be the best way to do it (it's aimed at mobile devices, so pixel-perfect is probably not a good choice)? Most of the time there's an easier/better way than what we think of...
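    For a shield shaped like an arc, an analytic test is cheap and gives the bounce angle for free: the shield is the set of points at roughly the shield radius from its center whose angle falls inside the covered arc, and the contact normal is simply the direction from the shield center to the meteor. A hedged sketch of that idea in C#/XNA; the radius, thickness and arc parameters are placeholders, not values from the game:

        using Microsoft.Xna.Framework;

        static class ShieldCollision
        {
            // Returns true and reflects the velocity if the meteor touches the shield arc.
            public static bool TryBounce(Vector2 shieldCenter, float shieldRotation,
                                         float radius, float thickness, float halfArc,
                                         Vector2 meteorCenter, ref Vector2 meteorVelocity)
            {
                Vector2 toMeteor = meteorCenter - shieldCenter;
                float dist = toMeteor.Length();
                if (dist < radius - thickness || dist > radius + thickness)
                    return false;                                  // not at the shield's radius

                // Angle of the contact point, relative to where the shield is facing.
                float angle = (float)System.Math.Atan2(toMeteor.Y, toMeteor.X);
                float delta = MathHelper.WrapAngle(angle - shieldRotation);
                if (System.Math.Abs(delta) > halfArc)
                    return false;                                  // outside the covered arc

                // Reflect the velocity about the contact normal: v' = v - 2(v.n)n.
                Vector2 normal = Vector2.Normalize(toMeteor);
                meteorVelocity = Vector2.Reflect(meteorVelocity, normal);
                return true;
            }
        }

    Because the normal changes with where the meteor strikes along the arc, the bounce angle varies across the shield in the Pong-like way described above, without any pixel-perfect work or extra physics bodies.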

    Read the article
