Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.

Page 491/1027 | < Previous Page | 487 488 489 490 491 492 493 494 495 496 497 498  | Next Page >

  • How can I plot a radius of all reachable points with pathfinding for a Mob?

    - by PugWrath
    I am designing a tactical turn-based game. The maps are 2D, but they do have varying level layers and blocking objects/terrain. I'm looking for a pathfinding algorithm that will let me show an opaque shape covering all of the points a mob can reach, given the mob's maximum movement distance in pixels. Any thoughts on this, or do I just need to write a good pathfinding algorithm and use it to find the cutoff points in every direction in which an obstacle exists?
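
    One common way to get that reachable region (a sketch, not taken from the question) is a cost-limited breadth-first/Dijkstra flood fill: expand outward from the mob's tile, accumulate movement cost, and stop expanding once the budget is spent; every visited cell is reachable, and the outline of that set is the shape to draw. A minimal C# sketch, assuming a uniform-cost, 4-connected grid and a hypothetical isBlocked(x, y) helper supplied by the caller (needs System and System.Collections.Generic):

      // Sketch only: the set of grid cells reachable within maxCost steps.
      HashSet<(int x, int y)> ReachableCells(int startX, int startY, int maxCost,
                                             Func<int, int, bool> isBlocked)
      {
          var best = new Dictionary<(int, int), int> { [(startX, startY)] = 0 };
          var frontier = new Queue<(int x, int y)>();
          frontier.Enqueue((startX, startY));
          while (frontier.Count > 0)
          {
              var (x, y) = frontier.Dequeue();
              int cost = best[(x, y)];
              if (cost >= maxCost) continue;                  // movement budget spent here
              foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
              {
                  if (isBlocked(nx, ny)) continue;            // walls / blocking terrain
                  if (best.TryGetValue((nx, ny), out int old) && old <= cost + 1) continue;
                  best[(nx, ny)] = cost + 1;
                  frontier.Enqueue((nx, ny));
              }
          }
          return new HashSet<(int, int)>(best.Keys);          // the outline of this set is the movement radius
      }

    For weighted terrain the queue becomes a priority queue keyed on accumulated cost, but the cutoff test stays the same.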

    Read the article

  • Confusion on HLSL Samplers. Can I Set Samplers Inside Functions?

    - by Kyle Connors
    I'm trying to create a system where I can instance a quad to the screen; however, I've run into a problem. Like I said, I'm trying to instance the quad, so I'm using the same geometry several times, and I'm trying to do it in one draw call. The issue is that I want some quads to use different textures, but I can't figure out how to get that data into a sampler so I can use it in the pixel shader. I figured that since we can simply pass in the 4 bytes of our IDirect3DTexture9* to set the global texture, I could do the same when passing in my dynamic buffer (which also stores each object's world matrix and UV data). Now that I'm sending the data, I can't figure out how to get it into the sampler, and I'm starting to assume it's simply not possible. Is there any way I could achieve this?

    Read the article

  • How to achieve a smooth 2D lighting effect?

    - by Cyral
    I'm making a tile-based game in XNA. Currently my lighting looks like this: How can I get it to look like this? Instead of each block having its own tint, I want a smooth overlay. I'm assuming this needs some sort of shader that is told the lighting and blurs it somehow, but I'm not an expert with shaders. My current lighting calculates the light and then passes it to a SpriteBatch, which draws with a color parameter. EDIT: It no longer uses the SpriteBatch tint; I was testing, and I now pass parameters to set the light values. But I'm still looking for a way to smooth it.
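
    One common way to get the smooth look (a frequently suggested technique for tile lighting, not taken from the question) is to light tile corners instead of whole tiles: each vertex of a tile quad gets the average light of the up-to-four tiles that share that corner, and the GPU interpolates between the corners, so brightness blends across tile boundaries instead of stepping per block. A minimal C# sketch, assuming a hypothetical lightLevels[x, y] array with values in 0..1:

      // Sketch: light value for the tile corner at (cx, cy), averaged from the
      // (up to) four tiles that touch that corner. Feed the result into the quad's
      // vertex colors or a shader parameter instead of a single per-tile tint.
      float CornerLight(float[,] lightLevels, int cx, int cy)
      {
          int w = lightLevels.GetLength(0), h = lightLevels.GetLength(1);
          float sum = 0f;
          int count = 0;
          for (int x = cx - 1; x <= cx; x++)
              for (int y = cy - 1; y <= cy; y++)
                  if (x >= 0 && x < w && y >= 0 && y < h) { sum += lightLevels[x, y]; count++; }
          return count > 0 ? sum / count : 0f;
      }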

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)? How should bitmaps be stored for high-performance algorithms, so that read/write times are the fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)? How should bitmaps be stored for applications handling a lot of bitmap data, to minimize memory usage (JPEG, or a faster [de]compression algorithm?)? Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access, with all pixels allocated in one contiguous memory chunk (could be 1-10 MB). Use a form of "sparse" storage where each line of the bitmap is allocated separately, making memory easier to reuse and requiring only smaller contiguous segments. Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
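
    For the packed-array option, the core of the layout is just row-major index arithmetic. A minimal sketch (written in C# for consistency with the other sketches on this page, although the question asks about C++; the layout math is identical there, and PackedBitmap is a made-up name), assuming one 32-bpp pixel packed into each int:

      // Sketch: packed 32-bpp bitmap stored as one contiguous int[] in row-major order.
      // One multiply and one add per pixel access; no per-row allocations or padding.
      class PackedBitmap
      {
          public readonly int Width, Height;
          public readonly int[] Pixels;                 // 0xAARRGGBB per pixel

          public PackedBitmap(int width, int height)
          {
              Width = width; Height = height;
              Pixels = new int[width * height];         // one contiguous chunk
          }

          public int GetPixel(int x, int y) => Pixels[y * Width + x];
          public void SetPixel(int x, int y, int argb) => Pixels[y * Width + x] = argb;
      }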

    Read the article

  • Switching between Discrete and Integrated GPUs

    - by void-pointer
    Hello everyone, I develop CUDA applications on my Alienware M17x portable back-breaker, which has two discrete GTX 285M GPUs and one integrated GeForce 9400M GPU. I can currently switch between them using NVIDIA's software, but I would like the ability to do so within my applications, for benchmarking purposes and general convenience. Apparently this requires the "NDA version" of NVIDIA's Driver API, which I don't know how to obtain. Would using this API be the only way to accomplish what I seek, and if so, how would I obtain it? A solution using Windows APIs would also be acceptable, though less preferable than one that leverages a cross-platform API. I have created a similar thread about this on NVIDIA's forum, which is down at the time of this writing. Thanks for reading my question; it is much appreciated!

    Read the article

  • Using Box2D DrawDebugData with a multi-layer scene?

    - by Mr.Gando
    In my game, a scene is composed of several layers. Each layer has different camera transformations. This way I can have a layer at z=3 (GUI), z=2 (monsters), z=1 (scrolling background), and these three layers compose my whole scene. My render loop looks something like:
    renderLayer()
        applyTransformations()
        renderVisibleEntities()
        renderChildLayers()
    end
    If I call DrawDebugData() in the render loop, the whole b2World debug data will be rendered once for each layer in my scene. This makes a mess, because the "debug boxes" get duplicated, some of them get the camera transformations applied and some of them don't, etc. What I would like to do is make DrawDebugData draw only certain debug boxes. That way, I could call something like b2world->DrawDebugDataForLayer(int layer_id) on each layer, like:
    renderLayer()
        applyTransformations()
        renderVisibleEntities()
        //Only render my corresponding layer debug data
        b2world->DrawDebugDataForLayer(layer_id)
        renderChildLayers()
    end
    Is there a way to subclass b2World so I could add this functionality (specific to my game)? If not, what would be the best way to achieve this? (Cocos2d uses a similar scene graph approach plus Box2D, but I'm not sure if debugDraw works in Cocos2d...) Thanks

    Read the article

  • Understanding Unity3d physics: where is the force applied?

    - by Heisenbug
    I'm trying to understand the right way to apply forces to a Rigidbody. I noticed that there are AddForce and AddRelativeForce methods; one applies the force in the world coordinate system, while the other uses local space. The thing I do not understand is the following: usually in a physics library (e.g. Bullet) we can specify the force vector and also the force application point. How can I do this in Unity? Is it possible to apply a force vector at a specific point relative to the given Rigidbody's coordinate system? Where does AddForce apply the force?
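
    For the application point, Unity's Rigidbody also exposes AddForceAtPosition, which takes a world-space force and a world-space point and adds the resulting torque automatically (AddForce itself acts through the centre of mass, producing no torque). A minimal sketch; the class, method, and parameter names below are placeholders, and hitPoint/pushDirection are assumed to come from the caller (e.g. a raycast):

      using UnityEngine;

      public class PushAtPoint : MonoBehaviour
      {
          public float force = 10f;

          public void Push(Rigidbody body, Vector3 hitPoint, Vector3 pushDirection)
          {
              // World-space force applied at a world-space position; off-centre hits spin the body.
              body.AddForceAtPosition(pushDirection.normalized * force, hitPoint);
          }
      }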

    Read the article

  • Algorithm to shoot at a target in a 3d game

    - by Sebastian Bugiu
    For those of you who remember Descent Freespace, it had a nice feature to help you aim at the enemy when shooting non-homing missiles or lasers: it showed a crosshair in front of the ship you were chasing, telling you where to shoot in order to hit the moving target. I tried using the answer from http://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game?lq=1 but it's for 2D, so I tried adapting it. I first decomposed the calculation to solve the intersection point for the XoZ plane and saved the x and z coordinates, then solved the intersection point for the XoY plane and added the y coordinate to a final xyz, which I then transformed to clip space to put a texture at those coordinates. But of course it doesn't work as it should, or else I wouldn't have posted the question. From what I notice, the x found in the XoZ plane and the x found in the XoY plane are not the same, so something must be wrong.
    float a = ENG_Math.sqr(targetVelocity.x) + ENG_Math.sqr(targetVelocity.y) - ENG_Math.sqr(projectileSpeed);
    float b = 2.0f * (targetVelocity.x * targetPos.x + targetVelocity.y * targetPos.y);
    float c = ENG_Math.sqr(targetPos.x) + ENG_Math.sqr(targetPos.y);
    ENG_Math.solveQuadraticEquation(a, b, c, collisionTime);
    The first time, targetVelocity.y is actually targetVelocity.z (the same for targetPos), and the second time it's actually targetVelocity.y. The final position after XoZ is
    crossPosition.set(minTime * finalEntityVelocity.x + finalTargetPos4D.x, 0.0f, minTime * finalEntityVelocity.z + finalTargetPos4D.z);
    and after XoY
    crossPosition.y = minTime * finalEntityVelocity.y + finalTargetPos4D.y;
    Is my approach of separating the problem into 2 planes any good? Or is there a whole different approach for 3D? (sqr() is square, not sqrt, to avoid confusion.)
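
    A possible reason the two planar passes disagree (a sketch, not from the question): the intercept time that satisfies the XoZ equation is generally not the same time that satisfies the XoY equation, so the quadratic has to be solved once, in full 3D, with the dot products using all three components. A minimal C# sketch, assuming an XNA-style Vector3 and a target position given relative to the shooter:

      // Sketch: closed-form 3D intercept. Returns the aim point (shooter-relative),
      // or null if a projectile of this speed can never catch the target.
      Vector3? InterceptPoint(Vector3 targetPos, Vector3 targetVel, float projectileSpeed)
      {
          float a = Vector3.Dot(targetVel, targetVel) - projectileSpeed * projectileSpeed;
          float b = 2f * Vector3.Dot(targetVel, targetPos);
          float c = Vector3.Dot(targetPos, targetPos);

          float t;
          if (Math.Abs(a) < 1e-6f)                       // projectile and target speeds (almost) equal
          {
              if (Math.Abs(b) < 1e-6f) return null;
              t = -c / b;
          }
          else
          {
              float disc = b * b - 4f * a * c;
              if (disc < 0f) return null;                // no real solution, can't intercept
              float sqrtDisc = (float)Math.Sqrt(disc);
              float t1 = (-b + sqrtDisc) / (2f * a);
              float t2 = (-b - sqrtDisc) / (2f * a);
              t = Math.Min(t1, t2);
              if (t < 0f) t = Math.Max(t1, t2);          // prefer the smallest positive time
          }
          if (t < 0f) return null;
          return targetPos + targetVel * t;              // aim here
      }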

    Read the article

  • Depth buffer values reset on shader change?

    - by bobobobo
    I have 2 different shaders, and when I change the shader (glUseProgram), it seems that the depth information is lost: everything drawn with the 2nd shader appears completely on top of anything drawn by the first shader. If I switch the order of shader use/drawing, the result is the same: the last-drawn object always appears on top of the first-drawn object if there is a shader change between the two objects, even if the last-drawn object is farther away.

    Read the article

  • Exact point on a rotating sphere

    - by nkint
    I have a sphere that represents the Earth, textured with real pictures. It's rotating around the x axis, and when the user clicks, it has to show the exact place they clicked on. For example, if they clicked on Singapore, the system should be able to:
    - understand that the user clicked on the sphere (OK, I'll do it with unProject)
    - understand where on the sphere the user clicked (ray-sphere collision?), taking the rotation into account
    - transform the sphere coordinate into some coordinate system suitable for a web-API service
    - ask the API (OK, this is the simpler part for me ;-)
    Any advice?
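
    A sketch of the middle two steps (the framework isn't stated in the question, so XNA-style math types are assumed here, and the lat/long signs depend entirely on how the texture is mapped): intersect the pick ray with the sphere, rotate the hit point back by the sphere's current spin, then read latitude/longitude off the unrotated point.

      // Sketch: nearest ray-sphere intersection, or null if the ray misses.
      Vector3? PickOnSphere(Vector3 rayOrigin, Vector3 rayDir, Vector3 center, float radius)
      {
          rayDir.Normalize();
          Vector3 oc = rayOrigin - center;
          float b = Vector3.Dot(rayDir, oc);
          float c = Vector3.Dot(oc, oc) - radius * radius;
          float disc = b * b - c;
          if (disc < 0f) return null;                         // ray misses the sphere
          float t = -b - (float)Math.Sqrt(disc);              // nearest hit along the ray
          if (t < 0f) return null;
          return rayOrigin + rayDir * t;
      }

      // Sketch: undo the spin about the x axis, then convert to degrees.
      // Assumes the poles lie on the y axis once the spin is removed; adjust to the texture mapping.
      (float latDeg, float lonDeg) ToLatLon(Vector3 hit, Vector3 center, float radius, float spinAngle)
      {
          Vector3 p = Vector3.Transform(hit - center, Matrix.CreateRotationX(-spinAngle));
          float lat = (float)Math.Asin(MathHelper.Clamp(p.Y / radius, -1f, 1f));
          float lon = (float)Math.Atan2(p.Z, p.X);
          return (MathHelper.ToDegrees(lat), MathHelper.ToDegrees(lon));
      }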

    Read the article

  • Disappearing instances of VertexPositionColor using MonoGame

    - by Rosko
    I am a complete beginner in graphics development with XNA/MonoGame. I started my own project using MonoGame 3.0 for WinRT. I have this unexplainable issue that some of the vertices disappear while doing some updates on them. Basically, it is a game with balls which collide with the walls and with each other, and under certain conditions they explode. When they explode they disappear. Here is a video demonstrating the issue. I used wireframes so that it is easier to see how vertices are missing. The perfectly exploding balls are the ones which result from user input (mouse clicking). Thanks for the help. The situation is: I draw user primitives with triangle strips like this:
    graphicsDevice.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleStrip, circleVertices, 0, primitiveCount);
    All of the primitives are in the z-plane (z = 0), so I thought it was the culling in action. I tried setting the culling mode to none, but it did not help. Here is the code responsible for the explosion:
    private void Explode(GameTime gameTime, ref List<Circle> circles)
    {
        if (this.isExploding)
        {
            for (int i = 0; i < this.circleVertices.Length; i++)
            {
                if (this.circleVertices[i] != this.circleCenter)
                {
                    if (Vector3.Distance(this.circleVertices[i].Position, this.circleCenter.Position) < this.explosionRadius * precisionCoefficient)
                    {
                        var explosionVector = this.circleVertices[i].Position - this.circleCenter.Position;
                        explosionVector.Normalize();
                        explosionVector *= explosionSpeed;
                        circleVertices[i].Position += explosionVector * (float)gameTime.ElapsedGameTime.TotalSeconds;
                    }
                    else
                    {
                        circles.Remove(this);
                    }
                }
            }
        }
    }
    I'd be really grateful if anyone has suggestions about how to fix this issue.

    Read the article

  • Is there a cross-platform special directory I can use for game save files?

    - by Suds
    I'm developing with LWJGL and Java on a Windows 7 laptop. I've successfully set up saving to the %appdata%\gamename\saves\ folder (long form: c:\users\user\appdata\roaming\gamename\saves\) by using File dir = new File(System.getenv("APPDATA") + "\\gamename\\saves\\");. I have hobbyist-level experience with Linux, and zero experience with OS X. My game will be fully cross-platform. Is System.getenv("APPDATA") cross-platform? If so, where does it point on Linux or OS X? Is there a best-practices alternative that I should use?

    Read the article

  • 3D Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem more roomy to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same width. I've tried applying the scale to the model in the content pipeline (setting the scale property of the model in the Properties window in VS). I've also tried to apply the scale using Matrix.CreateScale(float) with the Scale-Rotate-Translate order, with the same result. If I leave the camera speed the same, the camera moves slower, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure what part of the code to include, since I don't know if it is an issue with my model, camera, or something else. Any hints as to what I'm doing wrong?
    Camera:
    Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.PiOver4, _device.Viewport.AspectRatio, 1.0f, 1000.0f );
    Matrix camRotMatrix = Matrix.CreateRotationX( _cameraPitch ) * Matrix.CreateRotationY( _cameraYaw );
    Vector3 transCamRef = Vector3.Transform( _cameraForward, camRotMatrix );
    _cameraTarget = transCamRef + CameraPosition;
    Vector3 camRotUpVector = Vector3.Transform( _cameraUpVector, camRotMatrix );
    View = Matrix.CreateLookAt( CameraPosition, _cameraTarget, camRotUpVector );
    Model:
    World = Matrix.CreateTranslation( Position );

    Read the article

  • How to end a game properly in Unity? [duplicate]

    - by user3889649
    This question already has an answer here: Why do I seem to lose control of my computer when a full-screen Unity game loses focus? (1 answer) I have made a game in Unity Free. The game functions properly, but if the computer receives any kind of notification, the game minimizes automatically and stops working completely. Along with that, my computer freezes completely and I need to restart each time. Is there any solution to this problem in Unity?

    Read the article

  • Avoid overwriting all the methods in the child class

    - by Heckel
    The context: I am making a game in C++ using SFML. I have a class that controls what is displayed on the screen (the manager in the image below). It has a list of all the things to draw, like images, text, etc. To be able to store them in one list, I created a Drawable class from which all the other drawable classes inherit. The image below represents how I would organize each class. Drawable has a virtual method Draw that will be called by the manager. Image and Text override this method. My problem is that I would like the Image::draw method to work for Circle, Polygon, etc., since sf::CircleShape and sf::ConvexShape inherit from sf::Shape. I thought of two ways to do that. My first idea would be for Image to have a pointer to sf::Shape, and the subclasses would make it point to their sf::CircleShape or sf::ConvexShape members (like in the image below). In the Polygon constructor I would write something like ptr_shape = &polygon_shape; This doesn't look very elegant, because I have two variables that are, in fact, just one. My second idea is to store the sf::CircleShape or sf::ConvexShape inside ptr_shape, like ptr_shape = new sf::ConvexShape(...); and to use a function that exists only in ConvexShape I would cast it like so: ((sf::ConvexShape*)ptr_shape)->convex_method(); But that doesn't look very elegant either, and I am not even sure I am allowed to do that. My question: I added details about the whole thing because I thought that maybe my whole architecture was wrong. I would like to know how I could design my program safely without overriding all the Image methods. I apologize if this question has already been asked; I have no idea what to google.

    Read the article

  • Implementing movement on a grid

    - by Dvole
    I have a simple snake game where I have other NPC snakes on the field. How do I calculate the movement of those other snakes so that they don't hit walls or each other? So far I have it like this: I check the current coordinates, and when there is a wall nearby I change direction to some other one, and so on; this way the snakes never collide with the walls. But how do I keep them from colliding with other snakes? I figured I could probe the direction I'm heading in and change direction if there is anything there, but there are situations where this won't work, for example if another snake blocks off all exits later.
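
    One sketch of a safer step choice (the data layout here is an assumption, not something stated in the question): treat every snake segment like a wall when probing, and among the directions that are still open, prefer the one that leaves the most reachable free space, which a flood fill like the one sketched for the reachable-radius question above can count. Point is XNA's integer point type, and isBlocked/freeArea are hypothetical helpers:

      // Sketch: pick the next direction for an NPC snake. Candidate directions are unit
      // steps; isBlocked must treat walls and every snake body cell as blocked, and
      // freeArea returns how many cells remain reachable from a given cell.
      Point ChooseDirection(Point head, IEnumerable<Point> candidates,
                            Func<Point, bool> isBlocked, Func<Point, int> freeArea)
      {
          Point best = Point.Zero;
          int bestScore = -1;
          foreach (var dir in candidates)
          {
              var next = new Point(head.X + dir.X, head.Y + dir.Y);
              if (isBlocked(next)) continue;              // wall or snake segment
              int score = freeArea(next);                 // room left if we step there
              if (score > bestScore) { bestScore = score; best = dir; }
          }
          return best;                                    // Point.Zero means no safe move exists
      }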

    Read the article

  • Zooming in an isometric engine using XNA

    - by Yheeky
    I'm currently working on an isometric game engine, and right now I'm looking for help concerning my zoom function. On my tilemap there are several objects, some of them selectable. When a house (texture size 128 x 256) is placed on the map, I create an array containing all its pixels (= 32768 pixels). Each pixel has an alpha value; I check whether the value is bigger than 200, in which case it appears to be a pixel that belongs to the building. So if the mouse cursor is on such a pixel, the building is selected (pixel collision). I've already implemented my zooming function, which works quite well. I use a scale variable which changes my calculation when drawing all map items. What I'm looking for right now is a precise way to find out whether a zoomed-out/in house is selected. My formula works for values like 0.5 (zoomed out) or 2 (zoomed in), but not for values in between. Here is the code I use for the pixel index:
    var pixelIndex = (int)(((yPos / (Scale * Scale)) * width) + (xPos / Scale) + 1);
    Example: let's assume my mouse is over pixel coordinate 38/222 on the original house texture. Using the code above we get the following pixel index:
    var pixelIndex = ((222 / (1 * 1)) * 128) + (38 / 1) + 1; // = (222 * 128) + 39 = 28416 + 39 = 28455
    If we now zoom out to scale 0.5, the texture size changes to 64 x 128 and the number of pixels decreases from 32768 to 8192. Of course our mouse point also changes with the scale, to 19/111. The formula makes it easy to calculate the original pixelIndex using our new coordinates:
    var pixelIndex = ((111 / (0.5 * 0.5)) * 64) + (19 / 0.5) + 1; // = (444 * 64) + 39 = 28416 + 39 = 28455
    But now comes the problem. If I zoom out to scale 0.75, it does not work any more. The pixel count changes from 32768 to 18432, since the texture size is 96 x 192. The mouse point is transformed to 28/166. The formula gives me a wrong pixelIndex:
    var pixelIndex = ((166 / (0.75 * 0.75)) * 96) + (28 / 0.75) + 1; // = (295.11 * 96) + 38.33 = 28330.66 + 38.33 = 28369
    Does anyone have a clue what's wrong in my code? It must be the first part (28330.66) that causes the calculation problem. Thanks! Yheeky
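
    A possible explanation (a sketch, not from the question): algebraically the formula reduces to origY * 128 + origX + 1, but the on-screen mouse coordinates are integers, so at scale 0.75 the division 166 / 0.75 = 221.33 no longer lands exactly on the original row 222 that was truncated to 166 when the sprite was scaled. Mapping the point back to unscaled texture space first and rounding to a whole pixel there removes the fractional index; the result can still be off by one pixel (the truncation loses that information for good), which is usually acceptable for picking at reduced zoom. A minimal sketch, with the +1 offset and clamping from the original code left out:

      // Sketch: convert a mouse point on the scaled sprite back to the unscaled
      // 128 x 256 texture, round to a whole pixel, then index the original array.
      int GetPixelIndex(float xPos, float yPos, float scale, int originalWidth)
      {
          int origX = (int)Math.Round(xPos / scale);    // back to unscaled texture space
          int origY = (int)Math.Round(yPos / scale);
          return origY * originalWidth + origX;         // row-major index into the original pixel data
      }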

    Read the article

  • Load Balancing Questions

    - by Van Holtz
    I have been learning networking for about 4 months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, to allow even more players to log in and to avoid latency issues. I have prototyped the load-balancing server and it's running pretty well so far. This is my architecture: I have a master server which acts as a proxy, and every sub server (chat, login, game) connects to the master server, as do all the clients. When a client connects, a request flows like this: client sends request - MS (master) decides which SS (sub server) to forward to - forwards request to SS - SS analyzes the message - sends response to MS - MS decides which client to forward to - forwards response to client. Well, it looks like it's going through lots of stages; it takes double the time to process a message compared to the single-server approach. I feel like my model isn't the best, or I may be wrong. Is there a better model, or what is the one used in professional games? I still want a master/sub-server approach. I just want to confirm that I'm going in the right direction before writing all my code. Thanks for any answer :)

    Read the article

  • Handling player/background movements in 2D games

    - by lukeluke
    Suppose you have an animated character controlled by the player and a 2D world (like the old 2D side-scrolling games). When the user presses right on the keyboard, the background is moved to the right. If the path is always horizontal, this is simple to do (incrementing/decrementing the x-coordinate). But suppose that the path is instead a polygonal chain. My questions are: How do you move the background? How do you move the background if the game objects are managed with a physics engine like Box2D?
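
    For the non-physics case, one sketch (assuming an XNA-style Vector2 and a hypothetical array of path points): track a single scalar distance along the chain; pressing right/left increases/decreases it, and the character's position is found by walking the segments. The background is then drawn offset by the negated position.

      // Sketch: world position after travelling `distance` along a polygonal chain.
      Vector2 PointAlongPath(Vector2[] points, float distance)
      {
          for (int i = 0; i < points.Length - 1; i++)
          {
              float segment = Vector2.Distance(points[i], points[i + 1]);
              if (segment > 0f && distance <= segment)
                  return Vector2.Lerp(points[i], points[i + 1], distance / segment);
              distance -= segment;                      // move on to the next segment
          }
          return points[points.Length - 1];             // clamp to the end of the chain
      }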

    Read the article

  • Cutting out smaller rectangles from a larger rectangle

    - by Mauro Destro
    The world is initially a rectangle. The player can move on the world border and then "cut" the world via orthogonal paths (not oblique). When the player reaches the border again, I have a list of the path segments they just made. I'm trying to calculate and compare the two areas created by the cut and select the smaller one to remove from the world. After the first iteration, the world is no longer a rectangle, and the player must move on the border of this new shape. How can I do this? Is it possible to have a non-rectangular path? How can I move the player character only along the path? EDIT: Here you see an example of what I'm trying to achieve: Initial screen layout. The character moves inside the world and then reaches the border again. The segment of the border contained in the smaller area is deleted, and the last path becomes part of the world border. The character moves again inside the world. Segments of the border contained in the smaller area are deleted, etc.
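
    For comparing the two regions, one sketch (assuming each region is available as an ordered list of its corner points, which is an assumption about the data rather than something stated in the question) is the shoelace formula: compute it for both polygons produced by the cut and discard the one with the smaller area. Vector2 is an XNA-style vector here:

      // Sketch: unsigned polygon area via the shoelace formula (vertices in order).
      float PolygonArea(IList<Vector2> polygon)
      {
          float sum = 0f;
          for (int i = 0; i < polygon.Count; i++)
          {
              Vector2 a = polygon[i];
              Vector2 b = polygon[(i + 1) % polygon.Count];
              sum += a.X * b.Y - b.X * a.Y;             // cross product of consecutive vertices
          }
          return Math.Abs(sum) * 0.5f;
      }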

    Read the article

  • How to load a text file from a server into an iPhone game with AS3 in Adobe AIR?

    - by Phil
    I'm creating an iPhone game with Adobe AIR, and I want to be able to load a simple text message from my server into a dynamic text box on the game's front screen (and then be able to update that text file on the server, so it updates automatically in the game after the game is on the App Store). How would I go about achieving that? Is it as simple as using a getURL? Are there any specific issues with trying to do this on the iPhone via AIR that I should be aware of? Thanks for any advice.

    Read the article

  • XNA 2D Board game - trouble with the cursor

    - by Adorjan
    I have just started making a simple 2D board game using XNA, but I got stuck on the movement of the cursor. This is my problem: I have a 10x10 table which I should navigate with a cursor. I simply made that table with the spriteBatch.Draw() function because I couldn't do it any other way. So here is what I did with the cursor:
    public override void LoadContent()
    {
        ...
        mutato.Position = new Vector2(X, Y); //X=103, Y=107;
        mutato.Sebesseg = 45;
        ...
        mutato.Initialize(content.Load<Texture2D>("cursor"), mutato.Position, mutato.Sebesseg);
        ...
    }
    public override void HandleInput(InputState input)
    {
        if (input == null)
            throw new ArgumentNullException("input");
        // Look up inputs for the active player profile.
        int playerIndex = (int)ControllingPlayer.Value;
        KeyboardState keyboardState = input.CurrentKeyboardStates[playerIndex];
        if (input.IsPauseGame(ControllingPlayer) || gamePadDisconnected)
        {
            ScreenManager.AddScreen(new PauseMenuScreen(), ControllingPlayer);
        }
        else
        {
            // Otherwise move the player position.
            if (keyboardState.IsKeyDown(Keys.Down)) { Y = (int)mutato.Position.Y + mutato.Move; }
            if (keyboardState.IsKeyDown(Keys.Up)) { Y = (int)mutato.Position.Y - mutato.Move; }
            if (keyboardState.IsKeyDown(Keys.Left)) { X = (int)mutato.Position.X - mutato.Move; }
            if (keyboardState.IsKeyDown(Keys.Right)) { X = (int)mutato.Position.X + mutato.Move; }
        }
    }
    public override void Draw(GameTime gameTime)
    {
        mutato.Draw(spriteBatch);
    }
    Here's the cursor's (mutato) class:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    namespace Battleship.Components
    {
        class Cursor
        {
            public Texture2D Cursortexture;
            public Vector2 Position;
            public int Move;
            public void Initialize(Texture2D texture, Vector2 position, int move)
            {
                Cursortexture = texture;
                Position = position;
                Move = move;
            }
            public void Update() { }
            public void Draw(SpriteBatch spriteBatch)
            {
                spriteBatch.Draw(Cursortexture, Position, Color.White);
            }
        }
    }
    And here is the part of the InputState class where I think I should change something:
    public bool IsNewKeyPress(Keys key, PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
    {
        if (controllingPlayer.HasValue)
        {
            // Read input from the specified player.
            playerIndex = controllingPlayer.Value;
            int i = (int)playerIndex;
            return (CurrentKeyboardStates[i].IsKeyDown(key) && LastKeyboardStates[i].IsKeyUp(key));
        }
    }
    If I leave the movement operation like this, it doesn't make any sense: X = (int)mutato.Position.X - mutato.Move; However, if I modify it to this: X = (int)mutato.Position.X--; it moves smoothly. Instead of this I need to move the cursor by fields (45 pixels), but I don't have any idea how to manage it.
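
    The "moves smoothly" behaviour comes from IsKeyDown being true on every frame while the key is held, so the position keeps getting pushed. Moving exactly one 45-pixel field per press means reacting only to the transition from up to down, which is what the quoted IsNewKeyPress already detects by comparing the current and last keyboard states. A minimal sketch of that pattern outside the screen-manager framework (field and method names here are placeholders, and it assumes the Cursor class above with Position as a public field):

      // Sketch: one field (Move = 45 px) per key press, using XNA keyboard states directly.
      KeyboardState previousKeyboard;

      void UpdateCursor(Cursor mutato)
      {
          KeyboardState keyboard = Keyboard.GetState();
          if (keyboard.IsKeyDown(Keys.Right) && previousKeyboard.IsKeyUp(Keys.Right))
              mutato.Position.X += mutato.Move;         // exactly one field to the right
          if (keyboard.IsKeyDown(Keys.Left) && previousKeyboard.IsKeyUp(Keys.Left))
              mutato.Position.X -= mutato.Move;
          if (keyboard.IsKeyDown(Keys.Down) && previousKeyboard.IsKeyUp(Keys.Down))
              mutato.Position.Y += mutato.Move;
          if (keyboard.IsKeyDown(Keys.Up) && previousKeyboard.IsKeyUp(Keys.Up))
              mutato.Position.Y -= mutato.Move;
          previousKeyboard = keyboard;                  // remember for the next frame
      }

    Within the question's framework, the same effect should come from testing input.IsNewKeyPress(...) in HandleInput instead of keyboardState.IsKeyDown(...).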

    Read the article

  • OpenGL ES 2.0 gluUnProject

    - by secheung
    I've spent more time than I should have trying to get my ray-picking program working. I'm pretty convinced my math is solid with respect to line-plane intersection, but I believe the problem lies in converting the touch's screen coordinates into 3D world space. Here's my code:
    public void passTouchEvents(MotionEvent e) {
        int[] viewport = {0, 0, viewportWidth, viewportHeight};
        float x = e.getX(), y = viewportHeight - e.getY();
        float[] pos1 = new float[4];
        float[] pos2 = new float[4];
        GLU.gluUnProject(x, y, 0.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos1, 0);
        GLU.gluUnProject(x, y, 1.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos2, 0);
    }
    Just as a reference, I've tried transforming the coordinates 0,0,0 and got an offset. It would be appreciated if you could answer using OpenGL ES 2.0 code.

    Read the article

  • Odds For Fighting Game

    - by thinkfuture
    I'm creating a fighting game where two opponents face off against each other in the ring. While I've been able to figure out the odds of a player winning based on previous wins/losses, I have yet to find a formula which modifies those odds based on the opponent. For example:
    Player 1: W:5 L:5 - 1/1 odds
    Player 2: W:5 L:0 - 1/5 odds
    I want to calculate the odds that Player 1 will win against Player 2. Compounding this, the players could be of different levels: if the players are within a few levels of each other, the odds should map closely to wins/losses. However, as the levels diverge, the odds of the lower-level player winning should shrink. As a swag, for Player 1 - W:5 L:5 - 1:1 odds:
    Against a level 8 - 1:2
    Against a level 9 - 2:3
    Against a level 10 - 1:1
    Against a level 11 - 3:2
    Against a level 12 - 2:1
    These are just estimates; my sense is that there is a math formula out there which will calculate this. Can anyone point me to what this could be? Thanks... Chris
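
    One standard formula for this kind of pairwise estimate (a suggestion sketched here, not something from the question) is the Elo expected score: fold record and level into a single rating, and the win probability follows from the rating gap. A minimal sketch, where the rating construction and the constants 400, 25, and 10 are arbitrary tuning assumptions:

      // Sketch: Elo-style expected win chance for player A against player B.
      double ExpectedWinChance(double ratingA, double ratingB)
      {
          return 1.0 / (1.0 + Math.Pow(10.0, (ratingB - ratingA) / 400.0));
      }

      // One made-up way to build a rating from level and record:
      double Rating(int level, int wins, int losses) => 25.0 * level + 10.0 * (wins - losses);

    With these numbers, equal ratings give 0.5 (1:1 odds), while a 120-point deficit gives roughly 0.33 (about 1:2), so the constants directly control how fast the odds diverge with level difference.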

    Read the article

  • Creating a 2D Line Branch (Part 2)

    - by Danran
    Yesterday I asked this question on how to create a 2D line branch: Creating a 2D Line Branch. Thanks to the answers provided, I now have a nice-looking main branch (coloured to show the different segments in the final item). Now it's time to branch things off, as discussed in the article http://drilian.com/2009/02/25/lightning-bolts/ Again, however, I am confused as to the meaning of the following pseudocode: splitEnd = Rotate(direction, randomSmallAngle)*lengthScale + midPoint; I'm unsure how to actually do the rotation correctly. In all honesty, I'm a bit unsure what to do at this part: "splitEnd" will be a Vector3, so whatever happens in the Rotate function must return some form of rotated direction, which is then multiplied by a scale to create length and added to the midPoint. I'm not sure. If someone could explain what I'm meant to be doing in this part, I would be really grateful.
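
    A sketch of what that pseudocode usually means in 2D (assuming an XNA-style Vector2; for a Vector3 lying in the z = 0 plane, the same rotation applies to the X and Y components and Z stays 0): rotate the segment's direction vector by a small random angle about the origin, scale it to the desired branch length, and offset it from the segment's midpoint.

      // Sketch: standard 2D rotation of a direction vector by `angle` radians.
      Vector2 Rotate(Vector2 direction, float angle)
      {
          float cos = (float)Math.Cos(angle);
          float sin = (float)Math.Sin(angle);
          return new Vector2(direction.X * cos - direction.Y * sin,
                             direction.X * sin + direction.Y * cos);
      }

      // Mirroring the article's pseudocode:
      // Vector2 splitEnd = Rotate(direction, randomSmallAngle) * lengthScale + midPoint;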

    Read the article
