Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • Pathfinding library

    - by Shivan Dragon
    I'm an amateur game developer and a somewhat amateur Java developer as well. I'm trying to find a way to add pathfinding to my game(s). I first Googled for existing Java libraries with various pathfinding implementations, but failed to find any. It seems the only way to get pathfinding code is to use it via a game engine (like Unity), but I'd just like a library I can use while writing the game loop and other things on my own. Failing to find such a library, I've tried implementing some algorithms myself. I've managed to get a working A* running in Java, but for fancier stuff like D* I find it hard to do by hand. So my question is: are there any Java libraries that contain implementations of at least the basic pathfinding algorithms?
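
    For reference, my A* is essentially the textbook version, something like the sketch below (the array-based grid layout here is illustrative, not taken from any library):

        import java.util.*;

        // Minimal A* over a 4-connected boolean grid (Manhattan heuristic).
        // Returns the path cost in steps, or -1 if the target is unreachable.
        static int shortestPath(boolean[][] walkable, int sx, int sy, int tx, int ty) {
            int h = walkable.length, w = walkable[0].length;
            int[][] g = new int[h][w];              // best known cost to each cell
            for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
            // Frontier ordered by f = g + heuristic; entries are {x, y, f}.
            PriorityQueue<int[]> open = new PriorityQueue<>(Comparator.comparingInt(n -> n[2]));
            g[sy][sx] = 0;
            open.add(new int[] { sx, sy, Math.abs(tx - sx) + Math.abs(ty - sy) });
            int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
            while (!open.isEmpty()) {
                int[] cur = open.poll();
                int x = cur[0], y = cur[1];
                if (x == tx && y == ty) return g[y][x];   // goal reached
                for (int[] d : dirs) {
                    int nx = x + d[0], ny = y + d[1];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h || !walkable[ny][nx]) continue;
                    int ng = g[y][x] + 1;
                    if (ng < g[ny][nx]) {                 // cheaper route found
                        g[ny][nx] = ng;
                        open.add(new int[] { nx, ny, ng + Math.abs(tx - nx) + Math.abs(ty - ny) });
                    }
                }
            }
            return -1;
        }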

  • Camera won't stay behind model after pitch, then rotation

    - by ChocoMan
    I have a camera positioned behind a model. Currently, if I push the left thumbstick to make the model move forward, backward, or strafe, the camera stays with the model. If I push the right thumbstick left or right, the model rotates fine in those directions, and the camera rotates with it while staying behind the model. But when I pitch the model up or down and then rotate it afterwards, the camera drifts slightly, rotating in a clock-like fashion behind the model. After a few rotations of the model, if I try to pitch the camera, it ends up looking at the side and eventually the front of the model, still rotating in that clock-like fashion. My question is: how do I keep the camera pitching up and down behind the model no matter how much the model has rotated? Here is what I've got:

        // Rotates model and pitches camera on its own axis
        public void modelRotMovement(GamePadState pController)
        {
            // Rotates Camera with model
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(angularSpeed);
            // Pitches Camera around model
            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(angularSpeed);
            AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
        }

        // Orbit (yaw) camera around with model (only seeing back of model)
        public void cameraYaw(Vector3 axisYaw, float yaw)
        {
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget;
        }

        // Raise camera above or below the model's shoulders
        public void cameraPitch(Vector3 axisPitch, float pitch)
        {
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget;
        }

        // Call in update method
        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);
            cameraPitch(Vector3.Right, Pitch);
        }

    NOTE: I tried to use AddPitch just like AddRotation but it didn't work.
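
    One thing I plan to try, sketched below (untested against the rest of ModelLoad): derive both rotation axes from the model's current orientation instead of the world axes, so yaw and pitch are always applied in the model's own frame:

        // Sketch: pitch around the model's own right vector rather than the
        // world's Vector3.Right, so the pitch plane follows the model's yaw.
        // MOrientation is the model's current rotation matrix (set above).
        public void updateCamera()
        {
            Vector3 modelUp    = Vector3.TransformNormal(Vector3.Up, MOrientation);
            Vector3 modelRight = Vector3.TransformNormal(Vector3.Right, MOrientation);

            cameraYaw(modelUp, Yaw);        // orbit around the model's up axis
            cameraPitch(modelRight, Pitch); // raise/lower along the model's right axis
        }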

  • BlitzMax - generating 2D neon glowing line effect to PNG file

    - by zanlok
    Originally asked on Stack Overflow, but it became tumbleweed. I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or a laser beam. It doesn't have to be realtime; it just needs to go to TImage objects and then maybe be saved to PNG for later use in animation. I'm happy to use 3D features, but it will be for use in a 2D game.

    Since it will be on a black/space background, my strategy is to draw a series of blurred lines with color and high transparency, then central lines that are less blurred and more white. What I actually want to draw is bezier curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser/neon effect because it comes out looking very segmented. So I think it may be better to use a blur effect/shader on the one thing that does render well: a 1-pixel bezier curve. The problems I've been having are:

      - Applying a shader to just the area of the screen where the lines are drawn.
      - If there's a way to draw lines to a texture, blur that texture, and then save the PNG, that would be great to hear about.

    There's got to be a way to do this, but I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated. Using just 2D calls could be advantageous, being simpler to understand and re-use. It would also be very nice to know how to save a PNG that preserves the transparency/alpha.

    P.S. I've reviewed this post (and many, many others on the Blitz site), have samples working, and have even developed my own 5x5 frag shaders. But that's all 3D, scene-wide stuff that doesn't seem to convert to 2D or to just a certain area very well. I'd rather understand how to apply shading to a 2D scene, especially using the specifics of BlitzMax.
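
    Independent of the BlitzMax API, the blur step I'm describing is just a separable blur over the rendered curve, plus compositing the sharp core over a wide faint halo. A pure-Python illustration of the math (no BlitzMax calls; grayscale buffer only):

        # Sketch of the glow approach: render the 1-px curve into a buffer,
        # blur it at two radii, then composite the sharp core back on top.

        def box_blur(img, w, h, radius):
            """One separable box-blur pass; img is a flat list of floats in [0,1]."""
            out = img[:]
            for y in range(h):                      # horizontal pass
                for x in range(w):
                    acc, n = 0.0, 0
                    for dx in range(-radius, radius + 1):
                        if 0 <= x + dx < w:
                            acc += img[y * w + x + dx]
                            n += 1
                    out[y * w + x] = acc / n
            final = out[:]
            for y in range(h):                      # vertical pass
                for x in range(w):
                    acc, n = 0.0, 0
                    for dy in range(-radius, radius + 1):
                        if 0 <= y + dy < h:
                            acc += out[(y + dy) * w + x]
                            n += 1
                    final[y * w + x] = acc / n
            return final

        def glow(curve, w, h):
            """Wide faint halo + tighter brighter halo + the sharp core line."""
            halo_wide  = box_blur(curve, w, h, 8)
            halo_tight = box_blur(curve, w, h, 3)
            return [min(1.0, 0.5 * a + 0.8 * b + c)
                    for a, b, c in zip(halo_wide, halo_tight, curve)]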

  • XNA Easy Storage XBOX 360 High Scores

    - by user1003211
    To follow up from a previous query, I need some help with the implementation of EasyStorage high scores, which is bringing up some errors on the Xbox. I get the prompt screen, a SaveDevice is selected, and the file is created. However, the file remains empty (I've tried prepopulating it, but I still get errors). The full portions of the scoring code can be found here: http://pastebin.com/74v897Yt

    The current issue is in LoadHighScores(): "There is an error in XML document (0, 0)." on the line data = (HighScoreData)serializer.Deserialize(stream); I'm not sure whether this line is correct either: HighScoreData data = new HighScoreData();

        public static HighScoreData LoadHighScores(string container, string filename)
        {
            HighScoreData data = new HighScoreData();
            if (Global.SaveDevice.FileExists(container, filename))
            {
                Global.SaveDevice.Load(container, filename, stream =>
                {
                    File.Open(Global.fileName_options, FileMode.OpenOrCreate, FileAccess.Read);
                    try
                    {
                        // Read the data from the file
                        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                        data = (HighScoreData)serializer.Deserialize(stream);
                    }
                    finally
                    {
                        // Close the file
                        stream.Close();
                        // stream.Dispose();
                    }
                });
            }
            return (data);
        }

    I call PromptMe() when the Start button is pressed at the beginning. I call entries = LoadHighScores(HighScoresContainer, HighScoresFilename), guarded by if (Global.SaveDevice.IsReady), during the menu screen to try to display the high-score screen. I call SaveHighScore() when the game ends. I've tried altering the struct code to a class, but still no luck. Any help greatly appreciated.
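
    For reference, a sketch of the kind of save call that should produce a non-empty file, written from memory against the EasyStorage API (names may be off):

        // Sketch (not verified): EasyStorage's Save mirrors the Load call used
        // above; serializing into the stream it hands you writes the XML file.
        public static void SaveHighScores(HighScoreData data, string container, string filename)
        {
            if (Global.SaveDevice.IsReady)
            {
                Global.SaveDevice.Save(container, filename, stream =>
                {
                    XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                    serializer.Serialize(stream, data);   // writes the whole XML document
                });
            }
        }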

  • Blender multiple animations and Collada export

    - by Morgan Bengtsson
    Say I have a simple mesh in Blender with two keyframes, and then another animation for the same mesh, also with two keyframes. These animations work fine in Blender, and I can switch between them in the Dope Sheet, where they are called "actions". The problem arises when I try to export this to the Collada format for use in my game engine: the only animation/action that seems to be carried over is the one currently associated with the mesh. Is it possible to export multiple animations/actions for the same mesh to the Collada format?

  • How best to handle ID3D11InputLayout in rendering code?

    - by JohnB
    I'm looking for an elegant way to handle input layouts in my Direct3D 11 code. The problem is that I have an Effect class and an Element class. The Effect class encapsulates shaders and similar settings, and the Element class contains something that can be drawn (a 3D model, landscape, etc.). My drawing code sets the device shaders and so on using the specified Effect, then calls the Element's draw function to draw the actual geometry it contains.

    The problem is this: I need to create an ID3D11InputLayout somewhere. This really belongs in the Element class, as it's no business of the rest of the system how that element chooses to represent its vertex layout. But in order to create the object, the API requires the bytecode of the vertex shader that will be used to draw the object. In Direct3D 9 this was easy; there was no such dependency, so my Element could contain its own input layout structures and set them without the Effect being involved. But the Element shouldn't really have to know anything about the Effect it's being drawn with. That's just render settings, and the Element is there to provide geometry.

    So I don't really know where to store, and how to select, the input layout for each draw call. I mean, I've made something work, but it seems very ugly. This makes me think I've either missed something obvious, or else my design of having all the render settings in an Effect, the geometry in an Element, and a third party that draws it all is just flawed. I'm just wondering how anyone else handles their input layouts in Direct3D 11 in an elegant way.
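
    One shape a solution could take (a sketch with invented names, not from any engine): cache each ID3D11InputLayout keyed by the (vertex format, shader signature) pair, so the Element keeps its layout description, the Effect keeps its bytecode, and neither owns the other:

        // Sketch: ID3D11InputLayout objects cached per (format id, shader id),
        // created lazily on first use and reused on every subsequent draw.
        #include <d3d11.h>
        #include <map>
        #include <utility>

        class InputLayoutCache {
        public:
            ID3D11InputLayout* Get(ID3D11Device* device,
                                   const D3D11_INPUT_ELEMENT_DESC* desc, UINT numElems,
                                   UINT formatId,       // assigned by the Element
                                   UINT shaderId,       // assigned by the Effect
                                   const void* bytecode, SIZE_T bytecodeLen)
            {
                auto key = std::make_pair(formatId, shaderId);
                auto it = m_layouts.find(key);
                if (it != m_layouts.end())
                    return it->second;

                ID3D11InputLayout* layout = nullptr;
                device->CreateInputLayout(desc, numElems, bytecode, bytecodeLen, &layout);
                m_layouts[key] = layout;   // created once, shared thereafter
                return layout;
            }
        private:
            std::map<std::pair<UINT, UINT>, ID3D11InputLayout*> m_layouts;
        };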

  • How can I better implement A star algorithm with a very large set of nodes?

    - by Stephen
    I'm making a game with Node.js in which many enemies must converge on the player as the player moves around a relatively open space (right now it is an open field with few obstacles, but eventually there may be some small buildings in the field with 1 or 2 rooms). It's a multiplayer game using websockets, so the server needs to keep track of enemies and players. I found a JavaScript A* library which I've modified to be used on the server as a Node.js module. The library uses a binary heap to track the nodes for the algorithm, so it should be pretty fast (and indeed, with a small grid, say 100x100, it is lightning fast). The problem is that my game is not really tile-based. As the player moves around the map, he is moving on a more or less 1-to-1 per-pixel coordinate system (the player can move in 8 directions, 1 or 2 pixels at a time). In preliminary tests on an 800x600 field, the pathfinding can take anywhere from 400 to 1000 ms. Multiply that by 10 enemies and the game starts to get pretty choppy. I have already set it up so that each enemy only does a pathfinding call once per second, or even as slowly as once every 2 seconds (they have to keep updating their paths because the players can move freely). But even with this long interval, there are noticeable lag spikes every couple of seconds as the enemies update their paths. I'm willing to approach the problem of pathfinding differently if there's another option. I'm assuming that the real problem is the enormous grid (800x600). It also occurs to me that maybe the large arrays are to blame, as I've read that V8 has trouble with large arrays.
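
    One direction I'm considering (a sketch of the quantization idea; findPath stands in for the library's search call, whose signature I'm assuming): since movement is per-pixel but paths don't need pixel precision, search on a coarse grid and steer between cell centers:

        // Sketch: search on a 20-px coarse grid instead of per-pixel cells.
        // An 800x600 field becomes a 40x30 grid: 1,200 nodes instead of 480,000.
        const CELL = 20;

        function toCell(px: number, py: number): { cx: number; cy: number } {
          return { cx: Math.floor(px / CELL), cy: Math.floor(py / CELL) };
        }

        function toPixelCenter(cx: number, cy: number): { x: number; y: number } {
          return { x: cx * CELL + CELL / 2, y: cy * CELL + CELL / 2 };
        }

        function enemyPath(
          findPath: (sx: number, sy: number, tx: number, ty: number) => Array<[number, number]>,
          enemy: { x: number; y: number },
          player: { x: number; y: number }
        ): Array<{ x: number; y: number }> {
          const s = toCell(enemy.x, enemy.y);
          const t = toCell(player.x, player.y);
          // The path comes back as coarse cells; convert to pixel waypoints.
          return findPath(s.cx, s.cy, t.cx, t.cy).map(([cx, cy]) => toPixelCenter(cx, cy));
        }

    That drops the node count by a factor of CELL squared, which should matter far more than any constant-factor tuning of the heap.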

  • Software rendering 3D triangles in the proper order

    - by at.
    I'm implementing a basic 3D rendering engine in software (for education purposes; please don't tell me to use an API). When I project a triangle from 3D to 2D coordinates, I draw it. However, the triangles arrive in arbitrary order, so whatever gets drawn last sits on top of all the other triangles, even ones it shouldn't be in front of. Intuitively, it seems I need to draw the triangles in the correct order, so I can calculate their distances to the camera and sort by that; the objects furthest away get drawn first. Is this the proper way to render triangles? If I'm sorting all the objects, this is O(n log n) now. Is this the most efficient way to do it?
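
    Concretely, the sort I have in mind looks like this sketch (the Triangle type here is illustrative, with vertices assumed already transformed to camera space):

        import java.util.*;

        // Sketch of the painter's-algorithm sort described above.
        class Vec3 {
            double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        }

        class Triangle {
            Vec3 a, b, c;
            Triangle(Vec3 a, Vec3 b, Vec3 c) { this.a = a; this.b = b; this.c = c; }
            double depth() { return (a.z + b.z + c.z) / 3.0; }   // centroid depth
        }

        class PainterSort {
            // Farthest first, so nearer triangles are rasterized over farther ones.
            static void sortBackToFront(List<Triangle> tris) {
                tris.sort((t1, t2) -> Double.compare(t2.depth(), t1.depth()));
            }
        }

    Note that centroid sorting can still mis-order long, intersecting, or overlapping triangles; that failure mode is a big part of why hardware pipelines use a per-pixel depth buffer instead of a sort.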

  • Python RPG advice? [closed]

    - by nikita.utiu
    I have started coding a text RPG engine in Python. I have the basic concepts laid down, like game-state saving, input, output, etc. I was wondering how certain scripted game mechanics are usually implemented, e.g. debuffs that increase damage received from a certain player or multiply damage by the number of hits received, or overriding a mob's default paths for certain events. Some code bases or other source code would be useful (not necessarily Python). Thanks in advance.
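
    To make the question concrete, the style of thing I'm imagining (purely a sketch, all names made up) is event hooks that effects register on a character:

        # Sketch: scripted mechanics as hooks on game events. A debuff that
        # amplifies incoming damage registers an on-damage hook; the engine
        # folds every registered hook over the damage value.

        class Character:
            def __init__(self, name, hp):
                self.name = name
                self.hp = hp
                self.damage_hooks = []          # callables: (attacker, amount) -> amount

            def take_damage(self, attacker, amount):
                for hook in self.damage_hooks:  # let active effects rewrite the number
                    amount = hook(attacker, amount)
                self.hp -= amount
                return amount

        def vulnerability_debuff(multiplier):
            """Debuff: multiply all incoming damage."""
            return lambda attacker, amount: amount * multiplier

        def grudge_debuff(target_player):
            """Debuff: +50% damage from one specific player."""
            def hook(attacker, amount):
                return amount * 1.5 if attacker is target_player else amount
            return hook

        # usage
        hero = Character("hero", hp=100)
        orc = Character("orc", hp=40)
        orc.damage_hooks.append(vulnerability_debuff(2))
        orc.damage_hooks.append(grudge_debuff(hero))
        print(orc.take_damage(hero, 10))   # 10 * 2 * 1.5 = 30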

  • Bodies do not stay stuck together by joint on retina display

    - by Mike JM
    I'm practicing with Box2D revolute joints. Everything's going pretty well except for one thing: for some reason, bodies joined together with revolute joints do not stay stuck together; they start drifting apart from app launch when I run it on a retina device or simulator. On a non-retina device it works just fine, as expected, and the behavior only appears when I run the same app on a retina device/simulator. I am taking the content scale factor into account.

  • Flash framerate reliability

    - by Tim Cooper
    I am working in Flash, and a few things have been brought to my attention. Below is some code I have questions about:

        addEventListener(Event.ENTER_FRAME, function(e:Event):void {
            if (KEY_RIGHT) {
                // Move character right
            }
            // Etc.
        });

        stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
            // Report key which is down
        });

        stage.addEventListener(KeyboardEvent.KEY_UP, function(e:KeyboardEvent):void {
            // Report key which is up
        });

    I have the project configured for a framerate of 60 FPS. The two questions I have on this are:

      - What happens when Flash is unable to call the ENTER_FRAME function every 1/60 of a second?
      - Is this a reasonable way of processing events that need to be governed by time (e.g. a ball which needs to travel from the left of the screen to the right in X seconds), or should it be done a different way?
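
    For the second question, the time-based alternative I've seen sketched elsewhere looks like this (getTimer-based; ball and X_SECONDS are placeholders of mine):

        import flash.events.Event;
        import flash.utils.getTimer;

        var lastTime:int = getTimer();

        addEventListener(Event.ENTER_FRAME, function(e:Event):void {
            var now:int = getTimer();
            var dt:Number = (now - lastTime) / 1000.0;   // seconds since last frame
            lastTime = now;

            // Ball crosses the 800-px stage in X_SECONDS regardless of framerate:
            ball.x += (800 / X_SECONDS) * dt;
        });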

  • Setting effect variables in XNA

    - by Badescu Alexandru
    Hello! I am currently reading a book named "3D Graphics with XNA Game Studio 4.0" by Sean James and have some questions to ask. If I create an effect parameter named, let's say, SpecularPower, and have a variable named SpecularPower in my effect, will effect.Parameters["SpecularPower"].SetValue(3) change the SpecularPower variable in my effect?

    And a second question, not regarding the book: if I have a spaceship and I've created a "boost" feature that speeds it up, what effects should I implement to create the impression of high speed? I was thinking of making everything except my spaceship blurry, but I think there would be something missing. Any ideas?

    Regards, Alex Badescu
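
    On the first question, my understanding so far (worth verifying): SetValue only stores the value on the CPU side, and it reaches the shader when a pass is applied during drawing. Also, the overload matters; SetValue(3) picks the int overload, which I believe throws if the effect variable is declared as a float, so SetValue(3.0f) is safer. A sketch of the usual XNA 4.0 pattern:

        // Sketch (not from the book verbatim): parameter values are committed
        // to the GPU when the pass is applied, inside the Game's Draw method.
        effect.Parameters["SpecularPower"].SetValue(3.0f);   // float overload
        effect.Parameters["WorldMatrix"].SetValue(world);

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();   // uploads the parameter values set above
            GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                0, 0, vertexCount, 0, primitiveCount);
        }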

  • Routes on a sphere surface - Find geodesic?

    - by CaNNaDaRk
    I'm working with some friends on a browser-based game where people can move on a 2D map. It's been almost 7 years and people still play this game, so we are thinking of a way to give them something new. Until now the game map was a bounded plane, and people could move from (0, 0) to (MAX_X, MAX_Y) in quantized X and Y increments (just imagine it as a big chessboard). We believe it's time to give it another dimension, so a couple of weeks ago we began to wonder how the game could look with other mappings:

      - Unlimited plane with continuous movement: this could be a step forward, but I'm still not convinced.
      - Toroidal world (continuous or quantized movement): sincerely, I have worked with tori before, but this time I want something more.
      - Spherical world with continuous movement: this would be great!

    What we want: users' browsers are given a list of coordinates like (latitude, longitude) for each object on the spherical surface map; browsers must then show this on the user's screen, rendering the objects inside a web element (canvas, maybe? this is not a problem). When people click on the map, we convert (mouseX, mouseY) to (lat, lng) and send it to the server, which has to compute a route between the user's current position and the clicked point.

    What we have: we began writing a Java library with many useful maths for working with rotation matrices, quaternions, Euler angles, translations, etc. We put it all together and created a program that generates sphere points, renders them, and shows them to the user inside a JPanel. We managed to catch clicks and translate them to spherical coordinates, and to provide some other useful features like view rotation, scaling, and translation. What we have now is like a little (very little indeed) engine that simulates client and server interaction: the client side shows points on the screen and catches interactions, while the server side renders the view and does other calculus, like interpolating the route between the current position and the clicked point.

    Where the problem is: obviously we want the interpolation between the two route points to follow the shortest path. We use quaternions to interpolate between two points on the surface of the sphere, and this seemed to work fine until I noticed that we weren't getting the shortest path on the sphere's surface. We thought the problem was that the route was being calculated as the sum of two rotations about the X and Y axes, so we changed the way we calculate the destination quaternion. We get a third angle (the first is latitude, the second is longitude, the third is the rotation about the vector which points toward our current position), which we called orientation. Now that we have the orientation angle, we rotate the Z axis and then use the resulting vector as the rotation axis for the destination quaternion (in our debug rendering the rotation axis is drawn in grey).

    With this we get the correct route (it lies on a great circle), but we get it ONLY if the starting route point is at latitude, longitude (0, 0), which means the starting vector is (sphereRadius, 0, 0). With the previous version we don't get a good result even when the starting point is (0, 0), so I think we're moving towards a solution, but the procedure we follow to get this route is a little "strange", maybe? When the starting point is not (0, 0), the starting vector is no longer (sphereRadius, 0, 0), and the destination point (which is correctly drawn!) is not on the route.

    The magenta point in our rendering (the one which lies on the route) is the route's ending point rotated about the center of the sphere by (-startLatitude, 0, -startLongitude). This means that if I calculate a rotation matrix and apply it to every point on the route, maybe I'll get the real route, but I'm starting to think there's a better way to do this. Maybe I should try to get the plane through the center of the sphere and the two route points, intersect it with the sphere, and get the geodesic? But how? Sorry for being way too verbose, and maybe for incorrect English, but this thing is blowing my mind!

    EDIT: this code version is the one related to the first attempt described above:

        public void setRouteStart(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(
                Math.toRadians(lat), 0, -Math.toRadians(lng));
            // set route start
            qtStart.setInertialToObject(tmp);
            // do other stuff like drawing start point...
        }

        public void impostaDestinazione(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(
                Math.toRadians(lat), 0, -Math.toRadians(lng));
            qtEnd.setInertialToObject(tmp);
            // do other stuff like drawing dest point...
        }

        public V3D interpolate(double totalTime, double t) {
            double _t = t / totalTime;
            Quaternion q = Quaternion.Slerp(qtStart, qtEnd, _t);
            RotationMatrix.inertialQuatToIObject(q);
            V3D p = matInt.inertialToObject(V3D.Xaxis.scale(sphereRadius));
            // other stuff, like drawing point...
            return p;
        }

        // mostly taken from a book!
        public static Quaternion Slerp(Quaternion q0, Quaternion q1, double t) {
            double cosO = q0.dot(q1);
            double q1w = q1.w;
            double q1x = q1.x;
            double q1y = q1.y;
            double q1z = q1.z;
            if (cosO < 0.0f) {
                q1w = -q1w; q1x = -q1x; q1y = -q1y; q1z = -q1z;
                cosO = -cosO;
            }
            double sinO = Math.sqrt(1.0f - cosO * cosO);
            double O = Math.atan2(sinO, cosO);
            double oneOverSinO = 1.0f / sinO;
            double k0 = Math.sin((1.0f - t) * O) * oneOverSinO;
            double k1 = Math.sin(t * O) * oneOverSinO;
            // Interpolate
            return new Quaternion(
                k0 * q0.w + k1 * q1w,
                k0 * q0.x + k1 * q1x,
                k0 * q0.y + k1 * q1y,
                k0 * q0.z + k1 * q1z);
        }

    A little dump of what I get (again for the first attempt):

        Route info:
        Sphere radius and center: 200,000, (0.0, 0.0, 0.0)

        Route start: lat 0,000 °, lng 0,000 ° @v: (200,000, 0,000, 0,000), |v| = 200,000
        Route end:   lat 30,000 °, lng 30,000 ° @v: (150,000, 86,603, 100,000), |v| = 200,000
        Qt dump: (w, x, y, z), rot. angle °, (x, y, z) rot. axis
        Qt start: (1,000, 0,000, -0,000, 0,000);  0,000 °; (1,000, 0,000, 0,000)
        Qt end:   (0,933, 0,067, -0,250, 0,250); 42,181 °; (0,186, -0,695, 0,695)

        Route start: lat 30,000 °, lng 10,000 ° @v: (170,574, 30,077, 100,000), |v| = 200,000
        Route end:   lat 80,000 °, lng -50,000 ° @v: (22,324, -26,604, 196,962), |v| = 200,000
        Qt dump: (w, x, y, z), rot. angle °, (x, y, z) rot. axis
        Qt start: (0,962, 0,023, -0,258, 0,084); 31,586 °; (0,083, -0,947, 0,309)
        Qt end:   (0,694, -0,272, -0,583, -0,324); 92,062 °; (-0,377, -0,809, -0,450)
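
    EDIT 2: one more idea we are considering (a sketch; it assumes our V3D class exposes dot, scale, add, and normalize): slerp directly between the two position vectors. By construction this stays on the great circle, with no orientation angle or quaternion setup needed:

        // Sketch: classic vector slerp between two points on the sphere.
        public static V3D greatCircleInterpolate(V3D start, V3D end, double radius, double t) {
            V3D a = start.normalize();
            V3D b = end.normalize();
            // angle between the endpoints, with the dot product clamped to [-1, 1]
            double omega = Math.acos(Math.max(-1.0, Math.min(1.0, a.dot(b))));
            if (omega < 1e-9) return start;            // endpoints coincide
            double sinOmega = Math.sin(omega);
            double k0 = Math.sin((1.0 - t) * omega) / sinOmega;
            double k1 = Math.sin(t * omega) / sinOmega;
            return a.scale(k0).add(b.scale(k1)).scale(radius);   // point on the geodesic
        }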

  • Accept keyboard input when game is not in focus?

    - by Corey Ogburn
    I want to be able to control the game via the keyboard while the game does not have focus. How can I do this in XNA?

    EDIT: I bought a tablet. I want to write a separate app that overlays the screen with controls that send keyboard input to the game. It's not sending the input directly to the game; it uses the method discussed in this SO question: http://stackoverflow.com/questions/6446085/emulate-held-down-key-on-keyboard To my understanding, my test app is working the way it should, but the game is not responding to the input. I originally thought that Keyboard.GetState() would get the state regardless of whether the game is in focus, but that doesn't appear to be the case.
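
    The closest thing to a workaround I've found so far (a sketch; P/Invoke, so desktop Windows only, no use on Xbox) is to bypass Keyboard.GetState() and poll the OS key state directly:

        using System.Runtime.InteropServices;

        // Sketch: poll OS-level key state so input arrives even without focus.
        static class GlobalKeyboard
        {
            [DllImport("user32.dll")]
            private static extern short GetAsyncKeyState(int vKey);

            // The high bit set means the key is currently held down.
            public static bool IsDown(int virtualKeyCode)
            {
                return (GetAsyncKeyState(virtualKeyCode) & 0x8000) != 0;
            }
        }

        // in Update(): if (GlobalKeyboard.IsDown(0x27)) { /* VK_RIGHT held */ }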

  • XNA 4.0: Problem with loading XML content files

    - by 200
    I'm new to XNA, so hopefully my question isn't too silly. I'm trying to load content data from XML. My XML file is placed in my Content project (the new XNA 4 thing) and is called "Event1.xml":

        <?xml version="1.0" encoding="utf-8" ?>
        <XnaContent>
          <Asset Type="MyNamespace.Event">   <!-- error on this line -->
            <name>1</name>
          </Asset>
        </XnaContent>

    My Event class is placed in my main project:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Xml.Linq;

        namespace MyNamespace
        {
            public class Event
            {
                public string name;
            }
        }

    The XML file is loaded by my main game class inside the LoadContent() method:

        Event temp = Content.Load<Event>("Event1");

    And this is the error I'm getting:

        There was an error while deserializing intermediate XML. Cannot find type "MyNamespace.Event"

    I think the problem is that, at the time the XML is being evaluated, the Event class has not been built, because one file is in my main project while the other is in my Content project (this being an XNA 4.0 project and such). I have also tried changing the build action of the XML file from Compile to Content; however, the program was then unable to find the file and gave this other warning:

        Project item 'Event1.xml' was not built with the XNA Framework Content Pipeline. Set its Build Action property to Compile to build it.

    Any suggestions are appreciated. Thanks.

  • LibGDX Box2D createFixture crashes the VM intermittently

    - by user45021
    I have a hard-to-debug problem. I have a Box2D game which creates a wheeled vehicle. I want the vehicle body to be reflected when it goes from moving left to moving right. To do this, I set a flag in a ChangeListener on a button, and then in the update method I destroy and recreate the body facing the other way. It works fine most of the time, but if I flip the vehicle several times quickly, the JVM crashes: no errors, nothing in the log. I added System.out.println calls, and the errors occur in the routine that instantiates the new body, before anything gets deleted/removed, so I don't think the UI is trying to access null pointers (and if it were, it should throw an error). The crash seems to be at the createFixture statements, but those mostly work the first time. I tried debugging, but the error rarely happens when the flips are slow; in any case, createFixture drops fairly quickly into JNI. Is this a Box2D bug? Is GC the issue? From Mission Control I see the GC is collecting on a period of maybe 5 s, and flipping slower than that mostly works. How do I debug this? I am on Win7 64-bit with a 64-bit JDK 7, using libgdx-0.9.9 and sometimes libgdx-nightly-20140215.
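
    In case it's relevant, the pattern I'm now testing (a sketch; recreateVehicleFacingOtherWay is my own helper) defers the destroy/recreate until after world.step(), since structurally modifying a Box2D world from a callback while it may be mid-step is a known way to crash natively:

        // Sketch: never destroy/create bodies from UI callbacks; queue the
        // flip and apply it only after the physics step has finished.
        private boolean flipQueued = false;

        // in the button's ChangeListener:
        //     flipQueued = true;

        public void update(float delta) {
            world.step(1 / 60f, 6, 2);            // physics first
            if (flipQueued) {                     // then structural changes
                flipQueued = false;
                recreateVehicleFacingOtherWay();  // destroy + createFixture here
            }
        }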

  • How can I compile SM 3.0 effects in D3D11 in SlimDX?

    - by jacker
    This works fine:

        string str;
        var bytecode = ShaderBytecode.CompileFromFile("shaders\\testShader.fx", "fx_5_0",
            ShaderFlags.None, SlimDX.D3DCompiler.EffectFlags.None, null, null, out str);
        var effect = new SlimDX.Direct3D11.Effect(gpu.Device, bytecode);

    But if I try to use another shader model, like 4.0 or 3.0, it throws an error on the effect creation: E_FAIL: An undetermined error occurred (-2147467259). How do I compile older shaders? I've also read about device contexts, but I can't find any information on how to use them to maintain DX9 compatibility.

  • Oscillating Sprite Movement in XNA

    - by Nick Van Hoogenstyn
    I'm working on a 2D game and am looking to make a sprite move horizontally across the screen in XNA while oscillating vertically (basically, I want the movement to look like a sine wave). Currently, for movement I'm using two vectors, one for speed and one for direction. My update function for sprites just contains this:

        Position += direction * speed * (float)t.ElapsedGameTime.TotalSeconds;

    How could I use this setup to create the desired movement? I'm assuming I'd call Math.Sin or Math.Cos, but I'm unsure of where to start. My attempt looked like this:

        public override void Update(GameTime t)
        {
            double msElapsed = t.TotalGameTime.Milliseconds;
            mDirection.Y = (float)Math.Sin(msElapsed);
            if (mDirection.Y >= 0)
                mSpeed.Y = moveSpeed;
            else
                mSpeed.Y = -moveSpeed;
            base.Update(t, mSpeed, mDirection);
        }

    moveSpeed is just some constant positive integer. With this, the sprite simply moves continuously downward until it's off screen. Can anyone give me some info on what I'm doing wrong here? I've never tried something like this, so if I'm doing things completely wrong, let me know!
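
    For contrast, a sketch of the oscillation done with an absolute time base (AMPLITUDE, FREQUENCY, and baseY are invented constants; one likely culprit above is that TotalGameTime.Milliseconds is only the millisecond component and wraps back to 0 every second, which TotalSeconds avoids):

        // Sketch: constant speed in X, Y set directly from a sine of total
        // elapsed time, so the path traces a smooth wave.
        public override void Update(GameTime t)
        {
            double seconds = t.TotalGameTime.TotalSeconds;   // smooth, never wraps
            Position = new Vector2(
                Position.X + moveSpeed * (float)t.ElapsedGameTime.TotalSeconds,
                baseY + AMPLITUDE * (float)Math.Sin(FREQUENCY * seconds));
        }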

  • What is a better abstraction layer for D3D9 and OpenGL vertex data management?

    - by Sam Hocevar
    My rendering code has always been OpenGL. I now need to support a platform that does not have OpenGL, so I have to add an abstraction layer that wraps OpenGL and Direct3D 9. I will support Direct3D 11 later. TL;DR: the differences between OpenGL and Direct3D cause redundancy for the programmer, and the data layout feels flaky.

    For now, my API works a bit like this. This is how a shader is created:

        Shader *shader = Shader::Create(
            " ... GLSL vertex shader ... ",
            " ... GLSL pixel shader ... ",
            " ... HLSL vertex shader ... ",
            " ... HLSL pixel shader ... ");
        ShaderAttrib a1 = shader->GetAttribLocation("Point", VertexUsage::Position, 0);
        ShaderAttrib a2 = shader->GetAttribLocation("TexCoord", VertexUsage::TexCoord, 0);
        ShaderAttrib a3 = shader->GetAttribLocation("Data", VertexUsage::TexCoord, 1);
        ShaderUniform u1 = shader->GetUniformLocation("WorldMatrix");
        ShaderUniform u2 = shader->GetUniformLocation("Zoom");

    There is already a problem here: once a Direct3D shader is compiled, there is no way to query an input attribute by its name; apparently only the semantics stay meaningful. This is why GetAttribLocation has these extra arguments, which get hidden in ShaderAttrib.

    Now this is how I create a vertex declaration and two vertex buffers:

        VertexDeclaration *decl = VertexDeclaration::Create(
            VertexStream<vec3,vec2>(VertexUsage::Position, 0, VertexUsage::TexCoord, 0),
            VertexStream<vec4>(VertexUsage::TexCoord, 1));
        VertexBuffer *vb1 = new VertexBuffer(NUM * (sizeof(vec3) + sizeof(vec2)));
        VertexBuffer *vb2 = new VertexBuffer(NUM * sizeof(vec4));

    Another problem: the information VertexUsage::Position, 0 is totally useless to the OpenGL/GLSL backend because it does not care about semantics.

    Once the vertex buffers have been filled with or pointed at data, this is the rendering code:

        shader->Bind();
        shader->SetUniform(u1, GetWorldMatrix());
        shader->SetUniform(u2, blah);
        decl->Bind();
        decl->SetStream(vb1, a1, a2);
        decl->SetStream(vb2, a3);
        decl->DrawPrimitives(VertexPrimitive::Triangle, NUM / 3);
        decl->Unbind();
        shader->Unbind();

    You see that decl is a bit more than just a D3D-like vertex declaration; it kind of takes care of rendering as well. Does this make sense at all? What would be a cleaner design? Or a good source of inspiration?

  • Optimizing graphics for an iOS Flash game

    - by 1GR3
    A friend of mine and I are working on a Flash-developed iOS (and later Android) puzzle board game. He's a developer and I'm a designer/developer, so (no surprise) we have different points of view.

    His method: make small tiles (100x100 px) in Photoshop, join them into the board, and then apply effects to the board in Flash to avoid repetition (which reads very '80s, and not in the good way).

    My method: precompose the whole board (960x640 px plus bleed) in Photoshop and then mask active and inactive areas in Flash.

    What do you think?

  • How do I render my own DirectX stuff to a full-screen WPF's DirectX surface?

    - by marc40000
    Basically, Danny Varod seems to know how, as he posted it as an answer to this question: Display a Message Box over a Full Screen DirectX application. I think this might theoretically work, but I have no idea how to actually do it. Since I'm not allowed to post a comment under his answer, nor am I allowed to ask on meta about how to contact another user, I'm asking this as a normal question here: how do I render my own DirectX stuff to a full-screen WPF's DirectX surface? For starters, I have no idea how to get the DirectX surface from a WPF window. And if I had it, what would I have to take care of so that the WPF rendering doesn't screw up my own rendering, or vice versa?
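
    From what I've read so far, the interop actually goes the other way around, via D3DImage: you render to your own Direct3D 9 surface and hand it to WPF, rather than grabbing WPF's internal surface. A sketch as I understand it (surfacePointer would be the native pointer of my own render target, and d3dImage a D3DImage shown by an Image element in XAML):

        // Sketch: WPF interop via System.Windows.Interop.D3DImage.
        // WPF composites the surface itself, so the two renderers never
        // fight over one swap chain.
        d3dImage.Lock();
        d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surfacePointer);
        // ... render the DirectX scene into that surface here ...
        d3dImage.AddDirtyRect(new Int32Rect(0, 0, d3dImage.PixelWidth, d3dImage.PixelHeight));
        d3dImage.Unlock();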

  • How to copy depth buffer to CPU memory in DirectX?

    - by Ashwin
    I have code in OpenGL that uses glReadPixels to copy the depth buffer to a CPU memory buffer:

        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, dbuf);

    How do I achieve the same in DirectX? I have looked at a similar question which gives the solution for copying the RGB buffer. I've tried to write similar code to copy the depth buffer:

        IDirect3DSurface9* d3dSurface;
        d3dDevice->GetDepthStencilSurface(&d3dSurface);

        D3DSURFACE_DESC d3dSurfaceDesc;
        d3dSurface->GetDesc(&d3dSurfaceDesc);

        IDirect3DSurface9* d3dOffSurface;
        d3dDevice->CreateOffscreenPlainSurface(
            d3dSurfaceDesc.Width, d3dSurfaceDesc.Height,
            D3DFMT_D32F_LOCKABLE, D3DPOOL_SCRATCH,
            &d3dOffSurface, NULL);

        // FAILS: D3DERR_INVALIDCALL
        D3DXLoadSurfaceFromSurface(
            d3dOffSurface, NULL, NULL,
            d3dSurface, NULL, NULL,
            D3DX_FILTER_NONE, 0);

        // Copy from offscreen surface to CPU memory ...

    The code fails on the call to D3DXLoadSurfaceFromSurface: it returns the error value D3DERR_INVALIDCALL. What is wrong with my code?

  • How can I estimate the cost of creating a tile set similar to HoM&M 2?

    - by Alexey Petrushin
    How can I estimate the cost of creating a tile set similar to HoM&M 2? I'm mostly interested in the tile-set graphics only; no animation is needed, and the big images of towns and creatures can be done as quick-and-dirty pencil sketches. The quality and number of tiles should be roughly the same as in HoM&M 2. Can you please give a rough estimate of how many man-hours it would take and how much it would cost?

  • Network Multiplayer in Flash

    - by shadowprotocol
    Flash has come a long way in the last decade, yet getting a Flash game to connect to a multi-client server for chat and/or basic avatar movement in real time remains a well-kept secret. Why has the industry as a whole not made this common knowledge yet? We keep pushing to the web, but I am finding it incredibly difficult to gather learning material on this subject. Sure, I can find multi-client server socket tutorials in various languages (using select statements and/or threads to handle multiple socket connections), but in regards to Flash applications inside of a browser? Nope! Can everyone please share what they know? It's a subject I'd really love to get into, but I'm afraid I just honestly don't know enough about how to do it. Thanks!
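
    To start the sharing off, here is the client half as I understand it (a sketch; the host, port, and little protocol are made up). The non-obvious part is that the server must also answer Flash's cross-domain policy-file request, or the browser plugin silently refuses the connection:

        // Sketch: plain TCP from the Flash side via flash.net.Socket.
        import flash.net.Socket;
        import flash.events.Event;
        import flash.events.ProgressEvent;

        var sock:Socket = new Socket();

        sock.addEventListener(Event.CONNECT, function(e:Event):void {
            sock.writeUTFBytes("HELLO\n");   // hypothetical chat protocol
            sock.flush();
        });

        sock.addEventListener(ProgressEvent.SOCKET_DATA, function(e:ProgressEvent):void {
            var msg:String = sock.readUTFBytes(sock.bytesAvailable);
            // dispatch chat/avatar-movement messages to the game here
        });

        sock.connect("game.example.com", 9000);   // host/port are placeholders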

  • What is a fast way to darken the vertices I'm rendering?

    - by Luis Cruz
    To make a lighting system for a voxel game, I need to specify a darkness value per vertex. I'm using GL_COLOR_MATERIAL and specifying a color per vertex, like this:

        glEnable(GL_COLOR_MATERIAL);

        glBegin(GL_QUADS);
        glColor3f(0.6f, 0.6f, 0.6f);
        glTexCoord2f(...); glVertex3f(...);
        glColor3f(0.3f, 0.3f, 0.3f);
        glTexCoord2f(...); glVertex3f(...);
        glColor3f(0.7f, 0.7f, 0.7f);
        glTexCoord2f(...); glVertex3f(...);
        glColor3f(0.9f, 0.9f, 0.9f);
        glTexCoord2f(...); glVertex3f(...);
        glEnd();

    This is working, but with many quads it is very slow, even though I'm using display lists. Any good ideas on how to make the vertices darker without this cost?
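
    A direction that should help (a sketch in plain C against the legacy pipeline): put the darkness colors in an interleaved vertex array, so each batch is one glDrawArrays call instead of four function calls per vertex:

        /* Sketch: per-vertex darkening via interleaved client-side arrays.
         * Layout per vertex: r,g,b, u,v, x,y,z (hand-rolled C3F/T2F/V3F). */
        #include <GL/gl.h>

        typedef struct {
            GLfloat r, g, b;      /* darkness value baked into the color */
            GLfloat u, v;         /* texture coordinates */
            GLfloat x, y, z;      /* position */
        } Vertex;

        void draw_quads(const Vertex *verts, int vertexCount)
        {
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnableClientState(GL_VERTEX_ARRAY);

            glColorPointer(3, GL_FLOAT, sizeof(Vertex), &verts[0].r);
            glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &verts[0].u);
            glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &verts[0].x);

            glDrawArrays(GL_QUADS, 0, vertexCount);   /* one call per batch */

            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);
        }

    The same layout moves into a VBO almost unchanged if client-side arrays are still too slow.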
