Search Results

Search found 38203 results on 1529 pages for 'library development'.

Page 484/1529 | < Previous Page | 480 481 482 483 484 485 486 487 488 489 490 491  | Next Page >

  • Game-oriented programming language features/objectives/paradigm?

    - by Klaim
    What are the features, language objectives (general problems to solve), or paradigms that a fictive programming language targeted at games (any kind of game) would require? For example, we would obviously need at least performance (in both speed and memory), because a lot of games simply require it, but it comes at a price in the languages we currently use. Expressivity might be a common feature that is required for all languages. I guess some concepts from paradigms not usually used for games, like actor-based languages or language-based message passing, might be useful too. So I ask you: what would be ideal for games? (Maybe one day someone will take those answers and build a language from them? :D ) Please give one feature/objective/paradigm per answer. Note: maybe this question doesn't make sense to you. In that case, please explain why in an answer. It's good to have the answers to this question that might pop into your head sometimes.
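
    As an illustration of the message-passing idea mentioned in the question, here is a minimal, hypothetical C++ sketch of an actor with a mailbox. The Actor and Message types are invented purely for illustration; they are not taken from any engine or existing language proposal.

        #include <mutex>
        #include <queue>
        #include <string>
        #include <utility>

        // A message is just a named payload; a real design would use typed channels.
        struct Message {
            std::string type;
            int payload;
        };

        // Minimal actor: other code never touches its state directly, it only posts messages.
        class Actor {
        public:
            void send(Message m) {
                std::lock_guard<std::mutex> lock(mutex_);
                mailbox_.push(std::move(m));
            }

            // Called once per frame (or on a worker thread) to drain the mailbox.
            void update() {
                std::queue<Message> local;
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    std::swap(local, mailbox_);
                }
                while (!local.empty()) {
                    handle(local.front());
                    local.pop();
                }
            }

        private:
            void handle(const Message& m) {
                if (m.type == "damage") { health_ -= m.payload; }
            }

            std::queue<Message> mailbox_;
            std::mutex mutex_;
            int health_ = 100;
        };

    Because actors only exchange messages, they can be updated in parallel without locking each other's state, which speaks to the performance objective raised above.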

    Read the article

  • How do you stop OgreBullet Capsule from falling over?

    - by Nathan Baggs
    I've just started implementing Bullet in my Ogre project. I followed the install instructions here: http://www.ogre3d.org/tikiwiki/OgreBullet+Tutorial+1 and the rest of the tutorial here: http://www.ogre3d.org/tikiwiki/OgreBullet+Tutorial+2 I got that to work fine; however, I now want to extend it to handle a first-person camera. I created a CapsuleShape and a rigid body (like the tutorial did for the boxes), but when I run the game the capsule falls over and rolls around on the floor, causing the camera to swing wildly around. I need a way to fix the capsule so it always stays upright, but I have no idea how. Below is (part of) the code I'm using.

    Header file:

        OgreBulletDynamics::DynamicsWorld *mWorld;      // OgreBullet world
        OgreBulletCollisions::DebugDrawer *debugDrawer;
        std::deque<OgreBulletDynamics::RigidBody *> mBodies;
        std::deque<OgreBulletCollisions::CollisionShape *> mShapes;
        OgreBulletCollisions::CollisionShape *character;
        OgreBulletDynamics::RigidBody *characterBody;
        Ogre::SceneNode *charNode;
        Ogre::Camera* mCamera;
        Ogre::SceneManager* mSceneMgr;
        Ogre::RenderWindow* mWindow;

    Main file:

        bool MinimalOgre::go(void)
        {
            ...
            mCamera = mSceneMgr->createCamera("PlayerCam");
            mCamera->setPosition(Vector3(0,0,0));
            mCamera->lookAt(Vector3(0,0,300));
            mCamera->setNearClipDistance(5);
            mCameraMan = new OgreBites::SdkCameraMan(mCamera);

            OgreBulletCollisions::CollisionShape *Shape;
            Shape = new OgreBulletCollisions::StaticPlaneCollisionShape(Vector3(0,1,0), 0); // (normal vector, distance)
            OgreBulletDynamics::RigidBody *defaultPlaneBody = new OgreBulletDynamics::RigidBody("BasePlane", mWorld);
            defaultPlaneBody->setStaticShape(Shape, 0.1, 0.8); // (shape, restitution, friction)

            // push the created objects to the deques
            mShapes.push_back(Shape);
            mBodies.push_back(defaultPlaneBody);

            character = new OgreBulletCollisions::CapsuleCollisionShape(1.0f, 1.0f, Vector3(0, 1, 0));
            charNode = mSceneMgr->getRootSceneNode()->createChildSceneNode();
            charNode->attachObject(mCamera);
            charNode->setPosition(mCamera->getPosition());

            characterBody = new OgreBulletDynamics::RigidBody("character", mWorld);
            characterBody->setShape(charNode, character,
                                    0.0f,  // dynamic body restitution
                                    10.0f, // dynamic body friction
                                    10.0f, // dynamic body mass
                                    Vector3(0,0,0),
                                    Quaternion(0, 0, 1, 0));
            mShapes.push_back(character);
            mBodies.push_back(characterBody);
            ...
        }
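
    One common way to keep a dynamic capsule upright is to disable its rotational response entirely and steer it in code. Below is a minimal sketch, assuming OgreBullet exposes the wrapped Bullet body through getBulletRigidBody() (the accessor name may differ between OgreBullet versions); this is an illustration rather than a drop-in fix:

        #include "OgreBulletDynamicsRigidBody.h"
        #include "btBulletDynamicsCommon.h"

        void makeCapsuleUpright(OgreBulletDynamics::RigidBody *characterBody)
        {
            // Grab the raw Bullet body wrapped by OgreBullet.
            btRigidBody *body = characterBody->getBulletRigidBody();

            // A zero angular factor means collisions can no longer rotate the body,
            // so the capsule stays upright while still colliding and translating.
            body->setAngularFactor(btVector3(0.0f, 0.0f, 0.0f));

            // Keep the player body from being put to sleep by the solver.
            body->setActivationState(DISABLE_DEACTIVATION);
        }

    An alternative worth considering is Bullet's btKinematicCharacterController, which is designed for exactly this kind of first-person capsule.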

    Read the article

  • How are events in games handled?

    - by Alex
    In many games that I have played, I have seen events being triggered: for example, when you walk into a certain land area while holding a specific object, a special creature spawns. I was wondering, how do games deal with events such as this? Not in a specific game, but in general among games. The first thought I had was that each place has a hard-coded set of events that it will call when something happens there. However, that would be too inefficient to maintain, as adding something new would require modifying every part of the game that could potentially cause the event to be called. Next up, I had the idea of maybe how GUI programming works. In all of the GUI programming I've done, you create a component and a callback function, or a listener. Then, when the user interacts with the button, the callback function is called, allowing you to do something with it. So, I was thinking that in terms of a game, when a land area gets loaded the game loops over a list of all events, creating instances of them and calling public methods to bind them to the current scene. The events themselves then check what scene it is, and if it is a scene that pertains to the event, call the public method of the scene to bind the event to an action. Then, when the action takes place, the scene would call all events that are bound to that action. However, I'm sure that's not how games operate either, as that would require a lot of creating of events all the time. So how do video games handle events? Are either of those methods correct, or is it something completely different?
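
    For reference, the listener idea described above is usually implemented as some form of observer pattern or event bus. A minimal, hypothetical C++ sketch (the names and the string-keyed design are invented for illustration, not taken from any particular engine):

        #include <functional>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // Maps an event name to the callbacks registered for it.
        class EventBus {
        public:
            using Handler = std::function<void()>;

            void subscribe(const std::string &event, Handler handler) {
                handlers_[event].push_back(std::move(handler));
            }

            void publish(const std::string &event) {
                for (auto &handler : handlers_[event]) handler();
            }

        private:
            std::unordered_map<std::string, std::vector<Handler>> handlers_;
        };

        // Usage sketch: the area's data registers its trigger when the area loads,
        //   bus.subscribe("entered_swamp_with_amulet", []{ /* spawn the creature */ });
        // and the movement/inventory code publishes it when the condition is met,
        //   bus.publish("entered_swamp_with_amulet");

    The point is that the area data only names the event it cares about; nothing else in the game has to be modified when a new trigger is added.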

    Read the article

  • Box2d - Attaching a fired arrow to a moving enemy

    - by Satchmo Brown
    I am firing an arrow from the player at moving enemies. When the arrow hits an enemy, I want it to attach exactly where it hit and cause the enemy (a square) to tumble to the ground. Excluding the logistics of the movement and the spin (that already works), I am stuck on attaching the two bodies. I tried to weld them together initially, but when they fell they rotated in opposite directions. I have figured that a revolute joint is probably what I am after. The problem is that I can't figure out a way to attach them right where they collide. Using code from iforce2d:

        b2RevoluteJointDef revoluteJointDef;
        revoluteJointDef.bodyA = m_body;
        revoluteJointDef.bodyB = m_e->m_body;
        revoluteJointDef.collideConnected = true;
        revoluteJointDef.localAnchorA.Set(0,0); // the top right corner of the box
        revoluteJointDef.localAnchorB.Set(0,0); // center of the circle
        b2RevoluteJoint m_joint = *(b2RevoluteJoint*)m_game->m_world->CreateJoint(&revoluteJointDef);
        m_body->SetLinearVelocity(m_e->m_body->GetLinearVelocity());

    This attaches them, but at the centers of both bodies. Does anyone know how I would go about getting the exact point of collision so I can link these? Is this even the right method of doing this? Update: I now have the exact point of collision, but I am still not sure this is even the method I want to use. Really, I just want to attach body A to body B and have body B unaffected in any way.
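
    Since the exact contact point is now known, one approach is to build the joint from that world-space point so both local anchors line up automatically. A minimal sketch, assuming worldHitPoint is the collision point already obtained and that the joint is created after the physics step rather than inside the contact callback (Box2D does not allow creating joints while the world is locked):

        #include <Box2D/Box2D.h>

        // Pin the arrow to the enemy at the exact world-space hit point.
        b2Joint *attachArrow(b2World *world, b2Body *arrowBody, b2Body *enemyBody,
                             const b2Vec2 &worldHitPoint)
        {
            b2WeldJointDef weldDef;
            // Initialize() converts the world anchor into the correct local anchors
            // for both bodies and records their relative angle at attach time.
            weldDef.Initialize(arrowBody, enemyBody, worldHitPoint);
            weldDef.collideConnected = false;
            return world->CreateJoint(&weldDef);
        }

    If the arrow should be purely cosmetic after impact (body B unaffected in any way), another option is to destroy the arrow's body entirely and just draw its sprite at a stored offset relative to the enemy.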

    Read the article

  • MeshBuilder, assembly missing

    - by BlackBear
    I'm trying to build a terrain starting from a heightmap. I already have some ideas about the procedure, but I can't even get started. I feel like I have to use a MeshBuilder. The problem is that Visual Studio (I'm using the 2008 version) wants an assembly. The MSDN page does specify the assembly needed by MeshBuilder, but I don't know how to import/load it. Any suggestions? Thanks in advance :)

    Read the article

  • Rendering performance in FlasCC + UDK when compared to Stage3D and UDK on Windows?

    - by Arthur Wulf White
    http://gaming.adobe.com/technologies/flascc/ Developers can now access UDK for browser applications. Does this mean greater performance than using a Stage3D engine (Away3D 4), and how noticeable a difference in rendering speed would it make? Is there any benchmark you could propose that would allow a fair comparison? I am asking this to help myself understand the performance consequences of deciding to use UDK in a browser-based game. I would also like to know how it compares with UDK running natively on Windows. I am not asking which technology to use or which is better; I am only interested in optimizing rendering speed in a 3D browser game with Flash.

    Read the article

  • Scaling and new coordinates based off screen resolution

    - by Atticus
    I'm trying to deal with different resolutions on Android devices. Say I have a sprite at X,Y with dimensions W,H. I understand how to properly scale the width and height depending on the screen size, but I am not sure how to properly reposition this sprite so that it stays in the same relative area. If I simply resize the width and height and keep X,Y at the same coordinates, things don't scale well. How do I properly reposition? Do I multiply the coordinates by the scale as well?
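
    Yes: the usual approach is to derive the scale factors from a fixed reference resolution and apply the same factors to positions and sizes. A minimal sketch (the 800x480 reference resolution is just an assumed example):

        struct Placement { float x, y, w, h; };

        // Convert a sprite authored for the reference resolution to the actual screen.
        Placement scaleToScreen(const Placement &design, float screenW, float screenH)
        {
            const float refW = 800.0f, refH = 480.0f; // assumed design resolution
            const float sx = screenW / refW;
            const float sy = screenH / refH;

            // Position and size are scaled by the same factors, so the sprite keeps
            // the same relative location and footprint on any screen.
            return { design.x * sx, design.y * sy, design.w * sx, design.h * sy };
        }

    If the sprite's aspect ratio must be preserved, use a single uniform factor (for example the smaller of sx and sy) for the size and letterbox the remainder.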

    Read the article

  • help animating a player in Corona SDK

    - by andrew McCutchan
    I'm working on a game in the Corona SDK with Lua and I need help making the player animate along the line drawn. Right now he animates at the beginning and the end, but not on the line. Here is the player code:

        function spawnPlayerObject(xPlayerSpawn, yPlayerSpawn, richTurn) -- spawns Rich where we need him.
            -- Rich's sprite sheet and all his initial sprites. We need to adjust this so that
            -- animations 3-6 are the only ones that animate when moving horizontally.
            local richPlayer = sprite.newSpriteSet(richSheet1, 1, 6)
            sprite.add(richPlayer, "rich", 1, 6, 500, 1)
            rich = sprite.newSprite(richPlayer)
            rich.x = xPlayerSpawn
            rich.y = yPlayerSpawn
            rich:scale(_W*0.0009, _W*0.0009) -- scale is used to adjust the size of the object.
            richDir = richTurn
            rich.rotation = richDir
            rich:prepare("rich")
            rich:play()
            physics.addBody(rich, "static", { friction=1, radius = 15 }) -- needs a better physics body for collision detection.
        end

    And here is the code for the line:

        function movePlayerOnLine(event) -- for the original image replace all rich with player.
            if posCount ~= linePart then
                richDir = math.atan2(ractgangle_hit[posCount].y - rich.y, ractgangle_hit[posCount].x - rich.x) * 180 / math.pi
                playerSpeed = 5
                rich.x = rich.x + math.cos(richDir * math.pi / 180) * playerSpeed
                rich.y = rich.y + math.sin(richDir * math.pi / 180) * playerSpeed
                if rich.x < ractgangle_hit[posCount].x + playerSpeed*10 and rich.x > ractgangle_hit[posCount].x - playerSpeed*10 then
                    if rich.y < ractgangle_hit[posCount].y + playerSpeed*10 and rich.y > ractgangle_hit[posCount].y - playerSpeed*10 then
                        posCount = posCount + 1
                    end
                end

    I don't think anything has changed recently, but I have been getting an error when testing, "attempt to upvalue 'rich' a nil value", on the second line (richDir = etc.).

    Read the article

  • A Quick HLSL Question (How to modify some HLSL code)

    - by electroflame
    Thanks for wanting to help! I'm trying to create a circular, repeating ring (that moves outward) on a texture. I've achieved this, to a degree, with the following code:

        float distance = length(inTex - in_ShipCenter);
        float time = in_Time;

        // Simple distance/time combination
        float2 colorIndex = float2(distance - time, .3);

        float4 shipColor = tex2D(BaseTexture, inTex);
        float4 ringColor = tex2D(ringTexture, colorIndex);

        float4 finalColor;
        finalColor.rgb = (shipColor.rgb) + (ringColor.rgb);
        finalColor.a = shipColor.a; // Use the base texture's alpha (transparency).

        return finalColor;

    This works, and works how I want it to. The ring moves outward from the center of the texture at a steady rate, and is constrained to the edges of the base texture (i.e. it won't continue past an edge). However, there are a few issues with it that I would like some help on:
    1. By combining the color additively (when I set finalColor.rgb), the resulting ring color is much lighter than I want (which is pretty much the definition of additive blending, but I don't really want additive blending in this case).
    2. I would really like to be able to pass in the color that I want the ring to be. Currently, I have to pass in a texture that contains the color of the ring, but I think that doing it that way is kind of wasteful and overly cumbersome.
    I know that I'm probably being an idiot over this, so I greatly appreciate the help. Some other (possibly relevant) information: I'm using XNA. I'm applying this by providing it to a SpriteBatch (as an Effect). The SpriteBatch is using BlendState.NonPremultiplied. Thanks in advance! EDIT: Thanks for the answers thus far, as they've helped me get a better grasp of the color issue. However, I'm still unsure of how to pass a color in and not use a texture, i.e. can I create a tex2D using a float4 instead of a texture? Or can I make a texture from a float4 and pass the texture in to the tex2D? DOUBLE EDIT: Here are some example pictures: with the effect off; with the effect on; and with the effect on, but with the color weighting set to full. As you can see, the color weighting makes the base texture completely black (the background is black, so it looks transparent). You can also see the red it's supposed to be, and then the white-ish it really is when blended additively.

    Read the article

  • Does XNA 4 support 3D affine transformations for 2D images?

    - by Paul Baker Salt Shaker
    Looooong story short: I'm essentially trying to code Mode 7 in XNA. Before I continue bashing my brains out on research and various failed matrix math equations, I just want to make sure that XNA supports this out of the box (so to speak). I'd prefer not to have to import other libraries, because I want to learn how it works myself; that way I understand the whole thing better. However, that's all for naught if it won't work at all. So no OpenGL, DirectX, etc. if possible (I will eventually use them to optimize everything, but not for now). tl;dr: Can I has Mode 7 in XNA?

    Read the article

  • Best pathfinding for a 2D world made by CPU Perlin Noise, with random start and destination points?

    - by Mathias Lykkegaard Lorenzen
    I have a world made by Perlin Noise. It's created on the CPU for consistency between several devices (yes, I know it takes time - I have my techniques that make it fast enough). Now, in my game you play as a fighter-ship-thingy-blob or whatever it's going to be. What matters is that this "thing" that you play as is placed in the middle of the screen and moves along with the camera. The white areas in my world are walls; the black areas can be moved through freely. Now, as the player moves around he will constantly see "monsters" spawning around him in a circle (a circle that's larger than the screen, though). These monsters move inwards and try to collide with the player. This is the part that's tricky: I want these monsters to constantly spawn and move towards the player, but avoid walls entirely. I've added a screenshot below that kind of makes it easier to understand (excuse me for my bad drawing - I was using Paint for this). In the image above, the following rules apply:
        The red dot in the middle is the player itself.
        The light-green rectangle is the boundary of the screen (in other words, what the player sees). This boundary moves with the player.
        The blue circle is the spawning circle. On the circumference of this circle, monsters spawn constantly. This spawn circle moves with the player and the screen boundary.
        Each spawned monster (shown as a yellow triangle) wants to collide with the player.
        The pink lines show the paths that I want the monsters to move along (or something similar). What matters is that they reach the player without colliding with the walls.
    The map itself (the one that is Perlin Noise generated on the CPU) is saved in memory as two-dimensional bit arrays. A 1 means a wall, and a 0 means an open, walkable space. The current tile size is pretty small; I could easily make it a lot larger for increased performance. I've implemented some pathfinding algorithms before, such as A*, but I don't think that's entirely optimal here.
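
    For many agents converging on a single moving target, one alternative to per-monster A* is a breadth-first "flow field" computed outward from the player. A sketch over the 0/1 wall grid described above (grid layout and types are assumed for illustration):

        #include <cstdint>
        #include <limits>
        #include <queue>
        #include <utility>
        #include <vector>

        // walls[y][x] == 1 means wall, 0 means walkable, as described in the question.
        // Returns, for every cell, the step distance to the player; each monster then
        // moves toward whichever neighbouring cell has the smallest distance.
        std::vector<std::vector<int>> buildFlowField(
            const std::vector<std::vector<uint8_t>> &walls, int playerX, int playerY)
        {
            const int h = (int)walls.size(), w = (int)walls[0].size();
            const int INF = std::numeric_limits<int>::max();
            std::vector<std::vector<int>> dist(h, std::vector<int>(w, INF));

            std::queue<std::pair<int, int>> frontier;
            dist[playerY][playerX] = 0;
            frontier.push({playerX, playerY});

            const int dx[4] = {1, -1, 0, 0};
            const int dy[4] = {0, 0, 1, -1};
            while (!frontier.empty()) {
                auto [x, y] = frontier.front();
                frontier.pop();
                for (int i = 0; i < 4; ++i) {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (walls[ny][nx] == 1 || dist[ny][nx] != INF) continue;
                    dist[ny][nx] = dist[y][x] + 1;
                    frontier.push({nx, ny});
                }
            }
            return dist;
        }

    One search per update serves every monster at once, and the search can stop once it has covered the spawn circle rather than the whole map.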

    Read the article

  • Ledge grab and climb in Unity3D

    - by BallzOfSteel
    I just started on a new project. In this project, one of the main gameplay mechanics is that you can grab a ledge at certain points in a level and hang on to it. Now my question, since I've been wrestling with this for quite a while: how could I actually implement this? I have tried it with animations, but it's just really ugly, since the player will snap to a certain point where the animation starts.

    Read the article

  • Direct3D9 application won't write to depth buffer

    - by DeadMG
    I've got an application written in D3D9 which will not write any values to the depth buffer, resulting in incorrect values for the depth test. Things I've checked so far:

        D3DRS_ZENABLE, set to TRUE
        D3DRS_ZWRITEENABLE, set to TRUE
        D3DRS_ZFUNC, set to D3DCMP_LESSEQUAL
        The depth buffer is definitely bound to the pipeline at the relevant time.
        The depth buffer was correctly cleared before use.

    I've used PIX to confirm that all of these things occurred as expected. For example, if I clear the depth buffer to 0 instead of 1, then correctly nothing is drawn, and PIX confirms that all the pixels failed the depth test. But I've also used PIX to confirm that my submitted geometry does not write to the depth buffer and so is not correctly rendered. Any other suggestions?
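
    One further setting worth ruling out, sketched below as a suggestion rather than a confirmed fix: a degenerate viewport depth range (MinZ equal to MaxZ) makes every written depth value identical, which in PIX can look as if the buffer is never written.

        #include <d3d9.h>

        void checkDepthSetup(IDirect3DDevice9 *device)
        {
            // The render states already listed in the question, for reference.
            device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
            device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
            device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);

            // Make sure the viewport maps depth to the full [0, 1] range.
            D3DVIEWPORT9 vp = {};
            device->GetViewport(&vp);
            if (vp.MinZ == vp.MaxZ) {
                vp.MinZ = 0.0f;
                vp.MaxZ = 1.0f;
                device->SetViewport(&vp);
            }

            // Clear depth to 1.0 so nearer fragments can pass D3DCMP_LESSEQUAL.
            device->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
        }

    It is also worth confirming that no effect file or state block re-disables Z writes for the specific draw call being inspected.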

    Read the article

  • Improving SpriteBatch performance for tiles

    - by Richard Rast
    I realize this is a variation on what has got to be a common question, but after reading several (good) answers I'm no closer to a solution here. So here's my situation: I'm making a 2D game which has (among some other things) a tiled world, and so drawing this world implies drawing a jillion tiles each frame (depending on resolution; it's roughly a 64x32 tile with some transparency). Now I want the user to be able to maximize the game (or use fullscreen mode, actually, as it's a bit more efficient), and instead of scaling textures (bleagh) this will just allow lots and lots of tiles to be shown at once. Which is great! But it turns out this puts upwards of 2000 tiles on the screen each time, and this is framerate-limiting (I've commented out enough other parts of the game to make sure this is the bottleneck). It gets worse if I use multiple source rectangles on the same texture (I use a tilesheet; I believe changing textures entirely makes things worse), or if you tint the tiles, or whatever. So, the general question is this: What are some general methods for improving the drawing of thousands of repetitive sprites? Answers pertaining to XNA's SpriteBatch would be helpful, but I'm equally happy with general theory. Also, any tricks pertaining to this situation in particular (drawing a tiled world efficiently) are also welcome. I really do want to draw all of them, though, and I need SpriteSortMode.BackToFront to be active, because
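
    Independent of SpriteBatch specifics, the first general win is usually to submit only the tiles the camera can actually see. A minimal, engine-agnostic sketch of computing the visible tile range (tile size and camera fields are assumed for illustration):

        #include <algorithm>

        struct Camera { float x, y, viewW, viewH; }; // top-left corner, in world units

        struct TileRange { int x0, y0, x1, y1; };    // inclusive tile indices

        // Clamp the camera rectangle to tile indices so the draw loop touches only
        // on-screen tiles instead of the whole map.
        TileRange visibleTiles(const Camera &cam, int tileW, int tileH, int mapW, int mapH)
        {
            TileRange r;
            r.x0 = std::max(0, (int)(cam.x / tileW));
            r.y0 = std::max(0, (int)(cam.y / tileH));
            r.x1 = std::min(mapW - 1, (int)((cam.x + cam.viewW) / tileW) + 1);
            r.y1 = std::min(mapH - 1, (int)((cam.y + cam.viewH) / tileH) + 1);
            return r;
        }

    Beyond culling, the usual advice is to keep the tiles on a single atlas (as the question already does) and to avoid re-sorting thousands of static tiles every frame, reserving back-to-front sorting for the dynamic objects drawn on top.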

    Read the article

  • How can I make a rectangle from an irregular shape?

    - by Anil gupta
    I used masking to break an image into the pattern below. Now that it's broken into different pieces, I need to make a rectangle of each piece. I need to drag the broken pieces and adjust them to the correct position so I can reconstruct the image. To drag them and put them at the right position I need to make the pieces rectangles, but I don't see how to make rectangles out of these irregular shapes. How can I make rectangles for manipulating these pieces? This is a follow-up to my previous question.

    Read the article

  • Calculating adjacent quads on a quad sphere

    - by Caius Eugene
    I've been experimenting with generating a quad sphere. This sphere subdivides into a quadtree structure. Eventually I'm going to apply some simplex noise to the vertices of each face to create a terrain-like surface. To solve the issue of cracks, I want to apply a geomipmapping technique of triangle fanning on the edges of each quad, but in order to know the subdivision level of the adjacent quads I need to identify which quads are adjacent to each other. Does anyone know any approaches to computing and storing these adjacent quads for quick lookup? Also, it's important that I know which direction they are in, so I can easily adjust the correct edge.
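
    One workable scheme within a single cube face is to key every quad by (level, x, y) and look neighbours up in a hash map; if the same-level neighbour does not exist, halving the coordinates walks up to its coarser ancestor, whose level also tells you the subdivision difference needed for edge fanning. A hypothetical sketch (cross-face neighbours at the cube edges still need their own remapping table):

        #include <cstddef>
        #include <optional>
        #include <unordered_map>

        struct QuadKey {
            int level, x, y;  // x, y in [0, 2^level) on one cube face
            bool operator==(const QuadKey &o) const {
                return level == o.level && x == o.x && y == o.y;
            }
        };

        struct QuadKeyHash {
            size_t operator()(const QuadKey &k) const {
                return ((size_t)k.level << 40) ^ ((size_t)k.x << 20) ^ (size_t)k.y;
            }
        };

        using QuadMap = std::unordered_map<QuadKey, int, QuadKeyHash>; // value: node id

        // Find the neighbour one step in (dx, dy). If that exact quad is not present,
        // fall back to its ancestors, so the caller also learns how much coarser the
        // neighbour is (which drives the triangle fanning on that edge).
        std::optional<QuadKey> findNeighbour(const QuadMap &quads, QuadKey k, int dx, int dy)
        {
            int nx = k.x + dx, ny = k.y + dy;
            const int size = 1 << k.level;
            if (nx < 0 || ny < 0 || nx >= size || ny >= size)
                return std::nullopt;  // crosses a cube-face edge
            for (int level = k.level; level >= 0; --level, nx >>= 1, ny >>= 1) {
                QuadKey cand{level, nx, ny};
                if (quads.count(cand)) return cand;
            }
            return std::nullopt;
        }

    The direction is encoded directly by (dx, dy), which covers the second part of the question: each of the four edges queries its own offset.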

    Read the article

  • Saving game data to server [on hold]

    - by Eugene Lim
    What's the best method to save the player's data to the server?

    Method to store the game saves. Which of the following methods should I use?
        Using a database structure (e.g. MySQL) to store the game data as blobs?
        Using the server hard disk to store the saved game data as binary data files?

    Method to send saved game data to the server. What method should I use?
        socket.io
        WebSockets
        a web-based scripting language to receive the game data as binary (for example, a PHP script to handle binary data and save it to a file)

    Meta-data. I read that some games store saved-game meta-data in database structures. What kind of meta-data is useful to store?

    Read the article

  • Computing pixel's screen position in a vertex shader: right or wrong?

    - by cubrman
    I am building a deferred rendering engine and I have a question. The article I took the sample code from suggested computing the screen position of the pixel as follows:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition = output.Position;
        }

        PixelShaderFunction()
        {
            input.ScreenPosition.xy /= input.ScreenPosition.w;
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    The question is: if I compute the position in the vertex shader (which should improve performance, as the vertex shader runs significantly fewer times than the pixel shader), would I get per-vertex lighting instead? Here is how I want to do this:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition.xy = output.Position / output.Position.w;
        }

        PixelShaderFunction()
        {
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    What exactly happens with the data I pass from the VS to the PS? How exactly is it interpolated? Will it give me the right per-pixel result in this case? I tried launching the game both ways and saw no visual difference. Is my assumption right? Thanks. P.S. I am optimizing the point-light shader, so I actually pass a sphere geometry into the VS.

    Read the article

  • Drawing beam effect in UDK?

    - by sgrif
    I'm having trouble drawing a particle effect between two actors in UDK. Neither the source nor the target is a static object, so as far as I can tell I need to do it in code, not in Kismet. Here's what I've got at the moment, and it seems to not be doing anything at all. Ideas?

        BeamEmitter[0] = new(self) class'UTParticleSystemComponent';
        BeamEmitter[0].SetAbsolute(false, false, false);
        BeamEmitter[0].SetTemplate(BeamTemplate[0]);
        BeamEmitter[0].SetTickGroup(TG_PostUpdateWork);
        BeamEmitter[0].bUpdateComponentInTick = true;
        self.AttachComponent(BeamEmitter[0]);
        BeamEmitter[0].SetBeamEndPoint(2, tarPos);
        BeamEmitter[0].ActivateSystem();

    Read the article

  • Is there any difference between storing textures and baked lighting for environment meshes?

    - by Ben Hymers
    I assume that when texturing environments, one or several textures will be used, and the UVs of the environment geometry will likely overlap on these textures, so that e.g. a tiling brick texture can be used by many parts of the environment, rather than UV unwrapping the entire thing, and having several areas of the texture be identical. If my assumption is wrong, please let me know! Now, when thinking about baking lighting, clearly this can't be done the same way - lighting in general will be unique to every face so the environment must be UV unwrapped without overlap, and lighting must be baked onto unique areas of one or several textures, to give each surface its own texture space to store its lighting. My questions are: Have I got this wrong? If so, how? Isn't baking lighting going to use a lot of texture space? Will the geometry need two UV sets, one used for the colour/normal texture and one for the lighting texture? Anything else you'd like to add? :)

    Read the article

  • Using textureGrad for anisotropic integration approximation

    - by Amxx
    I'm trying to develop a real-time rendering method using a real-time acquired envmap (cubemap) for lighting. This implies that my envmap can change as often as every frame, and I therefore cannot use any method based on precomputation over the envmap (such as convolution with the BRDF...). So far my method has worked well with a Phong BRDF. For the specular contribution I directly read the value in my samplerCube, and I use mipmap levels plus a linear filter to simulate the roughness of the material considered:

        int size = textureSize(envmap, 0).x;
        float specular_level = log2(size * sqrt(3.0)) - 0.5 * log2(ns + 1);
        vec3 env_specular = ks * specular_color * textureLod(envmap, l_g, specular_level);

    From this method I would like to upgrade to a microfacet-based BRDF. I already have an algorithm for evaluating the shape (including the anisotropic direction) of the reflection, but I cannot manage to read the values I want in my samplerCube. I believe I have to use textureGrad(envmap, l_g, X, Y); with l_g being the reflection direction in global space, but I cannot manage to find which values to give to X and Y in order to correctly specify the area I want to consider. What value should I give to X and Y in order for textureGrad(envmap, l_g, X, Y); to give the same result as textureLod(envmap, l_g, specular_level);?

    Read the article

  • Converting to and from local and world 3D coordinate spaces?

    - by James Bedford
    Hey guys, I've been following a guide I found here (http://knol.google.com/k/matrices-for-3d-applications-view-transformation) on constructing a matrix that will let me transform 3D coordinates into an object's local coordinate space, and back again. I've tried to implement these two matrices using my object's look, side, up and location vectors, and it seems to be working for the first three coordinates. I'm a little confused as to what I should expect for the w coordinate. Here are a couple of examples from the printouts I've made of the matrices that are constructed. I'm passing a test vector of [9, 8, 14, 1] each time to see if I can convert both ways.

    Basic example:

        localize matrix:
         0.000000  -0.000000   1.000000   0.000000
         0.000000   1.000000   0.000000   0.000000
         1.000000   0.000000   0.000000   0.000000
         5.237297 -45.530716  11.021271   1.000000

        globalize matrix:
          0.000000   0.000000   1.000000   0.000000
         -0.000000   1.000000   0.000000   0.000000
          1.000000   0.000000   0.000000   0.000000
        -11.021271 -45.530716  -5.237297   1.000000

        test:      Vector4f(9.000000, 8.000000, 14.000000, 1.000000)
        localTest: Vector4f(14.000000, 8.000000, 9.000000, -161.812256)
        worldTest: Vector4f(9.000000, 8.000000, 14.000000, -727.491455)

    More complicated example:

        localize matrix:
          0.052504  -0.000689  -0.998258   0.000000
          0.052431   0.998260   0.002068   0.000000
          0.997241  -0.052486   0.052486   0.000000
         58.806095   2.979346 -39.396252   1.000000

        globalize matrix:
          0.052504   0.052431   0.997241   0.000000
         -0.000689   0.998260  -0.052486   0.000000
         -0.998258   0.002068   0.052486   0.000000
        -42.413120   5.975957 -56.419727   1.000000

        test:      Vector4f(9.000000, 8.000000, 14.000000, 1.000000)
        localTest: Vector4f(-13.508600, 8.486917, 9.290090, 2.542114)
        worldTest: Vector4f(9.000190, 7.993863, 13.990230, 102.057129)

    As you can see in the more complicated example, the coordinates lose some precision after converting both ways, but this isn't a problem. I'm just wondering how I should deal with the last (w) coordinate. Should I just set it to 1 after performing the matrix multiplication, or does it look like I've done something wrong? Thanks in advance for your help!
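
    On the w question: for an affine (rotation plus translation) matrix like the ones printed above, transforming a point with w = 1 should leave w = 1, and a direction with w = 0 should stay at 0. A w that comes back as anything else usually points to an inconsistency in the multiplication order or the row-/column-major layout, rather than something to patch up afterwards. A small GLM-based sketch of one consistent convention (glm::mat4 is column-major and multiplies column vectors on the right); the function and variable names are illustrative only:

        #include <glm/glm.hpp>

        // Build a local->world matrix from the object's basis vectors and position,
        // then invert it to get the world->local ("localize") matrix.
        void convertBothWays(const glm::vec3 &side, const glm::vec3 &up,
                             const glm::vec3 &look, const glm::vec3 &position)
        {
            glm::mat4 globalize(1.0f);
            globalize[0] = glm::vec4(side, 0.0f);     // columns hold the basis vectors
            globalize[1] = glm::vec4(up, 0.0f);
            globalize[2] = glm::vec4(look, 0.0f);
            globalize[3] = glm::vec4(position, 1.0f); // translation in the last column

            glm::mat4 localize = glm::inverse(globalize);

            glm::vec4 worldPoint(9.0f, 8.0f, 14.0f, 1.0f); // w = 1: a point
            glm::vec4 localPoint = localize * worldPoint;  // w stays 1
            glm::vec4 backAgain  = globalize * localPoint; // recovers the point, w = 1
            (void)backAgain;
        }

    With a convention like this there is nothing to fix after the multiplication; if w drifts away from 1 for points, the matrix construction or the multiply order is the thing to revisit.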

    Read the article

  • coloring box2d body in LibGDX

    - by ved
    I want to color a Box2D polygon in LibGDX. I found the class below, which looked useful for that: http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/glutils/ShapeRenderer.html But it does not color the body; instead it just draws colored shapes. I want colored bodies that keep all their physical properties, like gravity, restitution, etc. In brief, I want to make a colored ball and surface, and I don't want to attach sprites to the bodies; I just want to fill the bodies with color. Can anyone offer some guidance?

    Read the article

  • Errors compiling XNA project Windows 8?

    - by ChocoMan
    I'm using Visual Studio 2012, have just installed Windows 8 on my computer, and tried to compile a game I'm working on in XNA. When the game tried to build, I got the following errors:

        Error 12 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\SkyDome\skycirrus01.xnb" because it was not found.
        Error 13 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\Fonts\Arial.xnb" because it was not found.
        Error 14 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\Fonts\ISOCP2.xnb" because it was not found.

    skycirrus01.xnb is actually a .fbx. Arial.xnb and ISOCP2.xnb are my spritefonts within my project. Prior to installing Windows 8 (store bought), my project compiled. Does anyone know how to convert these to .xnb files? I'm assuming that will make them compatible.

    Read the article

  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow my mouse around on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM.) The best idea I've come up with is: Get the coordinates within the window with glfwGetCursorPos. The window was created with

        window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);

    and the code to get coordinates is

        double xpos, ypos;
        glfwGetCursorPos(window, &xpos, &ypos);

    Next, I use GLM's unProject to get the coordinates in "object space":

        glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
        glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
        glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);

    There are two potential problems I can already see. The viewport is fine, as the initial x,y coordinates of the lower left are indeed 0,0, and it's indeed a 1024x768 window. However, the position vector I create doesn't seem right. The z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the third dimension of the window coordinates even means (since computer screens are 2D). Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear. Finally, for each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since the position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has a value of 0.0 for z. Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.
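
    Two details usually need handling here: GLFW reports the cursor with the origin at the top-left while glm::unProject expects window coordinates with the origin at the bottom-left, and the z value passed in must be a depth in [0, 1] rather than always 0. A sketch of the common approach of unprojecting at the near and far planes to build a pick ray (View, Model and Projection as in the question; this is an illustration, not the only way):

        #include <GLFW/glfw3.h>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        struct Ray { glm::vec3 origin, direction; };

        Ray cursorRay(GLFWwindow *window, const glm::mat4 &Model,
                      const glm::mat4 &View, const glm::mat4 &Projection)
        {
            double xpos, ypos;
            glfwGetCursorPos(window, &xpos, &ypos);

            int width, height;
            glfwGetWindowSize(window, &width, &height);
            glm::vec4 viewport(0.0f, 0.0f, (float)width, (float)height);

            // Flip y: GLFW's origin is top-left, unProject expects bottom-left.
            float winY = (float)height - (float)ypos;

            // Unproject the cursor at the near plane (z = 0) and far plane (z = 1).
            glm::vec3 nearPt = glm::unProject(glm::vec3((float)xpos, winY, 0.0f),
                                              View * Model, Projection, viewport);
            glm::vec3 farPt  = glm::unProject(glm::vec3((float)xpos, winY, 1.0f),
                                              View * Model, Projection, viewport);

            return { nearPt, glm::normalize(farPt - nearPt) };
        }

    Intersecting that ray with the plane the object lives on (for example z = 0 in world space) gives a world position to move the object to, instead of guessing a single depth for the cursor.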

    Read the article
