Search Results

Search found 12087 results on 484 pages for 'game mechanics'.

Page 244/484 | < Previous Page | 240 241 242 243 244 245 246 247 248 249 250 251  | Next Page >

  • Panning with the OpenGL Camera / View Matrix

    - by Pris
    I'm gonna try this again. I've been trying to set up a simple camera class with OpenGL, but I'm completely lost and I've made zero progress creating anything useful. I'm using modern OpenGL and the glm library for matrix math.

    To get the most basic thing I can think of down, I'd like to pan an arbitrarily positioned camera around — that means move it along its own Up and Side axes. Here's a picture of a randomly positioned camera looking at an object: it should be clear what the Up (green) and Side (red) vectors on the camera are. Even though the picture shows otherwise, assume that the Model matrix is just the identity matrix. Here's what I do to try to get it to work:

    Step 1: Create my View/Camera matrix (I'll refer to it as the View matrix from now on) using glm::lookAt().
    Step 2: Capture the mouse X and Y positions.
    Step 3: Create a translation matrix mapping changes in the mouse X position to the camera's Side vector, and changes in the mouse Y position to the camera's Up vector. I get the Side vector from the first column of the View matrix and the Up vector from the second column.
    Step 4: Apply the translation: viewMatrix = glm::translate(viewMatrix, translationVector);

    But this doesn't work. I see that the mouse movement is mapped to some kind of perpendicular axes, but they're definitely not moving as you'd expect with respect to the camera. Could someone please explain what I'm doing wrong and point me in the right direction with this camera stuff?
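    A minimal sketch of the panning idea, written with XNA-style matrix math for consistency with the other C# examples on this page (the glm version is the same once row/column conventions are transposed — a view matrix stores the camera's Side and Up axes in its rows, not its columns, since it is the inverse of the camera's world transform):

        // Recover the camera's own axes by inverting the view matrix,
        // move the camera and its target along them, then rebuild the view.
        // cameraPosition, cameraTarget, mouseDeltaX/Y, panSpeed are assumed state.
        Matrix camWorld = Matrix.Invert(viewMatrix);   // camera's world transform
        Vector3 side = camWorld.Right;                 // camera Side axis
        Vector3 up = camWorld.Up;                      // camera Up axis
        Vector3 pan = side * mouseDeltaX * panSpeed + up * mouseDeltaY * panSpeed;
        cameraPosition += pan;
        cameraTarget += pan;
        viewMatrix = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);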

    Read the article

  • Create dynamic buffer SharpDX

    - by fedab
    I want to set up a buffer that is updated every frame, but I can't figure out what I have to do. The only working thing I have is this:

        mdexcription = new BufferDescription(Matrix.SizeInBytes * Matrices.Length,
            ResourceUsage.Dynamic, BindFlags.VertexBuffer,
            CpuAccessFlags.Write, ResourceOptionFlags.None, 0);
        instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription);
        vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0);
        DeviceContext.InputAssembler.SetVertexBuffers(1, vBB);

    Draw:

        // Change Matrices (Matrix[]) every frame...
        instanceBuffer.Dispose();
        instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription);
        vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0);
        DeviceContext.InputAssembler.SetVertexBuffers(1, vBB);

    I guess Dispose() and creating a new buffer is slow and can be done much faster. I've read about DataStream but I do not know how to set it up properly. What steps do I have to take to set up a DataStream and achieve a fast every-frame update?
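    Since the buffer above is already created with ResourceUsage.Dynamic and CpuAccessFlags.Write, one sketch of the usual pattern is to map the existing buffer with WriteDiscard instead of recreating it, using SharpDX's MapSubresource overload that hands back a DataStream:

        // Map the dynamic buffer, overwrite its contents, unmap.
        DataStream stream;
        DeviceContext.MapSubresource(instanceBuffer, 0, MapMode.WriteDiscard,
            SharpDX.Direct3D11.MapFlags.None, out stream);
        stream.WriteRange(Matrices);                    // copy this frame's matrices
        DeviceContext.UnmapSubresource(instanceBuffer, 0);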

    Read the article

  • Blender DirectX exporter to Panda3D

    - by jakebird451
    I have been experimenting with Panda3D lately. I have a character made in Blender, with various bones and currently one animation, that I wish to export to the *.x format for Panda3D. My first attempt was to export the model with bones [Armatures] by checking the "Export Armatures" button in the export menu (file name: char.x). Thanks to the *.x file format being readable, I checked the file and it seems to have the expected bone structure (with parenting and matrix positional data). The second export was selecting Animations - Full Animation, to produce just the animation (file name: char_idle.x).

    The model exported just fine. I am not sure about the animation yet, but the file seems to be fine too. This is my code for loading the model in Python and Panda3D:

        self.model = Actor("char.x",{"char_idle.x"})

    When I run the program the command line reports a couple of errors; the ones of interest are:

        :Actor(warning): char.x is not a character!

    and

        File "C:\Panda3D-1.8.0\direct\actor\Actor.py", line 284, in __init__
          if (type(anims[anims.keys()[0]])==type({})):
        AttributeError: 'set' object has no attribute 'keys'

    The first error is the most interesting to me. The model works if I leave the animation dictionary blank; with no animations loaded the character appears in its un-animated T pose, but the Actor warning still shows up. The character should include the various bones when I exported the model, right? I am not that experienced with Blender — I'm just a programmer — so if the problem lies in Blender, please try to keep that in mind when posting a reply; I'll try my best to keep up.

    I also tried to print out the bone structure without any animations loaded, and the line print self.model.listJoints() gives a similar error:

        File "C:\Panda3D-1.8.0\direct\actor\Actor.py", line 410, in listJoints
          Actor.notify.error("no part named: %s" % (partName))
        File "C:\Panda3D-1.8.0\direct\directnotify\Notifier.py", line 132, in error
          raise exception(errorString)
        StandardError: no part named: modelRoot

    I really hope it is a simple exporting fix.

    Read the article

  • Using a heavyweight ORM implementation for lightweight games

    - by Holland
    I'm just about to engulf myself in an MVC-based/component architecture in C#, using MySQL's Connector/Net for data storage and probably NHibernate/FluentNHibernate object-relational mapping to map out the data structure. The goal is to build a scalable 2D RPG. Then I think about it... and I can't help but feel this seems a little "heavyweight" for a 2D RPG, especially one which, while I plan to incorporate a lot of functionality and entertaining gameplay, may be ported to something like Windows Phone or Android in the future. Yet, on the other hand, even a two-dimensional RPG can become very complicated and therefore must incorporate a lot of functionality. While this can be accomplished with text/XML/JSON for data storage, is there a better way? Is something such as object-relational mapping useful in such an application? So, what do you think — is there a place for such technologies here? I don't know what to think...
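    For a sense of the weight involved, a FluentNHibernate mapping is only a few lines per entity. A sketch with a hypothetical Item entity (the entity and map class names are made up for illustration):

        using FluentNHibernate.Mapping;

        // NHibernate wants virtual members so it can proxy the entity.
        public class Item
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        // ClassMap<T> describes the table mapping in code, no XML needed.
        public class ItemMap : ClassMap<Item>
        {
            public ItemMap()
            {
                Id(x => x.Id);      // primary key column
                Map(x => x.Name);   // plain Name column
            }
        }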

    Read the article

  • How to create 2D collision detection

    - by Aidan Mueller
    I would like to know the best or most effective way to test for 2D collision. I can do AABBs, but when you have, for example, a really long line rotated 45°, it will be hitting things when it shouldn't. I might be able to go through the pixels to see if they are touching others, but that could be slow with a big picture, and it might add some complications if I had a movie clip made of several images. How do I check collision between two images? How would I do circle-to-box? Please help :) PS: I do know Java, so you can write with Java syntax and then use a made-up GL.
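    On the circle-to-box part specifically, one standard test (a sketch, written in C# but it reads the same in Java): clamp the circle's centre to the box, then compare the squared distance against the squared radius.

        // Circle vs. axis-aligned box.
        static bool CircleIntersectsRect(float cx, float cy, float r,
                                         float rx, float ry, float rw, float rh)
        {
            // Nearest point on the box to the circle's centre.
            float nearestX = Math.Max(rx, Math.Min(cx, rx + rw));
            float nearestY = Math.Max(ry, Math.Min(cy, ry + rh));
            float dx = cx - nearestX;
            float dy = cy - nearestY;
            return dx * dx + dy * dy <= r * r;   // touching counts as a hit
        }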

    Read the article

  • Camera won't stay behind model after pitch, then rotation

    - by ChocoMan
    I have a camera positioned behind a model. Currently, if I push the left thumbstick to make my model move forward, backward, or strafe, the camera stays with the model. If I push the right thumbstick left or right, the model rotates in those directions fine, with the camera rotating along and maintaining its position relatively behind the model. But when I pitch the model up or down and then rotate the model afterwards, the camera rotates slightly, in a clock-like fashion, behind the model. If I do a few rotations of the model and try to pitch the camera, the camera eventually ends up looking at the side, then the front, of the model, while still rotating in that clock-like fashion. My question is: how do I keep the camera pitching up and down behind the model no matter how much the model has rotated? Here is what I've got:

        // Rotates model and pitches camera on its own axis
        public void modelRotMovement(GamePadState pController)
        {
            // Rotates camera with model
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(angularSpeed);

            // Pitches camera around model
            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(angularSpeed);

            AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
        }

        // Orbit (yaw) camera around with model (only seeing back of model)
        public void cameraYaw(Vector3 axisYaw, float yaw)
        {
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget;
        }

        // Raise camera above or below model's shoulders
        public void cameraPitch(Vector3 axisPitch, float pitch)
        {
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget;
        }

        // Called in the update method
        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);
            cameraPitch(Vector3.Right, Pitch);
        }

    NOTE: I tried to use AddPitch just like AddRotation but it didn't work...
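    One observation on the code above (a plausible cause, not a confirmed diagnosis): updateCamera pitches around the fixed world axis Vector3.Right, which only coincides with the camera's own side axis while the model faces its starting direction. A sketch of pitching about the camera's current side axis instead, derived from the camera-to-target offset:

        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);

            // Side axis perpendicular to both world Up and the view offset,
            // so pitch stays behind the model at any yaw.
            Vector3 toCamera = ModelLoad.CameraPos - ModelLoad.camTarget;
            Vector3 side = Vector3.Normalize(Vector3.Cross(Vector3.Up, toCamera));
            cameraPitch(side, Pitch);
        }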

    Read the article

  • Matrix rotation of a rectangle to "face" a given point in 2d

    - by justin.m.chase
    Suppose you have a rectangle centered at point (0, 0), and now I want to rotate it such that it is facing the point (100, 100) — how would I do this purely with matrix math? To give some more specifics, I am using JavaScript and canvas, and I may have something like this:

        var position = { x: 0, y: 0 };
        var destination = { x: 100, y: 100 };
        var transform = Matrix.identity();

        this.update = function(state) {
            // update transform to rotate to face destination
        };

        this.draw = function(ctx) {
            ctx.save();
            ctx.transform(transform); // a helper that just calls setTransform()
            ctx.beginPath();
            ctx.rect(-5, -5, 10, 10);
            ctx.fillStyle = 'Blue';
            ctx.fill();
            ctx.lineWidth = 2;
            ctx.stroke();
            ctx.restore();
        }

    Feel free to assume any matrix function you need is available.
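    The math itself is short (a sketch in C# for consistency with the other examples on this page; the resulting entries drop straight into a canvas setTransform(a, b, c, d, e, f) call):

        // Angle from the rectangle's position to the point it should face.
        double dx = destination.X - position.X;
        double dy = destination.Y - position.Y;
        double angle = Math.Atan2(dy, dx);        // radians; 0 means facing +X

        // 2D rotation matrix: [a c] = [cos -sin]
        //                     [b d]   [sin  cos]
        double a = Math.Cos(angle), b = Math.Sin(angle);
        // canvas equivalent: ctx.setTransform(a, b, -b, a, position.x, position.y)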

    Read the article

  • Blending textures together, texture fade over / fade in

    - by Deukalion
    What is the best way to render a texture-overlapping effect, like in this example? I want either the grass to fade into the snow texture, or the other way around — no rough edges; somehow make them blend over, so the grass has a bit of snow or the snow has a bit of grass. How is this possible during runtime, if it's possible at all? I don't render this with the SpriteBatch, since the ground isn't made of rectangles (they can be moved). This is the way I render each shape (each one of those squares):

        // LoadTexture
        // Apply EffectPass
        device.DrawUserIndexedPrimitives<VertexPositionNormalTexture>
        (
            PrimitiveType.TriangleList,
            render.Item.Points,           // array of VertexPositionNormalTexture
            0,
            render.Item.Points.Length,
            render.Item.Indexes,          // array of int indexes (triangulation)
            0,
            render.Item.Indexes.Length / 3,
            VertexPositionNormalTexture.VertexDeclaration
        );
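    One runtime approach, sketched under some assumptions (a second draw pass, a vertex format carrying per-vertex alpha such as VertexPositionColorTexture, and BasicEffect.VertexColorEnabled = true): draw the grass opaque, then draw the snow on top with alpha blending, fading the snow's vertex alpha to zero along the shared seam so the two textures mix across it.

        // Pass 1: grass, opaque. Pass 2: snow with a vertex-alpha fade.
        device.BlendState = BlendState.NonPremultiplied;
        device.DrawUserIndexedPrimitives<VertexPositionColorTexture>(
            PrimitiveType.TriangleList,
            snowPoints,                 // seam vertices use Color.White * 0f..1f
            0, snowPoints.Length,
            snowIndexes, 0, snowIndexes.Length / 3,
            VertexPositionColorTexture.VertexDeclaration);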

    Read the article

  • How best to handle ID3D11InputLayout in rendering code?

    - by JohnB
    I'm looking for an elegant way to handle input layouts in my Direct3D 11 code. The problem is that I have an Effect class and an Element class. The Effect class encapsulates shaders and similar settings, and the Element class contains something that can be drawn (a 3D model, landscape, etc.). My drawing code sets the device shaders etc. using the specified Effect, and then calls the draw function of the Element to draw the actual geometry contained in it.

    The problem is this: I need to create an ID3D11InputLayout somewhere. This really belongs in the Element class, as it's no business of the rest of the system how that element chooses to represent its vertex layout. But in order to create the object, the API requires the vertex shader bytecode for the vertex shader that will be used to draw the object. In DirectX 9 it was easy — there was no dependency — so my Element could contain its own input layout structures and set them without the Effect being involved. But the Element shouldn't really have to know anything about the Effect it's being drawn with; that's just render settings, and the Element is there to provide geometry. So I don't really know where to store, and how to select, the InputLayout for each draw call. I mean, I've made something work, but it seems very ugly. This makes me think I've either missed something obvious, or else my design of having all the render settings in an Effect, the geometry in an Element, and a third party that draws it all is just flawed. Just wondering how anyone else handles their input layouts in Direct3D 11 in an elegant way?
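    One pattern that keeps both classes ignorant of each other (a sketch with hypothetical types, SharpDX-flavoured C# for consistency with the other examples here): let the third-party renderer own a small cache keyed on the (vertex format, effect) pair, creating each input layout lazily on first use — the Element contributes its element description, the Effect contributes its vertex shader bytecode.

        // VertexFormat, Effect.VertexShaderBytecode and VertexFormat.Elements
        // are hypothetical names standing in for your own abstractions.
        Dictionary<(VertexFormat, Effect), InputLayout> layoutCache =
            new Dictionary<(VertexFormat, Effect), InputLayout>();

        InputLayout GetLayout(VertexFormat format, Effect effect)
        {
            var key = (format, effect);
            if (!layoutCache.TryGetValue(key, out var layout))
            {
                layout = new InputLayout(device,
                    effect.VertexShaderBytecode,    // signature the API demands
                    format.Elements);               // InputElement[] from the Element
                layoutCache[key] = layout;
            }
            return layout;
        }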

    Read the article

  • Normal maps red in OpenGL?

    - by KaiserJohaan
    I am using Assimp to import 3D models and FreeImage to parse textures. The problem I am having is that the normal maps are actually red rather than blue when I try to render them as normal diffuse textures: http://i42.tinypic.com/289ing3.png — when I open the images in an image-viewing program they do indeed show up as blue.

    Here's where I create the texture:

        OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth,
                                     uint32_t textureHeight, TextureType textureType, Logger& logger)
            : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType)
        {
            glGenTextures(1, &mTexture);
            CHECK_GL_ERROR(mLogger);

            glBindTexture(GL_TEXTURE_2D, mTexture);
            CHECK_GL_ERROR(mLogger);

            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0,
                         glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]);
            CHECK_GL_ERROR(mLogger);

            glGenerateMipmap(GL_TEXTURE_2D);
            CHECK_GL_ERROR(mLogger);

            glBindTexture(GL_TEXTURE_2D, 0);
            CHECK_GL_ERROR(mLogger);
        }

    Here is my fragment shader. You can see I just commented out the normal-map parsing and treated the normal-map texture as the diffuse texture to display it and illustrate the problem. As for the rest of the code, it interacts as expected with the diffuse textures, so I don't see an obvious problem there.

        #version 330

        layout(std140) uniform;

        const int MAX_LIGHTS = 8;

        struct Light
        {
            vec4 mLightColor;
            vec4 mLightPosition;
            vec4 mLightDirection;

            int mLightType;
            float mLightIntensity;
            float mLightRadius;
            float mMaxDistance;
        };

        uniform UnifLighting
        {
            vec4 mGamma;
            vec3 mViewDirection;
            int mNumLights;

            Light mLights[MAX_LIGHTS];
        } Lighting;

        uniform UnifMaterial
        {
            vec4 mDiffuseColor;
            vec4 mAmbientColor;
            vec4 mSpecularColor;
            vec4 mEmissiveColor;

            bool mHasDiffuseTexture;
            bool mHasNormalTexture;
            bool mLightingEnabled;
            float mSpecularShininess;
        } Material;

        uniform sampler2D unifDiffuseTexture;
        uniform sampler2D unifNormalTexture;

        in vec3 frag_position;
        in vec3 frag_normal;
        in vec2 frag_texcoord;
        in vec3 frag_tangent;
        in vec3 frag_bitangent;

        out vec4 finalColor;

        void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm)
        {
            vec3 viewDirection = normalize(Lighting.mViewDirection);
            vec3 halfAngle = normalize(dirToLight + viewDirection);

            float angleNormalHalf = acos(dot(halfAngle, normalize(normal)));
            float exponent = angleNormalHalf / Material.mSpecularShininess;
            exponent = -(exponent * exponent);

            gaussianTerm = exp(exponent);
        }

        vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal)
        {
            if (light.mLightType == 1) // point light
            {
                vec3 positionDiff = light.mLightPosition.xyz - frag_position;
                float dist = max(length(positionDiff) - light.mLightRadius, 0);

                float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1));
                attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0);

                vec3 dirToLight = normalize(positionDiff);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 2) // directional light
            {
                vec3 dirToLight = normalize(light.mLightDirection.xyz);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 4) // ambient light
                return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor;
            else
                return vec4(0.0);
        }

        void main()
        {
            vec4 diffuseTexture = vec4(1.0);
            if (Material.mHasDiffuseTexture)
                diffuseTexture = texture(unifDiffuseTexture, frag_texcoord);

            vec3 normal = frag_normal;
            if (Material.mHasNormalTexture)
            {
                diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0);
                // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0);
                // mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal));
                // normal = tangentToWorldSpace * normalTangentSpace;
            }

            if (Material.mLightingEnabled)
            {
                vec4 accumLighting = vec4(0.0);

                for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++)
                    accumLighting += Material.mEmissiveColor * diffuseTexture +
                                     CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal);

                finalColor = pow(accumLighting, Lighting.mGamma);
            }
            else {
                finalColor = pow(diffuseTexture, Lighting.mGamma);
            }
        }

    Why is this? Do normal-map textures need some sort of special treatment in OpenGL?
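    One thing worth checking (an assumption about the setup above, not a confirmed diagnosis): FreeImage stores pixels in BGRA order on little-endian platforms, so uploading its buffer with an RGBA source format swaps the red and blue channels — exactly what would make a blue-dominated normal map render red. A sketch of the corrected upload, written with OpenTK-style C# bindings for consistency with the other examples here (the equivalent GL_BGRA source-format constant exists in C):

        // Internal format stays RGBA; the *source* format tells GL that the
        // incoming bytes are ordered BGRA, which is how FreeImage hands them over.
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                      width, height, 0, PixelFormat.Bgra,
                      PixelType.UnsignedByte, textureData);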

    Read the article

  • Drawing particles with CPU instead of GPU (XNA)

    - by Helix
    I'm trying out modifications to the following particle system: http://create.msdn.com/en-US/education/catalog/sample/particle_3d

    I have a function such that when I press Space, all the particles have their positions and velocities set to zero:

        for (int i = 0; i < particles.GetLength(0); i++)
        {
            particles[i].Position = Vector3.Zero;
            particles[i].Velocity = Vector3.Zero;
        }

    However, when I press Space, the particles are still moving. If I go to FireParticleSystem.cs I can set settings.Gravity to 0 and the particles stop moving, but they are still not shifted to (0,0,0). As I understand it, the problem is that the GPU is processing all the particle positions, calculating where each particle should be from its initial position and initial velocity multiplied by its age. Therefore all I've been able to change is the initial position and velocity of the particles — I'm unable to alter them on the fly, since the GPU is handling everything. I want the CPU to calculate the positions of the particles individually, because I will later be implementing some sort of wind to push the particles around. How do I stop the GPU from taking over? I think it's something to do with vertex buffers and the draw function, but I don't know how to modify it to make it work.
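    A sketch of the CPU-side alternative (under some assumptions: a plain particle array, a DynamicVertexBuffer, and a per-frame re-upload — this is not the sample's existing API): integrate each particle on the CPU every Update, then stream the results to the GPU, so forces like wind can act on live particles.

        // Update: integrate every particle on the CPU.
        for (int i = 0; i < particles.Length; i++)
        {
            particles[i].Velocity += wind * dt;                    // wind force
            particles[i].Position += particles[i].Velocity * dt;
            vertices[i].Position = particles[i].Position;          // mirror into vertices
        }

        // Upload this frame's vertices before drawing.
        dynamicVertexBuffer.SetData(vertices);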

    Read the article

  • How to create water like in New Super Mario Bros?

    - by user1103457
    I assume the water in New Super Mario Bros works the same as in the first part of this tutorial: http://gamedev.tutsplus.com/tutorials/implementation/make-a-splash-with-2d-water-effects/ But in New Super Mario Bros the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, creating a splash first makes a deep "hole" in the water at the origin of the splash; in New Super Mario Bros this hole is absent or much smaller. By splashes I mean the ones the player creates when jumping into and out of the water. For reference you can use this video: http://www.ign.com/videos/2012/11/17/new-super-mario-bros-u-3-star-coin-walkthrough-sparkling-waters-1-waterspout-beach — just after 00:50, when the camera isn't moving, you can get a good look at the water and the constant waves; there are also some good examples of the splashes during that time.

    How do they create the constant waves and the splashes? I am programming in XNA. (I have tried this myself but couldn't really get it all to work well together.) (And as bonus questions: how do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I try to create water like this.)
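    On the constant waves specifically, a common trick is to layer a few summed sine waves on top of the tutorial's spring columns, so the springs handle splashes while the sines provide the endless gentle motion. A sketch (the function and its constants are illustrative, not taken from the game):

        // Ambient surface waves: two sines of different frequency and speed.
        float SurfaceWave(float x, float t)
        {
            return 2.0f * (float)Math.Sin(0.8f * x + 2.0f * t)
                 + 1.0f * (float)Math.Sin(2.1f * x - 1.3f * t);
        }

        // Per water column, each frame:
        // finalHeight = restHeight + springDisplacement
        //             + SurfaceWave(column.X, totalSeconds);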

    Read the article

  • Efficient visualization of a large voxelized volume

    - by Alejandro Piad
    Let's consider a large voxelized volume stored in an octree or any other convenient structure. This volume represents, for instance, a landscape, where each block is either empty (air) or has a specific material that will later be used to apply a texture. Voxels that are next to each other represent connected sections of the surface. What I need is an algorithm to generate a mesh from these voxels that represents the volume, with the following characteristics: all the "holes" in the voxelized volume are correct; all the connections are correct, i.e. seamless; and the surface appears smooth. In a broad sense, I want to somehow preserve the surface topology, meaning that connected sections remain connected in the resulting mesh and that the surface has a curvature that responds to the voxel topology. Imagine trying to render the Minecraft world but getting the mountainsides to be smooth instead of blocky.

    Read the article

  • How do I draw a scene with 2 nested frames

    - by Guido Granobles
    I have been trying for a long time to figure this out: I have loaded a model from a DirectX file (I am using OpenGL and Java). The model has a hierarchical system of nested reference frames (there are no bones). There are just two frames: one is called x3ds_Torso, and it has a child frame called x3ds_Arm_01. Each of them has a mesh. The thing is that I can't draw the arm connected to the body. Sometimes the body is in the center of the screen and the arm is at the top; sometimes they are both in the center. I know that I have to multiply the transformation matrix of every frame by its parent's, starting from the top and going down, and after that I have to multiply every vertex of every mesh by its final transformation matrix. So I have this:

        public void calculeFinalMatrixPosition(Bone boneParent, Bone bone) {
            System.out.println("-->" + bone.name);
            if (boneParent != null) {
                bone.matrixCombined = bone.matrixTransform.multiply(boneParent.matrixCombined);
            } else {
                bone.matrixCombined = bone.matrixTransform;
            }
            bone.matrixFinal = bone.matrixCombined;
            for (Bone childBone : bone.boneChilds) {
                calculeFinalMatrixPosition(bone, childBone);
            }
        }

    Then I have to multiply every vertex of the mesh:

        public void transformVertex(Bone bone) {
            for (Iterator<Mesh> iterator = meshes.iterator(); iterator.hasNext();) {
                Mesh mesh = iterator.next();
                if (mesh.boneName.equals(bone.name)) {
                    float[] vertex = new float[4];
                    double[] newVertex = new double[3];
                    if (mesh.skinnedVertexBuffer == null) {
                        mesh.skinnedVertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                    }
                    mesh.vertexBuffer.buffer.rewind();
                    while (mesh.vertexBuffer.buffer.hasRemaining()) {
                        vertex[0] = mesh.vertexBuffer.buffer.get();
                        vertex[1] = mesh.vertexBuffer.buffer.get();
                        vertex[2] = mesh.vertexBuffer.buffer.get();
                        vertex[3] = 1;
                        newVertex = bone.matrixFinal.transpose().multiply(vertex);
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[0]));
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[1]));
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[2]));
                    }
                    mesh.vertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                    mesh.skinnedVertexBuffer.buffer.rewind();
                    mesh.vertexBuffer.buffer.put(mesh.skinnedVertexBuffer.buffer);
                }
            }
            for (Bone childBone : bone.boneChilds) {
                transformVertex(childBone);
            }
        }

    I know this is not the most efficient code, but for now I just want to understand exactly how a hierarchical model is organized and how I can draw it on the screen. Thanks in advance for your help.

    Read the article

  • How can I support objects larger than a single tile in a 2D tile engine?

    - by Yheeky
    I'm currently working on a 2D engine containing an isometric tile map. It's running quite well, but I'm not sure if I've chosen the best approach for that kind of engine. To give you an idea of what I'm thinking about right now, let's have a look at the basic objects for a tile map:

        public class TileMap
        {
            public List<MapRow> Rows = new List<MapRow>();
            public int MapWidth = 50;
            public int MapHeight = 50;
        }

        public class MapRow
        {
            public List<MapCell> Columns = new List<MapCell>();
        }

        public class MapCell
        {
            public int TileID { get; set; }
        }

    With those objects it's only possible to assign a tile to a single MapCell. What I want my engine to support is something like groups of MapCells, since I would like to add objects to my tile map (e.g. a house with a size of 2x2 tiles). How should I do that? Should I edit my MapCell object so that it may have a reference to other related tiles, and how can I find an object when the player clicks on a single MapCell? Or should I use another approach, with a global container holding all objects?
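    One common layout (a sketch; the PlacedObject type and its fields are made up for illustration): keep multi-tile objects in a global list, and give every covered MapCell a back-reference to its occupant, so a click on any covered cell resolves in constant time.

        public class PlacedObject
        {
            public int ObjectID;
            public Point Anchor;          // top-left cell of the footprint
            public int Width = 2;         // footprint in tiles, e.g. a 2x2 house
            public int Height = 2;
        }

        public class MapCell
        {
            public int TileID { get; set; }
            public PlacedObject Occupant { get; set; }   // null when empty
        }

        // Placement stamps the back-references:
        //   for each covered cell: map.Rows[y].Columns[x].Occupant = obj;
        // A click handler then just reads clickedCell.Occupant.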

    Read the article

  • Cannot convert parameter 1 from 'short *' to 'int *' [closed]

    - by Torben Carrington
    I'm trying to learn pointers, and since I recently learned that a short int takes up less memory (2 bytes, as opposed to the long int's 4, which is the default for int), I wanted to create a pointer that uses the memory address of a short integer. I'm following a tutorial in my book about pointers, and it uses the Swap function. The problem is I receive this error the moment I change everything from int to short int:

        error C2664: 'Swap' : cannot convert parameter 1 from 'short *' to 'int *'
        Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

    Since my code is so small, here is the whole thing:

        void Swap(short int *sipX, short int *sipY)
        {
            short int siTemp = *sipX;
            *sipX = *sipY;
            *sipY = siTemp;
        }

        int main()
        {
            short int siBig = 100;
            short int siSmall = 1;

            std::cout << "Pre-Swap: " << siBig << " " << siSmall << std::endl;
            Swap(&siBig, &siSmall);
            std::cout << "Post-Swap: " << siBig << " " << siSmall << std::endl;

            return 0;
        }

    Read the article

  • Developing for Chrome App/Android?

    - by Johnny Quest
    I have been developing for Windows Phone 7 (XNA/Silverlight, and will continue to do so — love everything about it), but I wanted to branch a few of my more polished games out to the Chrome Web Store, and perhaps Android (though I'm not sure, as all the different versions make learning/loading applications a bit tricky). What is the most versatile language to start learning for Chrome apps/Android? Java would be excellent for Android, but could I port it to a web app for Chrome? (And it's close to C#.) Flash would work for a web app, since I can just embed it into an HTML page (I've done ActionScript before, though I didn't care much for the IDE), but would it also work on Android? Or I guess there is always C/C++, but I haven't heard much about that route, though I think it works for both (and C++ does interest me). Any advice would be excellent, thanks.

    Read the article

  • I am looking for a graduation project idea for a bachelor of computer engineering [closed]

    - by project idea
    I am interested in computer graphics, and I have developed many hobby projects, mostly 2D and 3D games/scenes in DirectX and OpenGL. But for a graduation project, professors won't allow games. I browsed many similar questions here, and I am convinced the project should be something I am really interested in, as I will give considerable time to it. But apart from games I am not able to decide on a topic. I am also open to ideas on social apps and Android.

    Read the article

  • Alpha blend 3D png texture in XNA

    - by ProgrammerAtWork
    I'm trying to draw a partly transparent texture on a plane, but the problem is that it incorrectly displays what is behind that texture. Pseudo-code:

        vertices1 basiceffect1 // the vertices of vertices1 are located BEHIND vertices2
        vertices2 basiceffect2 // the vertices of vertices2 are located IN FRONT of vertices1

        GraphicsDevice.Clear(Blue);
        PrimitiveBatch.Begin();

        // If I draw like this:
        PrimitiveBatch.Draw(vertices1, trianglestrip, basiceffect1);
        PrimitiveBatch.Draw(vertices2, trianglestrip, basiceffect2);
        // everything gets drawn correctly: I can see the texture of vertices2
        // through the transparent parts of vertices1.

        // But if I draw like this:
        PrimitiveBatch.Draw(vertices2, trianglestrip, basiceffect2);
        PrimitiveBatch.Draw(vertices1, trianglestrip, basiceffect1);
        // I cannot see the texture of vertices1 behind the texture of vertices2.
        // Instead, vertices2 gets drawn and the transparent parts are blue
        // (the clear color).

        PrimitiveBatch.End();

    My question is: why does the order in which I call Draw matter?
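    For context, a sketch of the usual explanation (stated with XNA's state objects, not as a description of PrimitiveBatch internals): alpha-blended draws still write to the depth buffer, so a transparent surface drawn first can depth-reject everything drawn behind it afterwards. Transparent geometry is therefore normally drawn back-to-front with depth writes disabled:

        // Read depth so opaque geometry still occludes, but don't write it.
        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
        // ...then draw transparent surfaces far-to-near (vertices1, then vertices2).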

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movie clips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is: the bitmap images that were used in Flash, like the head and body; the links between the images, like where the head connects to the body; and the motion data from the Flash animation — move, rotate (at what speed), shear, etc. for the head or arms or whatever. Is there any way to get this data?

    Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). That would take too much space in my case, so I'd like to avoid it; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format, but NOT the individual components like the movie clips. So does anyone know of a library I can use, or how I can learn more about the movie clip components and whatnot? (More apt tags: transform, export, convert.)

    Read the article

  • Custom shadow mapping in Unity 3D Free Edition

    - by nosferat
    Since real-time hard and soft shadows are Unity 3D Pro-only features, I thought I would learn Cg programming and create my own shadow-mapping shader. But after some digging I found that the shadow-mapping technique uses depth textures, and in Unity depth values can be accessed through a Render Texture object — which is Unity Pro-only again. So is it true that I cannot create real-time shadow shaders as a workaround for the limitations of the free version?

    Read the article

  • How to fix OpenGL Co-ordinate System in SFML?

    - by Marc Alexander Reed
    My OpenGL setup is somehow configured to work like so:

        (-1,  1)   (0,  1)   (1,  1)
        (-1,  0)   (0,  0)   (1,  0)
        (-1, -1)   (0, -1)   (1, -1)

    How do I configure it so that it works like so (SW = screen width, SH = screen height)?

        (0,    0)   (SW/2,    0)   (SW,    0)
        (0, SH/2)   (SW/2, SH/2)   (SW, SH/2)
        (0,   SH)   (SW/2,   SH)   (SW,   SH)

    This solution would also have to fix the problem that I can't translate significantly on the Z axis; depth doesn't seem to be working either. The perspective code I'm using is from my WORKING GLUT OpenGL code, which has a cool 3D grid, camera system, etc. But my OpenGL setup doesn't seem to work with SFML. Help me, guys :( Thanks in advance. :)

        #include <SFML/Window.hpp>
        #include <SFML/Graphics.hpp>
        #include <SFML/Audio.hpp>
        #include <SFML/Network.hpp>
        #include <SFML/OpenGL.hpp>
        #include "ResourcePath.hpp" // Mac-only

        #define _USE_MATH_DEFINES
        #include <cmath>

        double screen_width = 640.f;
        double screen_height = 480.f;

        int main(int argc, const char **argv)
        {
            sf::ContextSettings settings;
            settings.depthBits = 24;
            settings.stencilBits = 8;
            settings.antialiasingLevel = 2;

            sf::Window window(sf::VideoMode(screen_width, screen_height, 32), "SFML OpenGL",
                              sf::Style::Close, settings);
            window.setActive();

            glEnable(GL_DEPTH_TEST);
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glEnable(GL_NORMALIZE);
            glEnable(GL_COLOR_MATERIAL);
            glShadeModel(GL_SMOOTH);

            glViewport(0, 0, screen_width, screen_height);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            //glOrtho(0.0f, screen_width, screen_height, 0.0f, -100.0f, 100.0f);
            gluPerspective(45.0f, (double) screen_width / (double) screen_height, 0.f, 100.f);

            glClearColor(0.f, 0.f, 1.f, 0.f); // blue

            while (window.isOpen())
            {
                sf::Event event;
                while (window.pollEvent(event))
                {
                    switch (event.type)
                    {
                        case sf::Event::Closed:
                            window.close();
                            break;
                    }
                    switch (event.key.code)
                    {
                        case sf::Keyboard::Escape:
                            window.close();
                            break;
                        case 'W': break;
                        case 'S': break;
                        case 'A': break;
                        case 'D': break;
                    }
                }

                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();
                glTranslatef(0.f, 0.f, 0.f);

                glPushMatrix();
                glBegin(GL_QUADS);
                glColor3f(1.f, 0.f, 0.f);
                glVertex3f(-1.f, 1.f, 0.f);
                glColor3f(0.f, 1.f, 0.f);
                glVertex3f(1.f, 1.f, 0.f);
                glColor3f(1.f, 0.f, 1.f);
                glVertex3f(1.f, -1.f, 0.f);
                glColor3f(0.f, 0.f, 1.f);
                glVertex3f(-1.f, -1.f, 0.f);
                glEnd();
                glPopMatrix();

                window.display();
            }

            return EXIT_SUCCESS;
        }
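    Two observations on the code above, offered as likely causes rather than certainties. First, a top-left pixel coordinate system is what the commented-out glOrtho line produces, not gluPerspective. Second, gluPerspective requires a strictly positive near plane; the 0.f passed above degenerates the depth buffer, which would explain the broken Z translation. A sketch of the projection setup, in OpenTK-style C# for consistency with the other examples on this page (the calls map one-to-one onto the C originals):

        GL.MatrixMode(MatrixMode.Projection);
        GL.LoadIdentity();

        // For 2D, top-left origin: x runs right, y runs down, as in the grid above.
        GL.Ortho(0.0, screenWidth, screenHeight, 0.0, -100.0, 100.0);

        // For 3D, keep the perspective projection but with near > 0:
        // var proj = Matrix4.CreatePerspectiveFieldOfView(
        //     MathHelper.PiOver4, (float)(screenWidth / screenHeight), 0.1f, 100f);
        // GL.LoadMatrix(ref proj);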

    Read the article

  • Interpolating Matrices

    - by sebf
    Hello. Apologies if I am missing something very obvious (likely!), but is there anything wrong with interpolating between two matrices like this?

        float d = (float)(targetTime.Ticks - keyframe_start.ticks) /
                  (float)(keyframe_end.ticks - keyframe_start.ticks);

        return ((keyframe_start.Transform * (1 - d)) + (keyframe_end.Transform * d));

    When I use this in my app to interpolate between two keyframes, the model begins to 'shrink' — the severity depends on how far between the two keyframes the target time is; it's worst when the transform split is ~50/50.
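    That shrinking is characteristic of lerping rotation matrices: the blended basis vectors get shorter unless the two rotations are identical. A sketch of the usual alternative (XNA API; assumes the keyframe transforms decompose cleanly): split each matrix into scale/rotation/translation, slerp the rotations, lerp the rest, and recompose.

        Vector3 scale0, trans0, scale1, trans1;
        Quaternion rot0, rot1;
        keyframe_start.Transform.Decompose(out scale0, out rot0, out trans0);
        keyframe_end.Transform.Decompose(out scale1, out rot1, out trans1);

        return Matrix.CreateScale(Vector3.Lerp(scale0, scale1, d))
             * Matrix.CreateFromQuaternion(Quaternion.Slerp(rot0, rot1, d))
             * Matrix.CreateTranslation(Vector3.Lerp(trans0, trans1, d));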

    Read the article

  • Rotation of bitmap using a frame by frame animation

    - by pengume
    Hey everyone. I know this has probably been asked a ton of times, but I just wanted to check whether I am approaching this correctly, since I ran into some problems rotating a bitmap. Basically I have one large bitmap that has four frames drawn on it, and I draw one at a time by stepping through the bitmap in increments to animate walking. I can get the bitmap to rotate correctly when it is not moving, but once the animation starts, a lot of the image gets cut off and it sometimes becomes very fuzzy. I have tried:

        public void draw(Canvas canvas, int pointerX, int pointerY) {
            Matrix m;
            if (setRotation) {
                // canvas.save();
                m = new Matrix();
                m.reset();
                // spriteWidth and spriteHeight are for just the current frame shown
                m.setTranslate(spriteWidth / 2, spriteHeight / 2);
                // get and set rotation for ninja based off of joystick
                m.preRotate((float) GameControls.getRotation());
                // create the rotated bitmap
                flipedSprite = Bitmap.createBitmap(bitmap, 0, 0,
                        bitmap.getWidth(), bitmap.getHeight(), m, true);
                // set new bitmap to rotated ninja
                setBitmap(flipedSprite);
                // canvas.restore();
                Log.d("Ninja View", "angle of rotation= " + (float) GameControls.getRotation());
                setRotation = false;
            }
        }

    And then the draw method here:

        // Create the destination rectangle for the ninja's current animation frame;
        // pointerX and pointerY are from the joystick moving the ninja around.
        destRect = new Rect(pointerX, pointerY, pointerX + spriteWidth, pointerY + spriteHeight);
        canvas.drawBitmap(bitmap, getSourceRect(), destRect, null);

    The animation is four frames long, gets incremented by 66 (the size of one frame on the bitmap) every frame, and loops back to 0 at the end.

    Read the article

  • How to fix issue with my 3D first person camera?

    - by dxCUDA
    My camera moves and rotates, but relative to the world's origin instead of the player's. I am having difficulty rotating the camera and then translating it in the direction relative to the camera's facing angle. I have been able to translate the camera and rotate it relative to the player's origin, but not to then rotate and translate in the direction the camera is facing. My goal is a standard FPS-style camera.

        float yaw, pitch, roll;
        D3DXMATRIX rotationMatrix;
        D3DXVECTOR3 Direction;
        D3DXMATRIX matRotAxis, matRotZ;
        D3DXVECTOR3 RotAxis;

        // Set the yaw (Y axis), pitch (X axis), and roll (Z axis) rotations in radians.
        pitch = m_rotationX * 0.0174532925f;
        yaw = m_rotationY * 0.0174532925f;
        roll = m_rotationZ * 0.0174532925f;

        up = D3DXVECTOR3(0.0f, 1.0f, 0.0f); // create the up vector

        // Build eye, lookat and rotation vectors from player input data.
        eye = D3DXVECTOR3(m_fCameraX, m_fCameraY, m_fCameraZ);
        lookat = D3DXVECTOR3(m_fLookatX, m_fLookatY, m_fLookatZ);
        rotation = D3DXVECTOR3(m_rotationX, m_rotationY, m_rotationZ);

        D3DXVECTOR3 camera[3] = { eye,      // Eye
                                  lookat,   // LookAt
                                  up };     // Up

        RotAxis.x = pitch;
        RotAxis.y = yaw;
        RotAxis.z = roll;

        D3DXVec3Normalize(&Direction, &(camera[1] - camera[0])); // direction vector
        D3DXVec3Cross(&RotAxis, &Direction, &camera[2]);         // strafe vector
        D3DXVec3Normalize(&RotAxis, &RotAxis);

        // Create the rotation matrix from the yaw, pitch, and roll values.
        D3DXMatrixRotationYawPitchRoll(&matRotAxis, pitch, yaw, roll);

        // Rotate direction.
        D3DXVec3TransformCoord(&Direction, &Direction, &matRotAxis);

        // Translate up vector.
        D3DXVec3TransformCoord(&camera[2], &camera[2], &matRotAxis);

        // Translate in the direction of player rotation.
        D3DXVec3TransformCoord(&camera[0], &camera[0], &matRotAxis);

        camera[1] = Direction + camera[0]; // avoid gimbal locking

        D3DXMatrixLookAtLH(&in_viewMatrix, &camera[0], &camera[1], &camera[2]);
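    A sketch of the standard FPS pattern for comparison (XNA-style C# to match the other examples on this page; each call has a direct D3DX counterpart): keep only position, yaw, and pitch as state, derive the forward and right vectors from them every frame, and move along those vectors before building the view matrix.

        // State: position, yaw, pitch (no accumulated matrices).
        Matrix orientation = Matrix.CreateFromYawPitchRoll(yaw, pitch, 0f);
        Vector3 forward = Vector3.Transform(Vector3.Forward, orientation);
        Vector3 right = Vector3.Transform(Vector3.Right, orientation);

        // Move relative to where the camera is currently facing.
        position += forward * moveZ * speed + right * moveX * speed;

        Matrix view = Matrix.CreateLookAt(position, position + forward, Vector3.Up);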

    Read the article
