Search Results

Search found 28230 results on 1130 pages for 'embedded development'.


  • Access Violation when trying to bind a Vertex Array Object

    - by Paul
    I've just started digging into OpenGL and I've run into a problem trying to set up a VAO. It gives me a run-time error, "An unhandled exception of type 'System.AccessViolationException'", at:

        // Create and bind a VAO
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

    I have searched the internet high and low for a solution and haven't found one. The rest of my function looks like this:

        int main(array<System::String ^> ^args)
        {
            // Initialise GLFW
            if (!glfwInit()) {
                fprintf(stderr, "Failed to initialize GLFW\n");
                return -1;
            }

            glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 0);          // no antialiasing
            glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);  // we want OpenGL 3.3
            glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
            glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // we don't want the old OpenGL

            // Open a window and create its OpenGL context
            if (!glfwOpenWindow(800, 600, 0, 0, 0, 0, 32, 0, GLFW_WINDOW)) {
                fprintf(stderr, "Failed to open GLFW window\n");
                glfwTerminate();
                return -1;
            }

            // Initialize GLEW
            if (glewInit() != GLEW_OK) {
                fprintf(stderr, "Failed to initialize GLEW\n");
                return -1;
            }

            glfwSetWindowTitle("Game Engine");

            // Create and bind a VAO
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            glfwEnable(GLFW_STICKY_KEYS);
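
    The access violation at glGenVertexArrays is a classic GLEW symptom rather than a VAO problem: in a core-profile context, glewInit() queries the extension mechanism that core profiles removed, so it leaves modern entry points such as glGenVertexArrays as null function pointers, and calling one raises exactly this exception. A minimal sketch of the usual fix, assuming GLEW with the GLFW 2.x setup above:

        // Ask GLEW to also load core-profile entry points
        glewExperimental = GL_TRUE;
        if (glewInit() != GLEW_OK) {
            fprintf(stderr, "Failed to initialize GLEW\n");
            return -1;
        }
        // Sanity check before first use: null here means loading still failed
        if (glGenVertexArrays == NULL) {
            fprintf(stderr, "glGenVertexArrays was not loaded\n");
            return -1;
        }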

  • Mapping dynamic buffers in Direct3D11 in Windows Store apps

    - by Donnie
    I'm trying to render instanced geometry in Direct3D 11, and the ID3D11DeviceContext1->Map() call is failing with the very helpful error "Invalid Parameter" when I attempt to update the instance buffer. The buffer is declared as a member variable:

        Microsoft::WRL::ComPtr<ID3D11Buffer> m_instanceBuffer;

    Then I create it (which succeeds):

        D3D11_BUFFER_DESC instanceDesc;
        ZeroMemory(&instanceDesc, sizeof(D3D11_BUFFER_DESC));
        instanceDesc.Usage = D3D11_USAGE_DYNAMIC;
        instanceDesc.ByteWidth = sizeof(InstanceData) * MAX_INSTANCE_COUNT;
        instanceDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        instanceDesc.MiscFlags = 0;
        instanceDesc.StructureByteStride = 0;
        DX::ThrowIfFailed(d3dDevice->CreateBuffer(&instanceDesc, NULL, &m_instanceBuffer));

    However, when I try to map it:

        D3D11_MAPPED_SUBRESOURCE inst;
        DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE, 0, &inst));

    the Map call fails with E_INVALIDARG. Nothing is NULL that shouldn't be, and as this is one of my first D3D apps I'm currently stumped on what to do next to track it down. I'm thinking I must be creating the buffer incorrectly, but I can't see how. Any input would be appreciated.
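
    The buffer description is fine; the invalid parameter is the map type. Direct3D 11 only permits D3D11_MAP_WRITE_DISCARD or D3D11_MAP_WRITE_NO_OVERWRITE on a D3D11_USAGE_DYNAMIC resource, and plain D3D11_MAP_WRITE fails with E_INVALIDARG. A sketch of the corrected update ("instances" and "instanceCount" are stand-ins for the caller's source data):

        D3D11_MAPPED_SUBRESOURCE inst;
        // DISCARD hands back a fresh region of memory, so the GPU never stalls
        // on data it is still reading from previous frames.
        DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0,
                                          D3D11_MAP_WRITE_DISCARD, 0, &inst));
        memcpy(inst.pData, instances, sizeof(InstanceData) * instanceCount);
        d3dContext->Unmap(m_instanceBuffer.Get(), 0);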

  • Deferred Shading - Toolkit

    - by AliveDevil
    I recently managed to get some lights rendered in a scene by using a buffer and a for-loop. The problem with this method is the performance drop when more lights are used. I tried to adapt the "Deferred Rendering in XNA4.0" tutorial from ROY-T.NL, but it is not working for me because I am not using any models. I know I have to render color, normals and lights separately, but I don't know how to get it working.

    To help understand my structure: I'm using a world class which holds some chunks. These chunks load all the vertices from their items. Each item has a property which returns its vertices; the item returns a VertexPositionNormalTexture[]. The chunk loads these vertices and combines them into one large array of VertexPositionNormalTexture via someList.AsParallel().SelectMany(m => m).ToArray(), where m is a VertexPositionNormalTexture and someList is a List<VertexPositionNormalTexture>. I have my own shader to draw these vertices the way I want them drawn.

    The first thing I would try is setting up two RenderTarget2Ds for rendering the color and normal parts, with two different shaders. Then I would have to render the lights, and there's the problem: I don't know how. I set up a structure to simplify working with lights, but it didn't really help:

        public struct Light
        {
            public Vector3 Position;
            public Color4 Color;
            public float Range;
            public float Intensity;

            public Light(Vector3 position, Color color, float range, float intensity) : this()
            {
                this.Position = position;
                this.Color = color;
                this.Range = range;
                this.Intensity = intensity;
            }

            public float[] Definition
            {
                get
                {
                    return new[]
                    {
                        Position.X, Position.Y, Position.Z,
                        Color.Red, Color.Green, Color.Blue,
                        Intensity, Range
                    };
                }
            }
        }

    The next part is equally difficult, because I don't know how to combine the colorMap, normalMap and lightMap into one finalMap. Some information about the system: I'm using SharpDX (a nightly build from some months ago) and SharpDX.Toolkit (I don't want to mess with Direct3DDevice and similar things directly).

    Can someone help me with this problem? If things are missing or I have provided insufficient information, tell me; I need to get deferred shading working. The things I'm not able to do: create a render target which holds all lights; merge colorMap, normalMap and lightMap into one finalMap; and present this to the user.

  • Skyrim Creation Kit with Xbox 360

    - by funseiki
    I posted this on Stack Overflow but was advised to post here (here is a link to the Stack Overflow question). I'm hoping for constructive feedback on its plausibility.

    Update on progress: It looks like there are ways to stuff files back onto the console (Horizon, Modio, Xplorer360, etc.) and they do require some form of signing. As of now, though, I've had no luck. I was hoping I could get away with just placing the ".esp" into the directory containing marketplace downloads for Skyrim, along with the signed ".bsa" file (basically a zipped-up file containing any extra content the .esp needs to refer to that doesn't exist in the basic game). This doesn't work, at least not in the ways I've tried, so next I'm going to try installing the entire game to my flash drive (if possible) and attempt to traverse the game's directory (this is probably unlikely to work). If anyone else has suggestions or luck, or wants more detail on my failures, comment/answer away.

    Here is the question: I'm thinking about buying the PC version of Skyrim to get the Creation Kit (I already own a copy for the Xbox). I have read the FAQ and scoured plenty of forums to see if there is some way to mod Skyrim for a console (the Xbox 360, in particular), but the answers are generally negative. I realize the Creation Kit is PC-only, but I was wondering if there is a way to set up the '.esp' files (hopefully I'm interpreting this correctly) to be placed on the Xbox 360 file system in a manner similar to how game add-ons are downloaded from the Xbox Live Marketplace. I believe it is possible to transfer saves between the console and the PC (e.g. google 'skyrim mod xbox360'), but those reference items that already exist in the game (e.g. a console command for maximum carry weight does not require references to new animations or models). It would probably be easier if one could navigate the Xbox's file system to see where the games' files are placed, but with the current setup the file system is abstracted away.

    Any help or insight on the matter would be much appreciated. I would love to work on a project that would make it possible to let console gamers experience and enjoy all the great mods available to the PC community.

  • Foreach loop with 2d array of objects

    - by Jacob Millward
    I'm using a 2D array of objects to store data about tiles, or "blocks", in my game world. I initialise the array, fill it with data, and then attempt to invoke the draw method of each object:

        foreach (Block block in blockList)
        {
            block.Draw(spriteBatch);
        }

    I end up with an exception being thrown: "Object reference is not set to an instance of an object". What have I done wrong?

    EDIT: This is the code used to define the array:

        Block[,] blockList;

    Then:

        blockList = new Block[screenRectangle.Width, screenRectangle.Height];

        // Fill with dummy data
        for (int x = 0; x <= screenRectangle.Width / texture.Width; x++)
        {
            for (int y = 0; y <= screenRectangle.Height / texture.Width; y++)
            {
                if (y >= screenRectangle.Height / (texture.Width * 2))
                {
                    blockList[x, y] = new Block(1, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
                else
                {
                    blockList[x, y] = new Block(0, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
            }
        }
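
    The null references come from the allocation, not from the foreach: the array is sized in pixels (screenRectangle.Width by screenRectangle.Height) while the fill loops only populate one cell per tile, so the vast majority of elements are still null when Draw is called, and foreach over a 2D array visits every cell. The usual fix is to size the grid in tiles rather than pixels. A C++-flavoured sketch of that sizing rule, with a hypothetical Block stand-in:

        #include <vector>

        struct Block { int type = 0; };  // hypothetical stand-in for the real tile class

        std::vector<std::vector<Block>> makeGrid(int screenWidth, int screenHeight, int tileSize)
        {
            int tilesX = screenWidth / tileSize;   // size the grid in tiles,
            int tilesY = screenHeight / tileSize;  // not in pixels
            // every cell is value-initialised, so no element is left unset
            return std::vector<std::vector<Block>>(tilesX, std::vector<Block>(tilesY));
        }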

  • How can I make an infinite cave using stage3d?

    - by ifree
    I want to make an infinite cave in my 3D game using Flash Stage3D, but I have no idea how to build that cave. Can anyone give me a solution or a hint?

    Update: I've tried an AGAL fragment shader like the "Square Tunnel" on Shadertoy:

        var fragmentProgramCode:String = AGALUtils.build()
            .mov("ft0", "v0")
            .div("ft1", "ft0.xy", "fc3.xy")
            .mul("ft2", "fc6.x", "ft1")
            .sub("ft3", "ft2", "fc5.x")    // vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
            .mul("ft1", "ft3.x", "ft3.x")
            .mul("ft2", "ft3.y", "ft3.y")
            .pow("ft4", "ft1", "fc6.z")    // float r = pow( pow(p.x*p.x,16.0) + pow(p.y*p.y,16.0), 1.0/32.0 );
            .pow("ft5", "ft2", "fc6.z")
            .add("ft1", "ft4", "ft5")
            .pow("ft4", "ft1", "fc6.w")
            .mov("ft5", "fc5")             // uv
            .sub("ft1", "fc7.x", "ft4")
            .add("ft5.x", "fc7.x", "ft1")  // uv.x = .5*time + 0.5/r;
            .mov("ft6", "fc0")             // for atan
            .atan2("ft5.y", "ft3.y", "ft3.x",
                   new <String>["fc7.y","fc5.x","fc7.z","fc7.w","fc8.x","fc8.y","fc8.z","fc8.w","fc9.x","fc9.y"],
                   "ft6")
            .tex("ft0", "ft5", "fs0", "repeat", "linear", "nomip")  // tex
            .mul("ft1", "ft4", "ft4")
            .mul("ft2", "ft1", "ft4")      // r*r*r
            .mul("ft1", "ft0.xyz", "ft2")
            .mov("ft0.w", "fc5.x")
            .mov("oc", "ft1")
            .toString();

    It can only apply one material, but my project requires different types of material (like floor and ceiling), so I created a 3D model. Is there any way to make that 3D model render like an "infinite cave"? Should I use AGAL to make the texture of each side of the cave move? Thanks for your help.

  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing.

    I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure about some of the implementation. I have done file-IO maze generation before (using a key to read the file and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think heightmaps might be a good approach, but I don't know how to represent them effectively.

    1. For a heightmap, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item1 is the key reference, item2 is the elevation), or another approach that I have not thought of yet?

    2. When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make a slope noticeable at about a 60-degree viewing angle while not greatly affecting gravity-driven physics (assuming some effect while moving uphill/downhill)?

    3. I am thinking of maybe moving to procedural generation at some point, but am wondering whether it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or whether designing around thin walls and open spaces is better. This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls.

    EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevation will have effects on "combat" vision and movement.
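
    On the on-disk half of question 1: a CSV grid is the simplest thing that works, one text row per map row, and it stays hand-editable and diffable, while XML mainly buys a schema at the cost of verbosity. For reference, a minimal C++ sketch of a CSV heightmap loader, assuming plain comma-separated integer elevations rather than key/elevation pairs:

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Parse a CSV heightmap: one text row per map row, comma-separated integer heights.
        std::vector<std::vector<int>> loadHeightmap(const std::string& path)
        {
            std::vector<std::vector<int>> rows;
            std::ifstream in(path);
            std::string line;
            while (std::getline(in, line)) {
                std::vector<int> row;
                std::stringstream ss(line);
                std::string cell;
                while (std::getline(ss, cell, ','))
                    row.push_back(std::stoi(cell));
                rows.push_back(row);
            }
            return rows;
        }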

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-buffers for storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use a sphere-map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value.

    First, I want to avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-buffer without the need for a matrix multiplication?

    Second, I want to store the normal in view space. All lighting in my engine is currently in world space, and I want to do the lighting in view space to speed up my lighting pass. In short, I want an optimized lighting pass for my deferred engine.

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes (an example of such a scene can be seen below). I'm wondering what acceleration data structure would be most efficient in this case, since all objects are touching, but on the other hand the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box?

    A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact photon data structure they introduce, photon power is stored as 4 chars, which as I understand it is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). My other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per object's triangle) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
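
    On the 4-char photon power: those four bytes are not three colour channels plus a one-char flux. Jensen stores power in Ward's shared-exponent RGBE format, i.e. three 8-bit mantissas plus one common 8-bit exponent, and it is the exponent that lets a photon comfortably carry a flux on the order of 1/n. A sketch of the standard RGBE decode (the exact byte layout in any given photon-map implementation may differ):

        #include <cmath>
        #include <cstdint>

        // Decode Ward's RGBE: r, g, b are mantissas, the 4th byte is a shared
        // exponent biased by 128. A zero exponent decodes to exactly zero.
        void rgbeToFloat(const uint8_t rgbe[4], float rgb[3])
        {
            if (rgbe[3] == 0) {
                rgb[0] = rgb[1] = rgb[2] = 0.0f;
                return;
            }
            float scale = std::ldexp(1.0f, int(rgbe[3]) - (128 + 8));
            rgb[0] = rgbe[0] * scale;
            rgb[1] = rgbe[1] * scale;
            rgb[2] = rgbe[2] * scale;
        }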

  • Absorption 2D image effect

    - by Ed.
    I want to create a specific 2D image effect. It consists of modifying a sprite so it looks like it is being pulled into a point, or "absorbed" by that point. I'm not really sure of the technical name of this effect, so I cannot describe it precisely. Here is a video of what I'm talking about; it is the effect when the character absorbs the three glyphs: http://www.youtube.com/watch?v=PIo-GddsMcU&t=4m45s

    What is the name of this effect? How can I implement it with XNA for 2D textures/sprites?
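
    Effects like this are often informally called an "absorb" or "suck-in" effect, and a common way to fake it in 2D without shaders is to interpolate the sprite (or a few particle copies of it) toward the sink point while scaling it down, and optionally spinning it, over the effect's lifetime; XNA's SpriteBatch.Draw overloads with scale and rotation parameters map onto this directly. A minimal sketch of the per-frame update, with illustrative names and easing:

        // Ease the sprite toward the sink point while shrinking it over a
        // fixed lifetime; all constants here are tuning assumptions.
        struct AbsorbEffect {
            float x, y;             // current sprite centre
            float sinkX, sinkY;     // the absorbing point
            float scale    = 1.0f;  // feed this into the sprite draw call
            float age      = 0.0f;
            float duration = 0.6f;  // seconds until fully absorbed

            // Advance by dt seconds; returns false once fully absorbed.
            bool update(float dt) {
                age += dt;
                float t = age / duration;   // progress in [0, 1]
                if (t >= 1.0f) return false;
                x += (sinkX - x) * t;       // accelerating pull toward the sink
                y += (sinkY - y) * t;
                scale = 1.0f - t;           // shrink to nothing on arrival
                return true;
            }
        };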

  • Multiple setInterval in a HTML5 Canvas game

    - by kushsolitary
    I'm trying to achieve multiple animations in a game that I am creating using canvas (a simple ping-pong game). This is my first game and I am new to canvas, but I have created a few experiments before, so I have a fair knowledge of how canvas works. First, take a look at the game here. The problem is: when the ball hits the paddle, I want a burst of n particles at the point of contact, but that doesn't come out right. Even if I set the particle count to 1, particles just keep streaming from the point of contact and then vanish after some time. Also, I want the burst on every collision, but it occurs on the first collision only. I am pasting the code here:

        //Initialize canvas
        var canvas = document.getElementById("canvas"),
            ctx = canvas.getContext("2d"),
            W = window.innerWidth,
            H = window.innerHeight,
            particles = [],
            ball = {},
            paddles = [2],
            mouse = {},
            points = 0,
            fps = 60,
            particlesCount = 50,
            flag = 0,
            particlePos = {};

        canvas.addEventListener("mousemove", trackPosition, true);

        //Set its height and width to full screen
        canvas.width = W;
        canvas.height = H;

        //Function to paint canvas
        function paintCanvas() {
            ctx.globalCompositeOperation = "source-over";
            ctx.fillStyle = "black";
            ctx.fillRect(0, 0, W, H);
        }

        //Create two paddles
        function createPaddle(pos) {
            //Height and width
            this.h = 10;
            this.w = 100;
            this.x = W/2 - this.w/2;
            this.y = (pos == "top") ? 0 : H - this.h;
        }

        //Push two paddles into the paddles array
        paddles.push(new createPaddle("bottom"));
        paddles.push(new createPaddle("top"));

        //Setting up the parameters of the ball
        ball = {
            x: 2, y: 2, r: 5, c: "white", vx: 4, vy: 8,
            draw: function() {
                ctx.beginPath();
                ctx.fillStyle = this.c;
                ctx.arc(this.x, this.y, this.r, 0, Math.PI*2, false);
                ctx.fill();
            }
        };

        //Function for creating particles
        function createParticles(x, y) {
            this.x = x || 0;
            this.y = y || 0;
            this.radius = 0.8;
            this.vx = -1.5 + Math.random()*3;
            this.vy = -1.5 + Math.random()*3;
        }

        //Draw everything on canvas
        function draw() {
            paintCanvas();
            for(var i = 0; i < paddles.length; i++) {
                p = paddles[i];
                ctx.fillStyle = "white";
                ctx.fillRect(p.x, p.y, p.w, p.h);
            }
            ball.draw();
            update();
        }

        //Track mouse position
        function trackPosition(e) {
            mouse.x = e.pageX;
            mouse.y = e.pageY;
        }

        //Function to increase speed after every 4 points
        function increaseSpd() {
            if(points % 4 == 0) {
                ball.vx += (ball.vx < 0) ? -1 : 1;
                ball.vy += (ball.vy < 0) ? -2 : 2;
            }
        }

        //Function to update positions
        function update() {
            //Move the paddles on mouse move
            if(mouse.x && mouse.y) {
                for(var i = 1; i < paddles.length; i++) {
                    p = paddles[i];
                    p.x = mouse.x - p.w/2;
                }
            }

            //Move the ball
            ball.x += ball.vx;
            ball.y += ball.vy;

            //Collision with paddles
            p1 = paddles[1];
            p2 = paddles[2];
            if(ball.y >= p1.y - p1.h) {
                if(ball.x >= p1.x && ball.x <= (p1.x - 2) + (p1.w + 2)) {
                    ball.vy = -ball.vy;
                    points++;
                    increaseSpd();
                    particlePos.x = ball.x, particlePos.y = ball.y;
                    flag = 1;
                }
            } else if(ball.y <= p2.y + 2*p2.h) {
                if(ball.x >= p2.x && ball.x <= (p2.x - 2) + (p2.w + 2)) {
                    ball.vy = -ball.vy;
                    points++;
                    increaseSpd();
                    particlePos.x = ball.x, particlePos.y = ball.y;
                    flag = 1;
                }
            }

            //Collide with walls
            if(ball.x >= W || ball.x <= 0)
                ball.vx = -ball.vx;
            if(ball.y > H || ball.y < 0) {
                clearInterval(int);
            }

            if(flag == 1) {
                setInterval(emitParticles(particlePos.x, particlePos.y), 1000/fps);
            }
        }

        function emitParticles(x, y) {
            for(var k = 0; k < particlesCount; k++) {
                particles.push(new createParticles(x, y));
            }
            counter = particles.length;
            for(var j = 0; j < particles.length; j++) {
                par = particles[j];
                ctx.beginPath();
                ctx.fillStyle = "white";
                ctx.arc(par.x, par.y, par.radius, 0, Math.PI*2, false);
                ctx.fill();
                par.x += par.vx;
                par.y += par.vy;
                par.radius -= 0.02;
                if(par.radius < 0) {
                    counter--;
                    if(counter < 0)
                        particles = [];
                }
            }
        }

        var int = setInterval(draw, 1000/fps);

    Now, my function for emitting particles is on line 156, and I have called this function on line 151. The problem here may be that I am not resetting the flag variable, but I tried doing that and got even weirder results. You can check that out here. By resetting the flag variable, the problem of infinite particles gets resolved, but now the particles only animate and appear while the ball collides with the paddles. So I am now out of ideas.

  • Implementing RLE in a tilemap, or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for the tiles in my 2D world; the third dimension comes in when moving down into caves and the like. This is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that a tile cannot occupy the same space as a tile on the same z-level. My current structure means that each block has its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    However, now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I try to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages.

    Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array whose 4th dimension controls how many tiles to skip? Or would the x-axis change altogether, with large gaps in between (for example, [5][y][z] would skip 5 tiles)? Is there something really obvious here that I am missing? The number of z-levels I'm trying to have is around 66, and it would be preferable to have up to (or more than) 1000 in x and y.
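
    A common way to run-length encode the vertical axis is not a 4D array but a per-column list of runs: world[x][y] holds a short list of (block type, count) pairs, so 66 mostly-uniform z-levels collapse to a handful of entries while x and y stay plain 2D. A minimal C++ sketch of the idea, with Block reduced to a type id for brevity; the concept carries straight over to Java:

        #include <cstdint>
        #include <vector>

        struct Run {
            uint8_t type;   // block type shared by the whole run
            uint8_t count;  // how many consecutive z-levels it covers
        };

        // One run-length encoded column of blocks along the z axis.
        struct Column {
            std::vector<Run> runs;

            uint8_t blockAt(int z) const {
                for (const Run& r : runs) {
                    if (z < r.count) return r.type;
                    z -= r.count;   // skip past this run
                }
                return 0;           // above the top: empty
            }
        };

        // world[x][y] is a Column; the x and y axes stay an ordinary 2D grid
        using World = std::vector<std::vector<Column>>;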

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin-Android. The sample produces a rotating colored cube, and I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by: adding a cumulative variable to store my Z distance; adding GL.Enable(All.DepthBufferBit) (unsure if I put it in the right place); and adding GL.Translate(0.0f, 0.0f, Depth) before the rotate calls. Result: the cube rotates a couple of times, then disappears; it seems to be getting clipped out of the frustum.

    So my question is: what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls, but am unsure of what they are and where to put them. I apologise in advance, as this is very basic stuff, but I am still learning. I would appreciate it if anyone could show me the best way to keep the cube rotating while also moving it along the Z axis. I have commented all my modifications in the code:

        // This gets called when the drawing surface is ready
        protected override void OnLoad (EventArgs e)
        {
            // this call is optional, and meant to raise delegates
            // in case any are registered
            base.OnLoad (e);

            // UpdateFrame and RenderFrame are called by the render loop.
            // This takes effect when we use 'Run ()', like below
            UpdateFrame += delegate (object sender, FrameEventArgs args)
            {
                // Rotate at a constant speed
                for (int i = 0; i < 3; i++)
                    rot[i] += (float)(rateOfRotationPS[i] * args.Time);
            };

            RenderFrame += delegate
            {
                RenderCube ();
            };

            GL.Enable(All.DepthBufferBit); // Added by Noob
            GL.Enable(All.CullFace);
            GL.ShadeModel(All.Smooth);
            GL.Hint(All.PerspectiveCorrectionHint, All.Nicest);

            // Run the render loop
            Run (30);
        }

        void RenderCube ()
        {
            GL.Viewport(0, 0, viewportWidth, viewportHeight);

            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            if (viewportWidth > viewportHeight)
                GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f);
            else
                GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);

            GL.MatrixMode (All.Modelview);
            GL.LoadIdentity ();
            Depth -= 0.02f;                   // Added by Noob
            GL.Translate(0.0f, 0.0f, Depth);  // Added by Noob
            GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
            GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);
            GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f);

            GL.ClearColor (0, 0, 0, 1.0f);
            GL.Clear (ClearBufferMask.ColorBufferBit);

            GL.VertexPointer(3, All.Float, 0, cube);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.Float, 0, cubeColors);
            GL.EnableClientState (All.ColorArray);

            GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles);

            SwapBuffers ();
        }
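
    Two of the modifications are the likely culprits, as far as I can tell. First, All.DepthBufferBit is a clear-flag, not a capability, so GL.Enable(All.DepthBufferBit) does not turn on depth testing; the capability is All.DepthTest, and the depth bit belongs in GL.Clear instead. Second, the projection is an orthographic box whose near/far range is only -1 to 1, so a cube translated cumulatively along Z leaves the volume after a few frames, which is exactly the clipping described; an orthographic projection also has no foreshortening, so the cube would never appear to recede anyway. In raw OpenGL terms (the OpenTK calls mirror these one-for-one), a sketch of the usual setup:

        static float depth = 0.0f;  // cumulative Z offset, as in the question

        // once, at startup: depth testing is a capability, the buffer bit is not
        glEnable(GL_DEPTH_TEST);

        // a perspective projection with enough depth range to travel into
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);  // near 1, far 100

        // each frame: clear colour AND depth, then slide down -Z before rotating
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        depth -= 0.02f;
        glTranslatef(0.0f, 0.0f, depth);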

  • How would I balance a multiplayer competitive game

    - by Simon
    I'm looking at my first foray into developing a game, and I would love to know whether you have any thoughts on balancing limited multiplayer games. The game I have in mind involves a neutral player who has to achieve a goal, with two supporting "deity" players, one 'good' and one 'evil': one deity player tries to help the neutral player achieve the goal, while the other tries to thwart them. Any thoughts or pointers on how I can ensure the deities are balanced? If you want me to expand, I will; I just didn't want to give away too much of the gameplay before I finish it.

  • OpenGL profiling with AMD PerfStudio 2

    - by Aurus
    I'm rendering just a really small number of polygons for my UI, but I have still tried to increase the FPS. In the end I removed redundant calls, which did increase the FPS. I really don't want to lose FPS for nothing, so I keep looking for more improvements.

    The first thing I noticed is the "huge" stretch of time where no calls are made before SwapBuffer (the black one). I know that OpenGL works asynchronously, so SwapBuffer has to wait until everything is done. But shouldn't PerfStudio mark this time as black too? Correct me if I am wrong.

    The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). They should all upload two floats to the GPU, so how can the time differ so much from call to call? The program isn't even changed between them. I also tried to look at other tools like gDebugger or CodeXL, but they often crashed and they show fewer statistics (only the number of calls, redundant calls, etc.).

    EDIT: I also realized that the draw calls have different durations, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.

  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating camera collision for my room model, which consists of a sofa, tables and other models. The user will move the camera forward and back and rotate it, so I need to make sure that the camera does not collide with any of the models in the room. I have wrapped all the models inside the room in a BoundingBox[] and the camera in a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html which was great. But I guess the problem lies in the transformation part. I debugged and found some points at Vector3(-XXX, -XXX, -XXX) where X is a digit. I also found the radius of some models to be too large (in the thousands; I looked at the radius value before converting to a BoundingBox). Do I need to scale the models for collision? Below is my code.

    In my LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);
        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);
        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();
            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    On camera position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.

  • Vertex fog producing black artifacts

    - by Nick
    I originally posted this question on the XNA forums but got no replies, so maybe someone here can help. I am rendering a textured model using the XNA BasicEffect. When I enable fog, the model outline is still visible as many small black dots when it should be "in the fog". Why is this happening? Here's what it looks like for me: http://tinypic.com/r/fnh440/6

    Here is a minimal example showing my problem (the ship model that this example uses is from the chase camera sample on this site, http://xbox.create.msdn.com/en-US/education/catalog/sample/chasecamera, in case anyone wants to try it out):

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Model model;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);

                // TODO: use this.Content to load your game content here
                model = Content.Load<Model>("ship");
                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (BasicEffect be in mesh.Effects)
                    {
                        be.EnableDefaultLighting();
                        be.FogEnabled = true;
                        be.FogColor = Color.CornflowerBlue.ToVector3();
                        be.FogStart = 10;
                        be.FogEnd = 30;
                    }
                }
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);

                // TODO: Add your drawing code here
                model.Draw(Matrix.Identity * Matrix.CreateScale(0.01f) * Matrix.CreateRotationY(3 * MathHelper.PiOver4),
                           Matrix.CreateLookAt(new Vector3(0, 0, 30), Vector3.Zero, Vector3.Up),
                           Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 16f/9f, 1, 100));

                base.Draw(gameTime);
            }
        }

  • Models with more than one mesh in JMonkeyEngine

    - by Andrea Tucci
    I'm a new jMonkeyEngine developer and I'm beginning to import models. I tried importing simple models with no problems, but when I export an .obj model that has more than one mesh in the OgreXML format, Blender saves multiple meshes, each with its own material (e.g. one mesh for the face, another for the body, etc.). Can I export all the meshes as one? I've tried joining all the meshes into a main one in Blender (the face joins the body), but when I export the model and then create the Spatial in jME (loading the path of the "merged" mesh), all the meshes joined to the main one lose their materials!

    To give a clearer example: I have an .obj model with 3 meshes, and when I export it I get mesh1.mesh.xml, mesh2.mesh.xml, mesh3.mesh.xml and their materials mesh1.material, mesh2.material, mesh3.material. So I import the folder into Assets/Models/Test, and now I have to create something like:

        Spatial head = assetManager.loadModel( [path] );
        Spatial face = assetManager.loadModel( [path] );

    one for each mesh, and then attach them to a common node. I think there must be a way to merge those meshes while keeping their materials! What do you think? Thanks.

  • Importing FBX with multiple meshes in UDK

    - by andresp
    I need to import into UDK a fair number of FBX models (representing buildings) which are composed of various submeshes (walls, windows, roof...). I need to keep the individual meshes (I can't use the merge option), but I also need to work with each building as a whole. Do you know if this is possible, and how? Also, is there a way to keep the texture assignments for the FBX models after importing them into Unreal? Doing the process manually (importing the model, importing the textures, assigning them to the materials, assigning the material to each mesh and submesh) for 100 or 200 models (to import an entire city from City Engine) isn't viable.

  • How can I move a static Box2D object?

    - by user5198
    How can I move static Box2D sprites? I have tried the tutorial at http://www.raywenderlich.com/475/how-to-create-a-simple-breakout-game-with-box2d-and-cocos2d-tutorial-part-12 and managed to add another "paddle" object with a Box2D body, but I can't seem to make the code move the second "paddle" body. Can anyone tell me how to do it? Is there a way to move a b2_staticBody Box2D object? I have tried, but I can only move it when I use b2_dynamicBody; if I use b2_staticBody I can't move it at all.
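
    A b2_staticBody is immovable by design: the solver gives it infinite mass and ignores SetLinearVelocity (SetTransform can teleport it, but that can tunnel through contacts). The body type built for player-driven paddles is b2_kinematicBody, which follows velocities you set but is unaffected by forces and gravity, and still collides with dynamic bodies. A C++ sketch (the tutorial's Objective-C/Cocos2d layer wraps these same Box2D calls; the position and size variables are placeholders):

        // A kinematic body: honours SetLinearVelocity, ignores gravity and forces.
        b2BodyDef paddleDef;
        paddleDef.type = b2_kinematicBody;
        paddleDef.position.Set(paddleX, paddleY);
        b2Body* paddle = world->CreateBody(&paddleDef);

        b2PolygonShape box;
        box.SetAsBox(halfWidth, halfHeight);
        paddle->CreateFixture(&box, 0.0f);

        // Each frame, steer it by velocity instead of teleporting it:
        paddle->SetLinearVelocity(b2Vec2(targetSpeed, 0.0f));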

  • How do I pass an object location into a vertex shader?

    - by Greg Kassapidis
    I am using the Blender Game Engine. I want to create a large flat plane and deform it locally near a moving object. So far (despite being a beginner at shaders) I've written a vertex shader for the plane which moves the vertices to their correct positions (constant positions, for now). I cannot find a way to swap that constant location for an object's location, updated every frame while the shader is running. I am not even sure if it's possible. I only want to access a specific object's center from the shader.
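
    What this describes is a uniform: a per-draw shader input that the CPU updates every frame, which is the normal way to feed a moving object's centre into a vertex shader. In raw OpenGL terms it looks like the sketch below ("program" and the other names are illustrative); the Blender Game Engine exposes the same idea through its shader API (a setUniform-style call on the shader object, made from a script that reads the object's world position each frame), so this shows the concept rather than BGE-specific code:

        // once, after linking the shader program:
        GLint objPosLoc = glGetUniformLocation(program, "objectPos");

        // every frame, before drawing the plane:
        glUseProgram(program);
        glUniform3f(objPosLoc, objX, objY, objZ);  // the mover's world-space centre

        // in the vertex shader (GLSL):
        //     uniform vec3 objectPos;
        //     ...displace each vertex based on its distance to objectPos...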

  • SFML - Moving a sprite on mouseclick

    - by Mike
    I want to be able to move a sprite from its current location to another based upon where the user clicks in the window. This is the code that I have:

        #include <SFML/Graphics.hpp>

        int main()
        {
            // Create the main window
            sf::RenderWindow App(sf::VideoMode(800, 600), "SFML window");

            // Load a sprite to display
            sf::Texture Image;
            if (!Image.LoadFromFile("cb.bmp"))
                return EXIT_FAILURE;
            sf::Sprite Sprite(Image);

            // Define the speed of the sprite
            float spriteSpeed = 200.f;

            // Start the game loop
            while (App.IsOpened())
            {
                if (sf::Keyboard::IsKeyPressed(sf::Keyboard::Escape))
                    App.Close();

                if (sf::Mouse::IsButtonPressed(sf::Mouse::Right))
                {
                    Sprite.SetPosition(sf::Mouse::GetPosition(App).x, sf::Mouse::GetPosition(App).y);
                }

                // Clear screen
                App.Clear();

                // Draw the sprite
                App.Draw(Sprite);

                // Update the window
                App.Display();
            }

            return EXIT_SUCCESS;
        }

    But instead of just setting the position I want to use Sprite.Move() and gradually move the sprite from one position to another. The question is: how? Later I plan on adding a node system to each map so I can use Dijkstra's algorithm, but I'll still need this for moving between nodes.
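
    The usual pattern is to remember the click as a target, then each frame move the sprite along the normalised direction to that target by speed times elapsed time, snapping when the remaining distance is smaller than one step so it never overshoots. A sketch in the same early-SFML-2 naming style as the question (method capitalisation changed between SFML releases, so adjust to your version; std::sqrt needs <cmath>):

        // stored outside the loop:
        sf::Vector2f target = Sprite.GetPosition();
        sf::Clock clock;

        // inside the game loop:
        float dt = clock.Restart().AsSeconds();
        if (sf::Mouse::IsButtonPressed(sf::Mouse::Right))
            target = sf::Vector2f(sf::Mouse::GetPosition(App).x,
                                  sf::Mouse::GetPosition(App).y);

        sf::Vector2f delta = target - Sprite.GetPosition();
        float dist = std::sqrt(delta.x * delta.x + delta.y * delta.y);
        float step = spriteSpeed * dt;
        if (dist <= step)
            Sprite.SetPosition(target);        // close enough: snap, don't overshoot
        else
            Sprite.Move(delta / dist * step);  // unit direction * distance this frame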

  • Question about component-based design: handling object interaction

    - by Milo
    I'm not sure how exactly objects do things to other objects in a component-based design. Say I have an Obj class, and I do:

        Obj obj;
        obj.add(new Position());
        obj.add(new Physics());

    How could I then have another object not only move this one but also have those physics applied? I'm not looking for implementation details, but rather, abstractly, how objects communicate. In an entity-based design, you might just have:

        obj1.emitForceOn(obj2, 5.0, 0.0, 0.0);

    Any article or explanation that would give me a better grasp of component-driven design and how to do basic things with it would be really helpful.
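
    One widely used answer is that components never call each other directly: an entity routes messages to its components, so "emit force on obj2" becomes a message delivered to obj2, and whichever component understands forces (Physics) reacts to it. A bare-bones sketch of that pattern; all names are illustrative rather than from any particular engine:

        #include <memory>
        #include <vector>

        struct Message {            // e.g. an applied force
            float fx, fy, fz;
        };

        struct Component {
            virtual ~Component() = default;
            virtual void receive(const Message&) {}  // default: ignore
        };

        struct Physics : Component {
            float vx = 0, vy = 0, vz = 0;
            void receive(const Message& m) override {
                vx += m.fx; vy += m.fy; vz += m.fz;  // accumulate the impulse
            }
        };

        struct Obj {
            std::vector<std::unique_ptr<Component>> components;
            void add(Component* c) { components.emplace_back(c); }
            // broadcast: each component decides whether the message concerns it
            void send(const Message& m) {
                for (auto& c : components) c->receive(m);
            }
        };

        // usage: obj2.send(Message{5.0f, 0.0f, 0.0f});  // "emit force on obj2"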

  • Creating a curved mesh on inside of sphere based on texture image coordinates

    - by user5025
    In Blender, I have created a sphere with a panoramic texture on the inside. I have also manually created a plane mesh (curved to match the shape of the sphere) that sits on the inside wall, where I can draw a different texture. This is great, but I really want to reduce the manual labor and do some of this work in a script, with a variable for the panoramic image and the coordinates of the area in the photograph that I want to replace with a new mesh. The hardest part of doing this is going to be creating a curved mesh in code that can sit on the inside wall of a sphere. Can anyone point me in the right direction?
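
    The curved patch itself is only a parametric grid: map the chosen rectangle of the panorama's UV space to longitude/latitude ranges and evaluate grid points at (or just inside) the sphere's radius. A language-neutral sketch of the vertex generation, assuming an equirectangular panorama (in a Blender script the resulting points and quads would be fed to the mesh-building API):

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Build an (n+1)-by-(m+1) grid of vertices on the inside of a sphere,
        // covering the UV rectangle [u0,u1]x[v0,v1] of an equirectangular image.
        std::vector<Vec3> spherePatch(float radius, float u0, float u1,
                                      float v0, float v1, int n, int m)
        {
            const float PI = 3.14159265f;
            std::vector<Vec3> verts;
            for (int i = 0; i <= n; ++i) {
                float u = u0 + (u1 - u0) * i / n;
                float lon = u * 2.0f * PI;            // u spans the full longitude
                for (int j = 0; j <= m; ++j) {
                    float v = v0 + (v1 - v0) * j / m;
                    float lat = (v - 0.5f) * PI;      // v: 0 = south pole, 1 = north
                    verts.push_back({ radius * std::cos(lat) * std::cos(lon),
                                      radius * std::cos(lat) * std::sin(lon),
                                      radius * std::sin(lat) });
                }
            }
            return verts;
        }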

  • What is UVIndex and how do I use it on OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture-coordinate buffer, but this model also has a "UVIndex", and I'm not sure where I am supposed to put this UVIndex. My code looks like this:

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);

        GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
        GL.activeTexture(GL.TEXTURE0);

        GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file. Since I've never drawn a model that uses a UVIndex, I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
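
    The UVIndex exists because FBX-style data indexes each attribute separately: every face corner carries a position index and its own UV index, while drawElements supports exactly one shared index per vertex, which is why ignoring UVIndex scrambles the texturing. The standard remedy is to re-index the model once at load time, creating one vertex per distinct (position index, UV index) pair. A C++ sketch of that expansion (the same loop translates directly to JavaScript):

        #include <cstdint>
        #include <map>
        #include <utility>
        #include <vector>

        // Inputs: per-corner position and UV indices (same length), plus the raw
        // position (3 floats each) and UV (2 floats each) pools they point into.
        void reindex(const std::vector<uint32_t>& posIdx,
                     const std::vector<uint32_t>& uvIdx,
                     const std::vector<float>& positions,
                     const std::vector<float>& uvs,
                     std::vector<float>& outVerts,      // interleaved x y z u v
                     std::vector<uint32_t>& outIndices) // one shared index stream
        {
            std::map<std::pair<uint32_t, uint32_t>, uint32_t> seen;
            for (size_t c = 0; c < posIdx.size(); ++c) {
                auto key = std::make_pair(posIdx[c], uvIdx[c]);
                auto it = seen.find(key);
                if (it == seen.end()) {
                    // first time this (position, uv) pairing appears: emit a vertex
                    uint32_t v = uint32_t(outVerts.size() / 5);
                    outVerts.insert(outVerts.end(),
                                    { positions[3*key.first],
                                      positions[3*key.first + 1],
                                      positions[3*key.first + 2],
                                      uvs[2*key.second],
                                      uvs[2*key.second + 1] });
                    it = seen.emplace(key, v).first;
                }
                outIndices.push_back(it->second);
            }
        }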
