Search Results

Search found 16410 results on 657 pages for 'game component'.


  • AABB vs OBB Collision Resolution jitter on corners

    - by patt4179
    I've implemented a collision library for a character that is an AABB, and I'm resolving collisions between AABB vs AABB and AABB vs OBB. I wanted slopes for certain sections, so I've toyed around with using several OBBs to make one, and it's working great except for one glaring issue: the collision resolution on the corner of an OBB makes the player's AABB jitter up and down constantly. I've tried a few things I've thought of, but I just can't wrap my head around what's going on exactly. Here's a video of what's happening, as well as my code.

    Here's the function that computes the collision resolution (I'm likely not doing this the right way, so this may be where the issue lies):

    public Vector2 GetCollisionResolveAmount(RectangleCollisionObject resolvedObject, OrientedRectangleCollisionObject b)
    {
        Vector2 overlap = Vector2.Zero;
        LineSegment edge = GetOrientedRectangleEdge(b, 0);

        if (!SeparatingAxisForRectangle(edge, resolvedObject))
        {
            LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment();
            Range axisRange = new Range(), rEdgeARange = new Range(), rEdgeBRange = new Range(), rProjection = new Range();
            Vector2 n = edge.PointA - edge.PointB;

            rEdgeA.PointA = RectangleCorner(resolvedObject, 0);
            rEdgeA.PointB = RectangleCorner(resolvedObject, 1);
            rEdgeB.PointA = RectangleCorner(resolvedObject, 2);
            rEdgeB.PointB = RectangleCorner(resolvedObject, 3);
            rEdgeARange = ProjectLineSegment(rEdgeA, n);
            rEdgeBRange = ProjectLineSegment(rEdgeB, n);
            rProjection = GetRangeHull(rEdgeARange, rEdgeBRange);
            axisRange = ProjectLineSegment(edge, n);

            float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2;
            float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2;

            if (projectionMid > axisMid)
            {
                overlap.X = axisRange.Maximum - rProjection.Minimum;
            }
            else
            {
                overlap.X = rProjection.Maximum - axisRange.Minimum;
                overlap.X = -overlap.X;
            }
        }

        edge = GetOrientedRectangleEdge(b, 1);
        if (!SeparatingAxisForRectangle(edge, resolvedObject))
        {
            LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment();
            Range axisRange = new Range(), rEdgeARange = new Range(), rEdgeBRange = new Range(), rProjection = new Range();
            Vector2 n = edge.PointA - edge.PointB;

            rEdgeA.PointA = RectangleCorner(resolvedObject, 0);
            rEdgeA.PointB = RectangleCorner(resolvedObject, 1);
            rEdgeB.PointA = RectangleCorner(resolvedObject, 2);
            rEdgeB.PointB = RectangleCorner(resolvedObject, 3);
            rEdgeARange = ProjectLineSegment(rEdgeA, n);
            rEdgeBRange = ProjectLineSegment(rEdgeB, n);
            rProjection = GetRangeHull(rEdgeARange, rEdgeBRange);
            axisRange = ProjectLineSegment(edge, n);

            float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2;
            float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2;

            if (projectionMid > axisMid)
            {
                overlap.Y = axisRange.Maximum - rProjection.Minimum;
                overlap.Y = -overlap.Y;
            }
            else
            {
                overlap.Y = rProjection.Maximum - axisRange.Minimum;
            }
        }

        return overlap;
    }

    And here is what I'm doing to resolve the collision right now:

    if (collisionDetection.OrientedRectangleAndRectangleCollide(obb, player.PlayerCollision))
    {
        var resolveAmount = collisionDetection.GetCollisionResolveAmount(player.PlayerCollision, obb);

        if (Math.Abs(resolveAmount.Y) < Math.Abs(resolveAmount.X))
        {
            var roundedAmount = (float)Math.Floor(resolveAmount.Y);
            player.PlayerCollision._position.Y -= roundedAmount;
        }
        else if (Math.Abs(resolveAmount.Y) <= 30.0f) // Catch cases where the player should be able to step over the top of something
        {
            var roundedAmount = (float)Math.Floor(resolveAmount.Y);
            player.PlayerCollision._position.Y -= roundedAmount;
        }
        else
        {
            var roundedAmount = (float)Math.Floor(resolveAmount.X);
            player.PlayerCollision._position.X -= roundedAmount;
        }
    }

    Can anyone see what might be the issue here, or has anyone experienced this before and knows a possible solution? I've tried for a few days to figure this out on my own, but I'm just stumped.
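    One pattern that often tames this kind of corner jitter (a sketch only, not the poster's library; Vec2, the slop value and the helper name are invented for illustration) is to resolve along the single axis of least penetration, skip corrections smaller than a small tolerance, and avoid rounding the correction with Floor, which otherwise re-introduces a one-pixel oscillation:

    // Hypothetical sketch: pick the axis of least penetration and ignore overlaps
    // below a small tolerance so sub-pixel corrections on an OBB corner stop
    // alternating between the X and Y axes.
    #include <cmath>

    struct Vec2 { float x, y; };

    // 'overlap' is the signed penetration produced by a SAT test like the one above.
    Vec2 ResolveWithSlop(Vec2 overlap, float slop = 0.01f)
    {
        Vec2 correction{0.0f, 0.0f};
        if (std::fabs(overlap.x) < std::fabs(overlap.y))
        {
            if (std::fabs(overlap.x) > slop) correction.x = overlap.x;   // push out on X only
        }
        else
        {
            if (std::fabs(overlap.y) > slop) correction.y = overlap.y;   // push out on Y only
        }
        return correction; // apply as-is; rounding the amount tends to cause jitter
    }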

  • 3D physics engine for accurate collision handling on desktop/laptop computers (non-console)

    - by Georges Oates Larsen
    What are your suggestions for a physics engine that satisfies the following criteria?

    - Capable of calculating collisions between multiple concave mesh-based colliders.
    - Handles many simultaneous collisions (for instance one mesh being wedged between two others, which themselves may be wedged between two meshes).
    - Does not allow colliders to pass through each other, even at high speeds. For instance, if I am applying force to a programmatically hinged object that makes it spin, I do not want it to pass through another rigid body that it collides with while spinning. I have this problem using PhysX.
    - As implied above, reacts well to hinged objects. Preferably it has its own implementation of a hinge, but I am willing to program my own; the important part is that it has some sort of interface that guarantees accurate collision tracking even when dealing with these things.
    - Platform independent: runs on Mac as well as PC, and is not tied down to specific graphics cards.

    I think that's the best way to explain what I am looking for. Basically, I need SUPER reliable collisions: something that can't be accomplished with a simple ray-casting approach that casts a ray from the object's last position to its current position (as this object may be large and may collide with small objects via rotation). Bonus points for an open-source engine.

  • How to achieve selection of a tile from a tile sheet based on an ID?

    - by Bugster
    Let's say I have a tile sheet that contains 8 sprites per sheet. Each sprite is a tile of 30x30. I wrote my own custom map parser/map loader, but I'm having trouble extracting a certain tile sprite from the file. I'll describe my problem in more detail so everyone can understand.

    I wrote an enum of materials, and each material has a value according to its location in the tile sheet. For example, void is 1, grass is 2, rock is 3, etc. So in my tile sheet they are represented as such:

    +---+---+---+---+---+
    | 1 | 2 | 3 | 4 | 5 |
    +---+---+---+---+---+

    which is equivalent to:

    +------+-------+-------+
    | void | grass | stone |
    +------+-------+-------+

    For rendering I created a tile class. Each tile has two coordinates, X and Y (calculated automatically), and a material which can be represented either as a number or as a value (ID). When rendering, I have a vector of sprites which are all taken from one file called tilesheet.png; however, each of them must only draw a certain portion of the tile sheet. For example, say I have something like this:

    tile coordinateBounds(topLeftX, topLeftY, tileWidth, tileHeight);

    During the initialization of the map I calculate an array of tiles and give each of them their position, their material (based on the values in a map file) and a few other variables such as collision. I need to apply the coordinateBounds to each of them according to their material value. For example, if the material is grass it should only take the grass sprite from the tilesheet. I should also mention I'm using SFML, and there are no borders or spacing between the tiles.
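    A common way to map a material ID to its sub-rectangle is to convert the ID into a column/row pair within the sheet and build the rectangle from that. A sketch, assuming SFML 2.x, that IDs start at 1 and run left to right, and using made-up helper names:

    #include <SFML/Graphics.hpp>

    // Hypothetical helper: which 30x30 cell of tilesheet.png a material ID refers to.
    sf::IntRect TileRectForMaterial(int materialId, int tileSize = 30, int columns = 8)
    {
        int index  = materialId - 1;          // assuming IDs start at 1
        int column = index % columns;         // cell position within the sheet
        int row    = index / columns;
        return sf::IntRect(column * tileSize, row * tileSize, tileSize, tileSize);
    }

    // Usage sketch: point the sprite at the shared sheet, then clip it to one tile.
    // sf::Sprite sprite(tilesheetTexture);
    // sprite.setTextureRect(TileRectForMaterial(tile.material));
    // sprite.setPosition(tile.x * 30.0f, tile.y * 30.0f);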

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using that to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries; by virtue of using the same resources as VLC Player it is known to be very capable, and it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more of a port of the C++ API than a managed wrapper.

    The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:

    - Renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily.
    - Super simple: all that is needed is a way to load, seek and render a frame programmatically. Ideally it would use the system's codecs and not require an assortment of plugins.
    - Permissive license (LGPL or freer).
    - .NET bindings at least; all the better if it is natively managed.

    Can anyone suggest a lightweight .NET library that can take a video file and spit out some frames into a byte[]?

  • Client-side prediction/simulation question

    - by Legendre
    I found a related question but it doesn't have what I needed. Client A sends input to move at T0. Server receives input at T1. All clients receive the change at T2. Question: With client-side prediction, client A would start moving at T0, client-side. All other clients receive the change at T2, so to them, client A only started moving at T2. If I understand correctly, client B will always see client A's past position and not his current position? How do I sync both client B and client A?
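    The usual answer (a sketch only; the constants and type names below are illustrative, not from the question) is that client B never sees client A "now": B renders A a fixed interpolation delay in the past, blending between the two most recent server snapshots, while A predicts its own movement locally and is corrected when the authoritative state arrives. A minimal remote-entity interpolator might look like this:

    #include <deque>

    struct Snapshot { double serverTime; float x, y; };   // state of client A as reported by the server

    // Hypothetical remote-entity view on client B: render A at (now - interpDelay),
    // interpolating between the two snapshots that straddle that time.
    struct RemoteEntity {
        std::deque<Snapshot> history;
        double interpDelay = 0.1;                          // e.g. render 100 ms behind the server

        Snapshot Sample(double now) const {
            double renderTime = now - interpDelay;
            for (size_t i = 1; i < history.size(); ++i) {
                const Snapshot& a = history[i - 1];
                const Snapshot& b = history[i];
                if (a.serverTime <= renderTime && renderTime <= b.serverTime) {
                    float t = float((renderTime - a.serverTime) / (b.serverTime - a.serverTime));
                    return { renderTime, a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
                }
            }
            // Hold the latest known state if we are starved of snapshots.
            return history.empty() ? Snapshot{now, 0.0f, 0.0f} : history.back();
        }
    };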

  • Pre-baked fractures and explosions: I need an answer for C++

    - by Ken
    What are pre-baked or precomputed explosions and fractures, from a programmer's viewpoint? I would like to know how to achieve this in C++ and how these things are usually represented (are they animations? textures?). It would be perfect if some examples were available, or if someone could paint a broad picture of this. I need to add really minimal support for this in my code and I need a hint about how to start; I would like to do this on my own, without other libraries.
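    A pre-baked fracture is usually authored offline: the mesh is split into pieces in a DCC tool, or a physics simulation is run once and recorded. At runtime you swap the intact mesh for the pieces and simply play back the recorded per-piece transforms, so no physics runs during the explosion. A minimal data layout might look like this (a sketch under that assumption; every name here is invented for illustration):

    #include <vector>

    struct Transform { float position[3]; float rotation[4]; };   // quaternion rotation

    // One fractured piece: its mesh plus a baked track of transforms, one per keyframe.
    struct FracturePiece {
        int meshId;                                // handle to the piece's geometry
        std::vector<Transform> track;              // sampled at a fixed rate, e.g. 30 Hz
    };

    struct BakedFracture {
        float sampleRate = 30.0f;
        std::vector<FracturePiece> pieces;
    };

    // Playback: pick the keyframe for the current time and draw each piece with it.
    // Assumes the baked track has at least one keyframe.
    inline const Transform& SamplePiece(const FracturePiece& p, float time, float rate)
    {
        size_t frame = static_cast<size_t>(time * rate);
        if (frame >= p.track.size()) frame = p.track.size() - 1;   // hold the last pose
        return p.track[frame];
    }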

  • How to get this wavefront .obj data onto the frustum?

    - by NoobScratcher
    I've finally figured out how to get the data from a .obj file and store the vertex positions x, y, z into a structure called Point, with members x, y, z of type float. I want to know how to get this data onto the screen. Here is my attempt at doing so:

    //make a fileobject and store list and the index of that list in a c string
    ifstream file (list[index].c_str() );

    std::vector<int> faces;
    std::vector<Point> points;
    points.push_back(Point());
    Point p;
    int face[4];

    while ( !file.eof() )
    {
        char modelbuffer[10000];
        //Get lines and store it in line string
        file.getline(modelbuffer, 10000);

        switch(modelbuffer[0])
        {
            case 'v' :
                sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                points.push_back(p);
                cout << "Getting Vertex Positions" << endl;
                cout << "v" << p.x << endl;
                cout << "v" << p.y << endl;
                cout << "v" << p.z << endl;
                break;

            case 'f':
                sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 );
                cout << face[0] << endl;
                cout << face[1] << endl;
                cout << face[2] << endl;
                cout << face[3] << endl;
                faces.push_back(face[0]);
                faces.push_back(face[1]);
                faces.push_back(face[2]);
                faces.push_back(face[3]);
        }

        GLuint vertexbuffer;
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW);
        //glBufferData(GL_ARRAY_BUFFER,sizeof(points), &(points[0]), GL_STATIC_DRAW);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
        glVertexPointer(3, GL_FLOAT, sizeof(points), points.data());
        glIndexPointer(GL_DOUBLE, 0, faces.data());
        glDrawArrays(GL_QUADS, 0, points.size());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
    }

    As you can see, I've clearly failed at the end part, but I really don't know why it's not rendering the data to the screen. Does anyone have a solution for this?
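    A few things in the snippet above commonly cause a blank screen: the buffer is created and drawn inside the parsing loop, glBufferData is given a size in elements instead of bytes, OBJ face indices are 1-based while OpenGL indices are 0-based, and the fixed-function glVertexPointer/glIndexPointer calls are mixed with the generic glVertexAttribPointer path. A possible reordering, as a sketch only: it assumes Point is three tightly packed floats, that a compatible shader or fixed-function setup is already bound, and that a compatibility context is used (GL_QUADS is not available in core profiles).

    // After the whole file has been parsed (outside the while loop):
    for (int& idx : faces) idx -= 1;                       // OBJ indices start at 1, OpenGL's at 0

    GLuint vertexbuffer, indexbuffer;
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER,
                 points.size() * sizeof(Point),            // size in BYTES, not element count
                 points.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &indexbuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexbuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 faces.size() * sizeof(unsigned int),
                 faces.data(), GL_STATIC_DRAW);

    // One vertex path only: generic attribute 0 as three floats per vertex.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Point), (void*)0);

    // Draw indexed quads once per frame, not once per parsed line.
    glDrawElements(GL_QUADS, (GLsizei)faces.size(), GL_UNSIGNED_INT, (void*)0);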

  • LibGDX: texturing an object

    - by Gigi10012
    I want to set a texture on an object; this is my player object class:

    private boolean up;
    private float speed;
    private float fallacceleration = 20;
    private float acceleration = 15;
    private float maxSpeed = 300;
    SpriteBatch batch;

    public Player() {
        x = MyGdxGame.WIDTH - 9*MyGdxGame.WIDTH/10;
        y = MyGdxGame.HEIGHT - 3 * MyGdxGame.HEIGHT/10;
        shapex = new float[4];
        shapey = new float[4];
        radians = 2*MathUtils.PI;
        batch = new SpriteBatch();
    }

    private void setShape() {
        //Simple Arrow Shape
        ......
    }

    public void update(float dt) {
        setShape();
    }

    public void draw(ShapeRenderer sr) {
        sr.setColor(0F, 0F, 0F, 1F);
        sr.begin(ShapeType.Line);
        //Drawing Shape
        ..............
        sr.end();
    }

    What do I have to do to add a texture to that object? (I'm using LibGDX.)

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadow-mapping shaders. The shadow map rendering shader is fine and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position.

    struct Pixel
    {
        float4 position  : SV_Position;
        float4 light_pos : TEXCOORD2;
        float3 normal    : NORMAL;
        float2 texcoord  : TEXCOORD;
    };

    The reason I used the semantic 'TEXCOORD2' on the light's pixel position is that I believe the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolation. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than, the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code:

    struct Vertex
    {
        float3 position : POSITION;
        float3 normal   : NORMAL;
        float2 texcoord : TEXCOORD;
    };

    struct Pixel
    {
        float4 position  : SV_Position;
        float4 light_pos : TEXCOORD2;
        float3 normal    : NORMAL;
        float2 texcoord  : TEXCOORD;
    };

    cbuffer Camera : register (b0)
    {
        matrix world;
        matrix view;
        matrix projection;
    };

    cbuffer Light : register (b1)
    {
        matrix light_world;
        matrix light_view;
        matrix light_projection;
    };

    Pixel RenderVertexShader(Vertex input)
    {
        Pixel output;
        output.position = mul(float4(input.position, 1.0f), world);
        output.position = mul(output.position, view);
        output.position = mul(output.position, projection);
        output.world_pos = mul(float4(input.position, 1.0f), world);
        output.world_pos = mul(output.world_pos, light_view);
        output.world_pos = mul(output.world_pos, light_projection);
        output.texcoord = input.texcoord;
        output.normal = input.normal;
        return output;
    }

    I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader and had the same problem. The problem is evident because both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the matrices' validity, the cbuffers' validity, and the multiplicative validity. I'm very stumped and have been trying to solve this for quite some time.

    Misc info: the light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f.

    Any ideas on what is happening? (This is a repost of my question on Stack Overflow.)

  • Is there a library that handles hexagon tiled 2D maps?

    - by Pete Mancini
    It would represent a map that is semi-square and of arbitrary size. It would have a simple system for representing map coordinates, such as 0101 (first column, first hex). I'd want the map to be able to tell me the distance between two points, and what other hexes lie between those two points as a list or array. I don't care as much about the language, but C# or Python would be ideal. Does one exist?
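    Even without a library, hex distance is compact to compute in axial (q, r) coordinates, and the hexes between two cells follow from sampling distance+1 interpolation steps and rounding each sample to the nearest hex. A sketch of the distance part only; the coordinate convention is an assumption, and the code is C++ rather than the C#/Python the poster prefers:

    #include <cstdlib>

    struct Hex { int q, r; };   // axial coordinates

    // Distance between two hexes on an axial grid (equivalent to cube-coordinate distance).
    int HexDistance(Hex a, Hex b)
    {
        int dq = a.q - b.q;
        int dr = a.r - b.r;
        return (std::abs(dq) + std::abs(dr) + std::abs(dq + dr)) / 2;
    }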

  • How can I replicate the look and limitations of the Super NES?

    - by Mikalichov
    I am looking to produce graphics with the same limitations / look as in the Super NES era. I am specifically looking for graphics similar to Chrono Trigger / FF6. It would be a lot easier if I had an idea of the resolution / dpi I am supposed to use. I found that the technical specs for the SNES are:

    Progressive: 256 × 224, 512 × 224, 256 × 239, 512 × 239
    Interlaced: 512 × 448, 512 × 478

    But even using these resolutions, it is pointless if I set the image at 72 dpi, as I could still have very detailed graphics (that is the main thing: I don't want detailed graphics, I want to go pixelated). I figured it might be related to the sprite size limit, i.e.:

    Sprites can be 8 × 8, 16 × 16, 32 × 32, or 64 × 64 pixels, each using one of eight 16-color palettes and tiles from one of two blocks of 256 in VRAM. Up to 32 sprites and 34 8 × 8 sprite tiles may appear on any one line.

    This would work for sprites (characters, objects), but what about maps? Are they built entirely from 8x8 tiles? And then, at what resolution is the end result displayed? It might seem like I am giving the question and the answers at the same time, but all of these are suppositions I am making, so could someone confirm or correct them?

  • Best approach to get clicked objects from a display list (2D)

    - by Ixx
    I'm implementing a display list to manage my visuals on screen, and I want to know which object is clicked. My objects already have a z-order variable. With my current knowledge (almost nothing), the only thing that comes to mind is a linear search that collects all the objects containing the clicked point, then selects the one with the highest z-order. But I know there are far better approaches. I think it's something with trees (binary search?), perhaps container display objects searched recursively? I just don't know where to start looking for this concrete case. Any hint, link or concrete solution is welcome.
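    For a moderate number of objects the linear scan is usually fine; the main trick is to test in front-to-back order and stop at the first hit. A quadtree is the usual next step once there are thousands of objects, and it only narrows the candidate set before this same test. A sketch with invented types:

    #include <algorithm>
    #include <vector>

    struct DisplayObject {
        float x, y, width, height;
        int zOrder;
        bool Contains(float px, float py) const {
            return px >= x && px <= x + width && py >= y && py <= y + height;
        }
    };

    // Return the top-most object under the point, or nullptr if none.
    DisplayObject* Pick(std::vector<DisplayObject>& objects, float px, float py)
    {
        // Sort front-to-back (highest z-order first); in practice keep the list sorted instead.
        std::sort(objects.begin(), objects.end(),
                  [](const DisplayObject& a, const DisplayObject& b) { return a.zOrder > b.zOrder; });
        for (auto& obj : objects)
            if (obj.Contains(px, py))
                return &obj;     // first hit in front-to-back order is the clicked object
        return nullptr;
    }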

  • (SOLVED) Problems Rendering Text in OpenGL Using FreeType

    - by Sean M.
    I've been following both the FreeType2 tutorial and the WikiBooks tutorial, trying to combine things from them both in order to load and render fonts using the FreeType library. I used the font loading code from the FreeType2 tutorial and tried to implement the rendering code from the WikiBooks tutorial ("tried" being the keyword, as I'm still trying to learn modern OpenGL; I'm using 3.2). Everything loads correctly and the shader program to render the text works, but I can't get the text to render. I'm 99% sure it has something to do with how I am passing data to the shader, or how I set up the screen. These are the code segments that handle OpenGL initialization, as well as font initialization and rendering:

    //Init glfw
    if (!glfwInit())
    {
        fprintf(stderr, "GLFW Initialization has failed!\n");
        exit(EXIT_FAILURE);
    }
    printf("GLFW Initialized.\n");

    //Process the command line arguments
    processCmdArgs(argc, argv);

    //Create the window
    glfwWindowHint(GLFW_SAMPLES, g_aaSamples);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard", g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr);
    if (!g_mainWindow)
    {
        fprintf(stderr, "Could not create GLFW window!\n");
        closeOGL();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(g_mainWindow);
    printf("Window and OpenGL rendering context created.\n");

    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);

    //Are these necessary for Modern OpenGL (3.0+)?
    glViewport(0, 0, g_screenWidth, g_screenHeight);
    glOrtho(0, g_screenWidth, g_screenHeight, 0, -1, 1);

    //Init glew
    int err = glewInit();
    if (err != GLEW_OK)
    {
        fprintf(stderr, "GLEW initialization failed!\n");
        fprintf(stderr, "%s\n", glewGetErrorString(err));
        closeOGL();
        exit(EXIT_FAILURE);
    }
    printf("GLEW initialized.\n");

    Here is the font file (it's slightly too big to post): CFont.h/CFont.cpp. Here is the solution zipped up, if anyone feels they need it: [solution] (https://dl.dropboxusercontent.com/u/36062916/VoxelShipyard.zip). If anyone could take a look at the code it would be greatly appreciated. Also, if someone has a tutorial that is a little more user friendly, that would also be appreciated. Thanks.
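    On the question in the comment above: glViewport is still required, but glOrtho belongs to the removed fixed-function matrix stack, so in a 3.2 core profile it is no longer part of the API. The usual modern replacement is to build the orthographic matrix yourself and upload it as a uniform. A hedged sketch using GLM; "projection" is a uniform the text shader would have to declare, not something from the posted code:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // 2D text projection: pixel coordinates with the origin at the top-left,
    // matching the glOrtho call above.
    glm::mat4 projection = glm::ortho(0.0f, (float)g_screenWidth,
                                      (float)g_screenHeight, 0.0f,
                                      -1.0f, 1.0f);

    // glUseProgram(textShader);
    // glUniformMatrix4fv(glGetUniformLocation(textShader, "projection"), 1, GL_FALSE, glm::value_ptr(projection));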

  • Octrees as data structure

    - by Christian Frantz
    In my cube world, I want to use octrees to represent my chunks of 20x20x20 cubes for frustum and occlusion culling. I understand how octrees work; I just don't know if I'm going about this the right way. My base octree class is taken from here: http://www.xnawiki.com/index.php/Octree. What I'm wondering is how to apply occlusion culling using this class. Does it make sense to have one octree for each cube chunk, or should I make the octree bigger? Since I'm using cubes, each cube should fit into a node without overlap, so that won't be an issue.

  • What's a good setup/toolchain for a project?

    - by acidzombie24
    I was thinking: what is needed for a good setup, and what are good (free) tools to use? Some of what I came up with:

    - Bug tracking
    - Some good (distributed :P) source control (which means no SVN, fellas)
    - Automated nightly builds or continuous integration (or anything that automates builds and possibly sends emails when there are build errors)
    - A wiki to document decisions, roadmap or milestones
    - Something to back up assets (art, sound, etc.)

    What else? And do you have suggestions for any of the above? I'm pretty much clueless about all of these except for source control.

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame.

    // Bind VAO
    glBindVertexArray(m_vao);

    // Update the vertex buffer with the new data (copy data into the vertex buffer object)
    glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

    // Update the index buffer with the new data (copy data into the index buffer object)
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW);

    glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

    // Unbind VAO
    glBindVertexArray(0);

    What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

    std::array<VertexPosition, maxVertices> m_vertices;

    The index buffer has its elements set from an array of defined maximum size:

    std::array<unsigned short, maxIndices> indices = { 0 };

    A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

    glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

    m_vertices.data() = the entire array is stored
    numVertices * sizeof(VertexPosition) = the amount of data to read from the entire array

    Is this not the correct way to approach this? I do not wish to use std::vector if possible.
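    One common arrangement (a sketch, not necessarily what this engine needs): allocate the buffers once at their maximum size, upload only the bytes in use each frame with glBufferSubData, and let the count passed to glDrawElements restrict how much of the index buffer is read. BUFFER_OFFSET is assumed to be the same macro as in the snippet above, and the VBO is assumed to still be bound to GL_ARRAY_BUFFER as in the original code.

    // One-time setup: reserve space for the worst case, no data yet.
    glBindVertexArray(m_vao);
    glBufferData(GL_ARRAY_BUFFER, maxVertices * sizeof(VertexPosition), nullptr, GL_DYNAMIC_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, maxIndices * sizeof(unsigned short), nullptr, GL_DYNAMIC_DRAW);

    // Per frame: overwrite only the portion actually filled this frame...
    glBufferSubData(GL_ARRAY_BUFFER, 0, numVertices * sizeof(VertexPosition), m_vertices.data());
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, numIndices * sizeof(unsigned short), indices.data());

    // ...and draw only numIndices of it; the rest of the buffer is simply never read.
    glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
    glBindVertexArray(0);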

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
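    One low-ceremony option (a sketch only; the names are invented, and whether the data lives in code or in a data file is a separate choice) is a single catalog that maps each resource type to the tech level required to trade it, so neither Resource nor Base duplicates the rule and TradingPost just consults it:

    #include <string>
    #include <unordered_map>

    // Single source of truth: resource id -> minimum base tech level required to offer it.
    class TradeCatalog {
    public:
        void SetRequirement(const std::string& resourceId, int minTechLevel) {
            requirements_[resourceId] = minTechLevel;
        }
        bool IsOffered(const std::string& resourceId, int baseTechLevel) const {
            auto it = requirements_.find(resourceId);
            return it != requirements_.end() && baseTechLevel >= it->second;
        }
    private:
        std::unordered_map<std::string, int> requirements_;
    };

    // Usage sketch: each TradingPost holds a reference to the shared catalog.
    // if (catalog.IsOffered("iron_ore", base.TechLevel())) { /* list it */ }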

  • Behavior tree implementation details

    - by angryInsomniac
    I have been looking around for implementation details of behavior trees. The best descriptions I found were by Alex Champandard and some of Damian Isla's talks about AI in Halo 2 (the video of which is locked up in the GDC Vault, sadly). However, both descriptions fall short of helping one actually create a BT, and one particular question has been bugging me for a while: when is the tree in a behavior tree evaluated? Furthermore, if the tree is in the middle of executing a sequence of actions (patrolling waypoints) and a higher-priority impulse comes in (a distraction sound), how do I switch to that side of the tree seamlessly, without resorting to a state-machine-like system? And if it is decided that the impulse was irrelevant (the distraction is too far away to affect this guard), how do I go back to the last thing the guard was doing? I have quite a few questions like this and I don't wish to flood the board with separate queries, so if you know of any resource where questions like these are answered I would be very grateful.
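    A common convention (a sketch of one possible implementation, not a description of Champandard's or Isla's systems) is that every node returns Success, Failure or Running, and the tree is re-ticked from the root at a fixed rate. A priority selector then handles both cases in the question for free: a higher-priority branch preempts a Running lower-priority one as soon as it stops failing, and if it fails again the selector falls through to the patrol branch, which has kept its own internal progress.

    #include <memory>
    #include <vector>

    enum class Status { Success, Failure, Running };

    struct Node { virtual ~Node() = default; virtual Status Tick() = 0; };

    // Priority selector: tries children in order on every tick, so a branch such as
    // "react to sound" preempts a Running "patrol" branch without explicit state switching.
    struct Selector : Node {
        std::vector<std::unique_ptr<Node>> children;
        Status Tick() override {
            for (auto& child : children) {
                Status s = child->Tick();
                if (s != Status::Failure) return s;   // Success or Running wins
            }
            return Status::Failure;
        }
    };

    // The root is typically ticked once per AI update; Running children remember their
    // own progress internally, which is how the guard resumes patrolling afterwards.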

  • How can I make a 32 bit render target with a 16 bit alpha channel in DirectX?

    - by J Junker
    I want to create a render target that is 32-bit, with 16 bits each for alpha and luminance. The closest surface formats I can find in the DirectX SDK are:

    D3DFMT_A8L8    // 16-bit using 8 bits each for alpha and luminance
    D3DFMT_G16R16F // 32-bit float format using 16 bits for the red channel and 16 bits for the green channel

    But I don't think either of these will work, since D3DFMT_A8L8 doesn't have the precision and D3DFMT_G16R16F doesn't have an alpha channel (I need a separate blend state for alpha). How can I create a render target that allows a separate blend state for luminance and alpha, with 16-bit precision on each channel, that doesn't exceed 32 bits per pixel?

  • Pseudo-magnet implementation with chipmunk

    - by Eimantas
    How should I go about implementing a "natural" magnet on a certain body in Chipmunk space? The context is simple bodies lying in the space (think chessboard). When one of the figures is activated as a magnet, the others should start moving towards it. Currently I'm applying force (cpBodyApplyForce) to the other figures with a vector calculated towards the activated figure. However, this doesn't really feel "natural". Are there any known algorithms for imitating magnets?
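    The main difference from a constant pull is the falloff: a real magnet's attraction drops off sharply with distance, so an inverse-square force usually reads as much more "natural". A sketch using Chipmunk 6-style names (check the function names against your version of the library; the strength and clamp values are invented):

    #include <chipmunk/chipmunk.h>

    // Pull 'body' toward 'magnet' with inverse-square falloff, clamped so that
    // pieces right next to the magnet don't receive an enormous force.
    static void ApplyMagnetForce(cpBody *magnet, cpBody *body, cpFloat strength, cpFloat maxForce)
    {
        cpVect delta = cpvsub(cpBodyGetPos(magnet), cpBodyGetPos(body));
        cpFloat dist = cpvlength(delta);
        if (dist < 1.0f) dist = 1.0f;                   // avoid a division blow-up at the center

        cpFloat magnitude = strength / (dist * dist);   // F ~ 1 / r^2
        if (magnitude > maxForce) magnitude = maxForce;

        cpVect force = cpvmult(cpvnormalize(delta), magnitude);
        cpBodyApplyForce(body, force, cpvzero);         // applied at the body's center of gravity
    }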

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    From the sample image below (shown with a yellow border for display purposes only), the actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking of trying a 2x2, but that would not help when trying to interpret a low/high versus high/low drawing stream. At least this way I would have two black, one white from the top, or one white, two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode it (zlib) and come up with the following bytes:

    00 20 00 40 00 80

    So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format and properly recognizes the bit depth of 1 and a color palette of 2. Palette entry [0] is RGBA all zeros; palette entry [1] has RGBA of 255, 255, 255, 0. I'll eventually get into the multiple other depth formats later; I just wanted to start with what I expect to be the easiest.

    Part II: any guidance on handling the other depth formats would help, especially anything special to be considered regarding the alpha channel (which I am already looking for in the palette) that might trip me up.
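    Taking the six bytes listed at face value, they decode cleanly as three scanlines of one filter byte plus one packed data byte, which is exactly how PNG lays out a 3x3 image at bit depth 1: each scanline begins with a filter-type byte (0x00 = None here), followed by the palette indices packed most-significant-bit first. A small sketch of the unpacking (the loop bounds assume exactly this 3x3, 1-bit case):

    #include <cstdio>

    int main()
    {
        // Decompressed IDAT for the 3x3, 1-bit palette image: per row, a filter byte
        // followed by one byte holding the three pixel bits, MSB first.
        const unsigned char idat[] = { 0x00, 0x20, 0x00, 0x40, 0x00, 0x80 };

        for (int row = 0; row < 3; ++row) {
            unsigned char filter = idat[row * 2];       // 0x00 -> filter type "None"
            unsigned char bits   = idat[row * 2 + 1];   // packed palette indices
            std::printf("row %d (filter %d): ", row, filter);
            for (int col = 0; col < 3; ++col)
                std::printf("%d ", (bits >> (7 - col)) & 1);   // 0x20 -> 0 0 1, 0x40 -> 0 1 0, 0x80 -> 1 0 0
            std::printf("\n");
        }
    }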

  • How can I mark and/or group a specific set of vertices in a 3D file container like OBJ? (in Blender)

    - by user827992
    I would like to export a 3D model with each part having a name or a label, if you will. For example, I would like to export a model of a human body and name each part in specific vertex groups, like: left hand, right hand, right foot, head, ears... and you get the idea; that way I can have a single 3D model that I can explode into various parts if needed. If there is a better technique for marking vertex groups in a 3D file, please share your solution. As my 3D editor I use Blender.

  • SpriteBatch.Begin() making my model not render correctly

    - by manning18
    I was trying to output some debug information using DrawString when I noticed my model was suddenly being rendered as if it were inside-out (as if culling had been disabled or something) and the texture maps weren't applied. I commented out the DrawString method until I only had SpriteBatch.Begin() and .End(), and that was enough to cause the model rendering corruption; when I commented those calls out, the model rendered correctly. What could this be a symptom of? I've stripped it down to the barest of code to isolate the problem, and this is what I noticed. Draw code below (as stripped down as possible):

    GraphicsDevice.Clear(Color.LightGray);

    foreach (ModelMesh mesh in TIEAdvanced.Meshes)
    {
        foreach (Effect effect in mesh.Effects)
        {
            if (effect is BasicEffect)
                ((BasicEffect)effect).EnableDefaultLighting();
            effect.CurrentTechnique.Passes[0].Apply();
        }
    }

    spriteBatch.Begin();
    spriteBatch.DrawString(spriteFont, "Camera Position: " + cameraPosition.ToString(), new Vector2(10, 10), Color.Blue);
    spriteBatch.End();

    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    TIEAdvanced.Draw(Matrix.CreateScale(0.025f), viewMatrix, projectionMatrix);

  • Best practices in managing character states

    - by TheBroodian
    While developing a character, I feel like I'm digging myself deeper into a hole every time I add more functionality to him, creating more bugs, and it seems like my code is tripping over itself all over the place. What are the best practices for managing character states for a character that has a large selection of abilities and actions they can perform, without those abilities interrupting each other and creating a mess overall?
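    A common way to keep abilities from interrupting each other is the classic state-object approach (a sketch only; the class names are made up): the character owns exactly one active state object, that state owns its update logic, and it decides which transitions it will allow, instead of a pile of booleans checked all over the character class.

    #include <memory>

    class Character;

    // One object per state (Idle, Run, Attack, Stunned, ...); each state decides
    // for itself whether a requested transition is allowed right now.
    class CharacterState {
    public:
        virtual ~CharacterState() = default;
        virtual void Enter(Character&) {}
        virtual void Update(Character&, float dt) = 0;
        virtual bool CanTransitionTo(const CharacterState&) const { return true; }
        virtual void Exit(Character&) {}
    };

    class Character {
    public:
        void RequestState(std::unique_ptr<CharacterState> next) {
            if (state_ && !state_->CanTransitionTo(*next)) return;  // e.g. can't dash while stunned
            if (state_) state_->Exit(*this);
            state_ = std::move(next);
            state_->Enter(*this);
        }
        void Update(float dt) { if (state_) state_->Update(*this, dt); }
    private:
        std::unique_ptr<CharacterState> state_;
    };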

  • 3D trajectory - calculate initial velocity

    - by Skoder
    Hey, I've got a 2D projectile code sample working, but I would like to extend it to 3D. How would I calculate the initial velocity for the Z-axis? At the moment, I've got:

    initVel.X = (float)Math.Cos(45.0);
    initVel.Y = (float)Math.Sin(45.0);

    How would I convert this to work in 3D (and add the initial velocity for the Z-axis)? In my example, X is across, Y is up/down and Z goes into the screen. I also normalize the vector and multiply it by the speed. Thanks
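    In 3D the launch direction is usually described by two angles instead of one: an elevation (pitch) above the horizontal and a heading (yaw) in the X-Z plane. A sketch of that, in C++ rather than the poster's C#; note that the trig functions expect radians, so the angles are converted first (the 45.0 passed to Math.Cos/Math.Sin above would also be treated as radians, not degrees).

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Launch direction from elevation (pitch) and heading (yaw), both given in degrees.
    Vec3 LaunchVelocity(float pitchDeg, float yawDeg, float speed)
    {
        const float deg2rad = 3.14159265f / 180.0f;
        float pitch = pitchDeg * deg2rad;
        float yaw   = yawDeg * deg2rad;

        Vec3 v;
        v.x = std::cos(pitch) * std::cos(yaw);   // across
        v.y = std::sin(pitch);                   // up/down
        v.z = std::cos(pitch) * std::sin(yaw);   // into the screen
        // The direction is already unit length, so scaling by speed gives the initial velocity.
        v.x *= speed; v.y *= speed; v.z *= speed;
        return v;
    }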
