Search Results

Search found 33194 results on 1328 pages for 'development approach'.


  • Can't load vector font in Nuclex Framework

    - by ProgrammerAtWork
    I've been trying to get this to work for the last 2 hours and I'm not getting what I'm doing wrong. I've added Nuclex.TrueTypeImporter to the references in my content project, and Nuclex.Fonts & Nuclex.Graphics in my main project. I've put Arial-24-Vector.spritefont & Lindsey.spritefont in the root of my content directory.

        _spriteFont = Content.Load<SpriteFont>("Lindsey");        // works
        _testFont = Content.Load<VectorFont>("Arial-24-Vector");  // crashes

    I get this error on the _testFont line:

        File contains Microsoft.Xna.Framework.Graphics.SpriteFont but trying to load as Nuclex.Fonts.VectorFont.

    So I've searched around, and by the looks of it, it has something to do with the content importer and the content processor. For the content importer I have no new choices, so I leave it as it is (Sprite Font Description - XNA Framework); for the content processor I select Vector Font - Nuclex Framework. Then I try to run it, and on the same _testFont line I get the following error:

        Error loading "Arial-24-Vector".

    It does work if I load a sprite, so it's not a pathing problem. I've checked the samples and they do work, but I think they also use a different version of the XNA framework, because in my version the "Content" class starts with a capital letter. I'm at a loss, so I ask here.

    Edit: Something super weird is going on. I've just added the following two lines to a method inside FreeTypeFontProcessor::FreeTypeFontProcessor(Microsoft::Xna::Framework::Content::Pipeline::Graphics::FontDescription ^fontDescription, FontHinter hinter, ...), just to check if the code would even get there:

        System::Console::WriteLine("I AM HEEREEE");
        System::Console::ReadLine();

    So I compile it, put it in my project, I run it and... it works! What the hell?? This is weird, because I've downloaded the binaries and they didn't work; I compiled the binaries myself and that didn't work either; but now I make a small change to the code and it works? So now I remove the two lines, compile it again, and it works again. Can someone elaborate on what is going on? Probably some weird caching problem!


  • Matrix rotation of a rectangle to "face" a given point in 2d

    - by justin.m.chase
    Suppose you have a rectangle centered at point (0, 0), and now I want to rotate it such that it is facing the point (100, 100). How would I do this purely with matrix math? To give some more specifics, I am using JavaScript and canvas, and I may have something like this:

        var position = { x: 0, y: 0 };
        var destination = { x: 100, y: 100 };
        var transform = Matrix.identity();

        this.update = function (state) {
            // update transform to rotate to face destination
        };

        this.draw = function (ctx) {
            ctx.save();
            ctx.transform(transform); // a helper that just calls setTransform()
            ctx.beginPath();
            ctx.rect(-5, -5, 10, 10);
            ctx.fillStyle = 'Blue';
            ctx.fill();
            ctx.lineWidth = 2;
            ctx.stroke();
            ctx.restore();
        };

    Feel free to assume any matrix function you need is available.
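
    The math itself is small: the angle from the rectangle to the target is atan2(dy, dx), and the rotation matrix follows directly from its sine and cosine. Below is a minimal sketch of that (in C++ rather than the question's JavaScript, and assuming the rectangle's "facing" side is its local +x axis).

        #include <cmath>
        #include <cstdio>

        // 2x2 rotation matrix, row-major: [m00 m01; m10 m11].
        struct Mat2 { double m00, m01, m10, m11; };

        // Builds the rotation that turns a shape at (px, py) to face (dx, dy).
        Mat2 faceTowards(double px, double py, double dx, double dy) {
            double angle = std::atan2(dy - py, dx - px); // direction to the target
            double c = std::cos(angle), s = std::sin(angle);
            // Standard counter-clockwise rotation matrix for 'angle'.
            return Mat2{ c, -s,
                         s,  c };
        }

        int main() {
            Mat2 m = faceTowards(0, 0, 100, 100); // target at (100, 100) -> 45 degrees
            std::printf("%f %f\n%f %f\n", m.m00, m.m01, m.m10, m.m11);
        }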


  • Does SWF provide a better compression rate than zlib for PNG images?

    - by Huang F. Lei
    Somebody told me that when a PNG image is stored in a SWF, it's separated into several layers, hence the alpha channel can be compressed better. Is it true? Or, once a PNG image is imported into a SWF, is its format changed, e.g. converted into bitmap data and then compressed by the SWF's compression algorithm? That is, it is not in PNG format anymore. I don't know how SWF packs its resources; please tell me if you know.


  • Changing Palette for Day/Night Mode using GIMP

    - by J.C.
    Hello. Suppose I have a picture for which I want to achieve a day/night mode by changing its 8bpp color palette, and I want the pixel indices of the picture to stay fixed for both day mode and night mode. For example, if the first pixel's index is 100, I can look up index 100 in the day mode palette and in the night mode palette. How can I use GIMP to do this? My goal is to not update the pixel indices of my picture.

    Also, the two palettes are not a one-to-one mapping: index 1 of the day mode palette and index 1 of the night mode palette may not be used in the same pixel of the picture. How can I tackle this problem?

    Actually, my use case is as follows: I want to use one 8bpp picture to achieve day/night mode by updating only the color palette (without updating the pixel indices). The advantage is that I only have to prepare two 256-byte palettes rather than saving two big pictures in my limited data RAM. Thanks a lot.


  • Why isn't my lighting working properly? Are my normals messed up?

    - by Radek Slupik
    I'm relatively new to OpenGL and I am trying to draw a 3D model (loaded from a 3ds file using lib3ds) using OpenGL with lighting, but about half of it is drawn in black. I set up the light as such:

        glEnable(GL_LIGHTING);
        glShadeModel(GL_SMOOTH);
        GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f};
        glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor);
        glEnable(GL_LIGHT0);
        GLfloat lightColor0[] = {1.0f, 1.0f, 1.0f, 1.0f};
        GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 0.0f};
        glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0);
        glLightfv(GL_LIGHT0, GL_POSITION, lightPos0);

    The model is in a VBO and drawn using glDrawArrays. The normals are in a separate VBO, and they are calculated using lib3ds_mesh_calculate_vertex_normals:

        std::vector<std::array<float, 3>> normals;
        for (std::size_t i = 0; i < model->nmeshes; ++i) {
            auto& mesh = *model->meshes[i];
            std::vector<float[3]> vertex_normals(mesh.nfaces * 3);
            lib3ds_mesh_calculate_vertex_normals(&mesh, vertex_normals.data());
            for (std::size_t j = 0; j < mesh.nfaces; ++j) {
                auto& face = mesh.faces[j];
                normals.push_back(make_array(vertex_normals[j]));
            }
        }
        glBindBuffer(GL_ARRAY_BUFFER, normal_vbo_);
        glBufferData(GL_ARRAY_BUFFER,
                     normals.size() * sizeof(decltype(normals)::value_type),
                     normals.data(), GL_STATIC_DRAW);

    The problem isn't the vertices; the model is drawn correctly when drawing it as a wireframe. I also fixed the normals in Blender using Ctrl+N. What could be the problem? Should I store the normals in a different order?
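
    For reference, a sketch of the indexing that allocation implies, assuming lib3ds_mesh_calculate_vertex_normals fills one normal per face corner (which is presumably why the buffer is sized nfaces * 3): a copy loop then has to consume three entries per face, not one. Names below are illustrative stand-ins, not lib3ds types.

        #include <array>
        #include <cstddef>
        #include <vector>

        using Normal = std::array<float, 3>;

        // Assuming the library lays normals out as
        // [face0.v0, face0.v1, face0.v2, face1.v0, ...] (nfaces * 3 entries),
        // each face contributes three normals, one per corner.
        void appendFaceNormals(const std::vector<Normal>& vertex_normals,
                               std::size_t nfaces, std::vector<Normal>& out) {
            for (std::size_t j = 0; j < nfaces; ++j) {
                out.push_back(vertex_normals[j * 3 + 0]); // first corner of face j
                out.push_back(vertex_normals[j * 3 + 1]); // second corner
                out.push_back(vertex_normals[j * 3 + 2]); // third corner
            }
        }

        int main() {
            std::vector<Normal> in(6, Normal{0, 0, 1}), out; // two faces' worth
            appendFaceNormals(in, 2, out);
            return out.size() == 6 ? 0 : 1;
        }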


  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal map textures. Even though I've got it to work using borrowed code, I want to understand it.

    I have a terrain based on a heightmap. I'm generating a mesh of triangles at load time and rendering that mesh. Now for each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right?

    - normal is a unit vector facing outwards from the surface of the triangle. For a vertex, I take the average of the normals of the triangles using that vertex.
    - tangent is a unit vector in the direction of the 'u' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the x direction. So I should be able to calculate it as the difference between vertices in the x direction, normalized.
    - bitangent is a unit vector in the direction of the 'v' coordinates of the texture map. By the same reasoning, this should be the normalized difference between vertices in the y direction.

    However, the code I have borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I thought from the above, and it simply doesn't work; the normals are clearly not working for lighting.

    Have I misunderstood something? Or can someone explain the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, where u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
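
    For a heightmap grid where u follows x and v follows y, all three vectors can be computed directly with central differences. The sketch below (plain C++, all names illustrative rather than taken from the borrowed code) is the whole computation under that assumption, with z as the height axis:

        #include <cmath>
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        static Vec3 normalize(Vec3 v) {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // height(x, y) samples the heightmap; 'step' is the grid spacing.
        // With u along x and v along y, the tangent is the surface direction
        // of increasing x, the bitangent that of increasing y, and the
        // normal is their cross product.
        static void frameAt(float (*height)(int, int), int x, int y, float step,
                            Vec3& tangent, Vec3& bitangent, Vec3& normal) {
            float hl = height(x - 1, y), hr = height(x + 1, y); // left/right
            float hd = height(x, y - 1), hu = height(x, y + 1); // down/up
            tangent   = normalize({ 2 * step, 0, hr - hl });    // d(surface)/dx
            bitangent = normalize({ 0, 2 * step, hu - hd });    // d(surface)/dy
            normal = normalize({ tangent.y * bitangent.z - tangent.z * bitangent.y,
                                 tangent.z * bitangent.x - tangent.x * bitangent.z,
                                 tangent.x * bitangent.y - tangent.y * bitangent.x });
        }

        int main() {
            auto slope = [](int x, int y) -> float { return 0.2f * x; }; // tilted plane
            Vec3 t, b, n;
            frameAt(slope, 4, 4, 1.0f, t, b, n);
            std::printf("n = (%.2f, %.2f, %.2f)\n", n.x, n.y, n.z);
        }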


  • Cocos2d-x 3.0 animation frame by frame

    - by Narek
    As I know, animations are actions. Now I need to play an animation frame by frame. Say I have an animation of N frames, where each frame should be played after a delay of t. I want to play the animation frame by frame, with each frame advancing the animation's state. How can I do this? And what about playing actions frame by frame, advancing the state, in general? I ask because I use an ECS, and I deal with frames.

    P.S. I want to do something like this:

        Action* a = MoveTo(initialPoint, finalPoint, durationOfAnimation);
        a->play(0.001 seconds);
        a->play(0.003 seconds);
        a->play(0.02 seconds);
        a->play(0.67 seconds);
        a->play(0.06 seconds);

    And see the animation.
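
    A minimal sketch of the idea in plain C++ (deliberately not the cocos2d-x API; whether Action exposes a suitable stepping method is worth checking against the 3.0 sources): a time-based action can be driven manually by accumulating elapsed time and mapping it to a normalized progress value.

        #include <cstdio>

        struct Point { float x, y; };

        // A tiny MoveTo-like action driven by explicit time steps.
        struct ManualMoveTo {
            Point from, to;
            float duration, elapsed = 0.0f;
            Point current;

            ManualMoveTo(Point a, Point b, float d)
                : from(a), to(b), duration(d), current(a) {}

            // Advance the action by dt seconds and update its state.
            void step(float dt) {
                elapsed += dt;
                float t = elapsed >= duration ? 1.0f : elapsed / duration; // progress in [0,1]
                current = { from.x + (to.x - from.x) * t,
                            from.y + (to.y - from.y) * t };
            }
        };

        int main() {
            ManualMoveTo a({0, 0}, {100, 100}, 1.0f);
            const float steps[] = {0.001f, 0.003f, 0.02f, 0.67f, 0.06f};
            for (float dt : steps) {
                a.step(dt); // each call advances the animation's state
                std::printf("at (%.2f, %.2f)\n", a.current.x, a.current.y);
            }
        }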


  • A* Jump Point Search - how does pruning really work?

    - by DeadMG
    I've come across Jump Point Search, and it seems pretty sweet to me. However, I'm unsure how its pruning rules actually work. More specifically, Figure 1 states that we can immediately prune all grey neighbours, as these can be reached optimally from the parent of x without ever going through node x.

    However, this seems somewhat at odds with the second image, where node 5 could be reached by first going through node 7 and skipping x entirely via a symmetric path; that is, 6 -> x -> 5 seems to be symmetric to 6 -> 7 -> 5. This would be the same as how node 3 can be reached without going through x in the first image. As such, I don't understand how these two images are not entirely equivalent, rather than just rotated versions of each other.

    Secondly, I'd like to understand how this algorithm could be generalized to a three-dimensional search volume.


  • Phone crashes when trying to use vibration on Android

    - by Diego Unanue
    I'm developing an app where, when you click a button, the phone has to vibrate. The issue is that the phone just crashes, saying that I need permissions to vibrate. I've already set this permission in build.settings (the Android manifest). Here is the build.settings code:

        settings = {
            orientation = {
                default = "portrait",
                supported = { "portrait" },
            },
            iphone = {
                plist = {
                    CoronaUseIOS7LandscapeOnlyWorkaround = true,
                    CoronaUseIOS7IPadPhotoPickerLandscapeOnlyWorkaround = true,
                    CoronaUseIOS6LandscapeOnlyWorkaround = true,
                    CoronaUseIOS6IPadPhotoPickerLandscapeOnlyWorkaround = true,
                    UIApplicationExitsOnSuspend = false,
                    UIPrerenderedIcon = true,
                    UIStatusBarHidden = false,
                    CFBundleIconFile = "Icon.png",
                    CFBundleIconFiles = {
                        "Icon.png", "Icon@2x.png", "Icon-60.png", "Icon-60@2x.png",
                        "Icon-72.png", "Icon-72@2x.png", "Icon-76.png", "Icon-76@2x.png",
                        "Icon-Small.png", "Icon-Small@2x.png", "Icon-Small-40.png",
                        "Icon-Small-40@2x.png", "Icon-Small-50.png", "Icon-Small-50@2x.png",
                    },
                },
            },
            android = {
                permissions = {
                    { name = ".permission.C2D_MESSAGE", protectionLevel = "signature" },
                },
                usesPermissions = {
                    "android.permission.INTERNET",
                    "android.permission.VIBRATE",
                },
            },
        }

    The file that uses the vibration is:

        local onButtonEvent = function(event)
            system.vibrate()
        end

    I've read all the posts on the Corona page without success. Is there a way to see the generated Android manifest to check whether the permissions are there? I've read that this is a Corona issue, but I'm not sure.


  • Xna Loading Screens

    - by Cyral
    I'm making a 2D XNA game. I'd like to implement loading screens for operations that take a while, like logging in to an account, connecting to the server, and generating worlds. I'm pretty sure it needs to be multithreaded, because I want to be able to show something like "Generating World 10%...11%...":

        GenerateWorld() {
            // Call StartLoading("Generating World"); or something
            // Start generating, updating progress...
            // End loading screen and fade into world
        }

    Help appreciated, I'm new.
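
    This isn't XNA code, but the pattern is the same in any language (a minimal C++ sketch): run the slow work on a background thread, publish progress through an atomic variable, and let the draw loop keep rendering the loading screen while it polls that value.

        #include <atomic>
        #include <chrono>
        #include <cstdio>
        #include <thread>

        std::atomic<int> progress{0};   // 0..100, written by the worker, read by the UI
        std::atomic<bool> done{false};

        // Stand-in for world generation; reports progress as it goes.
        void generateWorld() {
            for (int step = 1; step <= 100; ++step) {
                std::this_thread::sleep_for(std::chrono::milliseconds(10)); // fake work
                progress.store(step);
            }
            done.store(true);
        }

        int main() {
            std::thread worker(generateWorld);
            // The "draw loop": keep showing the loading screen while work happens.
            while (!done.load()) {
                std::printf("\rGenerating World %d%%", progress.load());
                std::fflush(stdout);
                std::this_thread::sleep_for(std::chrono::milliseconds(50));
            }
            worker.join();
            std::printf("\nDone - fade into world.\n");
        }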


  • How to attach a sprite to a TMXTiledMap at a particular coordinate, in AndEngine?

    - by shailenTJ
    I am trying to add a sprite at a "grid" location on the tiled map. The TMX tiled map is like a grid, and you can access the size of the grid by calling mTMXTiledMap.getTileRows() and mTMXTiledMap.getTileColumns(). I want to add an object at grid location, say (2, 5); my tile map is of size (10, 10). How can I do that? There is no function like mTMXTiledMap.addChild(int x, int y, Entity mEntity). I would appreciate any suggestions!
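
    One workable pattern (sketched in language-neutral C++ since this is just arithmetic; in AndEngine you would do the same in Java): convert the grid coordinate to a pixel position using the map's tile size, then attach the sprite at that position as usual. The tile size below is an assumption for illustration.

        #include <cstdio>

        struct Pixel { float x, y; };

        // Converts a tile grid coordinate to the pixel position of that
        // tile's top-left corner; the sprite is then placed there.
        Pixel tileToPixel(int column, int row, float tileWidth, float tileHeight) {
            return { column * tileWidth, row * tileHeight };
        }

        int main() {
            // Hypothetical 10x10 map of 32x32 tiles; grid cell (2, 5).
            Pixel p = tileToPixel(2, 5, 32.0f, 32.0f);
            std::printf("attach sprite at (%.0f, %.0f)\n", p.x, p.y); // (64, 160)
        }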


  • Need help understanding XNA 4.0 BoundingBox vs BoundingSphere Intersection

    - by nerdherd
    I am new to both game programming and XNA, so I apologize if I'm missing a simple concept or something. I have created a simple 3D game with a player and a crate, and I'm working on getting my collision detection working properly. Right now I am using a BoundingSphere for my player and a BoundingBox for the crate. For some reason, XNA only detects a collision when my player's sphere touches the front face of the crate. I'm rendering all the BoundingSpheres and BoundingBoxes as wireframes so I can see what's going on, and everything visually appears to be correct, but I can't figure out this behavior. I have tried these checks:

        playerSphere.Intersects(crate.getBoundingBox())
        playerSphere.Contains(crate.getBoundingBox(), ContainmentType.Intersects)
        playerSphere.Contains(crate.getBoundingBox()) != ContainmentType.Disjoint

    But they all seem to produce the same behavior (in other words, they are only true when I hit the front face of the crate). The interesting thing is that when I use a BoundingSphere for my crate, the collision is detected as I would expect, but of course this makes the edges less accurate. Any thoughts or ideas? Have I missed something about how BoundingSpheres and BoundingBoxes compute their intersections? I'd be happy to post more code or screenshots to clarify if needed. Thanks!
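
    For reference, the standard sphere-vs-AABB test (which is likely what these methods compute internally, though that is an assumption): clamp the sphere's center to the box and compare the clamped distance to the radius. A minimal C++ sketch, not XNA code:

        #include <algorithm>
        #include <cstdio>

        struct Vec3 { float x, y, z; };
        struct Sphere { Vec3 center; float radius; };
        struct Box { Vec3 min, max; }; // axis-aligned

        // Clamp the sphere center to the box, then check whether the
        // closest point on the box lies within the sphere's radius.
        bool intersects(const Sphere& s, const Box& b) {
            float cx = std::clamp(s.center.x, b.min.x, b.max.x);
            float cy = std::clamp(s.center.y, b.min.y, b.max.y);
            float cz = std::clamp(s.center.z, b.min.z, b.max.z);
            float dx = s.center.x - cx, dy = s.center.y - cy, dz = s.center.z - cz;
            return dx * dx + dy * dy + dz * dz <= s.radius * s.radius;
        }

        int main() {
            Sphere player{{0, 0, 0}, 1.0f};
            Box crate{{0.5f, -1, -1}, {2.5f, 1, 1}};
            std::printf("%s\n", intersects(player, crate) ? "hit" : "miss"); // hit
        }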


  • UTF-8 encoding problem with flash mysql and php

    - by alibhp
    Hi. As you may know, I am programming an online game using Flash. I am connecting my Flash 8 movie to a MySQL database through PHP. I am doing very well at that and I have everything working fine. The problems come when I try to insert (using the INSERT SQL function) data into the database that is non-English, in other words, UTF-8 data. I read a lot of articles about that stuff, and found and applied the following:

    1. In PHP4 you need to tell PHP to use UTF-8 when using the xml_parser_create() function; in PHP5 that is done automatically. Even so, I told PHP5 to use UTF-8 when calling the function.
    2. Adding the header to the XML sent to PHP from Flash.
    3. Forcing Flash to use UTF-8 encoding in the preference options.
    4. Setting the encoding in MySQL to UTF-8 (utf8_unicode_ci with the InnoDB engine).

    I can read and insert the other-language data correctly in phpMyAdmin as well. I did all that in my code, and still I can't insert such data. One more strange thing: when I use the same link that Flash uses, with the XML that Flash creates, in the browser (Google Chrome), the data gets inserted correctly in the database! I am about to go crazy over this. What am I missing? What causes the problem? Thank you in advance.


  • xna orbit camera troubles

    - by user17753
    I have a Model named cube which I load in LoadContent():

        cube = Content.Load<Model>("untitled");

    In the Draw method I call DrawModel:

        private void DrawModel(Model m, Matrix world)
        {
            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = camera.View;
                    effect.Projection = camera.Projection;
                    effect.World = world;
                }
                mesh.Draw();
            }
        }

    camera is of the Camera type, a class I've set up. Right now it is instantiated in the initialization section with the graphics aspect ratio and the translation (world) vector of the model, and the Draw loop calls camera.UpdateCamera(); before drawing the models.

        class Camera
        {
            #region Fields
            private Matrix view;        // View matrix for camera
            private Matrix projection;  // Projection matrix for camera
            private Vector3 position;   // Position of camera
            private Vector3 target;     // Point camera is "aimed" at
            private float aspectRatio;  // Aspect ratio for projection
            private float speed;        // Speed of camera
            private Vector3 camup = Vector3.Up;
            #endregion

            #region Accessors
            /// <summary>
            /// View matrix of the camera -- read only
            /// </summary>
            public Matrix View { get { return view; } }

            /// <summary>
            /// Projection matrix of the camera -- read only
            /// </summary>
            public Matrix Projection { get { return projection; } }
            #endregion

            /// <summary>
            /// Creates a new Camera.
            /// </summary>
            /// <param name="AspectRatio">Aspect ratio to use for the projection.</param>
            /// <param name="Target">Target coord to aim camera at.</param>
            public Camera(float AspectRatio, Vector3 Target)
            {
                target = Target;
                aspectRatio = AspectRatio;
                ResetCamera();
            }

            private void Rotate(Vector3 Axis, float Amount)
            {
                position = Vector3.Transform(position - target,
                    Matrix.CreateFromAxisAngle(Axis, Amount)) + target;
            }

            /// <summary>
            /// Resets default values of the camera
            /// </summary>
            private void ResetCamera()
            {
                speed = 0.05f;
                position = target + new Vector3(0f, 20f, 20f);
                projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspectRatio, 0.5f, 100f);
                CalculateViewMatrix();
            }

            /// <summary>
            /// Updates the camera. Should be the first thing done in the Draw loop
            /// </summary>
            public void UpdateCamera()
            {
                Rotate(Vector3.Right, speed);
                CalculateViewMatrix();
            }

            /// <summary>
            /// Calculates the view matrix for the camera
            /// </summary>
            private void CalculateViewMatrix()
            {
                view = Matrix.CreateLookAt(position, target, camup);
            }
        }

    I'm trying to create the camera so that it can orbit the center of the model. For a test I am calling Rotate(Vector3.Right, speed); but it rotates almost right and then gets to a point where it "flips." If I rotate along a different axis, Rotate(Vector3.Up, speed);, everything seems OK in that direction. So I guess: can someone tell me what I'm not accounting for in the above code, or point me to an example of an orbiting camera that can be fixed on an arbitrary point?
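
    One common way to avoid that flip (a hedged sketch in plain C++ rather than XNA types): track yaw and pitch explicitly, clamp pitch short of the poles, and rebuild the eye position from spherical coordinates every frame instead of repeatedly rotating the previous position about a fixed world axis. The flip happens at the pole, where the fixed up vector becomes parallel to the view direction.

        #include <algorithm>
        #include <cmath>
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        // Rebuild the orbit position from yaw/pitch each frame; clamping
        // pitch keeps the camera from crossing the pole.
        Vec3 orbitPosition(Vec3 target, float radius, float yaw, float pitch) {
            const float limit = 1.5f; // just under pi/2
            pitch = std::clamp(pitch, -limit, limit);
            return { target.x + radius * std::cos(pitch) * std::sin(yaw),
                     target.y + radius * std::sin(pitch),
                     target.z + radius * std::cos(pitch) * std::cos(yaw) };
        }

        int main() {
            Vec3 target{0, 0, 0};
            float yaw = 0, pitch = 0;
            for (int frame = 0; frame < 3; ++frame) {
                pitch += 0.05f; // same effect as Rotate(Vector3.Right, speed)
                Vec3 p = orbitPosition(target, 28.28f, yaw, pitch);
                std::printf("eye (%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
            }
        }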


  • Cannot convert parameter 1 from 'short *' to 'int *' [closed]

    - by Torben Carrington
    I'm trying to learn pointers, and since I recently learned that short int takes up less memory (2 bytes, as opposed to the long int's usage of 4, which is the default for int), I wanted to create a pointer that uses the memory address of a short integer. I'm following a tutorial in my book about pointers, and it's using a Swap function. The problem is that I receive this error the moment I change everything from int to short int:

        error C2664: 'Swap' : cannot convert parameter 1 from 'short *' to 'int *'
                Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

    Since my code is so small, here is the whole thing:

        #include <iostream>

        void Swap(short int *sipX, short int *sipY)
        {
            short int siTemp = *sipX;
            *sipX = *sipY;
            *sipY = siTemp;
        }

        int main()
        {
            short int siBig = 100;
            short int siSmall = 1;
            std::cout << "Pre-Swap: " << siBig << " " << siSmall << std::endl;
            Swap(&siBig, &siSmall);
            std::cout << "Post-Swap: " << siBig << " " << siSmall << std::endl;
            return 0;
        }


  • Transform between two 3d cartesian coordinate systems

    - by Pris
    I'd like to know how to get the rotation matrix for the transformation from one Cartesian coordinate system (X, Y, Z) to another one (X', Y', Z'). Both systems are defined by three orthogonal vectors, as one would expect. No scaling or translation occurs. I'm using OpenSceneGraph, and it offers a Matrix convenience class, if it makes finding the matrix easier: http://www.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a00403.html
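
    The standard construction, sketched below in plain C++ (not OSG's Matrix API): if both sets of unit axes are known in a common frame, the rotation taking coordinates in (X, Y, Z) to coordinates in (X', Y', Z') has entries M[i][j] = dot(newAxis_i, oldAxis_j).

        #include <cstdio>

        struct Vec3 { double x, y, z; };

        double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Given the unit axes of both systems expressed in a common frame,
        // build the rotation from old-system coords to new-system coords.
        void rotationBetween(const Vec3 oldAxes[3], const Vec3 newAxes[3],
                             double M[3][3]) {
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    M[i][j] = dot(newAxes[i], oldAxes[j]);
        }

        int main() {
            Vec3 oldAxes[3] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
            // Example: the primed system is the old one rotated 90 deg about Z.
            Vec3 newAxes[3] = {{0, 1, 0}, {-1, 0, 0}, {0, 0, 1}};
            double M[3][3];
            rotationBetween(oldAxes, newAxes, M);
            for (int i = 0; i < 3; ++i)
                std::printf("%5.1f %5.1f %5.1f\n", M[i][0], M[i][1], M[i][2]);
        }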


  • What kind of steering behaviour or logic can I use to get mobiles to surround another?

    - by Vaughan Hilts
    I'm using pathfinding in my game to lead a mob to another player (to pursue them). This works to get them on top of the player, but I want them to stop slightly before their destination (so picking the penultimate node works fine). However, when multiple mobs are pursuing the same target they sometimes "stack on top of each other". What's the best way to avoid this? I don't want to treat the mobs as opaque and blocked (because they're not; you can walk through them), but I want the mobs to have some sense of structure.

    Example: imagine that each snake guided itself to me and should surround "Setsuna". Notice how both snakes have chosen to prong me? This is not a strict requirement; even being slightly offset is okay. But they should "surround" Setsuna.
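
    One common approach, sketched below (illustrative C++; slot-assignment policies vary by game): give each pursuer its own slot on a circle around the target and path to the slot rather than to the target itself, so pursuers spread out instead of stacking.

        #include <cmath>
        #include <cstdio>

        struct Vec2 { float x, y; };

        // Assigns pursuer 'index' (of 'count') a point on a circle of
        // 'radius' around the target.
        Vec2 surroundSlot(Vec2 target, float radius, int index, int count) {
            float angle = 2.0f * 3.14159265f * index / count;
            return { target.x + radius * std::cos(angle),
                     target.y + radius * std::sin(angle) };
        }

        int main() {
            Vec2 setsuna{10.0f, 10.0f}; // the target being surrounded
            for (int i = 0; i < 2; ++i) {
                Vec2 slot = surroundSlot(setsuna, 1.5f, i, 2);
                std::printf("snake %d paths to (%.2f, %.2f)\n", i, slot.x, slot.y);
            }
        }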


  • Drawing a textured triangle with CPU instead of GPU

    - by Jenko
    I understand the benefits of GPU rendering and such, but for a certain limited application I need to render textured triangles purely on the CPU. I've built a 3D engine capable of object handling, transformation, projection, culling and the like... now all I need is a little code snippet that draws a single textured triangle onto a bitmap. Any language accepted!

        Inputs: texture bitmap, triangle U/V/W coords, triangle X/Y screen coords
        Output: the textured triangle drawn at the given screen coords

    I've currently been using a platform function to draw triangles to the screen, but I'm looking to handle it myself to speed up the process.
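
    A minimal sketch of one standard technique: a bounding-box rasterizer using barycentric coordinates, with affine (not perspective-correct) texture mapping and no sub-pixel precision. Grayscale buffers keep the example short.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct Tri { float x[3], y[3], u[3], v[3]; }; // screen + texture coords

        // Signed twice-area of triangle (a,b,c); its sign says which side c is on.
        static float edge(float ax, float ay, float bx, float by, float cx, float cy) {
            return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        }

        // Rasterizes 't' into an 8-bit grayscale framebuffer, sampling an
        // 8-bit grayscale texture with nearest-neighbor lookup.
        void drawTexturedTriangle(std::vector<unsigned char>& fb, int fbW, int fbH,
                                  const std::vector<unsigned char>& tex,
                                  int texW, int texH, const Tri& t) {
            float area = edge(t.x[0], t.y[0], t.x[1], t.y[1], t.x[2], t.y[2]);
            if (area == 0.0f) return; // degenerate triangle

            // Bounding box of the triangle, clipped to the framebuffer.
            int minX = std::max(0,       (int)std::min({t.x[0], t.x[1], t.x[2]}));
            int maxX = std::min(fbW - 1, (int)std::max({t.x[0], t.x[1], t.x[2]}));
            int minY = std::max(0,       (int)std::min({t.y[0], t.y[1], t.y[2]}));
            int maxY = std::min(fbH - 1, (int)std::max({t.y[0], t.y[1], t.y[2]}));

            for (int py = minY; py <= maxY; ++py) {
                for (int px = minX; px <= maxX; ++px) {
                    // Barycentric weights of the pixel center.
                    float w0 = edge(t.x[1], t.y[1], t.x[2], t.y[2], px + 0.5f, py + 0.5f) / area;
                    float w1 = edge(t.x[2], t.y[2], t.x[0], t.y[0], px + 0.5f, py + 0.5f) / area;
                    float w2 = 1.0f - w0 - w1;
                    if (w0 < 0 || w1 < 0 || w2 < 0) continue; // outside the triangle

                    // Interpolate texture coordinates and sample the texture.
                    float u = w0 * t.u[0] + w1 * t.u[1] + w2 * t.u[2];
                    float v = w0 * t.v[0] + w1 * t.v[1] + w2 * t.v[2];
                    int tx = std::clamp((int)(u * texW), 0, texW - 1);
                    int ty = std::clamp((int)(v * texH), 0, texH - 1);
                    fb[py * fbW + px] = tex[ty * texW + tx];
                }
            }
        }

        int main() {
            std::vector<unsigned char> fb(64 * 64, 0), tex(8 * 8);
            for (int i = 0; i < 64; ++i) tex[i] = (unsigned char)(i * 4); // gradient
            Tri t{{5, 60, 30}, {5, 10, 60}, {0, 1, 0.5f}, {0, 0, 1}};
            drawTexturedTriangle(fb, 64, 64, tex, 8, 8, t);
            std::printf("pixel (32,32) value: %d\n", fb[32 * 64 + 32]);
        }

    For a 3D engine this would usually be extended with perspective-correct interpolation (interpolate u/w, v/w, 1/w and divide per pixel), but the structure stays the same.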


  • Prototype experience: Unity3D vs UDK

    - by LukeN
    Has anyone yet prototyped a game in both Unity3D and UDK? If so, which features made prototyping the game easier or more difficult in each toolkit? Was one prototype demonstrably better than the other (given the same starting assets)? I'm looking for specific answers with regard to using the toolkit features, not a comparison of available features, e.g. "destructible terrain is easier in toolkit X for reasons Y and Z". I can code, so the limitations of the built-in scripting languages are not a problem.


  • Wait till all CCActions have completed

    - by tGilani
    I am developing a simple cocos2d game in which I want to animate two CCSprites simultaneously, and for this purpose I simply run CCActions on the respective CCSprites as follows:

        [first runAction:[CCMoveTo actionWithDuration:1 position:secondPosition]];
        [second runAction:[CCMoveTo actionWithDuration:1 position:firstPosition]];

    Now I want to wait till the animations are complete, so I can perform the next step. How should I wait for these animations to finish? There are actually two method calls: the first one animates the objects via the code above, and the second call does the other animation. I need to delay the second method call until the animations in the first are complete. (I would not like to use CCCallFunc blocks, as I want to call the second method from the same caller as the first one.)


  • How to fix issue with my 3D first person camera?

    - by dxCUDA
    My camera moves and rotates, but relative to the world's origin instead of the player's. I am having difficulty rotating the camera and then translating it in the direction relative to the camera's facing angle. I have been able to translate the camera and rotate relative to the player's origin, but not then rotate and translate in the direction relative to the camera's facing direction. My goal is to have a standard FPS-style camera.

        float yaw, pitch, roll;
        D3DXMATRIX rotationMatrix;
        D3DXVECTOR3 Direction;
        D3DXMATRIX matRotAxis, matRotZ;
        D3DXVECTOR3 RotAxis;

        // Set the yaw (Y axis), pitch (X axis), and roll (Z axis) rotations in radians.
        pitch = m_rotationX * 0.0174532925f;
        yaw = m_rotationY * 0.0174532925f;
        roll = m_rotationZ * 0.0174532925f;

        up = D3DXVECTOR3(0.0f, 1.0f, 0.0f); // Create the up vector

        // Build eye, lookat and rotation vectors from player input data
        eye = D3DXVECTOR3(m_fCameraX, m_fCameraY, m_fCameraZ);
        lookat = D3DXVECTOR3(m_fLookatX, m_fLookatY, m_fLookatZ);
        rotation = D3DXVECTOR3(m_rotationX, m_rotationY, m_rotationZ);

        D3DXVECTOR3 camera[3] = { eye,      // Eye
                                  lookat,   // LookAt
                                  up };     // Up

        RotAxis.x = pitch;
        RotAxis.y = yaw;
        RotAxis.z = roll;

        D3DXVec3Normalize(&Direction, &(camera[1] - camera[0])); // Direction vector
        D3DXVec3Cross(&RotAxis, &Direction, &camera[2]);         // Strafe vector
        D3DXVec3Normalize(&RotAxis, &RotAxis);

        // Create the rotation matrix from the yaw, pitch, and roll values.
        D3DXMatrixRotationYawPitchRoll(&matRotAxis, pitch, yaw, roll);

        // Rotate direction
        D3DXVec3TransformCoord(&Direction, &Direction, &matRotAxis);
        // Transform up vector
        D3DXVec3TransformCoord(&camera[2], &camera[2], &matRotAxis);
        // Translate in the direction of player rotation
        D3DXVec3TransformCoord(&camera[0], &camera[0], &matRotAxis);

        camera[1] = Direction + camera[0]; // Avoid gimbal locking

        D3DXMatrixLookAtLH(&in_viewMatrix, &camera[0], &camera[1], &camera[2]);
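
    For comparison, a hedged sketch of the usual FPS-camera structure in plain C++ (no D3DX types, left-handed y-up convention assumed): derive the forward vector from yaw/pitch each frame, aim the look-at target one unit along it, and move the position along that forward (and its strafe vector) instead of transforming the position by the rotation matrix.

        #include <cmath>
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

        struct FpsCamera {
            Vec3 position{0, 0, 0};
            float yaw = 0, pitch = 0; // radians

            // Forward direction derived from the angles each frame.
            Vec3 forward() const {
                return { std::sin(yaw) * std::cos(pitch),
                         std::sin(pitch),
                         std::cos(yaw) * std::cos(pitch) };
            }
            // Strafe direction (up x forward), flattened onto the ground plane.
            Vec3 right() const { return { std::cos(yaw), 0, -std::sin(yaw) }; }

            // Movement is applied along the camera's own axes, which is what
            // makes it "relative to the player" rather than the world origin.
            void move(float ahead, float sideways) {
                position = add(position, scale(forward(), ahead));
                position = add(position, scale(right(), sideways));
            }
            // The look-at target is always one unit ahead of the eye; feed
            // position/target/up into the API's LookAt (e.g. D3DXMatrixLookAtLH).
            Vec3 lookAtTarget() const { return add(position, forward()); }
        };

        int main() {
            FpsCamera cam;
            cam.yaw = 0.5f;       // turn first...
            cam.move(1.0f, 0.0f); // ...then walk in the direction we now face
            Vec3 t = cam.lookAtTarget();
            std::printf("eye (%.2f, %.2f, %.2f) target (%.2f, %.2f, %.2f)\n",
                        cam.position.x, cam.position.y, cam.position.z, t.x, t.y, t.z);
        }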


  • Rotate view matrix based on touch coordinates

    - by user1055947
    I'm working on an Android game where I need to rotate the camera around the origin based on the user dragging their finger. My view matrix has an initial position sitting on the negative z axis and facing the origin. I have succeeded in moving the camera through rotation left/right and up/down based on the user dragging a finger. My problem is that after I drag my finger up/down and rotate, say, 90 degrees, so that my initial position of -z is now +y and still facing the origin, dragging my finger left/right should rotate from +y towards +x, but instead it rotates around the +y pole. This is to be expected, as I am mapping 2D touch drag coords to 3D space, but I don't know where to start trying to do what I want. Perhaps someone can point me in the right direction; I've been googling for a while now, but I didn't know what I wanted to do is called!

    Edit: What I was looking for is called an arcball; google it for lots of info on it.
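
    For reference, the core of the arcball idea (an illustrative C++ sketch, not Android-specific): project each 2D touch point onto a virtual unit sphere, then rotate around the cross product of successive sphere points by the angle between them.

        #include <cmath>
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        // Map a screen point to the arcball's unit sphere. Coordinates are
        // first normalized so the sphere fills the viewport ([-1,1] each axis).
        Vec3 toSphere(float px, float py, float width, float height) {
            float x = 2.0f * px / width - 1.0f;
            float y = 1.0f - 2.0f * py / height; // flip: screen y grows downward
            float d = x * x + y * y;
            if (d > 1.0f) { // outside the ball: clamp to its silhouette edge
                float s = 1.0f / std::sqrt(d);
                return { x * s, y * s, 0.0f };
            }
            return { x, y, std::sqrt(1.0f - d) }; // on the sphere's front half
        }

        int main() {
            // Drag from the screen center toward the right edge (800x480 view).
            Vec3 a = toSphere(400, 240, 800, 480);
            Vec3 b = toSphere(600, 240, 800, 480);
            // Rotation axis = cross(a, b); angle = acos(dot(a, b)).
            Vec3 axis{ a.y * b.z - a.z * b.y,
                       a.z * b.x - a.x * b.z,
                       a.x * b.y - a.y * b.x };
            float angle = std::acos(a.x * b.x + a.y * b.y + a.z * b.z);
            std::printf("axis (%.2f, %.2f, %.2f), angle %.2f rad\n",
                        axis.x, axis.y, axis.z, angle);
        }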


  • 2D graphics - why use spritesheets?

    - by Columbo
    I have seen many examples of how to render sprites from a spritesheet, but I haven't grasped why it is the most common way of dealing with sprites in 2D games. I have started out with 2D sprite rendering in the few demo applications I've made by dealing with each animation frame for any given sprite type as its own texture, and this collection of textures is stored in a dictionary. This seems to work for me and suits my workflow pretty well, as I tend to make my animations as gif/mng files and then extract the frames to individual pngs.

    Is there a noticeable performance advantage to rendering from a single sheet rather than from individual textures? With modern hardware that is capable of drawing millions of polygons to the screen a hundred times a second, does it even matter for my 2D games, which just deal with a few dozen 50x100px rectangles?

    The implementation details of loading a texture into graphics memory and displaying it in XNA seem pretty abstracted. All I know is that textures are bound to the graphics device when they are loaded, and then during the game loop the textures get rendered in batches. So it's not clear to me whether my choice affects performance. I suspect that there are some very good reasons most 2D game developers seem to be using them; I just don't understand why.


  • Transform 3D vectors between coordinate systems

    - by Nir Cig
    I've got 6 points in 3D space: A, B, C, D, E, F, which define 4 vectors. AB is perpendicular to AC, and DE is perpendicular to DF. I need to find a transformation matrix M that transforms AB to DE and AC to DF. In other words:

        M·AB = DE,  M·AC = DF

    If no scaling were involved, this could be solved with a simple rotation matrix. But since the ratios |AB|/|DE| and |AC|/|DF| might be different, I'm not sure how to proceed.
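
    A hedged sketch of one direct construction: complete each pair into a basis with the cross product, so M = [DE DF N2]·[AB AC N1]^-1. Because AB is perpendicular to AC, the columns of the source matrix are orthogonal and its inverse is just diag(1/|col|^2) times its transpose. The scale along the third axis is a free choice; using N2 = DE×DF keeps handedness consistent.

        #include <cstdio>

        struct Vec3 { double x, y, z; };

        Vec3 cross(Vec3 a, Vec3 b) {
            return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
        }
        double len2(Vec3 a) { return a.x * a.x + a.y * a.y + a.z * a.z; }
        double comp(Vec3 v, int i) { return i == 0 ? v.x : (i == 1 ? v.y : v.z); }

        // Builds M with M*ab = de and M*ac = df. Since ab, ac and ab x ac are
        // mutually orthogonal, Src^-1 = diag(1/|col|^2) * Src^T, so
        // M[i][j] = sum_k dst_k[i] * src_k[j] / |src_k|^2.
        void solve(Vec3 ab, Vec3 ac, Vec3 de, Vec3 df, double M[3][3]) {
            Vec3 src[3] = { ab, ac, cross(ab, ac) };
            Vec3 dst[3] = { de, df, cross(de, df) };
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j) {
                    M[i][j] = 0.0;
                    for (int k = 0; k < 3; ++k)
                        M[i][j] += comp(dst[k], i) * comp(src[k], j) / len2(src[k]);
                }
        }

        int main() {
            // Example: AB, AC along x and y; DE, DF along y and -x, twice as long.
            Vec3 ab{1, 0, 0}, ac{0, 1, 0}, de{0, 2, 0}, df{-2, 0, 0};
            double M[3][3];
            solve(ab, ac, de, df, M);
            for (int i = 0; i < 3; ++i)
                std::printf("%5.2f %5.2f %5.2f\n", M[i][0], M[i][1], M[i][2]);
        }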


  • GLSL Bokeh using Quads and Textures

    - by Notoriousaur
    I'm trying to create a depth of field effect with bokeh sprites in GLSL. Specifically, what I would like to do is, for each pixel:

    1. See if the pixel is out of the focal range.
    2. If it is, draw a quad and apply a texture to provide a bokeh sprite.

    This kind of implementation is seen in the Unreal Engine and in Matt Pettineo's work; however, both implementations are in DX11 and I'm using OpenGL. I'm a bit stuck on the drawing-a-quad-and-applying-a-texture part. Does anyone know how I can do this, or can you provide any relevant links as to how I can do this? Thanks

