Search Results

Search found 359 results on 15 pages for 'matrices'.

Page 8/15

  • Sparse quadratic program solver

    - by Jacob
    This great SO answer points to a good sparse solver, but I've got constraints on x (for Ax = b) such that each element in x is >= 0 and <= N. The first thing that comes to mind is a QP solver for large sparse matrices. Also, A is huge (around 2e6 x 2e6) but very sparse, with <= 4 elements per row. Any ideas/recommendations?
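
    A minimal sketch of one way to pose this, assuming SciPy is available: scipy.optimize.lsq_linear solves the box-constrained least-squares form of this QP, min ||Ax - b||^2 with 0 <= x <= N, and its 'lsmr' solver keeps A sparse throughout. The sizes below are a small stand-in for the real 2e6 x 2e6 system.

        import numpy as np
        from scipy import sparse
        from scipy.optimize import lsq_linear

        # Small stand-in for the real system: ~4 nonzeros per row.
        A = sparse.random(1000, 1000, density=4 / 1000.0, format='csr')
        b = np.random.rand(1000)
        N = 5.0

        # Minimize ||Ax - b||^2 subject to 0 <= x <= N; lsmr never densifies A.
        result = lsq_linear(A, b, bounds=(0, N), lsq_solver='lsmr')
        print(result.x.min(), result.x.max())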

    Read the article

  • dynamic memory allocation problem

    - by wantobegeek
    I am working on a program that requires me to use four matrices sized [1000][1000]. I have created them using malloc(), but when I try running the program it just crashes and the memory usage shoots up to 2.5 GB. Please suggest a solution as soon as possible. I would be grateful.

    Read the article

  • Using fft2 with reshaping for an RGB filter

    - by Mahmoud Aladdin
    I want to apply a filter to an image, for example, the blurring filter [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]. Also, I'd like to use the approach that convolution in the spatial domain is equivalent to multiplication in the frequency domain. So my algorithm will be: load the image; create the filter; convert both filter and image to the frequency domain; multiply both; convert the output back to the spatial domain, which should give the required output. The following is the basic code I use. The image is loaded and displayed as a cv.cvmat object. Image is a class of my creation; it has a member image which is a scipy.matrix object, toFrequencyDomain(size = None) uses spf.fftshift(spf.fft2(self.image, size)) where spf is scipy.fftpack, and dotMultiply(img) uses scipy.multiply(self.image, image).

        f = Image.fromMatrix([[1/9.0, 1/9.0, 1/9.0],
                              [1/9.0, 1/9.0, 1/9.0],
                              [1/9.0, 1/9.0, 1/9.0]])
        lena = Image.fromFile("Test/images/lena.jpg")
        print lena.image.shape
        lenaf = lena.toFrequencyDomain(lena.image.shape)
        ff = f.toFrequencyDomain(lena.image.shape)
        lenafm = lenaf.dotMultiplyImage(ff)
        lenaff = lenafm.toTimeDomain()
        lena.display()
        lenaff.display()

    The previous code works pretty well if I tell OpenCV to load the image as GRAY_SCALE. However, if I let the image be loaded in color, lena.image.shape will be (512, 512, 3), so scipy.fftpack.fft2 gives me the error "When given, Shape and Axes should be of same length". What I tried next was converting my filter to 3-D, as [[[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]], [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]], [[1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0], [1/9.0, 1/9.0, 1/9.0]]]. And, not knowing what the axes argument does, I tried it with various values such as (-2, -1, -1), (-1, -1, -2), etc., until it gave me the correct filter output shape for the dotMultiply to work. But of course it wasn't the correct value; things were totally worse. My final trial was using the fft2 function on each of the three 2-D component matrices, and then re-making the 3-D one, using the following code:

        # Splitting the 3-D matrix into three 2-D matrices.
        for i, row in enumerate(self.image):
            r.append(list())
            g.append(list())
            b.append(list())
            for pixel in row:
                r[i].append(pixel[0])
                g[i].append(pixel[1])
                b[i].append(pixel[2])
        rfft = spf.fftshift(spf.fft2(r, size))
        gfft = spf.fftshift(spf.fft2(g, size))
        bfft = spf.fftshift(spf.fft2(b, size))
        newImage.image = sp.asarray([[[rfft[i][j], gfft[i][j], bfft[i][j]]
                                      for j in xrange(len(rfft[i]))]
                                     for i in xrange(len(rfft))])
        return newImage

    Any help on what I did wrong, or how I can achieve this for both grayscale and colored pictures?
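
    For what it's worth, a minimal sketch of the per-channel idea without the manual splitting loops, assuming NumPy: numpy.fft.fft2 takes an axes argument, so restricting the transform to the two spatial axes of an (H, W, 3) array transforms each colour channel independently (the image here is random stand-in data):

        import numpy as np

        img = np.random.rand(512, 512, 3)   # hypothetical colour image
        kern = np.full((3, 3), 1 / 9.0)     # blurring filter

        # Transform only the spatial axes; each RGB channel is done independently.
        img_f = np.fft.fft2(img, axes=(0, 1))
        kern_f = np.fft.fft2(kern, s=img.shape[:2])  # zero-pad filter to image size

        # Multiply every channel by the same 2-D filter spectrum.
        out_f = img_f * kern_f[:, :, np.newaxis]
        out = np.real(np.fft.ifft2(out_f, axes=(0, 1)))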

    Read the article

  • Sparse linear program solver

    - by Jacob
    This great SO answer points to a good sparse solver, but I've got constraints on x (for Ax = b) such that each element in x is >= 0 and <= N. The first thing that comes to mind is an LP solver for large sparse matrices. Any ideas/recommendations?
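
    A minimal sketch of one way to pose this, assuming a SciPy recent enough that linprog's 'highs' methods accept sparse matrices: with no preference among feasible points, a zero objective turns linprog into a pure feasibility solve of Ax = b with 0 <= x <= N (sizes below are a small stand-in):

        import numpy as np
        from scipy import sparse
        from scipy.optimize import linprog

        # Small stand-in for the real sparse system Ax = b.
        A = sparse.random(500, 500, density=4 / 500.0, format='csc')
        b = np.random.rand(500)
        N = 10.0

        # Any feasible point will do, so minimize the zero objective.
        res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b,
                      bounds=(0, N), method='highs')
        print(res.status, res.message)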

    Read the article

  • determinant of matrix

    - by davit-datuashvili
    Suppose there is a given two-dimensional array: int a[][] = new int[4][4]; I am trying to find the determinant of matrices. Please help; I know how to find it mathematically, but I am trying to find it programmatically.
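
    As a language-neutral illustration (the asker's snippet is Java, but the algorithm ports directly), a recursive Laplace/cofactor expansion in Python; fine for a 4x4, though it is O(n!), and LU decomposition is the practical choice for larger matrices:

        def det(m):
            """Determinant by cofactor expansion along the first row."""
            n = len(m)
            if n == 1:
                return m[0][0]
            total = 0
            for col in range(n):
                # Minor: drop row 0 and the current column.
                minor = [row[:col] + row[col + 1:] for row in m[1:]]
                sign = -1 if col % 2 else 1
                total += sign * m[0][col] * det(minor)
            return total

        print(det([[4, 3, 2, 1],
                   [0, 2, 1, 3],
                   [1, 0, 5, 2],
                   [3, 1, 0, 4]]))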

    Read the article

  • Why use a one-dimensional array instead of a two-dimensional array?

    - by user3869145
    I was doing some work handling a lot of information, and my partner told me that I was using too many matrices to manipulate the variables of the problem. The idea was to use one-dimensional arrays (int a[]) instead of two-dimensional arrays (int b[][]), to save memory and improve the processing speed of the algorithm. How certain is it that this change will speed up the execution or compilation of my code in C++?
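
    For reference, the usual trick is to map the 2-D index (i, j) onto a flat buffer as i * cols + j (row-major order); a quick illustration of the equivalence in Python, with the same arithmetic applying in C++:

        rows, cols = 3, 4
        two_d = [[10 * i + j for j in range(cols)] for i in range(rows)]

        # Row-major flattening: element (i, j) lives at index i * cols + j.
        flat = [two_d[i][j] for i in range(rows) for j in range(cols)]

        i, j = 2, 3
        assert flat[i * cols + j] == two_d[i][j]

    To the extent the speed argument holds, it comes from the single contiguous allocation: a pointer-to-pointer 2-D array costs an extra indirection per access and scatters rows across the heap, while a flat buffer is cache-friendly.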

    Read the article

  • XNA: Rotating Bones

    - by MLM
    XNA 4.0. I am trying to learn how to rotate bones on a very simple tank model I made in Cinema 4D. It is rigged by 3 bones: Root - Main - Turret - Barrel. I have bound all of the objects to the bones so that all translations/rotations work as planned in C4D, and I exported it as .fbx. I based my test project on: http://create.msdn.com/en-US/education/catalog/sample/simple_animation I can build successfully with no errors, but all the rotations I try to apply to my bones have no effect. I can transform my Root successfully using the code below, but the bone transforms have no effect:

        myModel.Root.Transform = world;
        Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37));
        Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue);
        MainBone.Transform = MainTransform;
        TurretBone.Transform = turretRotation * TurretTransform;
        BarrelBone.Transform = barrelRotation * BarrelTransform;

    I am wondering if my model is just not right or if I am missing something important in the code. Here is my Game1.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Audio;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.GamerServices;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Input;
        using Microsoft.Xna.Framework.Media;

        namespace ModelTesting
        {
            /// <summary>
            /// This is the main type for your game
            /// </summary>
            public class Game1 : Microsoft.Xna.Framework.Game
            {
                GraphicsDeviceManager graphics;
                SpriteBatch spriteBatch;
                float aspectRatio;
                Tank myModel;

                public Game1()
                {
                    graphics = new GraphicsDeviceManager(this);
                    Content.RootDirectory = "Content";
                }

                /// <summary>
                /// Allows the game to perform any initialization it needs to before starting to run.
                /// This is where it can query for any required services and load any non-graphic
                /// related content. Calling base.Initialize will enumerate through any components
                /// and initialize them as well.
                /// </summary>
                protected override void Initialize()
                {
                    // TODO: Add your initialization logic here
                    myModel = new Tank();
                    base.Initialize();
                }

                /// <summary>
                /// LoadContent will be called once per game and is the place to load
                /// all of your content.
                /// </summary>
                protected override void LoadContent()
                {
                    // Create a new SpriteBatch, which can be used to draw textures.
                    spriteBatch = new SpriteBatch(GraphicsDevice);

                    // TODO: use this.Content to load your game content here
                    myModel.Load(Content);
                    aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio;
                }

                /// <summary>
                /// UnloadContent will be called once per game and is the place to unload
                /// all content.
                /// </summary>
                protected override void UnloadContent()
                {
                    // TODO: Unload any non ContentManager content here
                }

                /// <summary>
                /// Allows the game to run logic such as updating the world,
                /// checking for collisions, gathering input, and playing audio.
                /// </summary>
                /// <param name="gameTime">Provides a snapshot of timing values.</param>
                protected override void Update(GameTime gameTime)
                {
                    // Allows the game to exit
                    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                        this.Exit();

                    // TODO: Add your update logic here
                    float time = (float)gameTime.TotalGameTime.TotalSeconds;

                    // Move the pieces
                    /*
                    myModel.TurretRotation = (float)Math.Sin(time * 0.333f) * 1.25f;
                    myModel.BarrelRotation = (float)Math.Sin(time * 0.25f) * 0.333f - 0.333f;
                    */

                    base.Update(gameTime);
                }

                /// <summary>
                /// This is called when the game should draw itself.
                /// </summary>
                /// <param name="gameTime">Provides a snapshot of timing values.</param>
                protected override void Draw(GameTime gameTime)
                {
                    GraphicsDevice.Clear(Color.CornflowerBlue);

                    // Calculate the camera matrices.
                    float time = (float)gameTime.TotalGameTime.TotalSeconds;
                    Matrix rotation = Matrix.CreateRotationY(MathHelper.ToRadians(45));
                    Matrix view = Matrix.CreateLookAt(new Vector3(2000, 500, 0),
                                                      new Vector3(0, 150, 0), Vector3.Up);
                    Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
                        graphics.GraphicsDevice.Viewport.AspectRatio, 10, 10000);

                    // TODO: Add your drawing code here
                    myModel.Draw(rotation, view, projection);

                    base.Draw(gameTime);
                }
            }
        }

    And here is my tank class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Audio;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.GamerServices;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Input;
        using Microsoft.Xna.Framework.Media;

        namespace ModelTesting
        {
            public class Tank
            {
                Model myModel;

                // Array holding all the bone transform matrices for the entire model.
                // We could just allocate this locally inside the Draw method, but it
                // is more efficient to reuse a single array, as this avoids creating
                // unnecessary garbage.
                public Matrix[] boneTransforms;

                // Shortcut references to the bones that we are going to animate.
                // We could just look these up inside the Draw method, but it is more
                // efficient to do the lookups while loading and cache the results.
                ModelBone MainBone;
                ModelBone TurretBone;
                ModelBone BarrelBone;

                // Store the original transform matrix for each animating bone.
                Matrix MainTransform;
                Matrix TurretTransform;
                Matrix BarrelTransform;

                // Current animation positions.
                float turretRotationValue;
                float barrelRotationValue;

                /// <summary>
                /// Gets or sets the turret rotation amount.
                /// </summary>
                public float TurretRotation
                {
                    get { return turretRotationValue; }
                    set { turretRotationValue = value; }
                }

                /// <summary>
                /// Gets or sets the barrel rotation amount.
                /// </summary>
                public float BarrelRotation
                {
                    get { return barrelRotationValue; }
                    set { barrelRotationValue = value; }
                }

                /// <summary>
                /// Load the model
                /// </summary>
                public void Load(ContentManager Content)
                {
                    // TODO: use this.Content to load your game content here
                    myModel = Content.Load<Model>("Models\\simple_tank02");

                    MainBone = myModel.Bones["Main"];
                    TurretBone = myModel.Bones["Turret"];
                    BarrelBone = myModel.Bones["Barrel"];

                    MainTransform = MainBone.Transform;
                    TurretTransform = TurretBone.Transform;
                    BarrelTransform = BarrelBone.Transform;

                    // Allocate the transform matrix array.
                    boneTransforms = new Matrix[myModel.Bones.Count];
                }

                public void Draw(Matrix world, Matrix view, Matrix projection)
                {
                    myModel.Root.Transform = world;

                    Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37));
                    Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue);

                    MainBone.Transform = MainTransform;
                    TurretBone.Transform = turretRotation * TurretTransform;
                    BarrelBone.Transform = barrelRotation * BarrelTransform;

                    myModel.CopyAbsoluteBoneTransformsTo(boneTransforms);

                    // Draw the model; a model can have multiple meshes, so loop.
                    foreach (ModelMesh mesh in myModel.Meshes)
                    {
                        // This is where the mesh orientation is set.
                        foreach (BasicEffect effect in mesh.Effects)
                        {
                            effect.World = boneTransforms[mesh.ParentBone.Index];
                            effect.View = view;
                            effect.Projection = projection;
                            effect.EnableDefaultLighting();
                        }

                        // Draw the mesh, using the effects set above.
                        mesh.Draw();
                    }
                }
            }
        }

    Read the article

  • projection / view matrix: the object is bigger than it should be and depth does not affect vertices

    - by Francesco Noferi
    I'm currently trying to write a 3D software rendering engine in C from scratch, just for fun and to get an insight into what OpenGL does behind the scenes and what 90's programmers had to do on DOS. I have written my own matrix library and tested it without noticing any issues, but when I tried projecting the vertices of a simple 2x2 cube at (0,0,0), as seen by a basic camera at (0,0,10), the cube appears way bigger than the application's window. If I scale the vertices' coordinates down by 8 times I can see a proper cube centered on the screen. This cube doesn't seem to be in perspective: when seen from the front, the back vertices perfectly overlap with the front ones, so I'm quite sure it's not correct. This is how I create the view and projection matrices (vec4_initd initializes the vectors with w=0, vec4_initw initializes the vectors with w=1):

        void mat4_lookatlh(mat4 *m, const vec4 *pos, const vec4 *target, const vec4 *updirection)
        {
            vec4 fwd, right, up;

            // fwd = norm(pos - target)
            fwd = *target;
            vec4_sub(&fwd, pos);
            vec4_norm(&fwd);

            // right = norm(cross(updirection, fwd))
            vec4_cross(updirection, &fwd, &right);
            vec4_norm(&right);

            // up = cross(right, forward)
            vec4_cross(&fwd, &right, &up);

            // orientation and translation matrices combined
            vec4_initd(&m->a, right.x, up.x, fwd.x);
            vec4_initd(&m->b, right.y, up.y, fwd.y);
            vec4_initd(&m->c, right.z, up.z, fwd.z);
            vec4_initw(&m->d, -vec4_dot(&right, pos), -vec4_dot(&up, pos), -vec4_dot(&fwd, pos));
        }

        void mat4_perspectivefovrh(mat4 *m, float fovdegrees, float aspectratio, float near, float far)
        {
            float h = 1.f / tanf(ftoradians(fovdegrees / 2.f));
            float w = h / aspectratio;

            vec4_initd(&m->a, w, 0.f, 0.f);
            vec4_initd(&m->b, 0.f, h, 0.f);
            vec4_initw(&m->c, 0.f, 0.f, -far / (near - far));
            vec4_initd(&m->d, 0.f, 0.f, (near * far) / (near - far));
        }

    This is how I project my vertices:

        void device_project(device *d, const vec4 *coord, const mat4 *transform, int *projx, int *projy)
        {
            vec4 result;
            mat4_mul(transform, coord, &result);
            *projx = result.x * d->w + d->w / 2;
            *projy = result.y * d->h + d->h / 2;
        }

        void device_rendervertices(device *d, const camera *camera, const mesh meshes[], int nmeshes, const rgba *color)
        {
            int i, j;
            mat4 view, projection, world, transform, projview;
            mat4 translation, rotx, roty, rotz, transrotz, transrotzy;
            int projx, projy;

            // vec4_unity = (0.f, 1.f, 0.f, 0.f)
            mat4_lookatlh(&view, &camera->pos, &camera->target, &vec4_unity);
            mat4_perspectivefovrh(&projection, 45.f, (float)d->w / (float)d->h, 0.1f, 1.f);

            for (i = 0; i < nmeshes; i++) {
                // world matrix = translation * rotz * roty * rotx
                mat4_translatev(&translation, meshes[i].pos);
                mat4_rotatex(&rotx, ftoradians(meshes[i].rotx));
                mat4_rotatey(&roty, ftoradians(meshes[i].roty));
                mat4_rotatez(&rotz, ftoradians(meshes[i].rotz));
                mat4_mulm(&translation, &rotz, &transrotz); // transrotz = translation * rotz
                mat4_mulm(&transrotz, &roty, &transrotzy);  // transrotzy = transrotz * roty = translation * rotz * roty
                mat4_mulm(&transrotzy, &rotx, &world);      // world = transrotzy * rotx = translation * rotz * roty * rotx

                // transform matrix
                mat4_mulm(&projection, &view, &projview);   // projview = projection * view
                mat4_mulm(&projview, &world, &transform);   // transform = projview * world = projection * view * world

                for (j = 0; j < meshes[i].nvertices; j++) {
                    device_project(d, &meshes[i].vertices[j], &transform, &projx, &projy);
                    device_putpixel(d, projx, projy, color);
                }
            }
        }

    This is how the cube and camera are initialized:

        // test mesh
        cube = &meshlist[0];
        mesh_init(cube, "Cube", 8);
        cube->rotx = 0.f;
        cube->roty = 0.f;
        cube->rotz = 0.f;
        vec4_initw(&cube->pos, 0.f, 0.f, 0.f);
        vec4_initw(&cube->vertices[0], -1.f, 1.f, 1.f);
        vec4_initw(&cube->vertices[1], 1.f, 1.f, 1.f);
        vec4_initw(&cube->vertices[2], -1.f, -1.f, 1.f);
        vec4_initw(&cube->vertices[3], -1.f, -1.f, -1.f);
        vec4_initw(&cube->vertices[4], -1.f, 1.f, -1.f);
        vec4_initw(&cube->vertices[5], 1.f, 1.f, -1.f);
        vec4_initw(&cube->vertices[6], 1.f, -1.f, 1.f);
        vec4_initw(&cube->vertices[7], 1.f, -1.f, -1.f);

        // main camera
        vec4_initw(&maincamera.pos, 0.f, 0.f, 10.f);
        maincamera.target = vec4_zerow;

    And, just to be sure, this is how I compute matrix multiplications:

        void mat4_mul(const mat4 *m, const vec4 *va, vec4 *vb)
        {
            vb->x = m->a.x * va->x + m->b.x * va->y + m->c.x * va->z + m->d.x * va->w;
            vb->y = m->a.y * va->x + m->b.y * va->y + m->c.y * va->z + m->d.y * va->w;
            vb->z = m->a.z * va->x + m->b.z * va->y + m->c.z * va->z + m->d.z * va->w;
            vb->w = m->a.w * va->x + m->b.w * va->y + m->c.w * va->z + m->d.w * va->w;
        }

        void mat4_mulm(const mat4 *ma, const mat4 *mb, mat4 *mc)
        {
            mat4_mul(ma, &mb->a, &mc->a);
            mat4_mul(ma, &mb->b, &mc->b);
            mat4_mul(ma, &mb->c, &mc->c);
            mat4_mul(ma, &mb->d, &mc->d);
        }

    Read the article

  • Is there a way to communicate with a DBMS via raw memory blocks or binaries?

    - by darkcminor
    I am trying to make a numerical matrix operations library like LAPACK communicate with a DBMS. Is it possible to send/receive complete matrices as binary data or as direct memory pointers to process them? (It would be something like: the outside library processes data stored in the DBMS, computes some huge matrix stuff, and then the DBMS gets the result back from the library via a memory block or binary data.) The main purpose is speed, avoiding a pass through a flat file, and, last but not least, using the library to efficiently do some operations DBMSs are not designed for. Is it possible that Oracle, SQL Server, or MySQL support this technique?
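
    As a point of reference, a minimal sketch of the BLOB round trip, assuming NumPy and Python's stdlib sqlite3 (the table and column names are hypothetical; Oracle, SQL Server, and MySQL have their own BLOB types and drivers, and none of them hands out raw memory pointers to an external process):

        import sqlite3
        import numpy as np

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE matrices (name TEXT, rows INT, cols INT, data BLOB)")

        # Store a matrix as raw bytes -- no flat file involved.
        m = np.random.rand(1000, 1000)
        conn.execute("INSERT INTO matrices VALUES (?, ?, ?, ?)",
                     ("m1", m.shape[0], m.shape[1], m.tobytes()))

        # Read it back into a library-ready array without a row-by-row copy.
        rows, cols, blob = conn.execute(
            "SELECT rows, cols, data FROM matrices WHERE name = 'm1'").fetchone()
        m2 = np.frombuffer(blob, dtype=np.float64).reshape(rows, cols)
        assert np.array_equal(m, m2)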

    Read the article

  • How do I insert a newline within a long formula using OpenOffice.org Math?

    - by Lekensteyn
    I have the following formula:

        (x^6+x^4+x^2+x+1)(x^7+x+1)
        = x^13+x^11+x^9+x^8+x^7+
          x^7+x^5+x^3+x^3+x^2+x+
          x^6+x^4+x^2+x+1
        = x^13+x^11+x^9+x^8+2x^7+x^6+x^5+x^4+2x^3+2x^2+2x+1

    Putting this into OpenOffice.org Math causes every line to be concatenated, which I want to avoid. I've already tried putting newline between the lines, but it adds a strange question mark to the formula. Using matrices did not work for me either. I want to achieve a nicely formatted formula like this one (taken from the FIPS 197 PDF):

    Read the article

  • error X3501: 'main': entrypoint not found

    - by Pasha
    I am trying to learn DX10 by following this tutorial. However, my shader won't compile. Below is the detailed error message:

        Build started 9/10/2012 10:22:46 PM.
        1>Project "D:\code\dx\Engine\Engine\Engine.vcxproj" on node 2 (Build target(s)).
          C:\Program Files (x86)\Windows Kits\8.0\bin\x86\fxc.exe /nologo /E"main" /Fo "D:\code\dx\Engine\Debug\color.cso" /Od /Zi color.fx
        1>FXC : error X3501: 'main': entrypoint not found
          compilation failed; no code produced
        1>Done Building Project "D:\code\dx\Engine\Engine\Engine.vcxproj" (Build target(s)) -- FAILED.

        Build FAILED.
        Time Elapsed 00:00:00.05

    I can easily compile the downloaded code, but I want to know how to fix this error myself. My color.fx looks like this:

        ////////////////////////////////////////////////////////////////////////////////
        // Filename: color.fx
        ////////////////////////////////////////////////////////////////////////////////

        /////////////
        // GLOBALS //
        /////////////
        matrix worldMatrix;
        matrix viewMatrix;
        matrix projectionMatrix;

        //////////////
        // TYPEDEFS //
        //////////////
        struct VertexInputType
        {
            float4 position : POSITION;
            float4 color : COLOR;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float4 color : COLOR;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Vertex Shader
        ////////////////////////////////////////////////////////////////////////////////
        PixelInputType ColorVertexShader(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);

            // Store the input color for the pixel shader to use.
            output.color = input.color;

            return output;
        }

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 ColorPixelShader(PixelInputType input) : SV_Target
        {
            return input.color;
        }

        ////////////////////////////////////////////////////////////////////////////////
        // Technique
        ////////////////////////////////////////////////////////////////////////////////
        technique10 ColorTechnique
        {
            pass pass0
            {
                SetVertexShader(CompileShader(vs_4_0, ColorVertexShader()));
                SetPixelShader(CompileShader(ps_4_0, ColorPixelShader()));
                SetGeometryShader(NULL);
            }
        }

    Read the article

  • Convert OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer to this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit. The getPickRay function in the linked answer is what I'm trying to convert. This is the part of my code that I think is giving me trouble when converting from OpenGL to DirectX, because I'm unsure how their coordinate systems differ from one another:

        PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan((viewAngle) / 2);
        PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan((viewAngle) / 2);

    Another thing that I am unsure about is this part:

        XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
        XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

    A couple of notes: the mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have); the viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; and I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game. To summarize, my questions are: Does the OpenGL coordinate system differ in such a way that the equation in the first of my code examples wouldn't be valid when used in DirectX 11 (with its respective screen coordinate system)? Is the OpenGL method Matrix4f.transform(a, b, c) equal to the DirectX method c = XMVector3TransformCoord(b, a) (where a is a matrix and b, c are vectors)? Because I know when it comes to matrices, order is important.

    Read the article

  • DirectX Unproject troubles

    - by pivotnig
    I have an orthographic projection and I try to unproject a point from screen space. The following are the view and projection matrices:

        var w2 = ScreenWidthInPixels / 2;
        var h2 = ScreenHeightInPixels / 2;
        view = Matrix.LookAtLH(new Vector3(0, 0, -1), new Vector3(0, 0, 0), Vector3.UnitY);
        proj = Matrix.OrthoOffCenterLH(-w2, w2, -h2, h2, 0.1f, 10f);

    Here is how I unproject a point p; the point is given in screen pixels:

        var m = Vector3.Unproject(p,
            0, 0, ScreenWidthInPixels, ScreenHeightInPixels,
            0.1f, 10f,  // znear and zfar
            view * proj);

    My code doesn't work; the result m contains only NaN. When I try to invert view * proj I get back a matrix with only zeros, so I suspect my problem has something to do with the orthographic projection matrix. Here are my questions: Could the problem be caused by an underflow due to the large values in the OrthoOffCenterLH projection? What parameters do I have to pass for x, y, width, and height in Unproject(...)? What significance do the minZ and maxZ parameters have in Unproject(...)? Does it matter what I pass for p.Z in Unproject(...)?

    Read the article

  • Should we always prefer OpenGL ES version 2 over version 1.x

    - by Shivan Dragon
    OpenGL ES version 2 goes a long way toward changing the development paradigm that was established with OpenGL ES 1.x. You have shaders which you can chain together to apply various effects/transforms to your elements, the projection and transformation matrices work completely differently, etc. I've seen a lot of online tutorials and blogs that simply say "ditch version 1.x, use version 2, that's the way to go". Even Android's documentation says to "use version 2 as it may prove faster than 1.x". Now, I've also read a book on OpenGL ES (which was rather good, but I'm not going to mention it here because I don't want to give the impression that I'm trying to make hidden publicity). The author there treated only OpenGL ES 1.x for 80% of the book, and then at the end only listed the differences in version 2 and said something like "if OpenGL ES 1 does what you need, there's no need to switch to version 2, as it's only going to overcomplicate your code. Version 2 was changed a lot to facilitate newer, fancier stuff, but if you don't need it, version 1.x is fine". My question, then, is whether the last statement is right. Should I always use OpenGL ES version 1.x if I don't need version-2-only features? I'd sure like to do that, because I find coding in version 1.x a lot simpler than version 2, but I'm afraid that my apps might become obsolete faster for using an older version.

    Read the article

  • OpenGL - Stack overflow if I do, Stack underflow if I don't!

    - by Wayne Werner
    Hi, I'm in a multimedia class in college, and we're "learning" OpenGL as part of the class. I'm trying to figure out how the OpenGL camera vs. modelview works, and so I found this example. I'm trying to port the example to Python using the OpenGL bindings (it starts up OpenGL much faster, so for testing purposes it's a lot nicer), but I keep running into a stack overflow error with the glPushMatrix in this code:

        def cube():
            for x in xrange(10):
                glPushMatrix()
                glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10)  # translate the cube
                glutSolidCube(2)  # draw the cube
                glPopMatrix()

    According to this reference, that happens when the matrix stack is full. So I thought, "well, if it's full, let me just pop the matrix off the top of the stack, and there will be room". I modified the code to:

        def cube():
            glPopMatrix()
            for x in xrange(10):
                glPushMatrix()
                glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10)  # translate the cube
                glutSolidCube(2)  # draw the cube
                glPopMatrix()

    And now I get a buffer underflow error, which apparently happens when the stack has only one matrix. So am I just waaay off base in my understanding? Or is there some way to increase the matrix stack size? Also, if anyone has some good (online) references (examples, etc.) for understanding how the camera/model matrices work together, I would sincerely appreciate them! Thanks!
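
    One way to sanity-check the pairing, as a sketch assuming PyOpenGL and a current GL context: glGetIntegerv(GL_MODELVIEW_STACK_DEPTH) reports the current stack depth, and a balanced draw routine leaves it unchanged across frames; if the depth grows from frame to frame, a glPushMatrix is unmatched somewhere.

        from OpenGL.GL import (GL_MODELVIEW_STACK_DEPTH, glGetIntegerv,
                               glPopMatrix, glPushMatrix, glTranslated)

        def draw_cubes(positions):
            """Balanced push/pop: stack depth is identical before and after."""
            depth_before = glGetIntegerv(GL_MODELVIEW_STACK_DEPTH)
            for x, z in positions:
                glPushMatrix()
                glTranslated(-x * 10, 0, -z * 10)
                # ... draw the cube here ...
                glPopMatrix()
            assert glGetIntegerv(GL_MODELVIEW_STACK_DEPTH) == depth_before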

    Read the article

  • Keystone Correction using 3D-Points of Kinect

    - by philllies
    With XNA, I am displaying a simple rectangle which is projected onto the floor. The projector can be placed at an arbitrary position. Obviously, the projected rectangle gets distorted according to the projector's position and angle. A Kinect scans the floor looking for the four corners. Now my goal is to transform the original rectangle such that the projection is no longer distorted, by basically pre-warping the rectangle. My first approach was to do everything in 2D: first compute a perspective transformation (using OpenCV's warpPerspective()) from the scanned points to the internal rectangle's points, and apply the inverse to the rectangle. This seemed to work but was too slow, as it couldn't be rendered on the GPU. The second approach was to do everything in 3D in order to use XNA's rendering features. First, I would display a plane, scan its corners with Kinect, and map the received 3D points to the original plane. Theoretically, I could apply the inverse of the perspective transformation to the plane, as I did in the 2D approach. However, since XNA works with a view and projection matrix, I can't just call a function such as warpPerspective() and get the desired result. I would need to compute the new parameters for the camera's view and projection matrix. Question: Is it possible to compute these parameters and split them into two matrices (view and projection)? If not, is there another approach I could use?
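
    For the record, a minimal sketch of the 2-D pre-warp described above, assuming OpenCV's Python bindings (all corner coordinates are made up, and a real frame would be the content to project rather than a blank image):

        import cv2
        import numpy as np

        # Corners of the rectangle as it should appear (undistorted)...
        src = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])
        # ...and the corners the Kinect actually measured on the floor.
        dst = np.float32([[30, 12], [770, 40], [790, 585], [15, 570]])

        # Homography from intended to observed; its inverse pre-warps the content.
        H = cv2.getPerspectiveTransform(src, dst)
        frame = np.zeros((600, 800, 3), np.uint8)
        prewarped = cv2.warpPerspective(frame, np.linalg.inv(H), (800, 600))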

    Read the article

  • Quaternion Camera

    - by Alex_Hyzer_Kenoyer
    Can someone help me figure out how to use a Quaternion with the PerspectiveCamera in libGDX, or in general? I am trying to rotate my camera around a sphere that is being drawn at (0,0,0). I am not sure how to go about setting up the quaternion correctly, manipulating it, and then applying it to the camera. Edit: Here is what I have tried to do so far.

        // This is how I set it up
        Quaternion orientation = new Quaternion();
        orientation.setFromAxis(Vector3.Y, 45);

        // This is how I am trying to update the rotations
        public void rotateX(float amount) {
            Quaternion temp = new Quaternion();
            temp.set(Vector3.X, amount);
            orientation.mul(temp);
        }

        public void rotateY(float amount) {
            Quaternion temp = new Quaternion();
            temp.set(Vector3.Y, amount);
            orientation.mul(temp);
        }

        public void updateCamera() {
            // This is where I am unsure how to apply the rotations to the camera
            // I think I should update the view and projection matrices?
            camera.view.mul(orientation);
            ...
        }

    Read the article

  • Extreme Optimization Numerical Libraries for .NET – Part 1 of n

    - by JoshReuben
    While many of my colleagues are fascinated with constructing the ultimate ViewModel or ServiceBus, I feel that this kind of plumbing code is re-invented far too many times; at some point in the near future, it will be out-of-the-box standard infrastructure. How many times have you been to a customer site and built a different variation of the same kind of code framework? How many times can you abstract Prism or reliable and discoverable WCF communication? As the bar is raised for what's bundled with the framework and more tasks become declarative, automated, and configurable, information systems will expose a higher level of abstraction, forcing software engineers to focus on more advanced computer science and algorithmic tasks. I've spent the better half of the past decade building skills in .NET and expanding my mathematical horizons by working through the Schaum's guides. In this series I am going to examine how these skill sets come together in the implementation provided by Extreme Optimization. Download the trial version here: http://www.extremeoptimization.com/downloads.aspx

    Overview: The library implements a set of algorithms for linear algebra, complex numbers, numerical integration and differentiation, solving equations, optimization, random numbers, regression, ANOVA, statistical distributions, and hypothesis tests. EONumLib combines three libraries in one, organized in a consistent namespace hierarchy:

      - Mathematics Library - Extreme.Mathematics namespace
      - Vector and Matrix Library - Extreme.Mathematics.LinearAlgebra namespace
      - Statistics Library - Extreme.Statistics namespace

    System requirements: .NET Framework 4.0

    Mathematics Library: the classes are organized into the following namespace hierarchy:

      - Extreme.Mathematics - common data types, exception types, and delegates.
      - Extreme.Mathematics.Calculus - numerical integration and differentiation of functions.
      - Extreme.Mathematics.Curves - points, lines and curves, including polynomials and Chebyshev approximations; curve fitting and interpolation.
      - Extreme.Mathematics.Generic - generic arithmetic & linear algebra.
      - Extreme.Mathematics.EquationSolvers - root finding algorithms.
      - Extreme.Mathematics.LinearAlgebra - vectors, matrices, matrix decompositions, solvers for simultaneous linear equations and least squares.
      - Extreme.Mathematics.Optimization - multi-dimensional function optimization + linear programming.
      - Extreme.Mathematics.SignalProcessing - one- and two-dimensional discrete Fourier transforms.
      - Extreme.Mathematics.SpecialFunctions

    Read the article

  • apply non-hierarchical transforms to a hierarchical skeleton?

    - by user975135
    I use Blender3D, but the answer might not be API-exclusive. I have some matrices I need to assign to PoseBones. The resulting pose looks fine when there is no bone hierarchy (parenting) and messed up when there is. I've uploaded an archive with a sample blend of the rigged models, a text animation importer, and a test animation file here: http://www.2shared.com/file/5qUjmnIs/sample_files.html Import the animation by selecting an Armature and running the importer on the "sba" file. Do this for both Armatures. This is how I assign the poses in the real (complex) importer:

        matrix_basis = ...  # matrix from file
        animation_matrix = matrix_basis * pose.bones['mybone'].matrix.copy()
        pose.bones[bonename].matrix = animation_matrix

    If I go to edit mode, select all bones, and press Alt+P to undo parenting, the pose looks fine again. The API documentation says PoseBone.matrix is in "object space" ("Final 4x4 matrix after constraints and drivers are applied (object space)"), but it seems clear to me from these tests that the matrices are relative to the parent bones. I tried doing something like this:

        matrix_basis = ...  # matrix from file
        animation_matrix = matrix_basis * (pose.bones['mybone'].matrix.copy() *
                                           pose.bones[bonename].bone.parent.matrix_local.copy().inverted())
        pose.bones[bonename].matrix = animation_matrix

    But it looks worse. I experimented with the order of operations; no luck with any of them. For the record, in the old 2.4 API this worked like a charm:

        matrix_basis = ...  # matrix from file
        animation_matrix = armature.bones['mybone'].matrix['ARMATURESPACE'].copy() * matrix_basis
        pose.bones[bonename].poseMatrix = animation_matrix
        pose.update()

    Links to the Blender API reference:
    http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.BlendData.html#bpy.types.BlendData
    http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.PoseBone.html#bpy.types.PoseBone

    Read the article

  • Learning to code first game, few questions on basic game development and 3D

    - by ProgrammerByDay
    I've been programming for a while, and I'm concurrently learning how to make a basic game and SlimDX, and I wanted to talk to someone to hopefully get a few pointers. I've read that Tetris is the "Hello, world" of game programming, which made sense to me, so I decided to give it a shot. I've been able to code up a basic version in a few hours, which I'm quite happy with, but I had a few questions about 3D programming.

    Right now I'm using Direct3D to display the blocks without any textures (just colored squares). I have a data structure (a 2D array of bytes, where each byte denotes the presence of a block and its color) which is the "game board", and on every render() call I create a new vertex buffer of the existing squares in the game board and draw those primitives. This feels very inefficient, and I was wondering what would be the idiomatic way of doing this in a 3D world, with matrix/rotation/translation operations. I know 3D is overkill for such a project, but I want to learn any 3D concepts that I can while I'm doing it.

    I understand that what you'd usually want to do is keep the same vertices/vertex buffers but manipulate them with matrices to achieve rotations/translations, etc. To do so, I assume what would happen is that I'd have one vertex buffer for the "active" piece, since that'll be constantly rotated and moved, and one vertex buffer for the frozen pieces on the bottom of the board, which are pretty much stationary but will need to be changed/recreated when the active piece becomes frozen. Right now I'm just clearing and redrawing on every render call, which seems like the easiest way to do things, although I wonder if there's a more efficient way to deal with changes.

    Obviously there are a lot of questions I'm asking here, but if you can even just point me a step or two ahead in terms of how I should be thinking, it'd be great. Thanks

    Read the article

  • File format for animated scene

    - by stephelton
    I've got a custom OpenGL-based rendering engine and I'd like to add support for cinema-type scene animation. The artist who is helping me uses primarily 3DSMax. I'd like a file format for exporting and importing this data. I'm also in need of a file format for skeletal animation data, which may have an impact here. I've been looking at MAXScript to manually export this stuff, which would buy me the most flexibility, but I have virtually no experience with 3DSMax itself, so I get a little lost when it comes to terminology. So I'd like to know what file formats exist for animated scene data, and whether they are appropriate for my use (my fear is that they will be way too broad for my fairly simple needs). The way I view animated scene data is basically a bunch of references to [animated] models with keyframe-based matrices describing their orientation over time, and probably some special camera stuff to handle perspective. I might also want some event-type stuff for adding/removing objects. Is this a sane concept?

    Read the article

  • How to make an object stay relative to another object

    - by Nick
    In the following example there is a guy and a boat. They both have a position, orientation, and velocity. The guy is standing on the shore and would like to board. He changes his position so he is now standing on the boat. The boat changes velocity and orientation and heads off. My character, however, has a velocity of (0,0,0), but I would like him to stay onboard. When I move my character around, I would like him to move as if the boat were the ground he was standing on. How do I keep my character aligned properly with the boat? It is exactly like in World of Warcraft, when you board a boat or zeppelin. This is my physics code for the guy and the boat:

        this.velocity.addSelf(acceleration.multiplyScalar(dTime));
        this.position.addSelf(this.velocity.clone().multiplyScalar(dTime));

    The guy already has a reference to the boat he's standing on, and thus knows the boat's position, velocity, and orientation (even matrices or quaternions can be used).
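
    The usual approach is to treat the boat as the character's parent frame: capture the character's position in boat-local coordinates when boarding, then multiply by the boat's current world matrix every frame. A minimal NumPy sketch with hypothetical values (translation only; a real boat matrix would also carry its orientation):

        import numpy as np

        def translation(x, y, z):
            """4x4 homogeneous translation matrix (column-vector convention)."""
            t = np.eye(4)
            t[:3, 3] = (x, y, z)
            return t

        # World transform of the boat this frame.
        boat_world = translation(50.0, 0.0, 30.0)

        # When boarding, capture the guy's position in the boat's local space once.
        guy_world_pos = np.array([52.0, 1.0, 30.0, 1.0])
        guy_local_pos = np.linalg.inv(boat_world) @ guy_world_pos

        # Every frame after that: local -> world through the boat's current transform.
        boat_world = translation(55.0, 0.0, 34.0)   # the boat moved
        guy_world_pos = boat_world @ guy_local_pos  # the guy moves with it

    Walking on deck then just means changing guy_local_pos; the multiplication by the boat's transform keeps him aligned no matter how the boat moves or turns.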

    Read the article

  • XNA - Finding boundaries in isometric tilemap

    - by Yheeky
    I have an issue with my 2D isometric engine. I'm using my own 2D camera class which works with matrices, and I need to find the tilemap's boundaries so the user always sees the map. Currently my map size is 100x100 (with 128x128 tiles), so the calculation (e.g. for the right boundary) is:

        var maxX = (TileMap.MapWidth + 1) * (TileMap.TileWidth / 2) - ViewSize.X;
        var maxX = (100 + 1) * (128 / 2) - 1360; // = 5104 pixels.

    This works fine with a scale factor of 1.0f, but not for any other zoom factor. When I zoom out to 0.9f, the right border should be at approx. 4954. I'm using the following code for the transformation, but I always get a wrong value:

        var maxXVector = new Vector2(maxX, 0);
        var maxXTransformed = Vector2.Transform(maxXVector, tempTransform).X;

    The result is 4593. Does anyone have an idea what I'm doing wrong? Thanks for your help! Yheeky
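
    As a sanity check on the expected number (arithmetic only; how the fix is applied depends on how the camera matrix is built): when zoomed out, the viewport spans ViewSize.X / zoom world pixels, which reproduces the boundary the asker expects:

        map_width_px = (100 + 1) * (128 // 2)       # 6464 world pixels
        view_width_px = 1360

        print(map_width_px - view_width_px / 1.0)   # 5104.0 at scale 1.0
        print(map_width_px - view_width_px / 0.9)   # ~4952.9, i.e. the approx. 4954 expected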

    Read the article

  • Frustum culling with third person camera

    - by Christian Frantz
    I have a third-person camera that contains two matrices, view and projection, and two Vector3's, camPosition and camTarget. I've read up on frustum culling, and it seems easy enough for a first-person camera, but how would I implement it for a third-person camera? I need to take into account the objects I can see behind me too. How would I implement this in my camera class so it runs at the same time as my update method?

        public void CameraUpdate(Matrix objectToFollow)
        {
            camPosition = objectToFollow.Translation + (objectToFollow.Backward * backward) + (objectToFollow.Up * up);
            camTarget = objectToFollow.Translation;
            view = Matrix.CreateLookAt(camPosition, camTarget, Vector3.Up);
        }

    Can I just create another method within the class which creates a bounding sphere with a value from my camera and then uses culling based on that? And if so, which value am I using to create the bounding sphere from? After this is implemented, I'm planning on using occlusion culling for the faces of my objects adjacent to other objects. Will using just one or the other make a difference? Or will both of them be better? I'm trying to keep my framerate as high as possible.

    Read the article
