Search Results

Search found 359 results on 15 pages for 'matrices'.

Page 11 of 15

  • Where is the bottleneck in this code?

    - by Mikhail
    I have the following tight loop that makes up the serial bottleneck of my code. Ideally I would parallelize the function that calls this, but that is not possible.

        // n is about 60
        for (int k = 0; k < n; k++) {
            double fone = z[k*n+i+1];
            double fzer = z[k*n+i];
            z[k*n+i+1] = s*fzer + c*fone;
            z[k*n+i]   = c*fzer - s*fone;
        }

    Are there any optimizations that can be made, such as vectorization or some evil inline, that can help this code? I am looking into finding eigensolutions of tridiagonal matrices. http://www.cimat.mx/~posada/OptDoglegGraph/DocLogisticDogleg/projects/adjustedrecipes/tqli.cpp.html
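
    The loop applies a Givens rotation to two columns of z. A minimal sketch of one micro-optimization, assuming z, n, i, c and s mean what they do in the question: hoist the index arithmetic into pointers and mark z restrict so the compiler can rule out aliasing. Note the accesses are strided by n, so contiguous SIMD does not apply directly; storing z so the rotated pair is contiguous would be the deeper fix.

        // Hypothetical helper; __restrict is the common compiler-extension
        // spelling of C99's restrict in C++.
        void rotate_columns(double* __restrict z, int n, int i, double c, double s) {
            double* p0 = z + i;      // column i, walked with stride n
            double* p1 = z + i + 1;  // column i+1
            for (int k = 0; k < n; ++k, p0 += n, p1 += n) {
                double fzer = *p0, fone = *p1;
                *p1 = s * fzer + c * fone;
                *p0 = c * fzer - s * fone;
            }
        }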

    Read the article

  • k-means clustering in R on very large, sparse matrix?

    - by movingabout
    Hello, I am trying to do some k-means clustering on a very large matrix. The matrix is approximately 500000 rows x 4000 cols yet very sparse (only a couple of "1" values per row). The whole thing does not fit into memory, so I converted it into a sparse ARFF file. But R obviously can't read the sparse ARFF file format. I also have the data as a plain CSV file. Is there any package available in R for loading such sparse matrices efficiently? I'd then use the regular k-means algorithm from the cluster package to proceed. Many thanks

    Read the article

  • .NET Ascertaining mouse is on line drawn between two arbitrary points

    - by johnc
    I have an arrow drawn between two objects on a WinForm. What would be the simplest way to determine that my mouse is currently hovering over, or near, this line? I have considered testing whether the mouse point intersects a square defined and extrapolated by the two points, however this would only be feasible if the two points had very similar x or y values. I am thinking, also, that this problem is probably more in the realms of linear algebra than simple trigonometry, and whilst I do remember the simpler aspects of matrices, this problem is beyond my knowledge of linear algebra. On the other hand, if a .NET library can cope with the function, even better.
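
    The standard tool here is the point-to-segment distance: project the mouse point onto the segment, clamp the projection parameter to [0, 1], and compare the residual distance against a pixel tolerance. A sketch in C++ (the translation to C# is mechanical; all names are illustrative):

        #include <algorithm>
        #include <cmath>

        struct Pt { double x, y; };

        double DistPointToSegment(Pt p, Pt a, Pt b) {
            double dx = b.x - a.x, dy = b.y - a.y;
            double len2 = dx * dx + dy * dy;
            if (len2 == 0.0)                       // degenerate: a == b
                return std::hypot(p.x - a.x, p.y - a.y);
            // Parameter of p's projection onto the segment, clamped to it.
            double t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;
            t = std::clamp(t, 0.0, 1.0);
            double cx = a.x + t * dx, cy = a.y + t * dy;
            return std::hypot(p.x - cx, p.y - cy);
        }

        // usage: hovering if DistPointToSegment(mouse, p1, p2) < 4.0 (pixels)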

    Read the article

  • Building a world matrix

    - by DeadMG
    When building a world matrix from scale, rotate and translate matrices, the translation matrix must come last in the process, right? Otherwise you'll be scaling or rotating your translations. Do scale and rotate need to go in a specific order? Right now I've got:

        std::for_each(objects.begin(), objects.end(), [&, this](D3D93DObject* ptr) {
            D3DXMATRIX WVP;
            D3DXMATRIX translation, rotationX, rotationY, rotationZ, scale;
            D3DXMatrixTranslation(&translation, ptr->position.x, ptr->position.y, ptr->position.z);
            D3DXMatrixRotationX(&rotationX, ptr->rotation.x);
            D3DXMatrixRotationY(&rotationY, ptr->rotation.y);
            D3DXMatrixRotationZ(&rotationZ, ptr->rotation.z);
            D3DXMatrixScaling(&scale, ptr->scale.x, ptr->scale.y, ptr->scale.z); // write into scale, not translation
            WVP = rotationX * rotationY * rotationZ * scale * translation * ViewProjectionMatrix;
        });

    Read the article

  • Sorting in Matlab

    - by smichak
    Hi, I would like to sort elements in a comma-separated list. The elements in the list are structs, and I would like the list to be sorted according to one of the fields in the struct. For example, given the following code:

        L = {struct('obs', [1 2 3 4], 'n', 4), struct('obs', [6 7 5 3], 'n', 2)};

    I would want a way to sort L by the field 'n'. Matlab's sort function only works on matrices or arrays and on lists of strings (not even lists of numbers). Any ideas on how that may be achieved? Thanks, Micha

    Read the article

  • Storing an arbitrary R object onto HDD?

    - by Harokitty
    I understand that we can export data matrices to csv or xlsx files. What about complex objects like lm? For example, in my work I might have a list of length 1000, each element holding a single lm() object. Each time I load R I have to wait a long time to populate the 1000-element list with these lm objects using a for loop or an lapply. I would rather just save the list somewhere on my HDD at the end of a session and open it at the start of the next session.

    Read the article

  • How to determine the radius and center of a circle when only three noncollinear points are known?

    - by Bob
    I'm working on a C# program that deals with Oracle Spatial geometry. When circle data is stored in a geometry field, only three non-collinear points are stored to represent the circle. The problem is that I need to use this data on a Google Maps web page and need the center point and radius of the circle (since my circle-drawing function uses that information). Can anyone help with the math involved and with translating said math to C#? I think this page may hold the answer, but I'm having a hard time following it. There are formulas for radius and center given three points, but then they define the variables as matrices and I get lost at that point. How would I solve that in code?
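
    The matrix formulation on that page reduces to a closed form that needs no matrix library. A sketch in C++ rather than C# (the translation is mechanical): the determinant d below is zero exactly when the three points are collinear, which the caller must rule out.

        #include <cmath>
        #include <stdexcept>

        struct Circle { double cx, cy, r; };

        Circle FromThreePoints(double ax, double ay, double bx, double by,
                               double cx, double cy) {
            double d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by));
            if (d == 0.0) throw std::runtime_error("points are collinear");
            double a2 = ax * ax + ay * ay;
            double b2 = bx * bx + by * by;
            double c2 = cx * cx + cy * cy;
            // Circumcenter (ux, uy); radius is its distance to any input point.
            double ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d;
            double uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d;
            return { ux, uy, std::hypot(ax - ux, ay - uy) };
        }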

    Read the article

  • For each element A[i] of array A, find the closest j such that A[j] > A[i]

    - by SamH
    Hi everyone. Given: an array A[1..n] of real numbers. Goal: an array D[1..n] such that D[i] = min{ distance(i,j) : A[j] > A[i] }, or some default value (like 0) when there is no higher-valued element. I would really like to use Euclidean distance here. Example:

        A = [-1.35, 3.03, 0.73, -0.06, 0.71, -0.21, -0.12, 1.49, 1.41, 1.42]
        D = [1, 0, 1, 1, 2, 1, 1, 6, 1, 2]

    Is there any way to beat the obvious O(n^2) solution? The only progress I've made so far is that D[i] = 1 whenever A[i] is not a local maximum. I've been thinking a lot and have come up with nothing. I hope to eventually extend this to 2D (so A and D are matrices).
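
    For the 1-D case there is a standard O(n) improvement, sketched below: two monotonic-stack sweeps find, for each i, the nearest index on each side holding a strictly larger value; D[i] is the smaller of the two distances. (The 2-D extension does not follow directly from this.)

        #include <cstdlib>
        #include <vector>

        std::vector<int> NearestGreaterDistance(const std::vector<double>& a) {
            const int n = static_cast<int>(a.size());
            std::vector<int> d(n, 0);   // 0 = no larger element exists
            std::vector<int> st;        // indices whose values strictly decrease
            auto sweep = [&](int begin, int end, int step) {
                st.clear();
                for (int i = begin; i != end; i += step) {
                    // Drop indices whose values are not larger than a[i];
                    // the surviving top is the nearest larger on this side.
                    while (!st.empty() && a[st.back()] <= a[i]) st.pop_back();
                    if (!st.empty()) {
                        int dist = std::abs(st.back() - i);
                        if (d[i] == 0 || dist < d[i]) d[i] = dist;
                    }
                    st.push_back(i);
                }
            };
            sweep(0, n, 1);        // nearest larger element to the left
            sweep(n - 1, -1, -1);  // nearest larger element to the right
            return d;
        }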

    Read the article

  • Basic C++ code for multiplication of 2 matrices or vectors (C++ beginner)

    - by Ice
    I am a new C++ user and I am also doing a major in Maths, so I thought I would try to implement a simple calculator. I got some code off the internet and now I just need help to multiply elements of 2 matrices or vectors.

        Matrixf multiply(Matrixf const& left, Matrixf const& right) {
            // error check
            if (left.ncols() != right.nrows()) {
                throw std::runtime_error("Unable to multiply: matrix dimensions do not agree.");
            }
            /* I have all the other parts of the code for the matrix class. */
            /* Now I am not sure how to implement multiplication of a vector or matrix. */
            Matrixf ret(1, 1);
            return ret;
        }
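
    A hedged completion of the body, the classic triple loop: it assumes Matrixf exposes element access as m(row, col) and a (rows, cols) constructor, which may not match the class's real interface.

        Matrixf multiply(Matrixf const& left, Matrixf const& right) {
            if (left.ncols() != right.nrows())
                throw std::runtime_error("Unable to multiply: matrix dimensions do not agree.");
            Matrixf ret(left.nrows(), right.ncols());
            for (unsigned i = 0; i < left.nrows(); ++i)
                for (unsigned j = 0; j < right.ncols(); ++j) {
                    float sum = 0.0f;
                    // ret(i, j) is the dot product of row i of left
                    // with column j of right.
                    for (unsigned k = 0; k < left.ncols(); ++k)
                        sum += left(i, k) * right(k, j);
                    ret(i, j) = sum;
                }
            return ret;
        }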

    Read the article

  • Conditional Counting in Mathematica

    - by 500
    Considering the following list:

        dalist = {{1, a, 1}, {2, s, 0}, {1, d, 0}, {2, f, 0}, {1, g, 1}}

    I would like to count the number of times a certain value in the first column takes a certain value in column 3. So in this example my desired output would be:

        {{1,1,2}, {1,0,1}, {2,1,0}, {2,0,2}}

    where the last sublist, {2,0,2}, is read as: when the value in the first column is 2, a corresponding value (same row, in matrix terms) of 0 in column 3 is present twice. I hope this is not too confusing. I added the second column to convey the fact that the columns are distant from each other. If possible, no reordering should happen. EDIT: {1,2,3,4,5} and {1,0} are the exact values taken by the columns I am actually dealing with in my data. I know I am missing the correct description. Please edit if you can and know it. Thank you

    Read the article

  • Why do we need a normalized coordinate system?

    - by jcyang
    Hi, I have trouble understanding the following sentences in my textbook, Computer Graphics with OpenGL: "To make the viewing process independent of the requirements of any output device, graphics systems convert object descriptions to normalized coordinates and apply the clipping routines." Why do normalized coordinates make the viewing process independent of the requirements of any output device? Aren't the projection coordinates already independent of the output device? We only need to first scale and then translate the projection coordinates and we will get device coordinates. So why do we need to convert the projection coordinates to normalized coordinates first? "Clipping is usually performed in normalized coordinates. This allows us to reduce computations by first concatenating the various transformation matrices." Why is clipping usually performed in normalized coordinates? What kinds of transformation matrices are concatenated? Thanks. jcyang.

    Read the article

  • How to reorder the rows of one matrix with respect to another matrix?

    - by user2806363
    I have two big matrices A and B with different dimensions. I want to order the rows of matrix B with respect to the rows of matrix A, and add rows of 0s to matrix B for rows that exist in A but not in B. Here is a reproducible example and the expected output:

        A <- matrix(c(1:40), ncol=8)
        rownames(A) <- c("B", "A", "C", "D", "E")
        > A
          [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
        B    1    6   11   16   21   26   31   36
        A    2    7   12   17   22   27   32   37
        C    3    8   13   18   23   28   33   38
        D    4    9   14   19   24   29   34   39
        E    5   10   15   20   25   30   35   40
        B <- matrix(c(100:108), ncol=3)
        rownames(B) <- c("A", "E", "C")
        > B
          [,1] [,2] [,3]
        A  100  103  106
        E  101  104  107
        C  102  105  108

    Here is the expected output:

        > B
          [,1] [,2] [,3]
        B    0    0    0
        A  100  103  106
        C  102  105  108
        D    0    0    0
        E  101  104  107

    Would someone help me to implement this in R?

    Read the article

  • Inflector for .NET

    - by srkirkland
    I was writing conventions for FluentNHibernate the other day and I ran into the need to pluralize a given string, and immediately thought of the Ruby on Rails Inflector. It turns out there is a .NET library out there also capable of doing word inflection, originally written (I believe) by Andrew Peters, though the link I had no longer works. The entire Inflector class is only a little over 200 lines long and can be easily included into any project, and contains the Pluralize() method along with a few other helpful methods (like Singularize(), Camelize(), Capitalize(), etc). The Inflector class is available in its entirety from my github repository https://github.com/srkirkland/Inflector. In addition to the Inflector.cs class I added tests for every single method available so you can gain an understanding of what each method does. Also, if you are wondering about a specific test case, feel free to fork my project and add your own test cases to ensure Inflector does what you expect. Here is an example of some test cases for pluralize:

        TestData.Add("quiz", "quizzes");
        TestData.Add("perspective", "perspectives");
        TestData.Add("ox", "oxen");
        TestData.Add("buffalo", "buffaloes");
        TestData.Add("tomato", "tomatoes");
        TestData.Add("dwarf", "dwarves");
        TestData.Add("elf", "elves");
        TestData.Add("mouse", "mice");
        TestData.Add("octopus", "octopi");
        TestData.Add("vertex", "vertices");
        TestData.Add("matrix", "matrices");
        TestData.Add("rice", "rice");
        TestData.Add("shoe", "shoes");

    Pretty smart stuff.

    Read the article

  • April 2010 Critical Patch Update Released

    - by eric.maurice
    Hi, this is Eric Maurice. Today Oracle released the April 2010 Critical Patch Update (CPUApr2010), the first one to include security fixes for Oracle Solaris. Today's Critical Patch Update (CPU) provides 47 new security fixes across the following product families: Oracle Database Server, Oracle Fusion Middleware, Oracle Collaboration Suite, Oracle E-Business Suite, Oracle PeopleSoft Enterprise, Oracle Life Sciences, Retail, and Communications Industry Suites, and Oracle Solaris. 28 of these 47 new vulnerabilities are remotely exploitable without authentication, but the criticality of the affected components and the severity of these vulnerabilities vary greatly. Customers should, as usual, refer to the Risk Matrices in the CPU Advisory to assess the relevance of these fixes for their environment (and the urgency with which to apply the fixes).

    7 of the 47 new vulnerabilities affect various versions of Oracle Database Server. None of these 7 vulnerabilities are remotely exploitable without authentication. Furthermore, none of these fixes are applicable to client-only deployments. The most severe CVSS Base Score for the Database Server vulnerabilities is 7.1. As a reminder, information about Oracle's use of the CVSS 2.0 standard can be found in Note 394487.1 (My Oracle Support subscription required). Note that this Critical Patch Update includes fixes for vulnerabilities that were publicly disclosed by David Litchfield at the BlackHat DC Conference in early February (CVE-2010-0866 and CVE-2010-0867).

    5 of the 47 new vulnerabilities affect various components of the Oracle Fusion Middleware product family. The highest CVSS Base Score for these vulnerabilities is 7.5. Note that the patches for Oracle WebLogic Server are cumulative, and this Critical Patch Update therefore also includes a fix for a vulnerability (CVE-2010-0073) that was the subject of a Security Alert issued by Oracle on February 4, 2010. Customers who have not applied the previously released patch should apply today's Critical Patch Update as soon as possible.

    As stated at the beginning of this blog, it is also noteworthy to highlight that this Critical Patch Update provides 16 new fixes for the Sun product line. With the recent close of the Sun acquisition, both security organizations have worked diligently to align Sun's previous security practices with Oracle's. Java users know that Oracle released a Critical Patch Update for Java SE and Java for Business earlier this month (in accordance with the Java patching schedule previously published by Sun Microsystems). Please note that for the first time, the Java advisories included CVSS scores to help assess the severity of the new vulnerabilities fixed with the advisory. The rapid inclusion of the Solaris product lines in the Critical Patch Update and the extension of Oracle Software Security Assurance to Sun technologies are evidence of the flexibility of Oracle's security assurance programs. These should also result in tangible security benefits for the users of the Oracle hardware and software stack (such as a predictable patching schedule for all Oracle products).

    Read the article

  • Camera rotation - First Person Camera using GLM

    - by tempvar
    I've just switched from deprecated OpenGL functions to using shaders and the GLM math library, and I'm having a few problems setting up my camera rotations (first-person camera). I'll show what I've got set up so far. I'm setting up my view matrix using the glm::lookAt function, which takes an eye position, target and up vector:

        // arbitrary pos and target values
        pos = glm::vec3(0.0f, 0.0f, 10.0f);
        target = glm::vec3(0.0f, 0.0f, 0.0f);
        up = glm::vec3(0.0f, 1.0f, 0.0f);
        m_view = glm::lookAt(pos, target, up);

    I'm using glm::perspective for my projection, and the model matrix is just the identity:

        m_projection = glm::perspective(m_fov, m_aspectRatio, m_near, m_far);
        model = glm::mat4(1.0);

    I send the MVP matrix to my shader to multiply the vertex position:

        glm::mat4 MVP = camera->getProjection() * camera->getView() * model;
        // in shader
        gl_Position = MVP * vec4(vertexPos, 1.0);

    My camera class has standard rotate and translate functions which call glm::rotate and glm::translate respectively:

        void camera::rotate(float amount, glm::vec3 axis) {
            m_view = glm::rotate(m_view, amount, axis);
        }

        void camera::translate(glm::vec3 dir) {
            m_view = glm::translate(m_view, dir);
        }

    and I usually just use the mouse delta position as the amount for rotation. Now, normally in my previous OpenGL applications I'd just set up the yaw and pitch angles and have a sin and cos change the direction vector (using gluLookAt), but I'd like to be able to do this using GLM and matrices. So at the moment I have my camera set 10 units away from the origin, facing that direction. I can see my geometry fine; it renders perfectly. When I use my rotation function...

        camera->rotate(mouseDeltaX, glm::vec3(0, 1, 0));

    ...what I want is to look to the right and left (like I would by manipulating the look-at vector with gluLookAt), but what's happening is it just rotates the model I'm looking at around the origin, like I'm just doing a full circle around it. Because I've translated my view matrix, shouldn't I need to translate it to the centre, do the rotation, then translate back away for it to be rotating around the origin? Also, I've tried using the rotate function around the x axis to get pitch working, but as soon as I rotate the model about 90 degrees, it starts to roll instead of pitch (gimbal lock?). Thanks for your help guys, and if I've not explained it well: basically I'm trying to get a first-person camera working with matrix multiplication, and rotating my view matrix just rotates the model around the origin.
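
    One conventional fix, sketched under the assumption that keeping yaw/pitch state is acceptable: instead of accumulating glm::rotate calls into m_view (which rotates the world about its origin), keep the angles and rebuild the view matrix from them every frame with glm::lookAt. Clamping pitch also sidesteps the roll seen near 90 degrees.

        #include <cmath>
        #include <glm/glm.hpp>
        #include <glm/gtc/constants.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Hypothetical FpsCamera: mouse deltas adjust yaw/pitch; the view
        // matrix is derived fresh each frame, never mutated in place.
        struct FpsCamera {
            glm::vec3 pos{0.0f, 0.0f, 10.0f};
            float yaw = -glm::half_pi<float>(); // radians; start facing -Z
            float pitch = 0.0f;                 // clamp to (-pi/2, pi/2)

            glm::mat4 view() const {
                glm::vec3 forward(std::cos(pitch) * std::cos(yaw),
                                  std::sin(pitch),
                                  std::cos(pitch) * std::sin(yaw));
                return glm::lookAt(pos, pos + forward, glm::vec3(0, 1, 0));
            }
        };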

    Read the article

  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow my mouse around the screen in OpenGL. (I am also using GLEW, GLFW, and GLM.) The best idea I've come up with is: get the coordinates within the window with glfwGetCursorPos. The window was created with

        window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);

    and the code to get coordinates is

        double xpos, ypos;
        glfwGetCursorPos(window, &xpos, &ypos);

    Next, I use GLM's unProject to get the coordinates in "object space":

        glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
        glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
        glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);

    There are two potential problems I can already see. The viewport is fine, as the initial x,y coordinates of the lower left are indeed 0,0, and it's indeed a 1024x768 window. However, the position vector I create doesn't seem right. The z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the third dimension of the window coordinates even means (since computer screens are 2D). Then, I am not sure if I am using unProject correctly. Assume the View, Model and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear. Finally, to each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since my position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has a value of 0.0 for z. Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.
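
    Two adjustments usually fix this, sketched below with hypothetical names: GLFW reports the cursor with the origin at the top-left while unProject expects the bottom-left, so y must be flipped; and a 2D cursor has no single depth, so the standard trick is to unproject at window-space depths 0 and 1 and intersect the resulting ray with the plane the object lives in (z = 0 is assumed here; adjust to your scene).

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        glm::vec3 MouseOnPlaneZ0(double xpos, double ypos,
                                 const glm::mat4& view, const glm::mat4& model,
                                 const glm::mat4& projection,
                                 const glm::vec4& viewport) {
            // Flip y into GL's bottom-left window convention.
            float y = viewport.w - static_cast<float>(ypos);
            // Unproject the near-plane and far-plane points under the cursor.
            glm::vec3 nearPt = glm::unProject(glm::vec3(xpos, y, 0.0f),
                                              view * model, projection, viewport);
            glm::vec3 farPt  = glm::unProject(glm::vec3(xpos, y, 1.0f),
                                              view * model, projection, viewport);
            glm::vec3 dir = farPt - nearPt;
            // Ray/plane intersection with z = 0 (dir.z must be nonzero).
            float t = -nearPt.z / dir.z;
            return nearPt + t * dir;
        }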

    Read the article

  • OpenGL - Calculating camera view matrix

    - by Karle
    Problem: I am calculating the model, view and projection matrices independently, to be used in my shader as follows:

        gl_Position = projection * view * model * vec4(in_Position, 1.0);

    When I try to calculate my camera's view matrix, the Z axis is flipped and my camera seems like it is looking backwards. My program is written in C# using the OpenTK library.

    Translation (working): I've created a test scene, and from my understanding of the OpenGL coordinate system its objects are positioned correctly. The model matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(modelPosition);
        Matrix4 model = translation;

    The view matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(-cameraPosition);
        Matrix4 view = translation;

    Rotation (not working): I now want to create the camera's rotation matrix. To do this I use the camera's right, up and forward vectors:

        // Hard-coded example orientation:
        // Normally calculated from up and forward.
        // Similar to a look-at camera.
        Vector3 r = Vector3.UnitX;
        Vector3 u = Vector3.UnitY;
        Vector3 f = -Vector3.UnitZ;

        Matrix4 rot = new Matrix4(
            r.X, r.Y, r.Z, 0,
            u.X, u.Y, u.Z, 0,
            f.X, f.Y, f.Z, 0,
            0.0f, 0.0f, 0.0f, 1.0f);

    I know that multiplying by the identity matrix would produce no rotation. This is clearly not the identity matrix and therefore will apply some rotation, yet I thought that because it is aligned with the OpenGL coordinate system it should produce no rotation. Is this the wrong way to calculate the rotation matrix? I then create my view matrix as:

        // OpenTK is row-major so the order of operations is reversed:
        Matrix4 view = translation * rot;

    Rotation almost works now, but the -Z/+Z axis has been flipped, with the green cube now appearing closer to the camera. It seems like the camera is looking backwards, especially if I move it around. My goal is to store the position and orientation of all objects (including the camera) as:

        Vector3 position;
        Vector3 up;
        Vector3 forward;

    Apologies for writing such a long question, and thank you in advance. I've tried following tutorials/guides from many sites but I keep ending up with something wrong.

    Edit: projection matrix set-up:

        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
            (float)(0.5 * Math.PI),
            (float)display.Width / display.Height, 0.1f, 1000.0f);
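
    The missing piece is likely a sign: a view matrix is the inverse of the camera's world transform, so its rotation is built from right, up and the negated forward vector; building rot from f itself mirrors the scene along Z, which matches the symptom. Letting a look-at helper do the construction avoids the sign error; OpenTK has Matrix4.LookAt for this, and an equivalent sketch in GLM/C++ built from the stored position/up/forward state looks like:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Hypothetical helper: derives the view matrix from camera state.
        // glm::lookAt internally builds the rotation rows as (right, up,
        // -forward) -- the negation the hand-rolled matrix is missing.
        glm::mat4 ViewFrom(const glm::vec3& position,
                           const glm::vec3& up,
                           const glm::vec3& forward) {
            return glm::lookAt(position, position + forward, up);
        }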

    Read the article

  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES-based game for iOS, written in Xamarin.iOS with OpenTK. I am detecting the rotation by overriding WillRotate in my UIViewController, and I correctly re-set-up all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier in the landscape version (after rotating) than in the portrait version (before rotating), as 10x-magnified close-ups show. In both cases I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixelwise. Since this could be thought of as a window resize, I suppose that the framebuffer has to be recreated at the new size. When working on desktop apps in Direct3D11 (SharpDX), I would call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and everything disappears when rotating the interface again. I'm not doing anything strange; my framebuffer initialization is pretty vanilla:

        int scaling = (int)UIScreen.MainScreen.Scale;
        DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling;
        DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling;
        Size = new System.Drawing.Size((int)(DeviceWidth), (int)(DeviceHeight));
        Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        ContextRenderingApi = EAGLRenderingAPI.OpenGLES2;
        AutoResize = true;
        LayerRetainsBacking = true;
        LayerColorFormat = EAGLColorFormat.RGBA8;

    I get inconsistent results when changing Size, Bounds and Frame in my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing stuff here and there without really knowing what is going on. There is a similar question which has no answers; however, I don't know if they're experiencing the same problem as I am. Is my supposition that recreating the framebuffer is necessary correct? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?

    Read the article

  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE - code updated below, but I still need help adjusting my math.

    I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated right 30 degrees, when it moves forwards it should travel along the 30-degree angle on a new axis). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube:

        glm::vec3 vel; // velocity vector

        void renderMovingCube() {
            glUseProgram(movingCubeShader.handle());
            GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

            vel.x = cos(rotX);
            vel.y = sin(rotX);
            vel *= moveCube; // move cube

            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos*vel);
            // bring ground and cube to bottom of screen
            ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0, -48, 0));
            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually turn
            glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); // pass matrix to shader

            movingCube.render(); // draw
            glUseProgram(0);
        }

    Keyboard input:

        void keyboard() {
            char BACKWARD = keys['S'];
            char FORWARD = keys['W'];
            char ROT_LEFT = keys['A'];
            char ROT_RIGHT = keys['D'];

            if (FORWARD) // W - move forwards
            {
                globalPos += vel;
                //globalPos.z -= moveCube;
                BACKWARD = false;
            }
            if (BACKWARD) // S - move backwards
            {
                globalPos.z += moveCube;
                FORWARD = false;
            }
            if (ROT_LEFT) // A - turn left
            {
                rotX += 0.01f;
                ROT_LEFT = false;
            }
            if (ROT_RIGHT) // D - turn right
            {
                rotX -= 0.01f;
                ROT_RIGHT = false;
            }
        }

    Where am I going wrong with my vectors? I would like to change the direction of the cube (which it does) but then move forwards in that direction.
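
    The cube turns about the Y axis, so its forward direction lies in the XZ plane, while the code above fills vel.x/vel.y from cos/sin and so moves the cube vertically. A minimal sketch of a heading helper, assuming the usual GL convention that rotX = 0 faces down -Z and positive rotX turns left; flip signs if your convention differs:

        #include <cmath>
        #include <glm/glm.hpp>

        // Hypothetical helper: converts the yaw angle the keyboard updates
        // (rotX) into a displacement in the XZ ground plane.
        glm::vec3 HeadingFromYaw(float yaw, float speed) {
            return glm::vec3(-std::sin(yaw), 0.0f, -std::cos(yaw)) * speed;
        }

        // per frame, on W:  globalPos += HeadingFromYaw(rotX, moveCube);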

    Read the article

  • box2d resize bodies around point

    - by philipp
    I have a compound object consisting of a b2Body, vector graphics, and a list of polygons which describe the b2Body's shapes. This object has its own transformation matrix to centralize the storage of transformations. So far everything is working quite fine; even scaling works, but not if I scale around a point. In the initialization phase of the object it is scaled around a point. This happens in this order: transform the main matrix, transform the vector graphics and the polygons, then recreate the b2Body. After this function has run, the shapes and all the graphics are exactly where they should be, BUT: after the first steps of the b2World the graphical stuff moves away from the body. When I ran the debugger I found out that the position of the body is 0/0. (In the original screenshots, the red dot shows the center of scaling; the first image shows the basic setup and the second the final position of the graphics.) This distance stays constant for the rest of the simulation. If I set the position via myBody.SetPosition(sx, sy); the whole scenario just plays out a bit more distant from the origin. Any idea how to fix this?

    EDIT: I dug deeper into the problem and it lies in the fact that I must not scale the transform matrix for the b2Body shapes around the center, but instead set the b2Body's position back to the right point after scaling. But how can I calculate that point?

    EDIT 2: I dug even deeper and solved it, but my solution is slow, and I hope somebody understands what formula I need. Assuming a set of polygons relative to an origin as the basis shapes for a b2Body, scaling the whole object around a certain point is done in the following steps: I scale everything around the center except the polygons; I create a clone of the polygons' matrix; I scale this clone around the point; I calculate dx, dy as the differences clone.tx - original.tx and clone.ty - original.ty; I scale the original polygon matrix NOT around the point; I recreate the body; I create the fixture; I set the position of the body to dx and dy; done! So what I am interested in is a formula for dx and dy without cloning matrices, scaling the clone around a point, and reading dx and dy off the clone.
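
    The formula the second edit asks for has a closed form. A uniform scale by factor s about a point p maps a translation t to t' = s·t + (1 − s)·p, so the offset currently obtained by cloning the matrix is simply (1 − s)(p − t). A sketch, with hypothetical names:

        struct Vec2 { float x, y; };

        // Offset picked up by a matrix with translation t when scaled by
        // uniform factor s about point p: t' - t = (1 - s) * (p - t).
        Vec2 ScaleOffset(Vec2 p, Vec2 t, float s) {
            return { (1.0f - s) * (p.x - t.x),
                     (1.0f - s) * (p.y - t.y) };
        }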

    Read the article

  • How can I attach a model to the bone of another model?

    - by kaykayman
    I am trying to attach one animated model to one of the bones of another animated model in an XNA game. I've found a few questions/forum posts/articles online which explain how to attach a weapon model to the bone of another model (which is analogous to what I'm trying to achieve), but they don't seem to work for me. So, as an example: I want to attach Model A to a specific bone in Model B.

    Question 1. As I understand it, I need to calculate the transforms which are applied to the bone on Model B and apply these same transforms to every bone in Model A. Is this right?

    Question 2. This is my code for calculating the transforms on a specific bone:

        private Matrix GetTransformPaths(ModelBone bone)
        {
            Matrix result = Matrix.Identity;
            while (bone != null)
            {
                result = result * bone.Transform;
                bone = bone.Parent;
            }
            return result;
        }

    The maths of matrices is almost entirely lost on me, but my understanding is that the above will work its way up the bone structure to the root bone, and my end result will be the transform of the original bone relative to the model. Is this right?

    Question 3. Assuming that this is correct, I should then either apply this to each bone in Model A, or do it in my Draw() method:

        private void DrawModel(SceneModel model, GameTime gametime)
        {
            foreach (var component in model.Components)
            {
                Matrix[] transforms = new Matrix[component.Model.Bones.Count];
                component.Model.CopyAbsoluteBoneTransformsTo(transforms);

                Matrix parenttransform = Matrix.Identity;
                if (!string.IsNullOrEmpty(component.ParentBone))
                    parenttransform = GetTransformPaths(model.GetBone(component.ParentBone));

                component.Player.Update(gametime.ElapsedGameTime, true, Matrix.Identity);
                Matrix[] bones = component.Player.GetSkinTransforms();

                foreach (ModelMesh mesh in component.Model.Meshes)
                {
                    foreach (SkinnedEffect effect in mesh.Effects)
                    {
                        effect.SetBoneTransforms(bones);
                        effect.EnableDefaultLighting();
                        effect.World = transforms[mesh.ParentBone.Index]
                            * Matrix.CreateRotationY(MathHelper.ToRadians(model.Angle))
                            * Matrix.CreateTranslation(model.Position)
                            * parenttransform;
                        effect.View = getView();
                        effect.Projection = getProjection();
                        effect.Alpha = model.Opacity;
                    }
                    mesh.Draw();
                }
            }
        }

    I feel as though I have tried every conceivable way of incorporating the parenttransform value into the draw method; the above is my most recent attempt. Is what I'm trying to do correct? And if so, is there a reason it doesn't work? The above draw method seems to transpose the models' x/z positions, but even at these wrong positions they do not account for the animation of Model B at all.

    Note: as will be evident from the code, my "model" is comprised of a list of "components". It is these "components" that correspond to a single Microsoft.Xna.Framework.Graphics.Model.

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2D game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action RPG using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into SpriteBatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics-engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into SpriteBatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal, the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    OK, fast-forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45-degree rotation on the x axis). My question is: is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale which would give the aimer the appearance of being a 3D object, warped by perspective? Orthographic perspective is what I'm going for, I think. So the aimer arrow would get longer when facing sideways and shorter when facing north or south because of the perspective; at the same time, it would get wider when facing north or south, and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3D because I'm still using SpriteBatch's layerDepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3D object within the depth-sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on Stack Exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because SpriteBatch's Vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify, without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45-degree angle (around the X axis) from the viewing perspective. Alex
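
    One way to fake it under the question's own convention (one screen axis compressed by k = cos 45° ≈ .707; true skew is indeed impossible with a Vector2 scale): project a unit heading and its perpendicular through the same compression, then read the on-screen rotation, length scale and width scale off the projected vectors. A sketch, with all names hypothetical:

        #include <cmath>

        struct AimerDraw { float rotation, lengthScale, widthScale; };

        // aimAngle is the heading on the ground plane; k is the
        // foreshortening factor of the tilted axis.
        AimerDraw ProjectAimer(float aimAngle, float k = 0.70710678f) {
            float x = std::cos(aimAngle);          // projected heading
            float y = k * std::sin(aimAngle);
            float px = -std::sin(aimAngle);        // projected perpendicular
            float py = k * std::cos(aimAngle);
            return { std::atan2(y, x),             // on-screen rotation
                     std::hypot(x, y),             // longer when sideways
                     std::hypot(px, py) };         // wider when facing north/south
        }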

    Read the article

  • OpenGL ES 2 jittery camera movement

    - by user16547
    First of all, I am aware that there's no camera in OpenGL (ES 2), but from my understanding proper manipulation of the projection matrix can simulate the concept of a camera. What I'm trying to do is make my camera follow my character. My game is 2D, by the way. I think the principle is the following (take Super Mario Bros or Doodle Jump as reference -- actually I'm trying to replicate the mechanics of the latter): when the character goes beyond the center of the screen (in the positive direction), update the camera to be centred on the character; else keep the camera still. I did accomplish that, however the camera movement is noticeably jittery and I have run out of ideas how to make it smoother. First of all, my game loop (following this article):

        private int TICKS_PER_SECOND = 30;
        private int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
        private int MAX_FRAMESKIP = 5;

        @Override
        public void run() {
            loops = 0;
            if (firstLoop) {
                nextGameTick = SystemClock.elapsedRealtime();
                firstLoop = false;
            }
            while (SystemClock.elapsedRealtime() > nextGameTick && loops < MAX_FRAMESKIP) {
                step();
                nextGameTick += SKIP_TICKS;
                loops++;
            }
            interpolation = (SystemClock.elapsedRealtime() + SKIP_TICKS - nextGameTick) / (float)SKIP_TICKS;
            draw();
        }

    And the following code deals with moving the camera. I was unsure whether to place it in step() or draw(), but it doesn't make a difference to my problem at the moment, as I tried both and neither seemed to fix it. center just represents the y coordinate of the centre of the screen at any time; initially it is 0. The camera object is my own custom "camera", which basically is a class that just manipulates the view and projection matrices.

        if (character.getVerticalSpeed() >= 0) { // only update camera if going up
            float[] projectionMatrix = camera.getProjectionMatrix();
            if (character.getY() > center) {
                center += character.getVerticalSpeed();
                cameraBottom = center + camera.getBottom();
                cameraTop = center + camera.getTop();
                Matrix.orthoM(projectionMatrix, 0, camera.getLeft(), camera.getRight(),
                              center + camera.getBottom(), center + camera.getTop(),
                              camera.getNear(), camera.getFar());
            }
        }

    Any thoughts about what I should try or what I am doing wrong?

    Update 1: I think I updated every value you can see on screen to check whether the jittery movement is affected by that, but nothing changed, so something must be fundamentally flawed with my approach/calculations.
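
    One common cause of this kind of jitter, hedged because the code alone can't confirm it: step() moves center in fixed ticks while draw() runs in between them, yet the interpolation factor the loop computes is never used for the camera. A minimal sketch of the blend, assuming a prevCenter saved at the start of each step():

        // Hypothetical helper: blend the last two fixed-step camera centres
        // by the loop's interpolation factor before building the ortho matrix,
        // so the camera moves every frame, not only every tick.
        float SmoothedCenter(float prevCenter, float center, float interpolation) {
            return prevCenter + (center - prevCenter) * interpolation;
        }

        // in draw(): use SmoothedCenter(prevCenter, center, interpolation)
        // in place of center when calling Matrix.orthoM(...).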

    Read the article

  • GLM Vector Transformations [duplicate]

    - by Reanimation
    This question already has an answer here: Car-like Physics - Basic Maths to Simulate Steering 2 answers I have a cube rendered on the screen which represents a car (or similar). Using Projection/Model matrices and Glm I am able to move it back and fourth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter which direction it's current orientation is. (ie. if I would like, if it's rotated right 30degrees, when it's move forwards, it travels along the 30degree angle on a new axes). I hope I've explained that correctly. This is what I've managed to do so far in terms of using glm to move the cube: glm::vec3 vel; //velocity vector void renderMovingCube(){ glUseProgram(movingCubeShader.handle()); GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix"); glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]); glm::mat4 viewMatrixMovingCube; viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ); vel.x = cos(rotX); vel.y=sin(rotX); vel*=moveCube; //move cube ModelViewMatrix = glm::translate(viewMatrixMovingCube,globalPos*vel); //bring ground and cube to bottom of screen ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0,-48,0)); ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0,1,0)); //manually turn glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader movingCube.render(); //draw glUseProgram(0); } keyboard input: void keyboard() { char BACKWARD = keys['S']; char FORWARD = keys['W']; char ROT_LEFT = keys['A']; char ROT_RIGHT = keys['D']; if (FORWARD) //W - move forwards { globalPos += vel; //globalPos.z -= moveCube; BACKWARD = false; } if (BACKWARD)//S - move backwards { globalPos.z += moveCube; FORWARD = false; } if (ROT_LEFT)//A - turn left { rotX +=0.01f; ROT_LEFT = false; } if (ROT_RIGHT)//D - turn right { rotX -=0.01f; ROT_RIGHT = false; } Where am I going wrong with my vectors? I would like change the direction of the cube (which it does) but then move forwards in that direction.

    Read the article

  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from the FBO, it only renders everything blue, but doesn't show the triangle. I can't seem to figure out why this is happening. Can someone please help me out with this? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I set up the FBO; the setup code is at the end.

    Here is my rendering code:

        void FrameBufferObject::Render(unsigned int elapsedGameTime)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
            glClearColor(0.0, 0.6, 0.5, 1);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // adjust viewport and projection matrices to texture dimensions
            glPushAttrib(GL_VIEWPORT_BIT);
            glViewport(0, 0, m_FBOWidth, m_FBOHeight);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            DrawTriangle();

            glPopAttrib();
            // setting FrameBuffer back to window-specified Framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind

            // back to normal viewport and projection matrix
            //glViewport(0, 0, 1280, 768);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, 1.33, 1.0, 1000.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            render(elapsedGameTime);
        }

        void FrameBufferObject::DrawTriangle()
        {
            glPushMatrix();
            glBegin(GL_TRIANGLES);
            glColor3f(1, 0, 0);
            glVertex2d(0, 0);
            glVertex2d(m_FBOWidth, 0);
            glVertex2d(m_FBOWidth, m_FBOHeight);
            glEnd();
            glPopMatrix();
        }

        void FrameBufferObject::render(unsigned int elapsedTime)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);

            glPushMatrix();
            glTranslated(0, 0, -20);
            glBegin(GL_QUADS);
            glColor4f(1, 1, 1, 1);
            glTexCoord2f(1, 1); glVertex3f(1, 1, 1);
            glTexCoord2f(0, 1); glVertex3f(-1, 1, 1);
            glTexCoord2f(0, 0); glVertex3f(-1, -1, 1);
            glTexCoord2f(1, 0); glVertex3f(1, -1, 1);
            glEnd();
            glPopMatrix();

            glBindTexture(GL_TEXTURE_2D, 0);
            glDisable(GL_TEXTURE_2D);
        }

    And the setup code:

        void FrameBufferObject::Initialize()
        {
            // Generate FBO
            glGenFramebuffers(1, &m_FBO);
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);

            // Add a depth buffer as a renderbuffer to the FBO
            glGenRenderbuffers(1, &m_DepthBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer);
            // allocate space in the renderbuffer for the depth buffer
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight);
            // attach the depth buffer to the FBO at the depth attachment point
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer);

            // Create a texture and attach it to the FBO
            glGenTextures(1, &m_TextureID);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // only allocating space
            glBindTexture(GL_TEXTURE_2D, 0);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0);

            // Check FBO status
            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
                std::cout << "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n";

            // switch back to window-system framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    Thanks!
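
    A hedged guess at the cause, based only on the code shown: glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0) makes the visible eye-space depth range [-1, -100], but DrawTriangle emits vertices at z = 0 via glVertex2d -- in front of the near plane -- so the triangle is clipped away and only the blue clear color reaches the texture. One minimal fix is an ortho volume that brackets z = 0:

        // inside FrameBufferObject::Render(), in place of the original call
        glOrtho(0, m_FBOWidth, 0, m_FBOHeight, -1.0, 1.0);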

    Read the article
