Search Results

Search found 428 results on 18 pages for 'inverse'.

Page 12/18 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • How to raycast select a scaled OBB?

    - by user3254944
    I have OBB picking code to select an OBB, inspired by Real-Time Rendering 3 and opengl-tutorial.org. I can successfully select objects that have been moved or rotated. However, I can't correctly select an object that has been scaled. The bounding box scales correctly, but I can only select the object in a thin strip at its center. How do I fix the checkForHits() function so that it accounts for the scaling that I passed to it in the raycast matrix? void GLWidget::selectObjRaycast() { glm::vec2 mouse = (glm::vec2(mousePos.x(), mousePos.y()) / glm::vec2(this->width(), this->height())) * 2.0f - 1.0f; mouse.y *= -1; glm::mat4 toWorld = glm::inverse(ProjectionM * ViewM); glm::vec4 from = toWorld * glm::vec4(mouse, -1.0f, 1.0f); glm::vec4 to = toWorld * glm::vec4(mouse, 1.0f, 1.0f); from /= from.w; to /= to.w; fromAABB = glm::vec3(from); toAABB = glm::normalize(glm::vec3(to - from)); checkForHits(); } void GLWidget::checkForHits() { for (int i = 0; i < myWin.myEtc->allObj.size(); ++i) //check for hits on each obj's bb { bool miss = 0; float tMin = 0.0f; float tMax = 100000.0f; glm::vec3 bbPos(myWin.myEtc->allObj[i]->raycastM[3].x, myWin.myEtc->allObj[i]->raycastM[3].y, myWin.myEtc->allObj[i]->raycastM[3].z); glm::vec3 delta = bbPos - fromAABB; for (int j = 0; j < 3; ++j) { glm::vec3 axis(myWin.myEtc->allObj[i]->raycastM[j].x, myWin.myEtc->allObj[i]->raycastM[j].y, myWin.myEtc->allObj[i]->raycastM[j].z); float e = glm::dot(axis, delta); float f = glm::dot(toAABB, axis); if (fabs(f) > 0.001f) { float t1 = (e + myWin.myEtc->allObj[i]->bbMin[j]) / f; float t2 = (e + myWin.myEtc->allObj[i]->bbMax[j]) / f; if (t1 > t2) { float w = t1; t1 = t2; t2 = w; } if (t2 < tMax) tMax = t2; if (t1 > tMin) tMin = t1; if (tMax < tMin) miss = 1; } else { if (-e + myWin.myEtc->allObj[i]->bbMin[j] > 0.0f || -e + myWin.myEtc->allObj[i]->bbMax[j] < 0.0f) miss = 1; } } if (miss == 0) { intersection_distance = tMin; myWin.myEtc->sel.push_back(myWin.myEtc->allObj[i]); myWin.myEtc->allObj[i]->highlight = myWin.myGLHelp->highlight; break; } } } void Object::render(glm::mat4 PV) { scaleM = glm::scale(glm::mat4(), s->val_3); r_quat = glm::quat(glm::radians(r->val_3)); rotationM = glm::toMat4(r_quat); translationM = glm::translate(glm::mat4(), t->val_3); transLocal1M = glm::translate(glm::mat4(), -rsPivot->val_3); transLocal2M = glm::translate(glm::mat4(), rsPivot->val_3); raycastM = translationM * transLocal2M * rotationM * scaleM * transLocal1M; // MVP = PV * translationM * transLocal2M * rotationM * scaleM * transLocal1M; }
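
    A minimal, self-contained C++/GLM sketch of one common way to handle the scaled case follows: each column of the model matrix is a scaled axis, so it is normalized before the dot products and the local bbMin/bbMax extents are multiplied by that axis length. The function name and parameters are illustrative and not taken from the post above.

        #include <glm/glm.hpp>
        #include <algorithm>
        #include <cmath>
        #include <utility>

        // Slab test against an OBB whose axes are the (possibly scaled) columns of `model`.
        // localMin/localMax are the unscaled local-space extents of the box.
        // Returns true and writes the hit distance to tHit when the ray intersects.
        bool rayIntersectsScaledOBB(const glm::vec3& rayOrigin, const glm::vec3& rayDir,
                                    const glm::mat4& model,
                                    const glm::vec3& localMin, const glm::vec3& localMax,
                                    float& tHit)
        {
            float tMin = 0.0f;
            float tMax = 100000.0f;

            const glm::vec3 obbCenter(model[3]);          // translation column
            const glm::vec3 delta = obbCenter - rayOrigin;

            for (int i = 0; i < 3; ++i)
            {
                glm::vec3 axis(model[i]);                 // scaled basis vector
                const float axisLen = glm::length(axis);  // scale factor along this axis
                axis /= axisLen;                          // unit axis for the projections

                // Scale the local extents so they match the world-space size of the box.
                const float slabMin = localMin[i] * axisLen;
                const float slabMax = localMax[i] * axisLen;

                const float e = glm::dot(axis, delta);
                const float f = glm::dot(rayDir, axis);

                if (std::fabs(f) > 0.001f)
                {
                    float t1 = (e + slabMin) / f;
                    float t2 = (e + slabMax) / f;
                    if (t1 > t2) std::swap(t1, t2);
                    tMax = std::min(tMax, t2);
                    tMin = std::max(tMin, t1);
                    if (tMax < tMin) return false;        // slabs no longer overlap: miss
                }
                else if (-e + slabMin > 0.0f || -e + slabMax < 0.0f)
                {
                    return false;                         // ray parallel to and outside this slab
                }
            }

            tHit = tMin;
            return true;
        }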

    Read the article

  • C#: A "Dumbed-Down" C++?

    - by James Michael Hare
    I was spending a lovely day this last weekend watching my sons play outside in one of the better weekends we've had here in Saint Louis for quite some time, and whilst watching them and making sure no limbs were broken or eyes poked out with sticks and other various potential injuries, I was perusing (in the correct sense of the word) this month's MSDN magazine to get a sense of the latest VS2010 features in both IDE and in languages. When I got to the back pages, I saw a wonderful article by David S. Platt entitled, "In Praise of Dumbing Down"  (msdn.microsoft.com/en-us/magazine/ee336129.aspx).  The title captivated me and I read it and found myself agreeing with it completely especially as it related to my first post on divorcing C++ as my favorite language. Unfortunately, as Mr. Platt mentions, the term dumbing-down has negative connotations, but is really and truly a good thing.  You are, in essence, taking something that is extremely complex and reducing it to something that is much easier to use and far less error prone.  Adding safeties to power tools and anti-kick mechanisms to chainsaws are in some sense "dumbing them down" to the common user -- but that also makes them safer and more accessible for the common user.  This was exactly my point with C++ and C#.  I did not mean to infer that C++ was not a useful or good language, but that in a very high percentage of cases, is too complex and error prone for the job at hand. Choosing the correct programming language for a job is a lot like choosing any other tool for a task.  For example: if I want to dig a French drain in my lawn, I can attempt to use a huge tractor-like backhoe and the job would be done far quicker than if I would dig it by hand.  I can't deny that the backhoe has the raw power and speed to perform.  But you also cannot deny that my chances of injury or chances of severing utility lines or other resources climb at an exponential rate inverse to the amount of training I may have on that machinery. Is C++ a powerful tool?  Oh yes, and it's great for those tasks where speed and performance are paramount.  But for most of us, it's the wrong tool.  And keep in mind, I say this even though I have 17 years of experience in using it and feel myself highly adept in utilizing its features both in the standard libraries, the STL, and in supplemental libraries such as BOOST.  Which, although greatly help with adding powerful features quickly, do very little to curb the relative dangers of the language. So, you may say, the fault is in the developer, that if the developer had some higher skills or if we only hired C++ experts this would not be an issue.  Now, I will concede there is some truth to this.  Obviously, the higher skilled C++ developers you hire the better the chance they will produce highly performant and error-free code.  However, what good is that to the average developer who cannot afford a full stable of C++ experts? That's my point with C#:  It's like a kinder, gentler C++.  It gives you nearly the same speed, and in many ways even more power than C++, and it gives you a much softer cushion for novices to fall against if they code less-than-optimally.  A bug is a bug, of course, in any language, but C# does a good job of hiding and taking on the task of handling almost all of the resource issues that make C++ so tricky.  For my money, C# is much more maintainable, more feature-rich, second only slightly in performance, faster to market, and -- last but not least -- safer and easier to use.  
That's why, where I work, I much prefer to see the developers moving to C#.  The quantity of bugs is much lower, and we don't need to hire "experts" to achieve the same results since the language itself handles those resource pitfalls so prevalent in poorly written C++ code.  C++ will still have its place in the world, and I'm sure I'll still use it now and again where it is truly the correct tool for the job, but for nearly every other project C# is a wonderfully "dumbed-down" version of C++ -- in the very best sense -- and to me, that's the smart choice.

    Read the article

  • Converting openGl code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer to this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I imagine OpenGL has a different coordinate system and functions that alter how you must implement the code a bit. This is his code example: public Ray GetPickRay() { int mouseX = Mouse.getX(); int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY(); float windowWidth = WORLD.Byte56Game.getWidth(); float windowHeight = WORLD.Byte56Game.getHeight(); //get the mouse position in screenSpace coords double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio; double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2)); double viewRatio = Math.tan(((float) Math.PI / (180.f/ViewAngle) / 2.00f))* zoomFactor; screenSpaceX = screenSpaceX * viewRatio; screenSpaceY = screenSpaceY * viewRatio; //Find the far and near camera spaces Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1); Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1); //Unproject the 2D window into 3D to see where in 3D we're actually clicking Matrix4f tmpView = Matrix4f(view); Matrix4f invView = (Matrix4f) tmpView.invert(); Vector4f worldSpaceNear = new Vector4f(); Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear); Vector4f worldSpaceFar = new Vector4f(); Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar); //calculate the ray position and direction Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z); Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z); rayDirection.normalise(); return new Ray(rayPosition, rayDirection); } All rights reserved to him, of course. This is my DirectX 11 code: void GraphicEngine::pickRayVector(float mouseX, float mouseY,XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir) { float PRVecX, PRVecY; float nearPlane = 0.1f; float farPlane = 200.0f; float viewAngle = 0.4 * 3.14; PRVecX = ((( 2.0f * mouseX) / ClientWidth ) - 1 ) * tan((viewAngle)/2); PRVecY = (1-(( 2.0f * mouseY) / ClientHeight)) * tan((viewAngle)/2); XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane,PRVecY * nearPlane, -nearPlane, 1.0f); XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane,PRVecY * farPlane, -farPlane, 1.0f); // Transform 3D Ray from View space to 3D ray in World space XMMATRIX invMat; XMVECTOR matInvDeter; invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); //Inverse of View Space matrix is World space matrix XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat); XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat); pickRayInWorldSpacePos = worldSpaceNear; pickRayInWorldSpaceDir = worldSpaceFar-worldSpaceNear; pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir); } A couple of notes: The mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have). I hadn't used any far or near plane before, so I just made up some arbitrary numbers for them; to my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers. The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH, I just hadn't made it a member variable of my Camera class yet. I removed the variables aspectRatio and zoomFactor because I assumed they were related to some specific function of his game. Now I'm not sure, but I think the problem lies either within the mouse-to-view-space conversion (maybe we use different coordinate systems), or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance. Edit: One more note, my code is in C++.
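
    For comparison, here is a hedged DirectXMath sketch of the same unprojection, assuming the left-handed convention implied by XMMatrixPerspectiveFovLH. Two things differ from the posted code and are worth checking: the aspect ratio still has to multiply the x term (it was dropped along with zoomFactor), and with a left-handed view matrix the camera looks down +z, so the near and far points sit at +nearPlane/+farPlane rather than at negative z. The function and parameter names are placeholders, not from the question.

        #include <DirectXMath.h>
        #include <cmath>
        using namespace DirectX;

        // Build a world-space pick ray from a mouse position, assuming a left-handed
        // view/projection pair (XMMatrixLookAtLH / XMMatrixPerspectiveFovLH).
        void BuildPickRay(float mouseX, float mouseY,
                          float clientWidth, float clientHeight,
                          float fovY, float nearPlane, float farPlane,
                          const XMMATRIX& viewMatrix,
                          XMVECTOR& rayOriginWS, XMVECTOR& rayDirWS)
        {
            const float aspect = clientWidth / clientHeight;
            const float tanHalfFov = tanf(fovY * 0.5f);

            // Mouse -> normalized device coordinates, then scale by the frustum shape.
            const float vx = (((2.0f * mouseX) / clientWidth) - 1.0f) * tanHalfFov * aspect;
            const float vy = (1.0f - ((2.0f * mouseY) / clientHeight)) * tanHalfFov;

            // Left-handed view space: the camera looks down +z, so near/far are positive.
            const XMVECTOR nearVS = XMVectorSet(vx * nearPlane, vy * nearPlane, nearPlane, 1.0f);
            const XMVECTOR farVS  = XMVectorSet(vx * farPlane,  vy * farPlane,  farPlane,  1.0f);

            // The inverse of the view matrix takes view space back to world space.
            XMVECTOR det;
            const XMMATRIX invView = XMMatrixInverse(&det, viewMatrix);

            const XMVECTOR nearWS = XMVector3TransformCoord(nearVS, invView);
            const XMVECTOR farWS  = XMVector3TransformCoord(farVS, invView);

            rayOriginWS = nearWS;
            rayDirWS    = XMVector3Normalize(XMVectorSubtract(farWS, nearWS));
        }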

    Read the article

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, the models will be caught/split between two shadow map cascades, like below. first depth split second depth split - here you can see the model is caught between the splits How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustrum erronous? CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix, Mat4& outFrustrumMat) { CameraFrustrum ret = { Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); outFrustrumMat = invMVP; for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; }
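
    One commonly suggested mitigation (a sketch, not a drop-in fix for the code above) is to give neighbouring cascades a small overlap when their near/far ranges are computed, so a mesh sitting on a boundary is fully covered by at least one split; the shader can then blend between cascades inside the overlap band to hide the seam. A rough C++ sketch, where the lambda blend factor and the overlap value are illustrative:

        #include <cmath>
        #include <utility>
        #include <vector>

        // Compute per-cascade (near, far) ranges using a practical split scheme
        // (blend of uniform and logarithmic splits) plus an overlap band so that
        // geometry sitting exactly on a boundary is rendered into both cascades.
        std::vector<std::pair<float, float>> ComputeCascadeRanges(float nearZ, float farZ,
                                                                  int numCascades,
                                                                  float lambda = 0.75f,
                                                                  float overlap = 1.05f)
        {
            std::vector<std::pair<float, float>> ranges;
            float prevSplit = nearZ;

            for (int i = 1; i <= numCascades; ++i)
            {
                const float p = static_cast<float>(i) / numCascades;
                const float logSplit = nearZ * std::pow(farZ / nearZ, p);
                const float uniSplit = nearZ + (farZ - nearZ) * p;
                const float split = lambda * logSplit + (1.0f - lambda) * uniSplit;

                // Push every cascade's far plane (except the last) a bit past the split
                // point, so geometry caught on the boundary is also covered here.
                const float cascadeFar = (i == numCascades) ? farZ : split * overlap;
                ranges.emplace_back(prevSplit, cascadeFar);
                prevSplit = split;    // the next cascade still starts at the true split
            }
            return ranges;
        }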

    Read the article

  • Quaternion difference + time --> angular velocity (gyroscope in physics library)

    - by AndrewK
    I am using Bullet Physic library to program some function, where I have difference between orientation from gyroscope given in quaternion and orientation of my object, and time between each frame in milisecond. All I want is set the orientation from my gyroscope to orientation of my object in 3D space. But all I can do is set angular velocity to my object. I have orientation difference and time, and from that I calculate vector of angular velocity [Wx,Wy,Wz] from that formula: W(t) = 2 * dq(t)/dt * conj(q(t)) My code is: btQuaternion diffQuater = gyroQuater - boxQuater; btQuaternion conjBoxQuater = gyroQuater.inverse(); btQuaternion velQuater = ((diffQuater * 2.0f) / d_time) * conjBoxQuater; And everything works well, till I get: 1 rotating around Y axis, angle about 60 degrees, then I have these values in 2 critical frames: x: -0.013220 y: -0.038050 z: -0.021979 w: -0.074250 - diffQuater x: 0.120094 y: 0.818967 z: 0.156797 w: -0.538782 - gyroQuater x: 0.133313 y: 0.857016 z: 0.178776 w: -0.464531 - boxQuater x: 0.207781 y: 0.290452 z: 0.245594 - diffQuater -> euler angles x: 3.153619 y: -66.947929 z: 175.936615 - gyroQuater -> euler angles x: 4.290697 y: -57.553043 z: 173.320053 - boxQuater -> euler angles x: 0.138128 y: 2.823307 z: 1.025552 w: 0.131360 - velQuater d_time: 0.058000 x: 0.211020 y: 1.595124 z: 0.303650 w: -1.143846 - diffQuater x: 0.089518 y: 0.771939 z: 0.144527 w: -0.612543 - gyroQuater x: -0.121502 y: -0.823185 z: -0.159123 w: 0.531303 - boxQuater x: nan y: nan z: nan - diffQuater -> euler angles x: 2.985240 y: -76.304405 z: -170.555054 - gyroQuater -> euler angles x: 3.269681 y: -65.977966 z: 175.639420 - boxQuater -> euler angles x: -0.730262 y: -2.882153 z: -1.294721 w: 63.325996 - velQuater d_time: 0.063000 2 rotating around X axis, angle about 120 degrees, then I have these values in 2 critical frames: x: -0.013045 y: -0.004186 z: -0.005667 w: -0.022482 - diffQuater x: -0.848030 y: -0.187985 z: 0.114400 w: 0.482099 - gyroQuater x: -0.834985 y: -0.183799 z: 0.120067 w: 0.504580 - boxQuater x: 0.036336 y: 0.002312 z: 0.020859 - diffQuater -> euler angles x: -113.129463 y: 0.731925 z: 25.415056 - gyroQuater -> euler angles x: -110.232368 y: 0.860897 z: 25.350458 - boxQuater -> euler angles x: -0.865820 y: -0.456086 z: 0.034084 w: 0.013184 - velQuater d_time: 0.055000 x: -1.721662 y: -0.387898 z: 0.229844 w: 0.910235 - diffQuater x: -0.874310 y: -0.200132 z: 0.115142 w: 0.426933 - gyroQuater x: 0.847352 y: 0.187766 z: -0.114703 w: -0.483302 - boxQuater x: -144.402298 y: 4.891629 z: 71.309158 - diffQuater -> euler angles x: -119.515343 y: 1.745076 z: 26.646086 - gyroQuater -> euler angles x: -112.974533 y: 0.738675 z: 25.411509 - boxQuater -> euler angles x: 2.086195 y: 0.676526 z: -0.424351 w: 70.104248 - velQuater d_time: 0.057000 2 rotating around Z axis, angle about 120 degrees, then I have these values in 2 critical frames: x: -0.000736 y: 0.002812 z: -0.004692 w: -0.008181 - diffQuater x: -0.003829 y: 0.012045 z: -0.868035 w: 0.496343 - gyroQuater x: -0.003093 y: 0.009232 z: -0.863343 w: 0.504524 - boxQuater x: -0.000822 y: -0.003032 z: 0.004162 - diffQuater -> euler angles x: -1.415189 y: 0.304210 z: -120.481873 - gyroQuater -> euler angles x: -1.091881 y: 0.227784 z: -119.399445 - boxQuater -> euler angles x: 0.159042 y: 0.169228 z: -0.754599 w: 0.003900 - velQuater d_time: 0.025000 x: -0.007598 y: 0.024074 z: -1.749412 w: 0.968588 - diffQuater x: -0.003769 y: 0.012030 z: -0.881377 w: 0.472245 - gyroQuater x: 0.003829 y: -0.012045 z: 0.868035 w: -0.496343 - 
boxQuater x: -5.645197 y: 1.148993 z: -146.507187 - diffQuater -> euler angles x: -1.418294 y: 0.270319 z: -123.638245 - gyroQuater -> euler angles x: -1.415183 y: 0.304208 z: -120.481873 - boxQuater -> euler angles x: 0.017498 y: -0.013332 z: 2.040073 w: 148.120056 - velQuater d_time: 0.027000 The problem is most visible in the diffQuater -> euler angles vector. Can someone tell me why this happens, and how to solve it? All suggestions are welcome.
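
    Judging from the dumps, one likely culprit is the quaternion double cover: q and -q encode the same orientation, and in both problem frames boxQuater flips sign relative to gyroQuater, which makes the component-wise difference blow up even though the rotation barely changed. A hedged sketch using Bullet's btQuaternion follows; the hemisphere check is the addition, the formula is the one from the question with the conjugate taken of the current (box) orientation rather than the gyro one, and where exactly this sits in your update loop is an assumption.

        #include <LinearMath/btQuaternion.h>
        #include <LinearMath/btVector3.h>

        // Angular velocity needed to rotate from boxQuater to gyroQuater over dt
        // seconds, based on w = 2 * (dq/dt) * conj(q). For unit quaternions,
        // inverse() is the conjugate. The hemisphere check guards against the
        // double cover (q and -q represent the same orientation).
        btVector3 AngularVelocityFromQuats(btQuaternion gyroQuater,
                                           const btQuaternion& boxQuater,
                                           btScalar dt)
        {
            // Flip the target into the same hemisphere as the current orientation.
            if (gyroQuater.dot(boxQuater) < btScalar(0.0))
                gyroQuater = -gyroQuater;

            const btQuaternion diffQuater = gyroQuater - boxQuater;   // component-wise dq
            const btQuaternion velQuater =
                (diffQuater * (btScalar(2.0) / dt)) * boxQuater.inverse();

            // The vector part of velQuater approximates the angular velocity in rad/s.
            return btVector3(velQuater.getX(), velQuater.getY(), velQuater.getZ());
        }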

    Read the article

  • SQL Server Date Comparison Functions

    - by HighAltitudeCoder
    A few months ago, I found myself working with a repetitive cursor that looped until the data had been manipulated enough times that it was finally correct.  The cursor was heavily dependent upon dates, every time requiring the earlier of two (or several) dates in one stored procedure, while requiring the later of two dates in another stored procedure. In short, what I needed was a function that would allow me to perform the following evaluation: WHERE MAX(Date1, Date2) < @SomeDate The problem is, the MAX() function in SQL Server does not perform this functionality.  So, I set out to put these functions together.  They are titled: EarlierOf() and LaterOf(). /**********************************************************  EarlierOf.sql  **********************************************************/ /**********************************************************  Return the earlier of two DATETIME variables.  Parameter 1: DATETIME1 Parameter 2: DATETIME2  Works for a variety of DATETIME or NULL values. Even though comparisons with NULL are actually indeterminate, we know conceptually that NULL is not earlier or later than any other date provided.  SYNTAX: SELECT dbo.EarlierOf('1/1/2000','12/1/2009') SELECT dbo.EarlierOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521') SELECT dbo.EarlierOf('11/15/2000',NULL) SELECT dbo.EarlierOf(NULL,'1/15/2004') SELECT dbo.EarlierOf(NULL,NULL)  **********************************************************/ USE AdventureWorks GO  IF EXISTS (SELECT * FROM sysobjects WHERE name = 'EarlierOf' AND xtype = 'FN') BEGIN DROP FUNCTION EarlierOf END GO  CREATE FUNCTION EarlierOf ( @Date1 DATETIME, @Date2 DATETIME ) RETURNS DATETIME AS BEGIN DECLARE @ReturnDate DATETIME  IF (@Date1 IS NULL AND @Date2 IS NULL) BEGIN SET @ReturnDate = NULL GOTO EndOfFunction END  ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL) BEGIN SET @ReturnDate = @Date2 GOTO EndOfFunction END  ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL) BEGIN SET @ReturnDate = @Date1 GOTO EndOfFunction END  ELSE BEGIN SET @ReturnDate = @Date1 IF @Date2 < @Date1 SET @ReturnDate = @Date2 GOTO EndOfFunction END  EndOfFunction: RETURN @ReturnDate  END -- End Function GO  ---- Set Permissions --GRANT SELECT ON EarlierOf TO UserRole1 --GRANT SELECT ON EarlierOf TO UserRole2 --GO  The inverse of this function is only slightly different. /**********************************************************  LaterOf.sql  **********************************************************/ /**********************************************************  Return the later of two DATETIME variables.  Parameter 1: DATETIME1 Parameter 2: DATETIME2  Works for a variety of DATETIME or NULL values. Even though comparisons with NULL are actually indeterminate, we know conceptually that NULL is not earlier or later than any other date provided.  
SYNTAX: SELECT dbo.LaterOf('1/1/2000','12/1/2009') SELECT dbo.LaterOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521') SELECT dbo.LaterOf('11/15/2000',NULL) SELECT dbo.LaterOf(NULL,'1/15/2004') SELECT dbo.LaterOf(NULL,NULL)   **********************************************************/ USE AdventureWorks GO   IF EXISTS       (SELECT *       FROM sysobjects       WHERE name = 'LaterOf'       AND xtype = 'FN'       ) BEGIN             DROP FUNCTION LaterOf END GO   CREATE FUNCTION LaterOf (       @Date1                              DATETIME,       @Date2                              DATETIME )   RETURNS DATETIME   AS BEGIN       DECLARE @ReturnDate     DATETIME         IF (@Date1 IS NULL AND @Date2 IS NULL)       BEGIN             SET @ReturnDate = NULL             GOTO EndOfFunction       END         ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)       BEGIN             SET @ReturnDate = @Date2             GOTO EndOfFunction       END         ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)       BEGIN             SET @ReturnDate = @Date1             GOTO EndOfFunction       END         ELSE       BEGIN             SET @ReturnDate = @Date1             IF @Date2 > @Date1                   SET @ReturnDate = @Date2             GOTO EndOfFunction       END         EndOfFunction:       RETURN @ReturnDate   END -- End Function GO   ---- Set Permissions --GRANT SELECT ON LaterOf TO UserRole1 --GRANT SELECT ON LaterOf TO UserRole2 --GO                                                                                             The interesting thing about this function is its simplicity and the built-in NULL handling functionality.  Its interesting, because it seems like something should already exist in SQL Server that does this.  From a different vantage point, if you create this functionality and it is easy to use (ideally, intuitively self-explanatory), you have made a successful contribution. Interesting is good.  Self-explanatory, or intuitive is FAR better.  Happy coding! Graeme

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light-pass to a texture which I will later apply on the scene. But I seem to calculate the light position wrong. I am working on view-space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at 0,10,0 position, and I transform it to view-space first: Vector4 pos; Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos); shader.SendUniform ("LightViewPosition", ref pos); Now to me that does not look as it should. What I think it should look like is that the white area should be on the center of the scene. The camera is at the corner of the scene, and it seems as if the light would move along with the camera. Here's the fragment shader code: void main(){ // default black color vec3 color = vec3(0); // Pixel coordinates on screen without depth vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize; // Get pixel position using depth from texture vec4 depthtexel = texture( DepthTexture, PixelCoordinates ); float depthSample = unpack_depth(depthtexel); // Get pixel coordinates on camera-space by multiplying the // coordinate on screen-space by inverse projection matrix vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0)); // Undo the perspective calculations vec3 pixelPosition = (world.xyz / world.w) * 3; // How far the light should reach from it's point of origin float lightReach = LightColor.a / 2; // Vector in between light and pixel vec3 lightDir = (LightViewPosition.xyz - pixelPosition); float lightDistance = length(lightDir); vec3 lightDirN = normalize(lightDir); // Discard pixels too far from light source //if(lightReach < lightDistance) discard; // Get normal from texture vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1); // Half vector between the light direction and eye, used for specular component vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition)); // Dot product of normal and light direction float NdotL = dot(normal, lightDirN); float attenuation = pow(lightReach / lightDistance, LightFalloff); // If pixel is lit by the light if(NdotL > 0) { // I have moved stuff from here to above so I can debug them. // Diffuse light color color += LightColor.rgb * NdotL * attenuation; // Specular light color color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation; } RT0 = vec4(color, 1); //RT0 = vec4(pixelPosition, 1); //RT0 = vec4(depthSample, depthSample, depthSample, 1); //RT0 = vec4(NdotL, NdotL, NdotL, 1); RT0 = vec4(attenuation, attenuation, attenuation, 1); //RT0 = vec4(lightReach, lightReach, lightReach, 1); //RT0 = depthtexel; //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1); //RT0 = vec4(lightDirN, 1); //RT0 = vec4(halfVector, 1); //RT0 = vec4(LightColor.xyz,1); //RT0 = vec4(LightViewPosition.xyz/100, 1); //RT0 = vec4(LightPosition.xyz, 1); //RT0 = vec4(normal,1); } What am I doing wrong here?

    Read the article

  • Arcball Problems with UDK

    - by opdude
    I'm trying to re-create an arcball example from a Nehe, where an object can be rotated in a more realistic way while floating in the air (in my game the object is attached to the player at a distance like for example the Physics Gun) however I'm having trouble getting this to work with UDK. I have created an LGArcBall which follows the example from Nehe and I've compared outputs from this with the example code. I think where my problem lies is what I do to the Quaternion that is returned from the LGArcBall. Currently I am taking the returned Quaternion converting it to a rotation matrix. Getting the product of the last rotation (set when the object is first clicked) and then returning that into a Rotator and setting that to the objects rotation. If you could point me in the right direction that would be great, my code can be found below. class LGArcBall extends Object; var Quat StartRotation; var Vector StartVector; var float AdjustWidth, AdjustHeight, Epsilon; function SetBounds(float NewWidth, float NewHeight) { AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f); AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f); } function StartDrag(Vector2D startPoint, Quat rotation) { StartVector = MapToSphere(startPoint); } function Quat Update(Vector2D currentPoint) { local Vector currentVector, perp; local Quat newRot; //Map the new point to the sphere currentVector = MapToSphere(currentPoint); //Compute the vector perpendicular to the start and current perp = startVector cross currentVector; //Make sure our length is larger than Epsilon if (VSize(perp) > Epsilon) { //Return the perpendicular vector as the transform newRot.X = perp.X; newRot.Y = perp.Y; newRot.Z = perp.Z; //In the quaternion values, w is cosine (theta / 2), where //theta is the rotation angle newRot.W = startVector dot currentVector; } else { //The two vectors coincide, so return an identity transform newRot.X = 0.0f; newRot.Y = 0.0f; newRot.Z = 0.0f; newRot.W = 0.0f; } return newRot; } function Vector MapToSphere(Vector2D point) { local float x, y, length, norm; local Vector result; //Transform the mouse coords to [-1..1] //and inverse the Y coord x = (point.X * AdjustWidth) - 1.0f; y = 1.0f - (point.Y * AdjustHeight); length = (x * x) + (y * y); //If the point is mapped outside of the sphere //( length > radius squared) if (length > 1.0f) { norm = 1.0f / Sqrt(length); //Return the "normalized" vector, a point on the sphere result.X = x * norm; result.Y = y * norm; result.Z = 0.0f; } else //It's inside of the sphere { //Return a vector to the point mapped inside the sphere //sqrt(radius squared - length) result.X = x; result.Y = y; result.Z = Sqrt(1.0f - length); } return result; } DefaultProperties { Epsilon = 0.000001f } I'm then attempting to rotate that object when the mouse is dragged, with the following update code in my PlayerController. //Get Mouse Position MousePosition.X = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.X; MousePosition.Y = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.Y; newQuat = ArcBall.Update(MousePosition); rotMatrix = MakeRotationMatrix(QuatToRotator(newQuat)); rotMatrix = rotMatrix * LastRot; LGMoveableActor(movingPawn.CurrentUseableObject).SetPhysics(EPhysics.PHYS_Rotating); LGMoveableActor(movingPawn.CurrentUseableObject).SetRotation(MatrixGetRotator(rotMatrix));
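
    For reference, here is a compact, hedged C++/GLM sketch of the same arcball idea (not UnrealScript, and not a drop-in for the UDK classes above). Two details that often bite in ports like this: the identity quaternion has w = 1 rather than 0, and each frame's drag rotation is composed with the rotation captured when the drag started, not with the previous frame's matrix.

        #include <cmath>
        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Map a point in window coordinates onto the arcball's unit sphere.
        glm::vec3 MapToSphere(const glm::vec2& point, float width, float height)
        {
            const float x = (2.0f * point.x / width) - 1.0f;
            const float y = 1.0f - (2.0f * point.y / height);   // invert the y axis
            const float lengthSq = x * x + y * y;

            if (lengthSq > 1.0f)                     // outside the sphere: project onto its edge
            {
                const float norm = 1.0f / std::sqrt(lengthSq);
                return glm::vec3(x * norm, y * norm, 0.0f);
            }
            return glm::vec3(x, y, std::sqrt(1.0f - lengthSq));  // inside: lift onto the sphere
        }

        struct Arcball
        {
            glm::vec3 startVec;        // sphere point where the drag began
            glm::quat startRotation;   // object's rotation when the drag began

            void StartDrag(const glm::vec2& p, const glm::quat& currentRotation,
                           float w, float h)
            {
                startVec = MapToSphere(p, w, h);
                startRotation = currentRotation;
            }

            // Rotation to apply to the object for the current mouse position.
            glm::quat Update(const glm::vec2& p, float w, float h) const
            {
                const glm::vec3 currentVec = MapToSphere(p, w, h);
                const glm::vec3 axis = glm::cross(startVec, currentVec);

                glm::quat drag(1.0f, 0.0f, 0.0f, 0.0f);          // identity: w = 1, not 0
                if (glm::length(axis) > 1e-6f)
                    drag = glm::quat(glm::dot(startVec, currentVec), axis.x, axis.y, axis.z);

                // Compose with the rotation captured at StartDrag, not the previous frame.
                return glm::normalize(drag) * startRotation;
            }
        };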

    Read the article

  • Self-reference entity in Hibernate

    - by Marco
    Hi guys, I have an Action entity, that can have other Action objects as child in a bidirectional one-to-many relationship. The problem is that Hibernate outputs the following exception: "Repeated column in mapping for collection: DbAction.childs column: actionId" Below the code of the mapping: <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="DbAction" table="actions"> <id name="actionId" type="short" /> <property not-null="true" name="value" type="string" /> <set name="childs" table="action_action" cascade="all-delete-orphan"> <key column="actionId" /> <many-to-many column="actionId" unique="true" class="DbAction" /> </set> <join table="action_action" inverse="true" optional="false"> <key column="actionId" /> <many-to-one name="parentAction" column="actionId" not-null="true" class="DbAction" /> </join> </class> </hibernate-mapping>

    Read the article

  • Rails ActiveRecord - Best way to perform an include?

    - by dwhite
    I have three models: class Book < ActiveRecord::Base has_many :collections has_many :users, :through => :collections end class User < ActiveRecord::Base has_many :collections has_many :books, :through => :collections end class Collection < ActiveRecord::Base belongs_to :book belongs_to :user end I'm trying to display a list of the books and have a link to either add or remove each one from the user's collection. I can't quite figure out the best syntax to do this. For example, if I do the following: Controller class BooksController < ApplicationController def index @books = Book.all end end View ... <% if book.users.include?(current_user) %> ... or obviously the inverse... ... <% if current_user.books.include?(book) %> ... Then a query is sent for each book to check that include?, which is wasteful. I was thinking of adding the users or collections to the :include on the Book.all, but I'm not sure this is the best way. Effectively, all I need is the book object and a boolean column indicating whether or not the current user has the book in their collection, but I'm not sure how to formulate the query to do that. Thanks in advance for your help. -Damien

    Read the article

  • Efficient Multiple Linear Regression in C# / .Net

    - by mrnye
    Does anyone know of an efficient way to do multiple linear regression in C#, where the number of simultaneous equations may be in the thousands (with 3 or 4 different inputs)? After reading this article on multiple linear regression I tried implementing it with a matrix equation: Matrix y = new Matrix( new double[,]{{745}, {895}, {442}, {440}, {1598}}); Matrix x = new Matrix( new double[,]{{1, 36, 66}, {1, 37, 68}, {1, 47, 64}, {1, 32, 53}, {1, 1, 101}}); Matrix b = (x.Transpose() * x).Inverse() * x.Transpose() * y; for (int i = 0; i < b.Rows; i++) { Trace.WriteLine("INFO: " + b[i, 0].ToDouble()); } However, it does not scale well to thousands of equations due to the matrix inversion operation. I could call out to R and use that; however, I was hoping there would be a pure .NET solution that will scale to these large sets. Any suggestions? EDIT #1: I have settled on using R for the time being. Using statconn (downloaded here), I have found this method to be both fast and relatively easy to use. For example, here is a small code snippet; it really isn't much code at all to use the R statconn library (note: this is not all the code!). _StatConn.EvaluateNoReturn(string.Format("output <- lm({0})", equation)); object intercept = _StatConn.Evaluate("coefficients(output)['(Intercept)']"); parameters[0] = (double)intercept; for (int i = 0; i < xColCount; i++) { object parameter = _StatConn.Evaluate(string.Format("coefficients(output)['x{0}']", i)); parameters[i + 1] = (double)parameter; }
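
    For what it's worth, a dependency-free C++ sketch of the same normal-equations approach is below. With only 3 or 4 predictors, the k-by-k system (X^T X) b = X^T y is tiny; the expensive part is the single pass that accumulates X^T X and X^T y, which is linear in the number of equations, and the small system can then be solved by Gaussian elimination instead of an explicit inverse. Function and variable names are illustrative.

        #include <cmath>
        #include <cstddef>
        #include <utility>
        #include <vector>

        // Ordinary least squares for y ~ X*b, solved via the normal equations
        // (X^T X) b = X^T y with Gaussian elimination on the small k x k system.
        // X is row-major: rows = observations, cols = predictors (include a column
        // of ones for the intercept). Suitable when the number of predictors is small.
        std::vector<double> LinearRegression(const std::vector<std::vector<double>>& X,
                                             const std::vector<double>& y)
        {
            const std::size_t n = X.size();      // observations
            const std::size_t k = X[0].size();   // predictors

            // Accumulate A = X^T X (k x k) and c = X^T y (k) in a single pass over the rows.
            std::vector<std::vector<double>> A(k, std::vector<double>(k, 0.0));
            std::vector<double> c(k, 0.0);
            for (std::size_t r = 0; r < n; ++r)
                for (std::size_t i = 0; i < k; ++i)
                {
                    c[i] += X[r][i] * y[r];
                    for (std::size_t j = 0; j < k; ++j)
                        A[i][j] += X[r][i] * X[r][j];
                }

            // Gaussian elimination with partial pivoting on the augmented system [A | c].
            for (std::size_t col = 0; col < k; ++col)
            {
                std::size_t pivot = col;
                for (std::size_t r = col + 1; r < k; ++r)
                    if (std::fabs(A[r][col]) > std::fabs(A[pivot][col])) pivot = r;
                std::swap(A[col], A[pivot]);
                std::swap(c[col], c[pivot]);

                for (std::size_t r = col + 1; r < k; ++r)
                {
                    const double factor = A[r][col] / A[col][col];
                    for (std::size_t j = col; j < k; ++j) A[r][j] -= factor * A[col][j];
                    c[r] -= factor * c[col];
                }
            }

            // Back substitution yields the coefficient vector b.
            std::vector<double> b(k, 0.0);
            for (std::size_t i = k; i-- > 0;)
            {
                double sum = c[i];
                for (std::size_t j = i + 1; j < k; ++j) sum -= A[i][j] * b[j];
                b[i] = sum / A[i][i];
            }
            return b;
        }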

    Read the article

  • Twitter Bootstrap 3 top nav dropdown covered by subnav bar

    - by Aaron Qian
    I'm creating a Twitter Bootstrap main nav and sub nav with a drop down menu on the main nav. The problem is that I can never get the drop down menu to be shown above the sub nav. Here is the HTML: <div class="mainnav navbar navbar-fixed-top navbar-inverse"> <div class="navbar-inner"> <div class="container"> <ul class="nav"> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown"> Settings </a> <ul class="dropdown-menu"> <li><a>test 1</a></li> <li><a>test 2</a></li> </ul> </li> </ul> </div> </div> </div> <div class="subnav navbar navbar-fixed-top"> <div class="navbar-inner"> <div class="container"> <ul class="nav"> <li><a>foo</a></li> <li><a>bar</a></li> </ul> </div> </div> </div> CSS: .subnav { top: 40px; } .mainnav .dropdown-menu { // how can I show this menu above the subnav??? }

    Read the article

  • NHibernate.PropertyValueException : not-null property references a null or transient

    - by frosty
    I am getting the following exception. NHibernate.PropertyValueException : not-null property references a null or transient Here are my mapping files. Product <class name="Product" table="Products"> <id name="Id" type="Int32" column="Id" unsaved-value="0"> <generator class="identity"/> </id> <set name="PriceBreaks" table="PriceBreaks" generic="true" cascade="all" inverse="true" > <key column="ProductId" /> <one-to-many class="EStore.Domain.Model.PriceBreak, EStore.Domain" /> </set> </class> Price Breaks <class name="PriceBreak" table="PriceBreaks"> <id name="Id" type="Int32" column="Id" unsaved-value="0"> <generator class="identity"/> </id> <property name="ProductId" column="ProductId" type="Int32" not-null="true" /> <many-to-one name="Product" column="ProductId" not-null="true" cascade="all" class="EStore.Domain.Model.Product, EStore.Domain" /> </class> I get the exception on the following method [Test] public void Can_Add_Price_Break() { IPriceBreakRepository repo = new PriceBreakRepository(); var priceBreak = new PriceBreak(); priceBreak.ProductId = 19; repo.Add(priceBreak); Assert.Greater(priceBreak.Id, 0); }

    Read the article

  • Big problem with Dijkstra algorithm in a linked list graph implementation

    - by Nazgulled
    Hi, I have my graph implemented with linked lists, for both vertices and edges, and that is becoming an issue for Dijkstra's algorithm. As I said in a previous question, I'm converting code that uses an adjacency matrix to work with my graph implementation. The problem is that when I find the minimum value I get an array index. This index would have matched the vertex index if the graph vertices were stored in an array instead, and access to the vertex would be constant. I don't have time to change my graph implementation, but I do have a hash table, indexed by a unique number (one that does not start at 0; it's like 100090000), which is the problem I'm having. Whenever I need to, I use the modulo operator to get a number between 0 and the total number of vertices. This works fine when I need an array index from the number, but when I need the number from the array index (to access the calculated minimum-distance vertex in constant time), not so much. I tried to search for how to invert the modulo operation, like 100090000 mod 18000 = 10000 and 10000 invmod 18000 = 100090000, but couldn't find a way to do it. My next alternative is to build some sort of reference array where, in the example above, arr[10000] = 100090000. That would fix the problem, but would require looping over the whole graph one more time. Is there a better/easier solution with my current graph implementation?
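
    The modulo mapping cannot be inverted (many keys share the same residue), so the usual workaround is to record the reverse mapping explicitly as vertices are first assigned indices, which costs O(1) per vertex and avoids an extra pass over the graph. A small hedged C++ sketch of that idea (the structure name and types are illustrative):

        #include <cstdint>
        #include <cstddef>
        #include <unordered_map>
        #include <vector>

        // Two-way mapping between sparse vertex ids (e.g. 100090000) and the dense
        // 0..N-1 indices used by the Dijkstra arrays. Built incrementally, so no
        // extra pass over the graph is required.
        struct VertexIndex
        {
            std::unordered_map<std::uint64_t, std::size_t> idToIndex;
            std::vector<std::uint64_t> indexToId;

            std::size_t IndexOf(std::uint64_t vertexId)
            {
                auto it = idToIndex.find(vertexId);
                if (it != idToIndex.end()) return it->second;
                const std::size_t idx = indexToId.size();
                idToIndex.emplace(vertexId, idx);   // forward direction: id -> index
                indexToId.push_back(vertexId);      // reverse direction: index -> id
                return idx;
            }

            std::uint64_t IdOf(std::size_t index) const { return indexToId[index]; }
        };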

    Read the article

  • One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

    - by Eric J.
    I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business rule validations), persistence (NHibernate mapping file) and DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded. There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15 developer, 3 year project. I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with Entity Data Modeler. Is it a reasonable path to use VS 2010 UML as the "single source of truth" and code generate the edmx files based on the class model? That's the inverse of the common path to create the entity model and use a POCO generator to generate classes. However, a good class model can be used to generate much more than just the properties so I tend to view it as a better choice than the entity model.

    Read the article

  • Using R in Processing through rJava/JRI?

    - by tommy-o-dell
    Hi guys, Is it possible to run R in Processing through rJava/JRI? If I deployed a Processing app on the web, would the client need R on their system? I'm looking to create an interactive information dashboard that I can deploy on the web. It seems that Processing is probably my best bet for the interactive/web part of things. Unfortunately, it doesn't look like there are many math/stats functions built in, and there aren't any libraries for plotting data either. I've been using R and ggplot2 for a few months and am thrilled (amazed) at how easily it manipulates and plots data. So I'm wondering now if I can get the best of both worlds and run R through a Processing applet. From the JRI website: JRI is a Java/R Interface, which allows to run R inside Java applications as a single thread. Basically it loads R dynamic library into Java and provides a Java API to R functionality. It supports both simple calls to R functions and a full running REPL. In a sense JRI is the inverse of rJava and both can be combined (i.e. you can run R code inside JRI that calls back to the JVM via rJava). The JGR project makes the full use of both JRI and rJava to provide a full Java GUI for R. JRI uses native code, but it supports all platforms where Sun's Java (or compatible) is available, including Windows, Mac OS X, Sun and Linux (both 32-bit and 64-bit). Thanks for the advice :)

    Read the article

  • Doubts About Core Data NSManagedObject Deep Copy

    - by Jigzat
    Hello everyone, I have a situation where I must copy one NSManagedObject from the main context into an editing context. It may sound unnecessary, as in similar situations described on Stack Overflow, but it looks like I need it. In my app there are many views in a tab bar, and every view handles different information that is related to the other views. I think I need multiple MOCs, since the user may jump from tab to tab and leave unsaved changes in one tab, but data may get saved in some other tab/view; if that happens, the changes in the rest of the views are saved without user consent and, in the worst-case scenario, the app crashes. For adding new information I got away with using an adding MOC and then merging changes in both MOCs, but for editing it is not that easy. I saw a similar situation here on Stack Overflow, but the app crashes since my data model doesn't seem to use NSMutableSet for the relationships (I don't think I have a many-to-many relationship, just one-to-many). I think it can be modified so I can retrieve the relationships as if they were attributes for (NSString *attr in relationships) { [cloned setValue:[source valueForKey:attr] forKey:attr]; } but I don't know how to merge the changes of the cloned and original objects. I think I could just delete the object from the main context, then merge both contexts and save changes in the main context, but I don't know if that is the right way to do it. I'm also concerned about database integrity, since I'm not sure that the inverse relationships will keep the same reference to the cloned object as if it were the original one. Can someone please enlighten me about this?

    Read the article

  • Encoding / Error Correction Challenge

    - by emi1faber
    Is it mathematically feasible to encode an initial 4-byte message into 8 bytes such that, if one of the 8 bytes is completely dropped and another is wrong, the initial 4-byte message can be reconstructed? There would be no way to retransmit, nor would the location of the dropped byte be known. If one uses Reed-Solomon error correction with 4 "parity" bytes tacked on to the end of the 4 "data" bytes, such as DDDDPPPP, and you end up with DDDEPPP (where E is an error) and a parity byte has been dropped, I don't believe there's a way to reconstruct the initial message (although correct me if I am wrong)... What about multiplying (or performing another mathematical operation on) the initial 4-byte message by a constant, then using the properties of an inverse mathematical operation to determine which byte was dropped? Or, impose some constraints on the structure of the message so every other byte needs to be odd and the others need to be even. Alternatively, instead of bytes, it could also be 4 decimal digits encoded in some fashion into 8 decimal digits where errors could be detected and corrected under the same circumstances mentioned above - no retransmission, and the location of the dropped byte is not known. I'm looking for any crazy ideas anyone might have... Any ideas out there?
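
    For the standard case where erasure positions are known, the Reed-Solomon decoding bound is 2*errors + erasures <= parity symbols; a dropped byte whose position is unknown is a deletion rather than an erasure, so that bound does not directly apply to the scenario described. A trivial hedged helper just to make the arithmetic concrete:

        #include <iostream>

        // Classic Reed-Solomon decoding bound: with p parity symbols, a codeword can be
        // recovered when 2*errors + erasures <= p, provided the erasure positions are known.
        // A dropped symbol at an UNKNOWN position is a deletion, which this bound does not cover.
        bool rsCanRecover(int errors, int erasures, int paritySymbols)
        {
            return 2 * errors + erasures <= paritySymbols;
        }

        int main()
        {
            // 4 data + 4 parity bytes, one wrong byte plus one known-position erasure:
            std::cout << rsCanRecover(1, 1, 4) << "\n";   // prints 1: 2*1 + 1 <= 4
            // ...but if the dropped byte's position is unknown, it is not a plain erasure.
            return 0;
        }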

    Read the article

  • What are the correct bindings for an NSComboBox for use with Core Data

    - by theMikeSwan
    Imagine if you will a Core Data app with two entities (Employee, and Department). Employees have a to-one relationship with department (department) and the inverse is a to-many relationship (employees). In the UI you can select individual Employee entities and edit the details in a detail area (there are of course other attributes and there is UI for adding and editing Department entities). When using a popup button the bindings are: content = PopUpArrayController.arrangedObjects content values = PopUpArrayController.arrangedObjects.name (name is an NSString) selected object = EmployeeArrayController.selection.department.name This allows for viewing of all departments in the popup menu, correct selection of the current Employee's department, and allows that department to be changed as expected. The goal is to change this for an NSComboBox so that the user can tab to the box and type the department name in without switching to the mouse. I have tried numerous different bindings to accomplish this. I even had it work for one run with these bindings: content = PopUpArrayController.arrangedObjects.name value = EmployeeArrayController.selection.department.name At least once this worked as expected (it even added a new department when the entered text did not match any existing department). Now however it will display the available Departments and auto complete but will not update the model with the correct value when the value is changed in the combo box. If the Department is set or changed with the popup the correct department is shown in the combo box. Does anyone know what I am missing? Thanks.

    Read the article

  • Nhibernate , collections and compositeid

    - by Ciaran
    Hi, banging my head here and thought that someone out there might be able to help. I have the tables below. Bucket( bucketId smallint (PK) name varchar(50) ) BucketUser( UserId varchar(10) (PK) bucketId smallint (PK) ) The composite key is not the problem (that's OK, I know how to get around it), but I want my Bucket class to contain an IList of BucketUser. I read the online reference and thought that I had cracked it, but I haven't. The two mappings are below -- bucket -- <id name="BucketId" column="BucketId" type="Int16" unsaved-value="0"> <generator class="native"/> </id> <property column="BucketName" type="String" name="BucketName"/> <bag name="Users" table="BucketUser" inverse="true" generic="true" lazy="true"> <key> <column name="BucketId" sql-type="smallint"/> <column name="UserId" sql-type="varchar"/> </key> <one-to-many class="Bucket,Impact.Dice.Core" not-found="ignore"/> </bag> -- bucketUser --

    Read the article

  • Haskell Linear Algebra Matrix Library for Arbitrary Element Types

    - by Johannes Weiß
    I'm looking for a Haskell linear algebra library that has the following features: Matrix multiplication Matrix addition Matrix transposition Rank calculation Matrix inversion is a plus and has the following properties: arbitrary element (scalar) types (in particular element types that are not Storable instances). My elements are an instance of Num; additionally, the multiplicative inverse can be calculated. The elements mathematically form a finite field (GF(2^256)). That should be enough to implement the features mentioned above. arbitrary matrix sizes (I'll probably need something like 100x100, but the matrix sizes will depend on the user's input so it should not be limited by anything else but the memory or the computational power available) as fast as possible, but I'm aware that a library for arbitrary elements will probably not perform like a C/Fortran library that does the work (interfaced via FFI) because of the indirection of arbitrary (non Int, Double or similar) types. At least one pointer gets dereferenced when an element is touched (written in Haskell, this is not a real requirement for me, but since my elements are not Storable instances the library has to be written in Haskell) I already tried very hard and evaluated everything that looked promising (most of the libraries on Hackage directly state that they won't work for me). In particular I wrote test code using: hmatrix, assumes Storable elements Vec, but the documentation states: Low Dimension : Although the dimensionality is limited only by what GHC will handle, the library is meant for 2,3 and 4 dimensions. For general linear algebra, check out the excellent hmatrix library and blas bindings I looked into the code and the documentation of many more libraries but nothing seems to suit my needs :-(. Update Since there seems to be nothing, I started a project on GitHub which aims to develop such a library. The current state is very minimalistic, not optimized for speed at all, and only the most basic functions have tests and therefore should work. But should you be interested in using it or helping out developing it: Contact me (you'll find my mail address on my web site) or send pull requests.

    Read the article

  • Core data relationship memory leak

    - by cfihelp
    I have a strange (to me) memory leak when accessing an entity in a relationship. Series and Tiles have an inverse relationship to each other. // set up the fetch request NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Series" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // grab all of the series in the core data store NSError *error = nil; availableSeries = [[NSArray alloc] initWithArray:[managedObjectContext executeFetchRequest:fetchRequest error:&error]]; [fetchRequest release]; // grab one of the series Series *currentSeries = [availableSeries objectAtIndex:1]; // load all of the tiles attached to the series through the relationship NSArray *myTiles = [currentSeries.tile allObjects]; // 16 byte leak here! Instruments reports back that the final line has a 16 byte leak cause by NSPlaceHolderString. Stack trace: 2 UIKit UIApplicationMain 3 UIKit -[UIApplication _run] 4 CoreFoundation CFRunLoopRunInMode 5 CoreFoundation CFRunLoopRunSpecific 6 GraphicsServices PurpleEventCallback 7 UIKit _UIApplicationHandleEvent 8 UIKit -[UIApplication sendEvent:] 9 UIKit -[UIApplication handleEvent:withNewEvent:] 10 UIKit -[UIApplication _runWithURL:sourceBundleID:] 11 UIKit -[UIApplication _performInitializationWithURL:sourceBundleID:] 12 Memory -[AppDelegate_Phone application:didFinishLaunchingWithOptions:] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/AppDelegate_Phone.m:49 13 UIKit -[UIViewController view] 14 Memory -[HomeScreenController_Phone viewDidLoad] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/HomeScreenController_Phone.m:58 15 CoreData -[_NSFaultingMutableSet allObjects] 16 CoreData -[_NSFaultingMutableSet willRead] 17 CoreData -[NSFaultHandler retainedFulfillAggregateFaultForObject:andRelationship:withContext:] 18 CoreData -[NSSQLCore retainedRelationshipDataWithSourceID:forRelationship:withContext:] 19 CoreData -[NSSQLCore newFetchedPKsForSourceID:andRelationship:] 20 CoreData -[NSSQLCore rawSQLTextForToManyFaultStatement:stripBindVariables:swapEKPK:] 21 Foundation +[NSString stringWithFormat:] 22 Foundation -[NSPlaceholderString initWithFormat:locale:arguments:] 23 CoreFoundation _CFStringCreateWithFormatAndArgumentsAux 24 CoreFoundation _CFStringAppendFormatAndArgumentsAux 25 Foundation _NSDescriptionWithLocaleFunc 26 CoreFoundation -[NSObject respondsToSelector:] 27 libobjc.A.dylib class_respondsToSelector 28 libobjc.A.dylib lookUpMethod 29 libobjc.A.dylib _cache_addForwardEntry 30 libobjc.A.dylib _malloc_internal I think I'm missing something obvious but I can't quite figure out what. Thanks for your help! Update: I've copied the offending chunk of code to the first part of applicationDidFinishLaunching and it still leaks. Could there be something wrong with my model?

    Read the article

  • nHibernate Self Join Mapping

    - by kmoo01
    Hi Guys, This is probably incredibly simple, but I just cant see the wood for the trees at the moment. For brevity, I would like to model a word object, that has related words to it (synonyms), In doing so I could have the following mappings: <class name="Word" table="bs_word"> <id name="Id" column="WordId" type="Int32" unsaved-value="-1"> <generator class="native"> <param name="sequence"></param> </generator> </id> <property name="Key" column="word" type="String" length="50" /> <many-to-one name="SynonymGroup" class="BS.Core.Domain.Synonym, BS.Core" column="SynonymId" lazy="false"/> <class name="Synonym" table="bs_Synonym"> <id name="Id" column="SynonymId" type="Int32" unsaved-value="-1"> <generator class="native"> <param name="sequence"></param> </generator> </id> <property name="Alias" column="Alias" type="String" length="50" /> <bag name="Words" cascade="none" lazy="false" inverse="true"> <key column="SynonymId" /> <one-to-many class="Word" /> </bag> Mapping it like this would mean for a given word, I can access related words (synonyms) like this: word.SynonymGroup.Words However I would like to know if it is possible to map a bag of objects on an instance of a word object...if that makes sense, so I can access the related words like this: word.Words I've tried playing around with the map element, and composite elements, all to no avail - so I was wondering if some kind person could point me in the right direction? ta, kmoo01

    Read the article

  • Converting python collaborative filtering code to use Map Reduce

    - by Neil Kodner
    Using Python, I'm computing cosine similarity across items. given event data that represents a purchase (user,item), I have a list of all items 'bought' by my users. Given this input data (user,item) X,1 X,2 Y,1 Y,2 Z,2 Z,3 I build a python dictionary {1: ['X','Y'], 2 : ['X','Y','Z'], 3 : ['Z']} From that dictionary, I generate a bought/not bought matrix, also another dictionary(bnb). {1 : [1,1,0], 2 : [1,1,1], 3 : [0,0,1]} From there, I'm computing similarity between (1,2) by calculating cosine between (1,1,0) and (1,1,1), yielding 0.816496 I'm doing this by: items=[1,2,3] for item in items: for sub in items: if sub >= item: #as to not calculate similarity on the inverse sim = coSim( bnb[item], bnb[sub] ) I think the brute force approach is killing me and it only runs slower as the data gets larger. Using my trusty laptop, this calculation runs for hours when dealing with 8500 users and 3500 items. I'm trying to compute similarity for all items in my dict and it's taking longer than I'd like it to. I think this is a good candidate for MapReduce but I'm having trouble 'thinking' in terms of key/value pairs. Alternatively, is the issue with my approach and not necessarily a candidate for Map Reduce?
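
    Since the vectors here are binary (bought/not bought), cosine(i, j) reduces to |Ui ∩ Uj| / sqrt(|Ui| * |Uj|), which is exactly the map/reduce-friendly form: map each user's basket to item pairs, then reduce by counting co-occurrences. A hedged single-machine C++ sketch of that co-occurrence formulation follows; it only ever touches pairs that actually co-occur, rather than every item pair, and the type aliases are illustrative.

        #include <algorithm>
        #include <cmath>
        #include <cstdint>
        #include <map>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // user -> list of purchased items (the same data as the Python dict in the
        // post, just keyed by user instead of by item). Items are assumed unique per user.
        using Baskets = std::unordered_map<std::string, std::vector<int>>;

        // Item-item cosine similarity for binary purchase vectors:
        // cos(i, j) = co_occurrences(i, j) / sqrt(count(i) * count(j)).
        std::map<std::pair<int, int>, double> ItemCosine(const Baskets& baskets)
        {
            std::unordered_map<int, std::uint64_t> itemCount;           // |U_i|
            std::map<std::pair<int, int>, std::uint64_t> pairCount;     // |U_i intersect U_j|

            for (const auto& basket : baskets)
            {
                const std::vector<int>& items = basket.second;
                for (std::size_t a = 0; a < items.size(); ++a)
                {
                    ++itemCount[items[a]];
                    for (std::size_t b = a + 1; b < items.size(); ++b)
                    {
                        const int i = std::min(items[a], items[b]);
                        const int j = std::max(items[a], items[b]);
                        ++pairCount[{i, j}];   // the "map" step emits ((i,j), 1); this is the "reduce"
                    }
                }
            }

            std::map<std::pair<int, int>, double> sim;
            for (const auto& entry : pairCount)
                sim[entry.first] = entry.second /
                    std::sqrt(double(itemCount[entry.first.first]) *
                              double(itemCount[entry.first.second]));
            return sim;
        }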

    Read the article

  • Pseudo code for instruction description

    - by Claus
    Hi, I am just trying to fiddle around what is the best and shortest way to describe two simple instructions with C-like pseudo code. The extract instruction is defined as follows: extract rd, rs, imm This instruction extracts the appropriate byte from the 32-bit source register rs and right justifies it in the destination register. The byte is specified by imm and thus can take the values 0 (for the least-significant byte) and 3 (for the most-significant byte). rd = 0x0; // zero-extend result, ie to make sure bits 31 to 8 are set to zero in the result rd = (rs && (0xff << imm)) >> imm; // this extracts the approriate byte and stores it in rd The insert instruction can be regarded as the inverse operation and it takes a right justified byte from the source register rs and deposits it in the appropriate byte of the destination register rd; again, this byte is determined by the value of imm tmp = 0x0 XOR (rs << imm)) // shift the byte to the appropriate byte determined by imm rd = (rd && (0x00 << imm)) // set appropriate byte to zero in rd rd = rd XOR tmp // XOR the byte into the destination register This looks all a bit horrible, so I wonder if there is a little bit a more elegant way to describe this bahaviour in C-like style ;) Many thanks, Claus
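
    A hedged sketch of a more conventional way to write both operations follows (plain C-style bit operations, compiled here as C++). It assumes imm selects a byte, so the shift amount is 8*imm, and it uses bitwise & rather than the logical && of the pseudocode above.

        #include <cstdint>

        // extract rd, rs, imm: right-justify byte `imm` (0 = least significant) of rs.
        std::uint32_t extract_byte(std::uint32_t rs, unsigned imm)
        {
            return (rs >> (8u * imm)) & 0xffu;        // shift the byte down, mask off the rest
        }

        // insert rd, rs, imm: deposit the low byte of rs into byte `imm` of rd.
        std::uint32_t insert_byte(std::uint32_t rd, std::uint32_t rs, unsigned imm)
        {
            const std::uint32_t mask = 0xffu << (8u * imm);
            return (rd & ~mask) | ((rs & 0xffu) << (8u * imm));   // clear the slot, then OR in the byte
        }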

    Read the article
