Search Results

Search found 1909 results on 77 pages for 'adjacency matrix'.

Page 35 of 77

  • Bitmap issue in Samsung Galaxy S3

    - by user1531240
    I wrote a method to transform a Bitmap taken with the camera:

        public Bitmap bitmapChange(Bitmap bm) {
            /* get the original image size */
            int w = bm.getWidth();
            int h = bm.getHeight();
            /* check the image's orientation.
               NB: the original computed 'w / h' in integer arithmetic,
               which truncates; cast before dividing */
            float scale = (float) w / h;
            if (scale < 1) {
                /* portrait: scale down and show on the screen */
                float scaleWidth = (float) 90 / (float) w;
                float scaleHeight = (float) 130 / (float) h;
                Matrix mtx = new Matrix();
                mtx.postScale(scaleWidth, scaleHeight);
                Bitmap rotatedBMP = Bitmap.createBitmap(bm, 0, 0, w, h, mtx, true);
                return rotatedBMP;
            } else {
                /* landscape: scale down and rotate 90 degrees */
                float scaleWidth = (float) 130 / (float) w;
                float scaleHeight = (float) 90 / (float) h;
                Matrix mtx = new Matrix();
                mtx.postScale(scaleWidth, scaleHeight);
                mtx.postRotate(90);
                Bitmap rotatedBMP = Bitmap.createBitmap(bm, 0, 0, w, h, mtx, true);
                return rotatedBMP;
            }
        }

    It works fine on other Android devices, even the Galaxy Nexus, but on the Samsung Galaxy S3 the scaled image doesn't show on screen. I also tried commenting out the bitmapChange call and showing the original-size Bitmap, but the S3 still shows nothing. The variable information from Eclipse is here, and the information for the Sony Xperia is here; the Xperia and the other devices work fine.


  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time-series data frame with, let us say, 100 time series of length 600, one per column. I want to pick 4 of the time series at random and assign them random weights that sum to one (e.g. 0.1, 0.5, 0.3, 0.1). Using those, I want to compute the mean of the weighted sum of the 4 series (a convex combination). I want to do this, let us say, 100k times and store each result in the form ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean, so that I end up with a 100k x 9 data frame. I have tried some things already, but loops are slow in R and I know vector-oriented solutions suit its design better. The data frame is of the form:

        v1,v2,v3,.....,v100
        1,5,6,.......9
        2,4,6,.......10
        3,5,8,.......6
        2,2,8,.......2

    Here is what I did, and I know it is horrible:

        results <- NULL
        for (x in 1:100000) {
            s <- sample(1:100, 4)                # pick 4 variables randomly
            a <- sample(seq(0, 1, 0.01), 1)
            b <- sample(seq(0, 1 - a, 0.01), 1)
            c <- sample(seq(0, (1 - a - b), 0.01), 1)
            d <- 1 - a - b - c
            e <- c(a, b, c, d)                   # 4 random weights summing to one
            # NB: the original multiplied by t(e), whose dimensions do not conform,
            # and reused 'e' as the accumulator, clobbering the weights
            average <- mean(as.matrix(timeseries.df[, s]) %*% e)
            results <- rbind(results, c(s, e, average))  # in the end, the 100k x 9 table
        }

    The procedure runs way too slow. EDIT: Thanks for the help so far. I am not used to thinking in R, i.e. translating every problem into a matrix-algebra expression, which is what R rewards. The problem becomes a little more complex if I want to calculate the standard deviation as well: I need the covariance matrix, and I am not sure if/how I can pick, for each sample, the matching rows and columns from the covariance matrix of the original timeseries.df and then compute the sample variance, t(sampleweights) %*% sample_cov.mat %*% sampleweights, to end up with the ts.weighted_standard_dev matrix. A last question: what is the best way to proceed if I want to bootstrap the original data frame x times and then apply the same computations, to test the robustness of my data? Thanks.
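
    Since the bottleneck is the explicit loop, here is a rough sketch of the same simulation in vectorised form. It is in Python/NumPy rather than R, and the array sizes and random data are stand-ins for timeseries.df, but the key trick carries over directly: the mean over time of a weighted sum of columns equals the weighted sum of the column means, so the 600 rows never need to be touched inside the loop at all.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 100))   # stand-in for timeseries.df
        n_draws = 100_000

        # 4 distinct column indices per draw: shuffle 0..99 per row, keep the first 4
        picks = rng.random((n_draws, 100)).argsort(axis=1)[:, :4]

        # random weights summing to one in each draw
        w = rng.random((n_draws, 4))
        w /= w.sum(axis=1, keepdims=True)

        # mean over time of the convex combination, via precomputed column means
        col_means = X.mean(axis=0)                   # shape (100,)
        means = (col_means[picks] * w).sum(axis=1)   # shape (n_draws,)

        result = np.column_stack([picks, w, means])  # the 100k x 9 table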


  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using the VSM method. It works correctly with forward-rendered lights but fails with deferred lights.

        // Shadow term (1 = no shadow)
        float shadow = 1;

        // [Light Space -> Shadow Map Space]
        // Transform the surface into light space and project.
        // NB: Could be done in the vertex shader, but doing it here keeps the
        // "light shader" abstraction and doesn't limit the number of shadowed lights
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4 surf_tex = mul(position, LightViewProjection);

        // Re-homogenize ('w' is not used in later calculations, so it is not divided;
        // it would equal '1' if homogenized)
        surf_tex.xyz /= surf_tex.w;

        // Rescale viewport to [0,1] (texture coordinate system)
        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = -surf_tex.y * 0.5f + 0.5f;

        // Half texel offset
        //shadow_tex += (0.5 / 512);

        // Scaled distance to light (instead of 'surf_tex.z')
        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;
        //float rescaled_dist_to_light = surf_tex.z;

        // [Variance Shadow Map Depth Calculation]
        // No filtering
        float2 moments = tex2D(ShadowSampler, shadow_tex).xy;

        // Flip the moments values to bring them back to their original values
        moments.x = 1.0 - moments.x;
        moments.y = 1.0 - moments.y;

        // Compute variance
        float E_x2 = moments.y;
        float Ex_2 = moments.x * moments.x;
        float variance = E_x2 - Ex_2;
        variance = max(variance, Bias.y);

        // Surface is fully lit if the current pixel is before the light occluder
        // (one-tailed inequality; lit_factor == 1)
        float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x);

        // Compute probabilistic upper bound (mean distance)
        float m_d = moments.x - rescaled_dist_to_light;

        // Chebyshev's inequality
        float p = variance / (variance + m_d * m_d);
        p = ReduceLightBleeding(p, Bias.z);

        // Adjust the light color based on the shadow attenuation
        shadow *= max(lit_factor, p);

    This is what I know for certain so far:

      - The lighting is correct if I do not try to calculate the shadow term (no shadows).
      - The shadow term is correct when calculated with forward-rendered lighting (VSM works with forward-rendered lights).
      - With the current rescaled light distance, where LightAttenuation.y is the far-plane value:

            float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;

        the light is correct but the shadow appears zoomed in and misses the blurring.

      - When I do not rescale the light distance and use the homogenized surf_tex instead:

            float rescaled_dist_to_light = surf_tex.z;

        the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit.

    Why does scaling by the far-plane value (LightAttenuation.y) zoom in too far? The only other factor involved is my world pixel position, which is calculated as follows:

        // [Position]
        float4 position;

        // [Screen Position]
        position.xy = input.PositionClone.xy; // 'x' and 'y' already homogenized for the uv coordinates above
        position.z = tex2D(DepthSampler, texCoord).r; // no need to homogenize 'z'
        position.z = 1.0 - position.z;
        position.w = 1.0; // 1.0 = position.w / position.w

        // [World Position]
        position = mul(position, CameraViewProjectionInverse);

        // Re-homogenize position (xyz AND w, otherwise shadows will bend when the camera is close)
        position.xyz /= position.w;
        position.w = 1.0;

    Using the inverse of the camera's view x projection matrix does work for lighting, but maybe it is incorrect for the shadow calculation?

    EDIT: Light calculations for the shadow, including 'dist_to_light':

        // Work out the light position and direction in world space
        float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43);

        // Direction might need to be negated
        float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33);

        // Unnormalized light vector (direction from the vertex)
        float3 dir_to_light = light_position - position;
        float dist_to_light = length(dir_to_light);

        // Normalize 'dir_to_light' for the lighting calculations
        dir_to_light = normalize(dir_to_light);

    EDIT 2: These are the calculations for the moments (depth):

        //=============================================
        //---[Vertex Shaders]--------------------------
        //=============================================
        DepthVSOutput depth_VS(
            float4 Position : POSITION,
            uniform float4x4 shadow_view,
            uniform float4x4 shadow_view_projection)
        {
            DepthVSOutput output = (DepthVSOutput)0;

            // First transform the position into world space
            float4 position_world = mul(Position, World);

            output.position_screen = mul(position_world, shadow_view_projection);
            output.light_vec = mul(position_world, shadow_view).xyz;

            return output;
        }

        //=============================================
        //---[Pixel Shaders]---------------------------
        //=============================================
        DepthPSOutput depth_PS(DepthVSOutput input)
        {
            DepthPSOutput output = (DepthPSOutput)0;

            // Work out the depth of this fragment from the light, normalized to [0, 1]
            float2 depth;
            depth.x = length(input.light_vec) / FarPlane;
            depth.y = depth.x * depth.x;

            // Flip depth values to avoid floating point inaccuracies
            depth.x = 1.0f - depth.x;
            depth.y = 1.0f - depth.y;

            output.depth = depth.xyxy;
            return output;
        }

    EDIT 3: I have tried the following:

        float4 pp;
        pp.xy = input.PositionClone.xy; // 'x' and 'y' already homogenized for the uv coordinates above
        pp.z = tex2D(DepthSampler, texCoord).r; // no need to homogenize 'z'
        pp.z = 1.0 - pp.z;
        pp.w = 1.0; // 1.0 = position.w / position.w

        // Determine the depth of the pixel with respect to the light
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection);
        float4 vPositionLightCS = mul(pp, matViewToLightViewProj);
        float fLightDepth = vPositionLightCS.z / vPositionLightCS.w;

        // Transform from light space to shadow map texture space
        float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f);
        vShadowTexCoord.y = 1.0f - vShadowTexCoord.y;

        // Offset the coordinate by half a texel so we sample it correctly
        vShadowTexCoord += (0.5f / 512); // g_vShadowMapSize

    This suffers the same problem as the second picture. I have also tried storing the depth based on the view x projection matrix:

        output.position_screen = mul(position_world, shadow_view_projection);
        //output.light_vec = mul(position_world, shadow_view);
        output.light_vec = output.position_screen;

        depth.x = input.light_vec.z / input.light_vec.w;

    This gives a shadow that has lots of surface acne due to floating-point precision errors, although everything is lit correctly.

    EDIT 4: I found an OpenGL-based tutorial here and followed it to the letter, and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scale matrix to get the uv coordinates for the shadow map sampler:

        /// <summary>
        /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
        /// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0.
        /// </summary>
        const float4x4 ScaleMatrix = float4x4
        (
            0.5,  0.0, 0.0, 0.0,
            0.0, -0.5, 0.0, 0.0,
            0.0,  0.0, 0.5, 0.0,
            0.5,  0.5, 0.5, 1.0
        );

    I had to negate the 0.5 for the y scaling (M22) in order for it to work, but the shadowing is still not correct. Is this really the correct way to scale?

        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = surf_tex.y * -0.5f + 0.5f;

    The depth calculations are exactly the same as in the source code, yet they still do not work, which makes me believe something about the uv calculation above is incorrect.


  • How does gluLookAt work?

    - by Chan
    From my understanding,

        gluLookAt(eye_x, eye_y, eye_z,
                  center_x, center_y, center_z,
                  up_x, up_y, up_z);

    is equivalent to:

        glRotatef(B, 0.0, 0.0, 1.0);
        glRotatef(A, wx, wy, wz);
        glTranslatef(-eye_x, -eye_y, -eye_z);

    But when I print out the ModelView matrix, the call to glTranslatef() doesn't seem to work properly. Here is the code snippet:

        #include <stdlib.h>
        #include <stdio.h>
        #include <GL/glut.h>
        #include <iomanip>
        #include <iostream>
        #include <string>
        using namespace std;

        // Column-major offsets of the Right/Up/At/Translation entries
        static const int Rx = 0;  static const int Ry = 1;  static const int Rz = 2;
        static const int Ux = 4;  static const int Uy = 5;  static const int Uz = 6;
        static const int Ax = 8;  static const int Ay = 9;  static const int Az = 10;
        static const int Tx = 12; static const int Ty = 13; static const int Tz = 14;

        void init()
        {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glEnable(GL_DEPTH_TEST);
            glShadeModel(GL_SMOOTH);
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            GLfloat lmodel_ambient[] = { 0.8, 0.0, 0.0, 0.0 };
            glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);
        }

        void displayModelviewMatrix(float MV[16])
        {
            int SPACING = 12;
            cout << left;
            cout << "\tMODELVIEW MATRIX\n";
            cout << "--------------------------------------------------" << endl;
            cout << setw(SPACING) << "R" << setw(SPACING) << "U" << setw(SPACING) << "A" << setw(SPACING) << "T" << endl;
            cout << "--------------------------------------------------" << endl;
            cout << setw(SPACING) << MV[Rx] << setw(SPACING) << MV[Ux] << setw(SPACING) << MV[Ax] << setw(SPACING) << MV[Tx] << endl;
            cout << setw(SPACING) << MV[Ry] << setw(SPACING) << MV[Uy] << setw(SPACING) << MV[Ay] << setw(SPACING) << MV[Ty] << endl;
            cout << setw(SPACING) << MV[Rz] << setw(SPACING) << MV[Uz] << setw(SPACING) << MV[Az] << setw(SPACING) << MV[Tz] << endl;
            cout << setw(SPACING) << MV[3] << setw(SPACING) << MV[7] << setw(SPACING) << MV[11] << setw(SPACING) << MV[15] << endl;
            cout << "--------------------------------------------------" << endl;
            cout << endl;
        }

        void reshape(int w, int h)
        {
            float ratio = static_cast<float>(w)/h;
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, ratio, 1.0, 425.0);
        }

        void draw()
        {
            float m[16];
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            gluLookAt(300.0f, 0.0f, 0.0f,
                      0.0f, 0.0f, 0.0f,
                      0.0f, 1.0f, 0.0f);
            glColor3f(1.0, 0.0, 0.0);
            glutSolidCube(100.0);
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            displayModelviewMatrix(m);
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
            glutInitWindowSize(400, 400);
            glutInitWindowPosition(100, 100);
            glutCreateWindow("Demo");
            glutReshapeFunc(reshape);
            glutDisplayFunc(draw);
            init();
            glutMainLoop();
            return 0;
        }

    No matter what eye vector I use, (300, 0, 0), (0, 300, 0) or (0, 0, 300), the translation column is the same, which doesn't make sense to me: the calls read in backward order, so glTranslatef should apply first, then the two rotations. Moreover, the rotation block is completely independent of the translation column in the ModelView matrix, so what causes this weird behavior? Here is the output with the eye vector at (0.0f, 300.0f, 0.0f):

                MODELVIEW MATRIX
        --------------------------------------------------
        R           U           A           T
        --------------------------------------------------
        0           0           0           0
        0           0           0           0
        0           1           0           -300
        0           0           0           1
        --------------------------------------------------

    I would expect the T column to be (0, -300, 0)!
    So could anyone help me explain this? Here is the implementation of gluLookAt from http://www.mesa3d.org:

        void GLAPIENTRY
        gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez,
                  GLdouble centerx, GLdouble centery, GLdouble centerz,
                  GLdouble upx, GLdouble upy, GLdouble upz)
        {
            float forward[3], side[3], up[3];
            GLfloat m[4][4];

            forward[0] = centerx - eyex;
            forward[1] = centery - eyey;
            forward[2] = centerz - eyez;

            up[0] = upx;
            up[1] = upy;
            up[2] = upz;

            normalize(forward);

            /* Side = forward x up */
            cross(forward, up, side);
            normalize(side);

            /* Recompute up as: up = side x forward */
            cross(side, forward, up);

            __gluMakeIdentityf(&m[0][0]);
            m[0][0] = side[0];
            m[1][0] = side[1];
            m[2][0] = side[2];

            m[0][1] = up[0];
            m[1][1] = up[1];
            m[2][1] = up[2];

            m[0][2] = -forward[0];
            m[1][2] = -forward[1];
            m[2][2] = -forward[2];

            glMultMatrixf(&m[0][0]);
            glTranslated(-eyex, -eyey, -eyez);
        }
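
    An observation that may explain the output (an editor's note, not from the original thread), illustrated with a small NumPy reconstruction of the Mesa code above: gluLookAt multiplies the translation on the right of the rotation, so the ModelView's T column is R * (-eye), not -eye. Any eye position 300 units from the origin ends up 300 units down the camera's own -z axis, giving (0, 0, -300) regardless of which axis the eye sits on. The specific case eye = (0, 300, 0) with up = (0, 1, 0) is also degenerate: up is parallel to the view direction, so side = forward x up is the zero vector and two rows of the rotation collapse to zero, exactly as printed.

        import numpy as np

        def normalize(v):
            n = np.linalg.norm(v)
            return v if n == 0 else v / n   # Mesa's normalize leaves a zero vector as-is

        def look_at(eye, center, up):
            """Rebuild gluLookAt's ModelView matrix the way the Mesa source does."""
            eye = np.asarray(eye, float)
            f = normalize(np.asarray(center, float) - eye)      # forward
            s = normalize(np.cross(f, np.asarray(up, float)))   # side = forward x up
            u = np.cross(s, f)                                  # recomputed up
            R = np.eye(4)
            R[0, :3], R[1, :3], R[2, :3] = s, u, -f
            T = np.eye(4)
            T[:3, 3] = -eye       # glTranslated(-eye) is applied first...
            return R @ T          # ...so the final T column is R @ (-eye)

        for eye in ([300, 0, 0], [0, 300, 0], [0, 0, 300]):
            print(eye, look_at(eye, [0, 0, 0], [0, 1, 0])[:3, 3])
        # every case prints (0, 0, -300): the camera is always 300 units
        # down its own -z axis, whatever the world-space eye position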


  • Run-Time Check Failure #2 - Stack around the variable 'indices' was corrupted.

    - by numerical25
    Well, I think I know what the problem is; I am just having a hard time debugging it. I am working with the DirectX API, and I am trying to generate a plane along the x and z axes according to a book I have. The problem comes when I create my indices: I think I am setting values out of the bounds of the indices array, but I'm having a hard time figuring out what I did wrong. I am unfamiliar with this method of generating a plane, so it is a little difficult for me. Below is my code; pay particular attention to the indices loop.

        #include "MyGame.h"
        //#include "CubeVector.h"

        /* This code sets a projection and shows a turning cube. What has been added is the
           projection, rotation and a rasterizer to change the rasterization of the cube.
           The issue that was going on was something with the effect file which was causing
           the vertices not to be rendered correctly. */

        typedef struct
        {
            ID3D10Effect* pEffect;
            ID3D10EffectTechnique* pTechnique;

            // vertex information
            ID3D10Buffer* pVertexBuffer;
            ID3D10Buffer* pIndicesBuffer;
            ID3D10InputLayout* pVertexLayout;

            UINT numVertices;
            UINT numIndices;
        } ModelObject;

        ModelObject modelObject;

        D3DXMATRIX WorldMatrix;      // World Matrix
        D3DXMATRIX ViewMatrix;       // View Matrix
        D3DXMATRIX ProjectionMatrix; // Projection Matrix
        ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL;

        // grid information
        #define NUM_COLS 16
        #define NUM_ROWS 16
        #define CELL_WIDTH 32
        #define CELL_HEIGHT 32
        #define NUM_VERTSX (NUM_COLS + 1)
        #define NUM_VERTSY (NUM_ROWS + 1)

        bool MyGame::InitDirect3D()
        {
            if(!DX3dApp::InitDirect3D())
            {
                return false;
            }

            D3D10_RASTERIZER_DESC rastDesc;
            rastDesc.FillMode = D3D10_FILL_WIREFRAME;
            rastDesc.CullMode = D3D10_CULL_FRONT;
            rastDesc.FrontCounterClockwise = true;
            rastDesc.DepthBias = false;
            rastDesc.DepthBiasClamp = 0;
            rastDesc.SlopeScaledDepthBias = 0;
            rastDesc.DepthClipEnable = false;
            rastDesc.ScissorEnable = false;
            rastDesc.MultisampleEnable = false;
            rastDesc.AntialiasedLineEnable = false;

            ID3D10RasterizerState *g_pRasterizerState;
            mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState);
            mpD3DDevice->RSSetState(g_pRasterizerState);

            // Set up the World Matrix
            D3DXMatrixIdentity(&WorldMatrix);

            D3DXMatrixLookAtLH(&ViewMatrix,
                               new D3DXVECTOR3(0.0f, 10.0f, -20.0f),
                               new D3DXVECTOR3(0.0f, 0.0f, 0.0f),
                               new D3DXVECTOR3(0.0f, 1.0f, 0.0f));

            // Set up the projection matrix
            D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f,
                                       (float)mWidth/(float)mHeight, 0.1f, 100.0f);

            if(!CreateObject())
            {
                return false;
            }

            return true;
        }

        // These are actions that take place after the clearing of the buffer and before the present
        void MyGame::GameDraw()
        {
            static float rotationAngle = 0.0f;

            // create the rotation matrix using the rotation angle
            D3DXMatrixRotationY(&WorldMatrix, rotationAngle);
            rotationAngle += (float)D3DX_PI * 0.0f;

            // Set the input layout
            mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout);

            // Set vertex buffer
            UINT stride = sizeof(VertexPos);
            UINT offset = 0;
            mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset);
            mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0);

            // Set primitive topology
            mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

            // Combine and send the final matrix to the shader
            D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix);
            pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix);

            // make sure modelObject is valid

            // Render a model object
            D3D10_TECHNIQUE_DESC techniqueDescription;
            modelObject.pTechnique->GetDesc(&techniqueDescription);

            // Loop through the technique passes
            for(UINT p=0; p < techniqueDescription.Passes; ++p)
            {
                modelObject.pTechnique->GetPassByIndex(p)->Apply(0);

                // draw the cube using all 36 vertices and 12 triangles
                mpD3DDevice->DrawIndexed(modelObject.numIndices, 0, 0);
            }
        }

        // Render actually encapsulates GameDraw, so you can call data before you actually
        // clear the buffer or after you present data
        void MyGame::Render()
        {
            DX3dApp::Render();
        }

        bool MyGame::CreateObject()
        {
            VertexPos vertices[NUM_VERTSX * NUM_VERTSY];
            for(int z=0; z < NUM_VERTSY; ++z)
            {
                for(int x = 0; x < NUM_VERTSX; ++x)
                {
                    vertices[x + z * NUM_VERTSX].pos.x = (float)x * CELL_WIDTH;
                    vertices[x + z * NUM_VERTSX].pos.z = (float)z * CELL_HEIGHT;
                    vertices[x + z * NUM_VERTSX].pos.y = 0.0f;
                    vertices[x + z * NUM_VERTSX].color = D3DXVECTOR4(1.0, 0.0f, 0.0f, 0.0f);
                }
            }

            // BUG: this was the out-of-bounds write. The original declared
            // DWORD indices[NUM_VERTSX * NUM_VERTSY] (17 * 17 = 289 entries), but the
            // loop below writes 6 indices per grid cell, i.e. NUM_ROWS * NUM_COLS * 6
            // = 1536 entries, corrupting the stack around 'indices'.
            DWORD indices[NUM_ROWS * NUM_COLS * 6];
            int curIndex = 0;
            for(int z=0; z < NUM_ROWS; ++z)
            {
                for(int x = 0; x < NUM_COLS; ++x)
                {
                    int curVertex = x + (z * NUM_VERTSX);
                    indices[curIndex]     = curVertex;
                    indices[curIndex + 1] = curVertex + NUM_VERTSX;
                    indices[curIndex + 2] = curVertex + 1;
                    indices[curIndex + 3] = curVertex + 1;
                    indices[curIndex + 4] = curVertex + NUM_VERTSX;
                    indices[curIndex + 5] = curVertex + NUM_VERTSX + 1;
                    curIndex += 6;
                }
            }

            // Create layout
            D3D10_INPUT_ELEMENT_DESC layout[] =
            {
                {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0},
                {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0}
            };
            UINT numElements = (sizeof(layout)/sizeof(layout[0]));
            modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos);

            // Create buffer desc
            D3D10_BUFFER_DESC bufferDesc;
            bufferDesc.Usage = D3D10_USAGE_DEFAULT;
            bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices;
            bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
            bufferDesc.CPUAccessFlags = 0;
            bufferDesc.MiscFlags = 0;

            D3D10_SUBRESOURCE_DATA initData;
            initData.pSysMem = vertices;

            // Create the buffer
            HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer);
            if(FAILED(hr)) return false;

            modelObject.numIndices = sizeof(indices)/sizeof(DWORD);
            bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices;
            bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER;
            initData.pSysMem = indices;

            hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer);
            if(FAILED(hr)) return false;

            /////////////////////////////////////////////////////////////////////////////

            // Set up fx files
            LPCWSTR effectFilename = L"effect.fx";
            modelObject.pEffect = NULL;

            hr = D3DX10CreateEffectFromFile(effectFilename,
                                            NULL, NULL,
                                            "fx_4_0",
                                            D3D10_SHADER_ENABLE_STRICTNESS,
                                            0,
                                            mpD3DDevice,
                                            NULL, NULL,
                                            &modelObject.pEffect,
                                            NULL, NULL);
            if(FAILED(hr)) return false;

            pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix();

            // Don't sweat the technique. Get it!
            LPCSTR effectTechniqueName = "Render";
            modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName);
            if(modelObject.pTechnique == NULL) return false;

            // Create vertex layout
            D3D10_PASS_DESC passDesc;
            modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc);
            hr = mpD3DDevice->CreateInputLayout(layout, numElements,
                                                passDesc.pIAInputSignature,
                                                passDesc.IAInputSignatureSize,
                                                &modelObject.pVertexLayout);
            if(FAILED(hr)) return false;

            return true;
        }


  • Moving sprites on a graph in libGDX

    - by nosferat
    In my game I'd like to move sprites along a fixed path. Until now I have tried to stick with the tools libGDX already provides, like the Tiled map renderer classes, so I'm looking for a solution nearly as convenient as those; in particular I'd like to avoid creating the adjacency matrix by hand. Tiled has the functionality to add objects to the map, but I'm not sure whether I can use it for this purpose. Any ideas?
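
    One route worth noting (an editor's sketch, not from the original question): a Tiled object layer can hold a polyline, and following one needs no adjacency matrix at all, just "advance a distance along consecutive segments" each frame. The stepping logic is tiny; it is sketched here in Python, but in libGDX the same role would be played by walking a list of Vector2 waypoints read from the map's objects.

        import math

        def advance(points, seg_index, seg_offset, speed, dt):
            """Move speed*dt along a polyline [(x, y), ...].
            State is (current segment index, distance covered on that segment)."""
            remaining = speed * dt
            i, off = seg_index, seg_offset
            while i < len(points) - 1:
                (x0, y0), (x1, y1) = points[i], points[i + 1]
                seg_len = math.hypot(x1 - x0, y1 - y0)
                if off + remaining < seg_len:
                    t = (off + remaining) / seg_len   # interpolate inside this segment
                    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0)), i, off + remaining
                remaining -= seg_len - off            # consume the rest of this segment
                i, off = i + 1, 0.0
            return points[-1], len(points) - 1, 0.0   # path finished

        pos, i, off = advance([(0, 0), (10, 0), (10, 10)], 0, 0.0, 5.0, 1.5)
        print(pos)   # (7.5, 0.0) after 7.5 units of travel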


  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. One idea I have for a fun toy project, to learn about data structures and C programming, is to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing, let's say, 1 TB. So with that, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file", including hardware (like /dev/*), so that obviously has to apply to a database too, and it clearly does: whether it's MySQL, PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem. That said, there comes a point of scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design with that mindset (I assume). However, reading from secondary memory is much slower, and regardless, some portion of the database has to be in primary memory for you to be able to do anything with it. I read this post: "Why use a database instead of just saving your data to disk?", and I found it difficult to understand how databases like SQLite or Neo4j read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem as the above question suggests). The key seems to be indexing. But even indexes need to be stored in secondary memory; they are inherently smaller than the database itself, but in a very large database the indexes might be prohibitively large too. So my question is: how is I/O generally done in a large database like the one I described, at least 1 TB, storing a big adjacency list? If indexing is more or less the answer, how exactly does indexing work, and what data structures should be involved?
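
    To make the indexing idea concrete, here is a deliberately naive sketch (an editor's illustration, not how any particular engine works) of the core pattern: adjacency data lives on disk as fixed-size records sorted by source node, and a much smaller structure maps a node id to a file offset and count, so a neighbor lookup costs one seek plus one contiguous read instead of a scan. Real engines persist the index itself in a disk-friendly structure such as a B-tree and page both index and data in and out on demand, which is what keeps them fast at terabyte scale.

        import struct

        RECORD = struct.Struct("<qq")   # (source id, neighbor id): 16 bytes per edge

        def write_edges(path, edges):
            """Store edges sorted by source, so each node's neighbors are contiguous."""
            with open(path, "wb") as f:
                for src, dst in sorted(edges):
                    f.write(RECORD.pack(src, dst))

        def build_index(path):
            """One sequential scan: node id -> (offset of first edge, edge count).
            A real engine would keep this in a persistent B-tree, not a dict."""
            index, offset = {}, 0
            with open(path, "rb") as f:
                while chunk := f.read(RECORD.size):
                    src, _ = RECORD.unpack(chunk)
                    start, count = index.get(src, (offset, 0))
                    index[src] = (start, count + 1)
                    offset += RECORD.size
            return index

        def neighbors(path, index, node):
            """One seek + one read, independent of the total file size."""
            start, count = index.get(node, (0, 0))
            with open(path, "rb") as f:
                f.seek(start)
                data = f.read(count * RECORD.size)
            return [RECORD.unpack_from(data, k * RECORD.size)[1] for k in range(count)]

        write_edges("graph.bin", [(1, 2), (2, 3), (1, 3)])
        idx = build_index("graph.bin")
        print(neighbors("graph.bin", idx, 1))   # [2, 3]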


  • How to restart ssh on Ubuntu

    - by Mirage
    I want to restart ssh (or sshd), but I get this:

        qqqq@Matrix-Server:/$ sudo /etc/init.d/ssh stop
        sudo: /etc/init.d/ssh: command not found

    Do I need to install ssh or sshd, or does it come with Ubuntu?


  • Warping images using cvWarpPerspective results in some parts of the image out of the viewable area

    - by Birkan Cilingir
    Hi, I am trying to stitch two images together. To do so, I extracted SIFT features and found matches between the two images using this C implementation: http://web.engr.oregonstate.edu/~hess/index.html. After that I found the homography matrix using the matched points: http://www.ics.forth.gr/~lourakis/homest/. But if I use this homography matrix in the cvWarpPerspective function, some parts of the image go out of the viewable area (negative coordinates). To solve this, I tried to calculate the bounding box first by piping the four corners of the image through the homography matrix, then moving the initial image and warping it; but this changed the warping result. Is there any way to warp an image and keep it in the viewable area? I would appreciate any help. Thanks in advance.
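
    The standard remedy (an editor's sketch, not from the original post) is to fold the shift into the homography itself rather than moving the source image: transform the four corners, take their bounding box, build a translation matrix T from its minimum corner, and warp once with T * H. The warp is then identical up to a pure shift, so nothing lands at negative coordinates. Shown with Python's cv2 bindings; the C cvWarpPerspective in the question has the same semantics.

        import cv2
        import numpy as np

        def warp_into_view(img, H):
            """Warp with T @ H so the whole result lands at non-negative coordinates."""
            h, w = img.shape[:2]
            corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
            moved = cv2.perspectiveTransform(corners, H)
            x_min, y_min = np.floor(moved.min(axis=(0, 1))).astype(int)
            x_max, y_max = np.ceil(moved.max(axis=(0, 1))).astype(int)
            T = np.array([[1, 0, -x_min],
                          [0, 1, -y_min],
                          [0, 0, 1]], dtype=np.float64)   # pure translation
            size = (x_max - x_min, y_max - y_min)
            # return the offset too, so a second image can be pasted consistently
            return cv2.warpPerspective(img, T @ H, size), (x_min, y_min)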


  • Resizing and rotating an image in Android

    - by kingrichard2005
    I'm trying to rotate and resize an image in Android. The code I have is as follows (taken from this tutorial):

        Bitmap originalSprite = BitmapFactory.decodeResource(getResources(), R.drawable.android);
        // NB: the original read these from an undeclared variable 'a'
        int orgHeight = originalSprite.getHeight();
        int orgWidth = originalSprite.getWidth();

        // Create manipulation matrix
        Matrix m = new Matrix();

        // resize the bitmap
        m.postScale(.25f, .25f);

        // rotate the bitmap by the given angle
        m.postRotate(180);

        // Rotated bitmap
        Bitmap rotatedBitmap = Bitmap.createBitmap(originalSprite, 0, 0, orgWidth, orgHeight, m, true);
        return rotatedBitmap;

    The image seems to rotate just fine, but it doesn't scale as I expect. I've tried different values, but as I reduce the scale values the image only gets more pixelated while remaining the same size visually, whereas the goal I'm trying to achieve is to actually shrink it visually. I'm not sure what to do at this point; any help is appreciated.


  • Defining an MPI function

    - by Simone
    I wrote a program in C using MPI (Message Passing Interface) that recursively computes the inverse of a lower triangular matrix. Every CPU sends two submatrices to two other CPUs; they compute them and give them back to the calling CPU. When the calling CPU has its submatrices back, it has to perform a matrix multiplication. In the recurrence equation, the bottleneck is this matrix multiplication. I have implemented parallel multiplication with MPI in C, but I'm not able to embed it into a function. Is it possible? Thanks, Simone
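
    For what it's worth, MPI itself is indifferent to where its calls live: once MPI_Init has run, point-to-point and collective operations work the same inside a function, provided the function receives the communicator (or a sub-communicator) as a parameter and every rank in it enters the call. A minimal sketch of that shape, written with Python's mpi4py rather than the C API purely for brevity; the row-block decomposition and names are illustrative, not the asker's code.

        import numpy as np
        from mpi4py import MPI

        def parallel_matmul(comm, A, B):
            """Compute A @ B with A's row blocks scattered across ranks.
            Callable from anywhere, as long as every rank in comm calls it."""
            rank, size = comm.Get_rank(), comm.Get_size()
            blocks = np.array_split(A, size, axis=0) if rank == 0 else None
            my_rows = comm.scatter(blocks, root=0)            # each rank gets a row block
            B = comm.bcast(B if rank == 0 else None, root=0)  # everyone needs all of B
            partial = my_rows @ B                             # purely local work
            gathered = comm.gather(partial, root=0)           # reassemble on rank 0
            return np.vstack(gathered) if rank == 0 else None

        if __name__ == "__main__":   # run with: mpiexec -n 4 python this_file.py
            comm = MPI.COMM_WORLD
            root = comm.Get_rank() == 0
            A = np.tril(np.random.rand(8, 8)) if root else None
            B = np.random.rand(8, 8) if root else None
            C = parallel_matmul(comm, A, B)
            if root:
                print(np.allclose(C, A @ B))   # True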


  • Using Excel VBA, given the daily prices of 50 stocks, choose 10 stocks such that they have the minimum correlation

    - by correl
    The high-level goal is to choose, out of a pool of 50 stocks, the 10 that have the lowest correlation among one another, so that I can have a well-diversified portfolio. I have managed to write a VBA macro that downloads the past 3 years of daily price data from Yahoo Finance and then computes the 50x50 correlation matrix (using the Correl function), with the daily closes as the data. What I have tried so far is a local-maximum heuristic:

      - Of the two stocks that have the highest correlation with each other, remove one: the one that has the higher average correlation with all the other stocks.
      - When I remove a stock from the pool, I just delete the corresponding row and column, giving a smaller matrix.
      - Repeat until just 10 stocks remain (a 10x10 matrix).

    I was wondering if there is some algorithm that already solves such a problem and gives the optimum solution?
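
    For context, the exact version is a hard combinatorial problem (choose(50, 10) is roughly 10^10 subsets, usually attacked with integer programming or branch-and-bound), so the greedy elimination above is a sensible baseline. Here is a compact sketch of that same heuristic, in Python/NumPy rather than VBA and with random data standing in for the Yahoo prices, which makes it easy to test against other selection rules.

        import numpy as np

        def greedy_decorrelate(corr, keep=10):
            """Drop one stock from the most-correlated pair until `keep` remain."""
            corr = corr.copy()
            np.fill_diagonal(corr, -np.inf)        # never pick a stock against itself
            alive = list(range(corr.shape[0]))
            while len(alive) > keep:
                sub = corr[np.ix_(alive, alive)]
                i, j = np.unravel_index(np.argmax(sub), sub.shape)
                # remove whichever of the pair is more correlated with the rest on average
                row_mean = lambda r: np.mean(r[np.isfinite(r)])
                alive.pop(i if row_mean(sub[i]) >= row_mean(sub[j]) else j)
            return alive

        rng = np.random.default_rng(1)
        returns = rng.normal(size=(750, 50))       # stand-in for ~3 years of daily data
        corr = np.corrcoef(returns, rowvar=False)  # the 50x50 correlation matrix
        print(greedy_decorrelate(corr, keep=10))   # indices of the 10 kept stocks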


  • Splitting a paragraph into lines in Java

    - by John
    Hi, I am having a problem splitting a paragraph into lines. I need to read a file, then split it into lines wherever I find a '.', '!' or '?'. I also need to remove any extra whitespace. This is what I've written so far:

        try {
            Scanner line = new Scanner(mFile);
            mLine = line.nextLine();
            // split on '.', '!' or '?' and trim the surrounding whitespace
            // (the original pattern "\\." only handled full stops)
            String Matrix[] = mLine.split("\\s*[.!?]\\s*");
            for (int i = 0; i < Matrix.length; ++i) {
                System.out.println(Matrix[i]);
            }
        }

    Input: Hi. My name is John. Who are you ?
    Output:

        Hi
        My name is John
        Who are you


  • Getting the excluded elements for each of the combn(n,k) combinations

    - by gd047
    Suppose we have generated a matrix A where each column contains one of the combinations of n elements taken k at a time, so its dimensions are k x choose(n,k). Such a matrix is produced by the command combn(n,k). What I would like to get is another matrix B with dimensions (n-k) x choose(n,k), where each column B[,j] contains the n-k elements excluded from A[,j]. Here is an example of the way I use to get table B. Do you think it is a safe method to use? Is there another way?

        n <- 5 ; k <- 3
        (A <- combn(n, k))
        (B <- combn(n, n - k)[, choose(n, k):1])

    A previous question of mine is part of this problem. Thank you.
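
    An editor's note on why the trick is plausible: combn enumerates k-subsets in lexicographic order, and complementation maps that order onto the reverse of the lexicographic order of the (n-k)-subsets, which is exactly what reversing the columns compensates for. A quick brute-force check of that claim (in Python, whose itertools.combinations enumerates in the same lexicographic order as combn):

        from itertools import combinations

        def complements_match(n, k):
            ks = list(combinations(range(1, n + 1), k))             # columns of A
            rev = list(combinations(range(1, n + 1), n - k))[::-1]  # combn(n, n-k) reversed
            full = set(range(1, n + 1))
            return all(tuple(sorted(full - set(a))) == b
                       for a, b in zip(ks, rev))

        # exhaustive over small cases; prints True
        print(all(complements_match(n, k)
                  for n in range(2, 9) for k in range(1, n)))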


  • How to project a planar polygon onto a plane in 3D space

    - by sum1stolemyname
    I want to project my polygon along a vector onto a plane in 3D space, preferably using a single transformation matrix, but I don't know how to build a matrix of this kind. Given:

      - the plane's parameters (ax + by + cz + d),
      - the world coordinates of my polygon (as stated in the headline, all of its vertices lie in another plane),
      - the direction vector along which to project (currently the polygon's plane's normal vector),

    the goal is a 4x4 transformation matrix which performs the required projection, or some insight on how to construct one myself.
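
    For reference, this oblique projection has a closed form. A point p sent along direction v onto the plane n.x + d = 0 lands at p - ((n.p + d)/(n.v)) v; in homogeneous coordinates the division can be deferred to the w-divide, which collapses the whole map into the single matrix M = (n.v) I - outer([v,0], [n,d]). A small NumPy sketch of that formula (an editor's illustration), assuming column vectors and v not parallel to the plane:

        import numpy as np

        def oblique_projection(n, d, v):
            """4x4 matrix projecting points along direction v onto n . x + d = 0."""
            n, v = np.asarray(n, float), np.asarray(v, float)
            ndotv = n @ v
            if np.isclose(ndotv, 0.0):
                raise ValueError("direction is parallel to the plane")
            v_h = np.append(v, 0.0)   # direction in homogeneous form
            n_h = np.append(n, d)     # plane coefficients (a, b, c, d)
            return ndotv * np.eye(4) - np.outer(v_h, n_h)

        def project(M, p):
            q = M @ np.append(np.asarray(p, float), 1.0)
            return q[:3] / q[3]       # the homogeneous divide performs the (n . v) division

        # drop the point (1, 2, 5) straight down onto the plane z = 0
        M = oblique_projection([0, 0, 1], 0.0, [0, 0, -1])
        print(project(M, [1, 2, 5]))  # [1. 2. 0.]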


  • How to center an R plot

    - by ConnorG
    I'm working on visualizing a matrix in R (almost exactly like http://reference.wolfram.com/mathematica/ref/MatrixPlot.html), and I've been using image(<matrix>, axes=FALSE) to draw the picture. However, I noticed that with the y-axis turned off, the plot isn't centered: the space for the axis ticks and label is still there, so the right-hand margin is significantly smaller than the left. I can finagle some centering with par(oma=c(0,0,0,2.5)), but this seems inefficient and error-prone (it would break if my matrix changed dimensions or became non-square). Is there a better way to force the graphic to center?


  • hierarchical clustering on correlations in Python scipy/numpy?

    - by user248237
    How can I run hierarchical clustering on a correlation matrix in scipy/numpy? I have a matrix of 100 rows by 9 columns, and I'd like to hierarchically cluster the rows by the correlation of each pair of rows across the 9 conditions, using 1 - Pearson correlation as the distance for clustering. Assuming I have a numpy array X that contains the 100 x 9 matrix, how can I do this? I tried using hcluster, based on this example:

        Y = pdist(X, 'seuclidean')
        Z = linkage(Y, 'single')
        dendrogram(Z, color_threshold=0)

    However, pdist with 'seuclidean' is not what I want, since that's a (standardized) Euclidean distance. Any ideas? Thanks.
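
    As it happens, scipy's pdist supports this metric directly: 'correlation' computes exactly 1 - Pearson r between rows, so the example only needs its metric swapped. A minimal sketch (with random stand-in data), including a hand-built check that the distances really are 1 - corrcoef:

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.cluster.hierarchy import linkage, dendrogram

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 9))       # stand-in for the 100 x 9 data

        Y = pdist(X, metric='correlation')  # condensed vector of 1 - Pearson r
        Z = linkage(Y, method='single')     # same linkage as the question
        dendrogram(Z, color_threshold=0)

        # sanity check against the definition
        D = 1 - np.corrcoef(X)
        assert np.allclose(squareform(Y), D, atol=1e-10)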


  • Hierarchical animations in DirectX and handling separate animations on the same mesh?

    - by meds
    I have a hierarchical animated model in DirectX which loads and animates based on the following DirectX sample: http://msdn.microsoft.com/en-us/library/ee418677%28VS.85%29.aspx. As good as the sample is, it does not really go into some of the details of animation that I'd like. For example, if I have a mesh which has a running animation and a throwing animation as separate animation sets, how can I get the throwing animation to drive the bones above the hip and the walking animation the bones underneath it? Also, if I wanted the character to, for example, lean left or right, would I simply find the hip bone and multiply a rotation matrix into its matrix? In this case I think the matrix is m_amxBoneOffsets?
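
    What the first question describes is usually called partial-skeleton blending (or an animation mask): walk the bone hierarchy and, for each bone, take the local transform from one clip or the other depending on which side of a chosen split bone it sits. The selection logic is independent of the rendering API; here is a toy sketch in Python with hypothetical bone names, purely to show the shape of it.

        # parent links for a toy skeleton
        parents = {"hip": None, "spine": "hip", "arm_l": "spine", "arm_r": "spine",
                   "leg_l": "hip", "leg_r": "hip"}

        def is_descendant_of(bone, ancestor):
            while bone is not None:
                if bone == ancestor:
                    return True
                bone = parents[bone]
            return False

        def blended_pose(throw_pose, walk_pose, split_bone="spine"):
            """Bones at or below `split_bone` in the hierarchy take the throw clip's
            local transforms; everything else keeps the walk clip's."""
            return {bone: (throw_pose[bone] if is_descendant_of(bone, split_bone)
                           else walk_pose[bone])
                    for bone in parents}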


  • Understanding OpenGL Matrices

    - by Omega
    I'm starting to learn about 3D rendering and I've been making good progress. I've picked up a lot regarding matrices and the general operations that can be performed on them. One thing I'm still not quite following is OpenGL's use of matrices. I see this (and things like it) quite a lot:

        x y z n
        -------
        1 0 0 0
        0 1 0 0
        0 0 1 0
        0 0 0 1

    My best understanding is that it is a normalized (no magnitude) 4-dimensional, column-major matrix, and that this matrix in particular is called the "identity matrix". Some questions: What is the "nth" dimension? How and when are these applied? My biggest confusion arises from how OpenGL makes use of this kind of data.
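
    A brief editor's illustration of the fourth component (usually written w rather than n): promoting points to homogeneous coordinates (x, y, z, w) is what lets a single 4x4 matrix encode translation and perspective as well as rotation and scale, which a plain 3x3 cannot. Points carry w = 1, directions carry w = 0 and are therefore immune to translation; "column-major" refers only to how OpenGL lays the 16 floats out in memory. A tiny NumPy demonstration:

        import numpy as np

        def translation(tx, ty, tz):
            M = np.eye(4)            # the identity matrix from the question
            M[:3, 3] = (tx, ty, tz)  # the fourth column carries the translation
            return M

        point     = np.array([1.0, 2.0, 3.0, 1.0])   # w = 1: a position
        direction = np.array([1.0, 2.0, 3.0, 0.0])   # w = 0: a direction

        T = translation(10.0, 0.0, 0.0)
        print(T @ point)       # [11.  2.  3.  1.]  -> positions move
        print(T @ direction)   # [ 1.  2.  3.  0.]  -> directions are unaffected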



  • Problem scaling larger images

    - by krishna
    Hi, I'm developing a Flex application in which I want to draw an image from the user's local hard drive onto a canvas of size 640x360. The user can choose an image of a bigger resolution, and it is scaled down to the canvas size; but if the user selects an image of a much larger resolution, like 3000x2000, the scaling takes a lot of time and freezes the application until the scale is done. Is there a faster method to scale the image, or some way it can be done on a thread? I am using a matrix to scale the image, as below:

        var mat:Matrix = new Matrix();
        var scalex:Number = canvasScreen.width / content.width;
        var scaley:Number = canvasScreen.height / content.height;
        mat.scale(scalex, scaley);
        canvasScreen.graphics.clear();
        canvasScreen.graphics.beginBitmapFill(content.bitmapData, mat);


  • WPF: Animate a Matrix using interpolation

    - by Mark
    I'm having an issue with my application, which uses touch gestures to scale, translate and rotate a scene. I was using a TransformGroup containing a TranslateTransform, a ScaleTransform and a RotateTransform, but I could not get the movement right: it always jumped and skipped. So I moved to a MatrixTransform, which let me make the scene zoomable, rotatable and pannable much more easily. However, I later found out that you cannot smoothly animate (using interpolation) the values of a Matrix. For what reason, I have no idea, but it's part of the MSDN documentation, and the properties of the Matrix are not dependency properties anyway. Has anyone had any luck animating a Matrix so that it is smooth? The only ideas I have are to animate a few separate custom DPs which all have callbacks that update the Matrix, or to convert the Matrix to a set of Transform objects, animate those, and then convert back afterwards. Is there a smarter way to do this?
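
    The second idea in the question is essentially the standard answer: interpolate a decomposition (translate/scale/rotate) rather than the raw matrix cells, because lerping cells shears and collapses the image mid-animation. For a 2D matrix without skew, the decomposition is a few lines; this sketch shows only the math (in Python/NumPy, math convention, not the WPF Matrix API or its row-vector layout):

        import numpy as np

        def decompose(M):
            """M = T * R * S (no skew) -> (tx, ty, sx, sy, angle)."""
            a, c, tx = M[0]
            b, d, ty = M[1]
            angle = np.arctan2(b, a)
            sx = np.hypot(a, b)
            sy = (a * d - b * c) / sx     # signed, so reflections survive
            return np.array([tx, ty, sx, sy, angle])

        def compose(tx, ty, sx, sy, angle):
            ca, sa = np.cos(angle), np.sin(angle)
            return np.array([[ca * sx, -sa * sy, tx],
                             [sa * sx,  ca * sy, ty],
                             [0.0, 0.0, 1.0]])

        def lerp_matrix(M0, M1, t):
            p0, p1 = decompose(M0), decompose(M1)
            d = p1 - p0
            d[4] = (d[4] + np.pi) % (2 * np.pi) - np.pi   # shortest-arc angle
            return compose(*(p0 + t * d))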

