Search Results

Search found 2180 results on 88 pages for 'projection matrix'.

  • Zoom on multiple areas in d3.js

    - by t2k32316
    I'm planning to have a geoJSON map inside my svg alongside other svg elements. I would like to be able to zoom and pan in the map while keeping the map in the same location, inside a bounding box. I can accomplish this by using a clipPath to keep the map within a rectangular area. The problem is that I also want to enable zooming and panning on my entire svg. If I do d3.select("svg").call(myzoom); it overrides any zoom I applied to my map. How can I apply zoom to both my entire svg and to my map? That is, I want to zoom+pan on my map when my mouse is inside the map's bounding box, and zoom+pan on the entire svg when the mouse is outside the bounding box. Here's example code: http://bl.ocks.org/nuernber/aeaac0e8edcf7ca93ade (also, how do I get around the cross-domain issue to load the map?)

      <svg id="svg" width="640" height="480" xmlns="http://www.w3.org/2000/svg" version="1.1">
        <defs>
          <clipPath id="rectClip">
            <rect x="150" y="25" width="400" height="400" style="stroke: gray; fill: none;"/>
          </clipPath>
        </defs>
        <g id="outer_group">
          <circle cx="100" cy="50" r="40" stroke="black" stroke-width="2" fill="red" />
          <g id="svg_map" style="clip-path: url(#rectClip);">
          </g>
        </g>
      </svg>

      <script type="text/javascript">
        var svg = d3.select("#svg_map");
        var mapGroup = svg.append("g");
        var projection = d3.geo.mercator();
        var path = d3.geo.path().projection(projection);
        var zoom = d3.behavior.zoom()
            .translate(projection.translate())
            .scale(projection.scale())
            .on("zoom", zoomed);
        mapGroup.call(zoom);
        var pan = d3.behavior.zoom()
            .on("zoom", panned);
        d3.select("svg").call(pan);
        mapGroup.attr("transform", "translate(200,0) scale(2,2)");
        d3.json("ne_110m_admin_0_countries/ne_110m_admin_0_countries.geojson", function(collection) {
          mapGroup.selectAll("path").data(collection.features)
              .enter().append("path")
              .attr("d", path)
              .attr("id", function(d) { return d.properties.name.replace(/\s+/g, ""); })
              .style("fill", "gray").style("stroke", "white").style("stroke-width", 1);
        });
        function panned() {
          var x = d3.event.translate[0];
          var y = d3.event.translate[1];
          d3.select("#outer_group").attr("transform", "translate(" + x + "," + y + ") scale(" + d3.event.scale + ")");
        }
        function zoomed() {
          previousScale = d3.event.scale;
          projection.translate(d3.event.translate).scale(d3.event.scale);
          translationOffset = d3.event.translate;
          mapGroup.selectAll("path").attr("d", path);
        }
      </script>

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, yet distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. unrotated.

    For a few days now I have been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I were first creating a rendering from the "front view" and then rotating that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screenshot #1 is not visible in this rotation, yet by now "it really should"!

    Of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the camera rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the X-step need to be "added" to the Z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.

    Here's my basic per-ray pre-traversal algorithm -- syntax is Go, but take it as pseudocode:

      // fx and fy: pixel positions x and y
      // rayPos: vec3 for the ray starting position in world-space (calculated as below)
      // rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
      // rayStep: a temporary vec3
      // camPos: vec3 for the camera position in world space
      // camRad: vec3 for camera rotation in radians
      // pmat: typical perspective projection matrix

      // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
      rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
      // 2: rotate around Y axis depending on cam rotation. No prob since view plane is still at Origin 0,0,0
      rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
      // 3: a temp vec3. planeDist is -0.15 or some such -- the fov-based distance of the view plane
      //    from the eye, and also the non-normalized, "in axis-aligned world" traversal step size
      //    "forward into the screen"
      rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
      // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
      rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))
      // set up the direction vector from the still-origin-based ray position off the rotated view
      // plane, plus the rotated z-step vector
      rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
      // perspective projection
      rayDir.Normalize()
      rayDir.MultMat(pmat)
      // before traversal, the ray starting position has to be transformed from origin-relative
      // to campos-relative
      rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per the screenshots, those are "basically mostly correct" (though not pretty) when axis-aligned / unrotated.
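
    For comparison, below is a minimal numpy sketch of a common way to set up a rotated perspective ray per pixel: the direction is built entirely in camera space from the pixel offset and view-plane distance, rotated once as a whole, and the projection matrix is never applied to the ray at all. Names and the planeDist default are illustrative assumptions, not the poster's actual API.

      import numpy as np

      def rotation_y(angle):
          # 3x3 rotation matrix around the Y axis.
          c, s = np.cos(angle), np.sin(angle)
          return np.array([[  c, 0.0,   c * 0 + s],
                           [0.0, 1.0, 0.0],
                           [ -s, 0.0,   c]])

      def make_ray(fx, fy, width, height, cam_pos, cam_rad_y, plane_dist=-0.15):
          # Ray direction in camera space: from the eye through this pixel's
          # spot on the view plane. No projection matrix is involved.
          dir_cam = np.array([(fx / width) - 0.5,
                              (fy / height) - 0.5,
                              plane_dist])
          # Rotate the whole direction once. Rotating the view-plane point and
          # the forward step separately and then recombining them is where the
          # flat-render-rotated skew tends to creep in.
          ray_dir = rotation_y(cam_rad_y) @ dir_cam
          ray_dir /= np.linalg.norm(ray_dir)
          # The ray starts at the camera position, unrotated.
          ray_pos = np.asarray(cam_pos, dtype=float)
          return ray_pos, ray_dir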

  • How do I get FEATURE_LEVEL_9_3 to work with shaders in Direct3D11?

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11-compatible computer) by setting D3D_FEATURE_LEVEL_ to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine, as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or so and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files while D3D_FEATURE_LEVEL_9_3 is enabled. Maybe this is due to the vertex/pixel shader files themselves being incompatible with the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below.

    Vertex shader:

      cbuffer MatrixBuffer
      {
          matrix worldMatrix;
          matrix viewMatrix;
          matrix projectionMatrix;
      };

      struct VertexInputType
      {
          float4 position : POSITION;
          float2 tex : TEXCOORD0;
          float3 normal : NORMAL;
      };

      struct PixelInputType
      {
          float4 position : SV_POSITION;
          float2 tex : TEXCOORD0;
          float3 normal : NORMAL;
      };

      PixelInputType LightVertexShader(VertexInputType input)
      {
          PixelInputType output;

          // Change the position vector to be 4 units for proper matrix calculations.
          input.position.w = 1.0f;

          // Calculate the position of the vertex against the world, view, and projection matrices.
          output.position = mul(input.position, worldMatrix);
          output.position = mul(output.position, viewMatrix);
          output.position = mul(output.position, projectionMatrix);

          // Store the texture coordinates for the pixel shader.
          output.tex = input.tex;

          // Calculate the normal vector against the world matrix only.
          output.normal = mul(input.normal, (float3x3)worldMatrix);

          // Normalize the normal vector.
          output.normal = normalize(output.normal);

          return output;
      }

    Pixel shader:

      Texture2D shaderTexture;
      SamplerState SampleType;

      cbuffer LightBuffer
      {
          float4 ambientColor;
          float4 diffuseColor;
          float3 lightDirection;
          float padding;
      };

      struct PixelInputType
      {
          float4 position : SV_POSITION;
          float2 tex : TEXCOORD0;
          float3 normal : NORMAL;
      };

      float4 LightPixelShader(PixelInputType input) : SV_TARGET
      {
          float4 textureColor;
          float3 lightDir;
          float lightIntensity;
          float4 color;

          // Sample the pixel color from the texture using the sampler at this texture coordinate location.
          textureColor = shaderTexture.Sample(SampleType, input.tex);

          // Set the default output color to the ambient light value for all pixels.
          color = ambientColor;

          // Invert the light direction for calculations.
          lightDir = -lightDirection;

          // Calculate the amount of light on this pixel.
          lightIntensity = saturate(dot(input.normal, lightDir));

          if (lightIntensity > 0.0f)
          {
              // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
              color += (diffuseColor * lightIntensity);
          }

          // Saturate the final light color.
          color = saturate(color);

          // Multiply the texture pixel and the final diffuse color to get the final pixel color result.
          color = color * textureColor;

          return color;
      }

  • My frustum culling is culling from the wrong point

    - by Xbetas
    I'm having problems with my frustum being at the wrong origin. It follows the rotation of my camera but not the position. In my camera class I'm generating a view matrix:

      void Camera::Update()
      {
          UpdateViewMatrix();
          glMatrixMode(GL_MODELVIEW);
          //glLoadIdentity();
          glLoadMatrixf(GetViewMatrix().m);
      }

    Then extracting the planes using the projection matrix and modelview matrix:

      void UpdateFrustum()
      {
          Matrix4x4 projection, model, clip;
          glGetFloatv(GL_PROJECTION_MATRIX, projection.m);
          glGetFloatv(GL_MODELVIEW_MATRIX, model.m);
          clip = model * projection;

          m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0];
          m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4];
          m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8];
          m_Planes[RIGHT][3] = clip.m[15] - clip.m[12];
          NormalizePlane(RIGHT);

          m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0];
          m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4];
          m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8];
          m_Planes[LEFT][3] = clip.m[15] + clip.m[12];
          NormalizePlane(LEFT);

          m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1];
          m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5];
          m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9];
          m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13];
          NormalizePlane(BOTTOM);

          m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1];
          m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5];
          m_Planes[TOP][2] = clip.m[11] - clip.m[ 9];
          m_Planes[TOP][3] = clip.m[15] - clip.m[13];
          NormalizePlane(TOP);

          m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2];
          m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6];
          m_Planes[NEAR][2] = clip.m[11] + clip.m[10];
          m_Planes[NEAR][3] = clip.m[15] + clip.m[14];
          NormalizePlane(NEAR);

          m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2];
          m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6];
          m_Planes[FAR][2] = clip.m[11] - clip.m[10];
          m_Planes[FAR][3] = clip.m[15] - clip.m[14];
          NormalizePlane(FAR);
      }

      void NormalizePlane(int side)
      {
          float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] +
                                         m_Planes[side][1] * m_Planes[side][1] +
                                         m_Planes[side][2] * m_Planes[side][2]);
          m_Planes[side][0] /= length;
          m_Planes[side][1] /= length;
          m_Planes[side][2] /= length;
          m_Planes[side][3] /= length;
      }

    And check against it with:

      bool PointInFrustum(float x, float y, float z)
      {
          for (int i = 0; i < 6; i++)
          {
              if (m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0)
                  return false;
          }
          return true;
      }

    Then I render using:

      camera->Update();
      UpdateFrustum();
      int numCulled = 0;
      for (int i = 0; i < (int)meshes.size(); i++)
      {
          if (!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z))
          {
              meshes[i]->SetDraw(false);
              numCulled++;
          }
          else
              meshes[i]->SetDraw(true);
      }

    What am I doing wrong?
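
    As a cross-check, here is a compact numpy version of the same Gribb/Hartmann extraction; the row-vector convention used below (v' = v @ M, planes taken from columns of the clip matrix) is an assumption to verify against your matrix library's storage order:

      import numpy as np

      def extract_frustum_planes(model, projection):
          # Planes from the combined clip matrix, Gribb/Hartmann style.
          # model and projection are 4x4 arrays in row-vector convention.
          clip = model @ projection
          planes = np.array([
              clip[:, 3] - clip[:, 0],  # right
              clip[:, 3] + clip[:, 0],  # left
              clip[:, 3] + clip[:, 1],  # bottom
              clip[:, 3] - clip[:, 1],  # top
              clip[:, 3] + clip[:, 2],  # near
              clip[:, 3] - clip[:, 2],  # far
          ])
          # Divide by the length of each plane normal (first three components).
          norms = np.linalg.norm(planes[:, :3], axis=1, keepdims=True)
          return planes / norms

      def point_in_frustum(planes, p):
          # The signed distance to every plane must be positive.
          return bool(np.all(planes[:, :3] @ p + planes[:, 3] > 0))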

  • OpenCV Mat creation memory leak

    - by Royi Freifeld
    My memory is getting full fairly quickly when using the next piece of code. Valgrind shows a memory leak, but everything is allocated on the stack and (supposedly) freed once the function ends.

      void mult_run_time(int rows, int cols)
      {
          Mat matrix(rows, cols, CV_32SC1);
          Mat row_vec(cols, 1, CV_32SC1);

          /* initialize vector and matrix */
          for (int col = 0; col < cols; ++col)
          {
              for (int row = 0; row < rows; ++row)
              {
                  matrix.at<unsigned long>(row, col) = rand() % ULONG_MAX;
              }
              row_vec.at<unsigned long>(1, col) = rand() % ULONG_MAX;
          }
          /* end initialization of vector and matrix */

          matrix * row_vec;
      }

      int main()
      {
          for (int row = 0; row < 20; ++row)
          {
              for (int col = 0; col < 20; ++col)
              {
                  mult_run_time(row, col);
              }
          }
          return 0;
      }

    Valgrind shows that there is a memory leak in the line Mat row_vec(cols, 1, CV_32SC1):

      ==9201== 24,320 bytes in 380 blocks are definitely lost in loss record 50 of 50
      ==9201==    at 0x4026864: malloc (vg_replace_malloc.c:236)
      ==9201==    by 0x40C0A8B: cv::fastMalloc(unsigned int) (in /usr/local/lib/libopencv_core.so.2.3.1)
      ==9201==    by 0x41914E3: cv::Mat::create(int, int const*, int) (in /usr/local/lib/libopencv_core.so.2.3.1)
      ==9201==    by 0x8048BE4: cv::Mat::create(int, int, int) (mat.hpp:368)
      ==9201==    by 0x8048B2A: cv::Mat::Mat(int, int, int) (mat.hpp:68)
      ==9201==    by 0x80488B0: mult_run_time(int, int) (mat_by_vec_mult.cpp:26)
      ==9201==    by 0x80489F5: main (mat_by_vec_mult.cpp:59)

    Is it a known bug in OpenCV or am I missing something?

  • Binding navigation property to RadGrid while using EntityDataSource control

    - by Matrix
    I'm new to Entity Framework and I got stuck on an issue while trying to bind a navigation property (foreign key reference) to a dropdown list. I have a Telerik RadGrid control which gets its data through an EntityDataSource control. Here is the model description:

      Applications: AppId, AppName, ServerId
      Servers: ServerId, ServerName

    Applications.ServerId is a foreign key reference to Servers.ServerId. The RadGrid lists the applications and allows the user to insert/update/delete an application. I want to show the server names as a dropdown list in edit mode, which I'm not able to. Here is my aspx code:

      <telerik:RadGrid ID="gridApplications" runat="server" Skin="Sunset"
          AllowAutomaticInserts="True" AllowAutomaticDeletes="True" AllowPaging="True"
          AllowAutomaticUpdates="True" AutoGenerateColumns="False"
          OnItemCreated="gridApplications_ItemCreated"
          DataSourceID="applicationsEntityDataSource" Width="50%"
          OnItemInserted="gridApplications_ItemInserted"
          OnItemUpdated="gridApplications_ItemUpdated"
          OnItemDeleted="gridApplications_ItemDeleted" GridLines="None">
        <MasterTableView CommandItemDisplay="Top" AutoGenerateColumns="False"
            DataKeyNames="AppId" DataSourceID="applicationsEntityDataSource">
          <RowIndicatorColumn>
            <HeaderStyle Width="20px" />
          </RowIndicatorColumn>
          <ExpandCollapseColumn>
            <HeaderStyle Width="20px" />
          </ExpandCollapseColumn>
          <Columns>
            <telerik:GridEditCommandColumn ButtonType="ImageButton" UniqueName="EditCommandColumn"
                HeaderText="Edit" ItemStyle-Width="10%">
            </telerik:GridEditCommandColumn>
            <telerik:GridButtonColumn CommandName="Delete" Text="Delete" UniqueName="DeleteColumn"
                ConfirmText="Are you sure you want to delete this application?"
                ConfirmTitle="Confirm Delete" ConfirmDialogType="Classic"
                ItemStyle-Width="10%" HeaderText="Delete">
            </telerik:GridButtonColumn>
            <telerik:GridBoundColumn DataField="AppId" UniqueName="AppId" Visible="false"
                HeaderText="Application Id" ReadOnly="true">
            </telerik:GridBoundColumn>
            <telerik:GridBoundColumn DataField="AppName" UniqueName="AppName"
                HeaderText="Application Name" MaxLength="30" ItemStyle-Width="40%">
            </telerik:GridBoundColumn>
            <telerik:GridTemplateColumn DataField="ServerId" UniqueName="ServerId"
                HeaderText="Server Hosted" EditFormColumnIndex="1">
              <EditItemTemplate>
                <asp:DropDownList ID="ddlServerHosted" runat="server"
                    DataTextField="Servers.ServerName" DataValueField="ServerId" Width="40%">
                </asp:DropDownList>
              </EditItemTemplate>
            </telerik:GridTemplateColumn>
          </Columns>
          <EditFormSettings ColumnNumber="2" CaptionDataField="AppId"
              InsertCaption="Insert New Application" EditFormType="AutoGenerated">
            <EditColumn InsertText="Insert record" EditText="Edit application id #:"
                EditFormColumnIndex="0" UpdateText="Application updated"
                UniqueName="InsertCommandColumn1" CancelText="Cancel insert"
                ButtonType="ImageButton"></EditColumn>
            <FormTableItemStyle Wrap="false" />
            <FormTableStyle GridLines="Horizontal" CellPadding="2" CellSpacing="0"
                Height="110px" Width="110px" />
            <FormTableAlternatingItemStyle Wrap="false" />
            <FormStyle Width="100%" BackColor="#EEF2EA" />
            <FormTableButtonRowStyle HorizontalAlign="Right" />
          </EditFormSettings>
        </MasterTableView>
      </telerik:RadGrid>

      <asp:EntityDataSource ID="applicationsEntityDataSource" runat="server"
          ConnectionString="name=AnalyticsEntities" EnableDelete="True"
          EntityTypeFilter="Applications" EnableInsert="True" EnableUpdate="True"
          EntitySetName="Applications" DefaultContainerName="AnalyticsEntities"
          Include="Servers">
      </asp:EntityDataSource>

    I tried another approach, replacing the GridTemplateColumn with the following code:

      <telerik:RadComboBox ID="RadComboBox1" DataSourceID="serversEntityDataSource"
          DataTextField="ServerName" DataValueField="ServerId"
          AppendDataBoundItems="true" runat="server">
        <Items>
          <telerik:RadComboBoxItem />
        </Items>
      </telerik:RadComboBox>

    and using a separate EntityDataSource control as follows:

      <asp:EntityDataSource ID="serversEntityDataSource" runat="server"
          ConnectionString="name=AnalyticsEntities" EnableDelete="True"
          EntityTypeFilter="Servers" EnableInsert="True" EnableUpdate="True"
          EntitySetName="Servers" DefaultContainerName="AnalyticsEntities">
      </asp:EntityDataSource>

    but I get the following error:

      Application cannot be inserted. Reason: Entities in 'AnalyticsEntities.Applications'
      participate in the 'FK_Servers_Applications' relationship. 0 related 'Servers' were
      found. 1 'Servers' is expected.

    My question is: how do you bind the navigation property and load the values in the DropDownList/RadComboBox control?

  • Android - drawing path as overlay on MapView

    - by Rabas
    Hello, I have a class that extends Overlay and implements Overlay.Snappable. I have overridden its draw method:

      @Override
      public void draw(Canvas canvas, MapView mv, boolean shadow) {
          Projection projection = mv.getProjection();
          ArrayList<GeoPoint> geoPoints = new ArrayList<GeoPoint>();
          // Creating geopoints - omitted for readability

          Path p = new Path();
          for (int i = 0; i < geoPoints.size(); i++) {
              if (i == geoPoints.size() - 1) {
                  break;
              }
              Point from = new Point();
              Point to = new Point();
              projection.toPixels(geoPoints.get(i), from);
              projection.toPixels(geoPoints.get(i + 1), to);
              p.moveTo(from.x, from.y);
              p.lineTo(to.x, to.y);
          }

          Paint mPaint = new Paint();
          mPaint.setStyle(Style.FILL);
          mPaint.setColor(0xFFFF0000);
          mPaint.setAntiAlias(true);

          canvas.drawPath(p, mPaint);
          super.draw(canvas, mv, shadow);
      }

    As you can see, I make a list of points on a map and I want them to form a polygonal shape. Now, the problem is that when I set the paint style to FILL or FILL_AND_STROKE, nothing shows up on the screen, but when I set it to just STROKE and set a stroke width, it actually draws what it is supposed to draw. I looked for a solution, but nothing comes up. Can you tell me if I missed setting something in the code itself, or are there some sorts of constraints when drawing on Overlay canvases? Thanks

  • Sorting eigenvectors by their eigenvalues (associated sorting)

    - by fbrereto
    I have an unsorted vector of eigenvalues and a related matrix of eigenvectors. I'd like to sort the columns of the matrix with respect to the sorted set of eigenvalues. (e.g., if eigenvalue[3] moves to eigenvalue[2], I want column 3 of the eigenvector matrix to move over to column 2.) I know I can sort the eigenvalues in O(N log N) via std::sort. Without rolling my own sorting algorithm, how do I make sure the matrix's columns (the associated eigenvectors) follow along with their eigenvalues as the latter are sorted?
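
    A short illustration of the standard approach -- compute the sorting permutation once and apply it to the columns (numpy here for brevity; in C++ the same idea is std::sort on an array of column indices with a comparator over the eigenvalues, followed by a column permutation):

      import numpy as np

      # Example matrix; np.linalg.eig returns eigenvalues in no guaranteed order.
      A = np.array([[2.0, 1.0, 0.0],
                    [1.0, 3.0, 1.0],
                    [0.0, 1.0, 2.0]])
      eigenvalues, eigenvectors = np.linalg.eig(A)

      # argsort gives the permutation that sorts the eigenvalues (O(N log N));
      # applying the same permutation to the columns keeps each eigenvector
      # paired with its eigenvalue.
      order = np.argsort(eigenvalues)
      eigenvalues = eigenvalues[order]
      eigenvectors = eigenvectors[:, order]   # column i now pairs with eigenvalues[i]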

  • How do I translate this Matlab bsxfun call to R?

    - by claytontstanley
    I would also (fingers crossed) like the solution to work with R sparse matrices in the Matrix package.

      >> A = [1,2,3,4,5]
      A =
         1   2   3   4   5
      >> B = [1;2;3;4;5]
      B =
         1
         2
         3
         4
         5
      >> bsxfun(@times, A, B)
      ans =
          1    2    3    4    5
          2    4    6    8   10
          3    6    9   12   15
          4    8   12   16   20
          5   10   15   20   25

    EDIT: I would like to do a matrix multiplication of these sparse vectors, and return a sparse array:

      > class(NRowSums)
      [1] "dsparseVector"
      attr(,"package")
      [1] "Matrix"
      > class(NColSums)
      [1] "dsparseVector"
      attr(,"package")
      [1] "Matrix"
      > NRowSums * NColSums

    (I think) w/o using a non-sparse variable to temporarily store data.
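
    As an aside, bsxfun's singleton expansion is the same idea as numpy broadcasting, which makes the target result easy to state precisely (numpy used here only to pin down the semantics; for plain dense R vectors the equivalent of this particular call is outer(B, A)):

      import numpy as np

      A = np.array([1, 2, 3, 4, 5])            # row vector, shape (5,)
      B = np.array([[1], [2], [3], [4], [5]])  # column vector, shape (5, 1)

      # Broadcasting stretches the length-1 axes, exactly like bsxfun(@times, A, B):
      # result[i, j] = B[i] * A[j]
      result = A * B
      print(result)
      # [[ 1  2  3  4  5]
      #  [ 2  4  6  8 10]
      #  [ 3  6  9 12 15]
      #  [ 4  8 12 16 20]
      #  [ 5 10 15 20 25]]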

  • Best and simple data structure

    - by anshu
    I am trying to create the matrix below in my VB.NET application so that during processing I can get the match scores for the letters. For example: what is the match score for A and N? I will look it up in my built-in matrix and return -2. Similarly, what is the match score for P and L? I will look it up in my built-in matrix and return -3. Please suggest how to go about it. I was trying to use a nested dictionary like this:

      Dim myNestedDictionary As New Dictionary(Of String, Dictionary(Of String, Integer))()
      Dim lTempDict As New Dictionary(Of String, Integer)
      lTempDict.Add("A", 4)
      myNestedDictionary.Add("A", lTempDict)

    The other way could be to read the matrix from a text-based file and then fill a two-dimensional array. Thanks.
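
    For the lookup structure itself, one common shape is a single letter-to-index map plus a dense 2D score table, sketched here in Python (the letters and scores are illustrative assumptions, not the actual matrix); the same two-step lookup translates directly to a VB.NET Dictionary(Of String, Integer) plus a two-dimensional Integer array:

      # Letters indexed once; scores live in a dense 2D table.
      letters = "ARNDCQEGHILKMFPSTWYV"
      index = {ch: i for i, ch in enumerate(letters)}

      # score[i][j] holds the match score for letters[i] vs letters[j].
      # Values here are made up; fill them from the real matrix
      # (hard-coded or parsed from a text file).
      n = len(letters)
      score = [[0] * n for _ in range(n)]
      score[index["A"]][index["N"]] = -2
      score[index["P"]][index["L"]] = -3

      def match(a, b):
          return score[index[a]][index[b]]

      print(match("A", "N"))  # -2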

  • Const operator overloading problems in C++

    - by steigers
    Hello everybody, I'm having trouble with overloading operator() with a const version:

      #include <iostream>
      #include <vector>
      using namespace std;

      class Matrix {
      public:
          Matrix(int m, int n) {
              vector<double> tmp(m, 0.0);
              data.resize(n, tmp);
          }
          ~Matrix() { }

          const double & operator()(int ii, int jj) const {
              cout << " - const-version was called - ";
              return data[ii][jj];
          }

          double & operator()(int ii, int jj) {
              cout << " - NONconst-version was called - ";
              if (ii != 1) {
                  throw "Error: you may only alter the first row of the matrix.";
              }
              return data[ii][jj];
          }

      protected:
          vector< vector<double> > data;
      };

      int main() {
          try {
              Matrix A(10,10);
              A(1,1) = 8.8;
              cout << "A(1,1)=" << A(1,1) << endl;
              cout << "A(2,2)=" << A(2,2) << endl;
              double tmp = A(3,3);
          } catch (const char* c) {
              cout << c << endl;
          }
      }

    This gives me the following output:

      - NONconst-version was called -  - NONconst-version was called - A(1,1)=8.8
      - NONconst-version was called - Error: you may only alter the first row of the matrix.

    How can I get C++ to call the const version of operator()? I am using GCC 4.4.0. Thanks for your help! Sebastian

  • Code Golf: Zigzag pattern scanning

    - by fbrereto
    The Challenge: the shortest code by character count that takes a single input integer N (N >= 3) and returns an array of indices that, when iterated, traverses an NxN matrix according to the JPEG "zigzag" scan pattern. An example traversal over an 8x8 matrix was shown as an image (omitted here).

    Examples (the middle matrix is not part of the input or output, just a representation of the NxN matrix the input represents):

      Input: 3        1 2 3        Output: 1 2 4 7 5 3 6 8 9
                      4 5 6
                      7 8 9

      Input: 4         1  2  3  4  Output: 1 2 5 9 6 3 4 7 10 13 14 11 8 12 15 16
                       5  6  7  8
                       9 10 11 12
                      13 14 15 16

    Notes: The resulting array's base should be appropriate for your language (e.g., Matlab arrays are 1-based, C++ arrays are 0-based). This is related to this question.
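
    Golfing aside, an ungolfed Python sketch of the traversal itself may help: walk the anti-diagonals (constant i+j) and reverse every other one.

      def zigzag(n):
          """0-based indices of an n x n matrix in JPEG zigzag order."""
          order = []
          for s in range(2 * n - 1):           # s = i + j indexes the anti-diagonals
              diag = [i * n + (s - i)           # flat index of cell (i, s - i)
                      for i in range(max(0, s - n + 1), min(s, n - 1) + 1)]
              # Even diagonals run bottom-left to top-right, odd ones the reverse.
              order.extend(diag[::-1] if s % 2 == 0 else diag)
          return order

      print(zigzag(3))  # [0, 1, 3, 6, 4, 2, 5, 7, 8]  (1-based: 1 2 4 7 5 3 6 8 9)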

  • Wrong getBounds() on LineScaleMode.NONE

    - by ghalex
    I have written a simple example that adds a canvas and draws a rectangle with stroke size 20 and scale mode NONE. The problem is that if I call getBounds() the first time, I get a correct result, but after I call scale(), getBounds() gives me a wrong result: it takes the stroke into account even though the stroke's scale mode is NONE and nothing changes on screen, yet the x value in the result is smaller. Can somebody tell me how I can fix this?

      protected var display:Canvas;

      protected function addCanvas():void
      {
          display = new Canvas();
          display.x = display.y = 50;
          display.width = 100;
          display.height = 100;
          display.graphics.clear();
          display.graphics.lineStyle( 20, 0x000000, 0.5, true, LineScaleMode.NONE );
          display.graphics.beginFill( 0xff0000, 1 );
          display.graphics.drawRect(0, 0, 100, 100);
          display.graphics.endFill();
          area.addChild( display );
          traceBounce();
      }

      protected function scale():void
      {
          var m:Matrix = display.transform.matrix;
          var apply:Matrix = new Matrix();
          apply.scale( 2, 1 );
          apply.concat( m );
          display.transform.matrix = apply;
          traceBounce();
      }

      protected function traceBounce():void
      {
          trace( display.getBounds( this ) );
      }

  • Flash "visible" issue

    - by justkevin
    I'm writing a tool in Flex that lets me design composite sprites using layered bitmaps and then "bake" them into a low-overhead single bitmapData. I've discovered a strange behavior I can't explain: toggling the "visible" property of my layers works twice for each layer (i.e., I can turn it off, then on again) and then never again for that layer -- the layer stays visible from that point on. If I override "set visible" on the layer as such:

      override public function set visible(value:Boolean):void
      {
          if (value == false)
              this.alpha = 0;
          else
              this.alpha = 1;
      }

    the problem goes away and I can toggle "visibility" as much as I want. Any ideas what might be causing this?

    Edit: Here is the code that makes the call:

      private function onVisibleChange():void
      {
          _layer.visible = layerVisible.selected;
          changed();
      }

    The changed() method "bakes" the bitmap:

      public function getBaked():BitmapData
      {
          var w:int = _composite.width + (_atmosphereOuterBlur * 2);
          var h:int = _composite.height + (_atmosphereOuterBlur * 2);
          var bmpData:BitmapData = new BitmapData(w, h, true, 0x00000000);
          var matrix:Matrix = new Matrix();
          var bounds:Rectangle = this.getBounds(this);
          matrix.translate(w/2, h/2);
          bmpData.draw(this, matrix, null, null, new Rectangle(0, 0, w, h), true);
          return bmpData;
      }

    Incidentally, while the layer is still visible, using the Flex debugger I can verify that the layer's visible value is "false".

  • Resize image while rotating image in android

    - by dhams
    Hi everyone, I'm working on an Android project in which I want to rotate an image with touch around a fixed pivot point. I have completed everything, but I'm facing one problem: while trying to rotate the image, the image bitmap is resized. I don't have any idea why this occurs; if somebody does, please give me an idea of how to overcome this problem. My code is below:

      package com.demo.rotation;

      import android.app.Activity;
      import android.graphics.Bitmap;
      import android.graphics.BitmapFactory;
      import android.graphics.Matrix;
      import android.os.Bundle;
      import android.util.Log;
      import android.view.MotionEvent;
      import android.view.View;
      import android.view.View.OnTouchListener;
      import android.widget.ImageView;
      import android.widget.ImageView.ScaleType;

      public class temp extends Activity {
          ImageView img1;
          float startX;
          float startX2;
          Bitmap source;
          Bitmap bitmap1 = null;
          double r;

          /** Called when the activity is first created. */
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              setContentView(R.layout.main);
              img1 = (ImageView) findViewById(R.id.img1);
              img1.setOnTouchListener(img1TouchListener);
              bitmap1 = BitmapFactory.decodeResource(getResources(), R.drawable.orsl_circle_transparent);
          }

          private OnTouchListener img1TouchListener = new OnTouchListener() {
              @Override
              public boolean onTouch(View v, MotionEvent event) {
                  switch (event.getAction()) {
                  case MotionEvent.ACTION_MOVE:
                      Log.d("MOVE", "1");
                      if (source != null)
                          r = Math.atan2(event.getX() - source.getWidth(), (source.getHeight() / 2) - event.getY());
                      Log.i("startX" + event.getX(), "startY" + event.getY());
                      rotate(r, bitmap1, img1);
                      img1.setScaleType(ScaleType.CENTER);
                      break;
                  case MotionEvent.ACTION_DOWN:
                      break;
                  case MotionEvent.ACTION_UP:
                      break;
                  default:
                      break;
                  }
                  return true;
              }
          };

          private void rotate(double r, Bitmap currentBitmap, ImageView imageView) {
              int rotation = (int) Math.toDegrees(r);
              Matrix matrix = new Matrix();
              matrix.setRotate(rotation, currentBitmap.getWidth() / 2, currentBitmap.getHeight() / 2);
              source = Bitmap.createBitmap(currentBitmap, 0, 0, currentBitmap.getWidth(), currentBitmap.getHeight(), matrix, false);
              imageView.setImageBitmap(source);
              Log.i("HIGHT OF CURRENT BITMAP", "" + source.getHeight());
          }
      }

  • R library for discrete Markov chain simulation

    - by stevejb
    Hello, I am looking for something like the 'msm' package, but for discrete Markov chains. For example, I might have a transition matrix for states A, B, C defined like this:

      Pi <- matrix(c(1/3, 1/3, 1/3,
                     0,   2/3, 1/6,
                     2/3, 0,   1/2))

    How can I simulate a Markov chain according to that transition matrix? Thanks,
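
    To make the simulation mechanics concrete, here is a minimal sketch (Python for brevity -- R's sample() with its prob argument plays the same role as the categorical draw below; the matrix values are illustrative ones chosen so each row sums to 1):

      import numpy as np

      # Row-stochastic transition matrix: entry [i, j] = P(next = j | current = i).
      P = np.array([[1/3, 1/3, 1/3],
                    [0.0, 1/2, 1/2],
                    [2/3, 0.0, 1/3]])
      states = ["A", "B", "C"]

      def simulate(P, start, n_steps, rng=np.random.default_rng(0)):
          chain = [start]
          for _ in range(n_steps):
              # Draw the next state from the current state's row of P.
              chain.append(rng.choice(len(states), p=P[chain[-1]]))
          return [states[i] for i in chain]

      print(simulate(P, start=0, n_steps=10))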

  • pyplot: really slow creating heatmaps

    - by cvondrick
    I have a loop that executes its body about 200 times. In each iteration it does a sophisticated calculation, and then for debugging I wish to produce a heatmap of an NxM matrix. But generating this heatmap is unbearably slow and significantly slows down an already slow algorithm. My code is along these lines:

      import numpy
      import matplotlib.pyplot as plt

      for i in range(200):
          matrix = complex_calculation()
          plt.set_cmap("gray")
          plt.imshow(matrix)
          plt.savefig("frame{0}.png".format(i))

    The matrix, from numpy, is not huge -- 300 x 600 doubles. Even if I do not save the figure and instead update an on-screen plot, it's even slower. Surely I must be abusing pyplot. (Matlab can do this, no problem.) How do I speed this up?
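
    A common fix, sketched under the assumption that only the image data changes between frames: build the figure and the AxesImage artist once, then swap the data in and re-save each frame; when no axes decoration is needed at all, plt.imsave writes the raw array directly.

      import numpy as np
      import matplotlib
      matplotlib.use("Agg")            # render straight to files, no GUI backend
      import matplotlib.pyplot as plt

      fig, ax = plt.subplots()
      im = ax.imshow(np.zeros((300, 600)), cmap="gray")   # create the artist once

      for i in range(200):
          matrix = complex_calculation()   # stand-in for the real computation
          im.set_data(matrix)              # reuse the existing AxesImage
          im.set_clim(matrix.min(), matrix.max())
          fig.savefig("frame{0}.png".format(i))

      # Alternative without any figure machinery at all:
      # plt.imsave("frame{0}.png".format(i), matrix, cmap="gray")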

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security -- not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer, which then goes to the competitor.

    The administration of such a matrix is a nightmare, because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal, because heavy viewing does not directly imply a wrong motive.

    Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

  • What is the best way to flag some elements in MATLAB? using NaN or Inf? or something else?

    - by Kamran Bigdely Shamloo
    As you may know, on many occasions there is a need to flag some elements of a matrix. For example, when we have a weighted adjacency matrix and our graph is not fully connected, we have to flag some elements to show that there is no edge between those nodes. The question is how to do that. Is it better to put NaN or Inf in those elements of the matrix, or something else (such as -1)?
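
    One practical difference worth making concrete: Inf still behaves sensibly in the comparisons and sums that shortest-path style code performs, while NaN makes every comparison false (and note that MATLAB's min/max ignore NaNs entirely, which can silently hide a flag). A quick numpy illustration of the floating-point semantics:

      import numpy as np

      inf, nan = np.inf, np.nan
      W = np.array([[0.0, 3.0, inf],
                    [3.0, 0.0, 5.0],
                    [inf, 5.0, 0.0]])   # inf marks "no edge"

      # Inf orders the way a missing edge should in shortest-path arithmetic:
      print(min(W[0, 2], W[0, 1] + W[1, 2]))   # 8.0 -- the finite detour wins

      # NaN makes every comparison false, so a flag can leak through logic silently:
      print(nan < 8.0, nan > 8.0, nan == nan)  # False False False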

  • Calculating rotation and translation matrices between two odometry positions for monocular linear triangulation

    - by user1298891
    Recently I've been trying to implement a system to identify and triangulate the 3D position of an object in a robotic system. The general outline of the process goes as follows:

    1. Identify the object using SURF matching, from a set of "training" images to the actual live feed from the camera.
    2. Move/rotate the robot a certain amount.
    3. Identify the object using SURF again in this new view.

    Now I have: a set of corresponding 2D points (the same object from the two different views), two odometry locations (position + orientation), and the camera intrinsics (focal length, principal point, etc.), since the camera has been calibrated beforehand. So I should be able to create the two projection matrices and triangulate using a basic linear triangulation method as in Hartley & Zisserman's book Multiple View Geometry, p. 312: solve the AX = 0 equation for each pair of corresponding 2D points, then take the average.

    In practice, the triangulation only works when there's almost no change in rotation; if the robot rotates even a slight bit while moving (due to e.g. wheel slippage) then the estimate is way off. This also applies in simulation. Since I can only post two hyperlinks, here's a link to a page with images from the simulation (on the map, the red square is the simulated robot position and orientation, and the yellow square is the estimated position of the object using linear triangulation). So you can see that the estimate is thrown way off even by a little rotation, as in Position 2 on that page (that was 15 degrees; if I rotate any more then the estimate is completely off the map), even in a simulated environment where a perfect calibration matrix is known. In a real environment, when I actually move around with the robot, it's worse.

    There aren't any problems with obtaining point correspondences, nor with actually solving the AX = 0 equation once I compute the A matrix, so I figure it probably has to do with how I'm setting up the two camera projection matrices, specifically how I'm calculating the translation and rotation matrices from the position/orientation info I have relative to the world frame. How I'm doing that right now:

    - The rotation matrix is composed by creating a 1x3 matrix [0, (change in orientation angle), 0] and then converting it to a 3x3 one using OpenCV's Rodrigues function.
    - The translation matrix is composed by rotating the two points by (start angle) degrees and then subtracting the final position from the initial position, in order to get the robot's straight and lateral movement relative to its starting orientation.

    This results in the first projection matrix being K [I | 0] and the second being K [R | T], with R and T calculated as described above. Is there anything I'm doing really wrong here? Or could it possibly be some other problem? Any help would be greatly appreciated.
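
    For reference, a condensed numpy sketch of the standard two-view setup -- projection matrices from the relative pose plus the homogeneous DLT from Hartley & Zisserman -- with the key constraint spelled out: R and t must map camera-1 coordinates into camera-2 coordinates, not raw world-frame odometry (the Y-axis rotation convention is an assumption to check against your odometry frame):

      import numpy as np

      def rotation_y(theta):
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

      def triangulate(K, d_theta, t_rel, x1, x2):
          # x1, x2: pixel coordinates of the same point in view 1 and view 2.
          # R, t must satisfy X2 = R @ X1 + t in camera coordinates.
          R = rotation_y(d_theta)
          P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
          P2 = K @ np.hstack([R, np.asarray(t_rel, float).reshape(3, 1)])
          # DLT: each view contributes two rows of A, and A X = 0 (H&Z p. 312).
          A = np.array([x1[0] * P1[2] - P1[0],
                        x1[1] * P1[2] - P1[1],
                        x2[0] * P2[2] - P2[0],
                        x2[1] * P2[2] - P2[1]])
          # The homogeneous solution is the right singular vector of the
          # smallest singular value.
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]   # dehomogenize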

  • Flex/Actionscript image display problem.

    - by IanH
    I'm trying to extend the Image class but have hit a problem that I can't get past. I have a private image (img) that loads an image, and a function that takes that image and copies it onto the parent. The debug function "copyit2" displays the image fine (so I know it's loaded OK). But the function "copyit" doesn't work -- it just displays a white rectangle. I can't see how to make copyit work so that the original image is copied to the BitmapData and then subsequently copied onto the parent. (The idea is to do some processing on the bitmap data before it is displayed, although this isn't shown here to keep the example simple.) I suspect it is something to do with the security of loading images, but I'm loading it from the same server the application is run from, so this shouldn't be a problem? Thanks for any help anyone can provide. Ian

      package zoomapackage
      {
          import flash.display.Bitmap;
          import flash.display.BitmapData;
          import flash.display.Sprite;
          import flash.events.MouseEvent;
          import flash.geom.Matrix;
          import flash.geom.Point;
          import flash.geom.Rectangle;
          import flash.net.*;

          import mx.controls.Image;
          import mx.events.FlexEvent;

          public dynamic class Zooma extends Image
          {
              private var img:Image;

              public function copyit():void
              {
                  var imgObj:BitmapData = new BitmapData(img.content.width, img.content.height, false);
                  imgObj.draw(img);
                  var matrix:Matrix = new Matrix();
                  this.graphics.beginBitmapFill(imgObj, matrix, false, true);
                  this.graphics.drawRect(0, 0, this.width, this.height);
                  this.graphics.endFill();
              }

              public function copyit2():void
              {
                  this.source = img.source;
              }

              public function Zooma()
              {
                  super();
                  img = new Image();
                  img.load("http://localhost/Koala.jpg");
              }
          }
      }

  • Architecture Guidance for designing Workflow Foundation with WCF

    - by Matrix
    We are planning to use WF 3.5 with WCF 3.5 and Entity Framework 1.0 for an upcoming major project. I'm looking for guidance on the architecture side. The new application will be based on a typical 3-tier architecture, as depicted below:

      Presentation Tier: ASP.NET Web Forms 3.5
      Business Tier: WF 3.5 + BLLs that expose the business logic through WCF service interfaces (using EF for data access)
      Data Tier: SQL Server 2000

    Here are the questions:

    1. Though Workflow Foundation has Workflow Services, where we can map WCF service contracts to a workflow, is this the right way to design the application?
    2. Can EF 1.0 business entities be used in n-tier apps without sacrificing change tracking on the entities?
    3. Is there a sample reference application available to look at?

    Thanks.
