Search Results

Search found 1507 results on 61 pages for 'coordinates'.

Page 49/61 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Tuple in C# 4.0

    - by Jalpesh P. Vadgama
    C# 4.0 includes a new feature called Tuple. A Tuple gives us a way of grouping elements of different data types, which is useful in lots of practical situations — for example, storing the coordinates of points on a graph. In C# 4.0 we create a Tuple with the Create method, which offers 8 overloads, so you can group a maximum of 8 data types in a single Tuple. The overloads are:

        Create(T1) – represents a tuple of size 1
        Create(T1,T2) – represents a tuple of size 2
        Create(T1,T2,T3) – represents a tuple of size 3
        Create(T1,T2,T3,T4) – represents a tuple of size 4
        Create(T1,T2,T3,T4,T5) – represents a tuple of size 5
        Create(T1,T2,T3,T4,T5,T6) – represents a tuple of size 6
        Create(T1,T2,T3,T4,T5,T6,T7) – represents a tuple of size 7
        Create(T1,T2,T3,T4,T5,T6,T7,T8) – represents a tuple of size 8

    Following is some example code for tuples:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama");
                    Console.WriteLine(tuple);

                    var t = System.Tuple.Create<int, string>(1, "Jalpesh");
                    Console.WriteLine(t);
                }
            }
        }

    The output of the above is as expected. You can also access the values inside a Tuple with the ItemN properties, where N is the position of the item in the tuple. Following is an example:

        using System;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama");
                    Console.WriteLine(tuple.Item1);
                    Console.WriteLine(tuple.Item2);
                    Console.WriteLine(tuple.Item3);
                }
            }
        }

    Here you can see I have printed the items with Item1, Item2 and Item3. We can even create a nested tuple; following is the code for one:

        using System;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create(1, "Jalpesh", new Tuple<string, string>("P", "Vadgama"));
                    Console.WriteLine(tuple.Item1);
                    Console.WriteLine(tuple.Item2);
                    Console.WriteLine(tuple.Item3);
                }
            }
        }

    Again the output is as expected. As you can see, we can do lots of things with Tuple. Hope you liked it. Stay tuned for more. Till then, Happy Programming!!
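    Since the post mentions storing graph coordinates, here is a minimal sketch of that idea (my own illustration, not from the original article): each point becomes an (x, y) pair grouped in a Tuple.

        using System;
        using System.Collections.Generic;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // Each point on the graph is an (x, y) pair grouped in a Tuple.
                    var points = new List<Tuple<int, int>>
                    {
                        Tuple.Create(0, 0),
                        Tuple.Create(3, 4),
                        Tuple.Create(6, 8)
                    };

                    foreach (var p in points)
                        Console.WriteLine("x = {0}, y = {1}", p.Item1, p.Item2);
                }
            }
        }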

    Read the article

  • XNA 4 Deferred Rendering deforms the model

    - by Tomáš Bezouška
    I have a problem when rendering a model of my World - when rendered using BasicEffect, it looks just peachy. The problem is when I render it using deferred rendering. See for yourselves: what it looks like: http://imageshack.us/photo/my-images/690/survival.png/ what it should look like: http://imageshack.us/photo/my-images/521/survival2.png/ (Please ignore the cars; they shouldn't be there. Nothing changes when they are removed.) I'm using the deferred renderer from www.catalinzima.com/tutorials/deferred-rendering-in-xna/introduction-2/, except very simplified, without the custom content processor. Here's the code for the GBuffer shader:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;
        float specularIntensity = 0.001f;
        float specularPower = 3;
        texture Texture;

        sampler diffuseSampler = sampler_state
        {
            Texture = (Texture);
            MAGFILTER = LINEAR;
            MINFILTER = LINEAR;
            MIPFILTER = LINEAR;
            AddressU = Wrap;
            AddressV = Wrap;
        };

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float3 Normal : NORMAL0;
            float2 TexCoord : TEXCOORD0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 TexCoord : TEXCOORD0;
            float3 Normal : TEXCOORD1;
            float2 Depth : TEXCOORD2;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4 worldPosition = mul(input.Position, World);
            float4 viewPosition = mul(worldPosition, View);
            output.Position = mul(viewPosition, Projection);
            output.TexCoord = input.TexCoord;       // pass the texture coordinates further
            output.Normal = mul(input.Normal, World); // get normal into world space
            output.Depth.x = output.Position.z;
            output.Depth.y = output.Position.w;
            return output;
        }

        struct PixelShaderOutput
        {
            half4 Color : COLOR0;
            half4 Normal : COLOR1;
            half4 Depth : COLOR2;
        };

        PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
        {
            PixelShaderOutput output;
            output.Color = tex2D(diffuseSampler, input.TexCoord);        // output Color
            output.Color.a = specularIntensity;                          // output SpecularIntensity
            output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f); // transform normal domain
            output.Normal.a = specularPower;                             // output SpecularPower
            output.Depth = input.Depth.x / input.Depth.y;                // output Depth
            return output;
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_2_0 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    And here are the rendering parts in XNA:

        public void RednerModel(Model model, Matrix world)
        {
            Matrix[] boneTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(boneTransforms);

            Game.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
            Game.GraphicsDevice.BlendState = BlendState.Opaque;
            Game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (ModelMeshPart meshPart in mesh.MeshParts)
                {
                    GBufferEffect.Parameters["View"].SetValue(Camera.Instance.ViewMatrix);
                    GBufferEffect.Parameters["Projection"].SetValue(Camera.Instance.ProjectionMatrix);
                    GBufferEffect.Parameters["World"].SetValue(boneTransforms[mesh.ParentBone.Index] * world);
                    GBufferEffect.Parameters["Texture"].SetValue(meshPart.Effect.Parameters["Texture"].GetValueTexture2D());
                    GBufferEffect.Techniques[0].Passes[0].Apply();
                    RenderMeshpart(mesh, meshPart);
                }
            }
        }

        private void RenderMeshpart(ModelMesh mesh, ModelMeshPart part)
        {
            Game.GraphicsDevice.SetVertexBuffer(part.VertexBuffer);
            Game.GraphicsDevice.Indices = part.IndexBuffer;
            Game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount);
        }

    I import the model using the built-in content processor for FBX. The FBX is created in 3DS Max. I don't know the exact details of that export, but if you think it might be relevant, I will get them from my colleague who does them. What confuses me though is why the BasicEffect approach works... it seems the FBX shouldn't be a problem. Any thoughts? They will be greatly appreciated :)
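    One hedged diagnostic, not from the original post: BasicEffect binds to whatever vertex channels the model actually has, while a hand-written GBuffer effect assumes exactly the POSITION0/NORMAL0/TEXCOORD0 layout declared in VertexShaderInput; if the FBX's vertex declaration differs (extra channels, different order), the input assembler feeds the shader misaligned data and the mesh deforms. A quick C# sketch to dump the layout for comparison:

        // Hypothetical diagnostic: print each mesh part's vertex elements so they
        // can be compared against the VertexShaderInput struct of the GBuffer shader.
        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                foreach (VertexElement element in part.VertexBuffer.VertexDeclaration.GetVertexElements())
                {
                    System.Diagnostics.Debug.WriteLine(
                        element.VertexElementUsage + " " + element.VertexElementFormat + " @ offset " + element.Offset);
                }
            }
        }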

    Read the article

  • Silverlight Cream for February 06, 2011 -- #1042

    - by Dave Campbell
    In this Issue: Mike Taulty, Timmy Kokke, Laurent Bugnion, Arik Poznanski, Deyan Ginev, Deborah Kurata(-2-), Johnny Tordgeman, Roy Dallal, Jaime Rodriguez, Samuel Jack(-2-), James Ashley. Above the Fold: Silverlight: "Customizing Silverlight properties for Visual Designers" Timmy Kokke WP7: "Back button press when using webbrowser control in WP7" Jaime Rodriguez Expression Blend: "Blend Bits 21–Importing from Photoshop & Illustrator…" Mike Taulty From SilverlightCream.com: Blend Bits 21–Importing from Photoshop & Illustrator… Mike Taulty is up to 21 episodes in his Blend Bits series now, and this one is about using Blend's import capability, such as importing a .psd file with all the layers intact. Customizing Silverlight properties for Visual Designers Timmy Kokke has part 1 of 2 on customizing your Silverlight control properties for design surfaces such as the Visual Studio designer or Expression Blend. An error when installing MVVM Light templates for VS10 Express Laurent Bugnion has released a new version of MVVM Light that resolves a problem with the VS2010 Express version of the templates... no problem with anything else. Reading RSS items on Windows Phone 7 Arik Poznanski has a post up about reading RSS on a WP7, but better yet, he also has code for a helper class that you can grab, plus an explanation of wiring it up. Integrating your Windows Phone unit tests with MSBuild #4: The WP7 Unit Test Application Deyan Ginev has a post up about Telerik's WP7 test app that outputs test results in XML from the emulator so they can be integrated with the MSBuild log. Accessing Data in a Silverlight Application: EF I apparently missed this post by Deborah Kurata last week on bringing data into your Silverlight app via Entity Framework... a good detailed tutorial in VB and C#. Updating Data in a Silverlight Application: EF In Deborah Kurata's latest post, she continues with Entity Framework by demonstrating updates to the database... full source code will be provided in a later post. Fun with Silverlight and SharePoint 2010 Ribbon Control - Part 2 - An In Depth Look At The Ribbon Control Johnny Tordgeman has Part 2 of his Silverlight and SharePoint 2010 Ribbon series up... taking a deep dive into the ribbon... great explanation of the attributes, code included. Geographic Coordinates Systems Roy Dallal has some geo code up that's not necessarily Silverlight, but very cool if you're doing any GIS programming... ya gotta know the coordinate systems! Back button press when using webbrowser control in WP7 Jaime Rodriguez has a post up discussing the much-lamented back-button action in the certification requirements and how to deal with it in a web browser app. Multiplayer-enabling my Windows Phone 7 game: Day 1 Samuel Jack challenged himself to build a WP7 game in 3 days... now he's challenging himself to make it multiplayer in 3 days... this first hour-by-hour post covers research into networking and an Azure server-side solution. Multiplayer-enabling my Windows Phone 7 game: Day 2–Building a UI with XPF Day 2 of Samuel Jack getting the multiplayer portion of his game working in 3 days... this day involves getting up to speed with XPF. How to Hotwire your WP7 Phone Battery Did you realize that if you run your WP7 battery completely down you can't charge it? James Ashley reports that circumstance, and how he resolved it. Stay in the 'Light!
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How do I get FEATURE_LEVEL_9_3 to work with shaders in Direct3D11?

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11-compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the vertex/pixel shader files themselves being incompatible with the lower shader versions, but I'm no expert. My shader files are listed below.

    Vertex shader:

        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        PixelInputType LightVertexShader(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);

            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;

            // Calculate the normal vector against the world matrix only.
            output.normal = mul(input.normal, (float3x3)worldMatrix);

            // Normalize the normal vector.
            output.normal = normalize(output.normal);

            return output;
        }

    Pixel shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        cbuffer LightBuffer
        {
            float4 ambientColor;
            float4 diffuseColor;
            float3 lightDirection;
            float padding;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        float4 LightPixelShader(PixelInputType input) : SV_TARGET
        {
            float4 textureColor;
            float3 lightDir;
            float lightIntensity;
            float4 color;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor = shaderTexture.Sample(SampleType, input.tex);

            // Set the default output color to the ambient light value for all pixels.
            color = ambientColor;

            // Invert the light direction for calculations.
            lightDir = -lightDirection;

            // Calculate the amount of light on this pixel.
            lightIntensity = saturate(dot(input.normal, lightDir));

            if (lightIntensity > 0.0f)
            {
                // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                color += (diffuseColor * lightIntensity);
            }

            // Saturate the final light color.
            color = saturate(color);

            // Multiply the texture pixel and the final diffuse color to get the final pixel color result.
            color = color * textureColor;

            return color;
        }

    Read the article

  • Strange Happenings

    - by MOSSLover
    There are weeks we go about our lives thinking nothing is going to change, nothing will happen. Then there are other weeks when a billion things happen at once. Friday started off very weird for me. I flew into Atlanta and I met some cool people for another SharePoint event. I had some good conversations. Then Saturday hit and my virtual machine bombed in my presentation after the auto updater ran. I was writing code on the board and describing everything in Notepad. As presentations go, I would say it was the best and the worst presentation all wrapped into one. The next day I was in Baltimore and I hung out with my aunt, which was relatively uneventful and great. Then Monday hit, and half my presentations failed or succeeded, and my screen froze, so I started describing the code. I was on top of my game until Monday night. On top of the world. I'm exhausted. I get into Raleigh and one of the craziest stories of my life happens. So my boss has been renting cars through Priceline, and this week I got a different company than the other weeks. The company gives me a Ford Focus and I plug the coordinates of where I want to go into my iPhone. I head out and then I get to the destination hotel (or I thought I did). I go inside; it's the wrong hotel, the other one is a few miles away. I walk outside, hop into the car, and it sounds like a gunshot. Nothing is starting... Am I doing something wrong? No, I'm not; the car is completely dead in the water. I call the rental car facility and they tell me to call roadside assistance, as they are closing for the night. Roadside says they can't give me a new car, but they can get me a jump, and then I have to take it up with the facility. They send me a tow truck to give me a jump, but the guy can't jump the car. He tells me this vehicle was towed about an hour ago. He shows me a copy of a slip from when he towed it. We also notice the rental car company left one of their price scanning guns in the vehicle. I call up roadside and now they are interested in getting me a car, because I need to be onsite tomorrow. They get the manager of the facility on the phone; he apologizes profusely and says he'll be there in 10 minutes. About 30 minutes pass, and he and another dude show up with a Ford Escape with leather interior. At this point I hand him the gun, tell him someone left it in the vehicle, and tell him that I'm not so happy with them. I ask them to comp my rental; they can't due to Priceline, however if I call him again this week he can get me a voucher. It's about 2 am and I'm ready to get to the hotel; I don't make it in the next morning until 10 am. I would say this was a crazy week; all forms of technology are trying to tell me something. What, I have no idea, but we'll see the outcome soon. I feel so weird; tons of change is about to happen. I don't know if it's good or bad. I think this week is some form of omen.

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's look at a picture of what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are being generated backwards, in the wrong direction -- I think. But the problem is a little deeper -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like:

    VERTEX:

        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    FRAGMENT:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting
            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); // max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert );

            vec4 LightingColor = ( DiffuseElement + AmbientElement );
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if( LightCoordinate.w > 0.0 )
            {
                ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor; // Yes, I know this is wrong, but this line produces the wrong effect
            gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st );
        }

    I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing else going in is unusual.
    When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I use the correct command at the end of the GLSL: that's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, and construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.

    Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.

    Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets; project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc. Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting and analysis, organizing daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more. The User Defined Attribute Framework provides the needed infrastructure to add single-row attribute groups like well base attributes (well IDs, well type, well structure and key characterizing measures, and more) and well geometry, and multi-row attribute groups like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partner's sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships. Site Hub can store any type of unstructured data associated with a site. This could be stored directly or in an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipment at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time.
    Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can just be associated with sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information, like for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric area-specific information, like for example economical, political, risk, weather, logistic, traffic information and more. Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context

    Read the article

  • Converting openGl code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer to this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit. This is his code example:

        public Ray GetPickRay()
        {
            int mouseX = Mouse.getX();
            int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();
            float windowWidth = WORLD.Byte56Game.getWidth();
            float windowHeight = WORLD.Byte56Game.getHeight();

            // get the mouse position in screenSpace coords
            double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
            double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));
            double viewRatio = Math.tan(((float) Math.PI / (180.f / ViewAngle) / 2.00f)) * zoomFactor;
            screenSpaceX = screenSpaceX * viewRatio;
            screenSpaceY = screenSpaceY * viewRatio;

            // Find the far and near camera spaces
            Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
            Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);

            // Unproject the 2D window into 3D to see where in 3D we're actually clicking
            Matrix4f tmpView = new Matrix4f(view);
            Matrix4f invView = (Matrix4f) tmpView.invert();
            Vector4f worldSpaceNear = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
            Vector4f worldSpaceFar = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);

            // calculate the ray position and direction
            Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
            Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
            rayDirection.normalise();
            return new Ray(rayPosition, rayDirection);
        }

    All rights reserved to him, of course. This is my DirectX 11 code:

        void GraphicEngine::pickRayVector(float mouseX, float mouseY, XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir)
        {
            float PRVecX, PRVecY;
            float nearPlane = 0.1f;
            float farPlane = 200.0f;
            float viewAngle = 0.4f * 3.14f;

            PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan((viewAngle) / 2);
            PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan((viewAngle) / 2);

            XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
            XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane, PRVecY * farPlane, -farPlane, 1.0f);

            // Transform the 3D ray from view space to world space
            XMMATRIX invMat;
            XMVECTOR matInvDeter;
            invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); // inverse of the view matrix is the world matrix

            XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
            XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

            pickRayInWorldSpacePos = worldSpaceNear;
            pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
            pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
        }

    A couple of notes: The mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have). I hadn't used any far or near plane
    before, so I just made up some arbitrary numbers for them. To my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers. The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet. I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game. Now I'm not sure, but I think the problem lies either within the mouse-to-viewspace conversion, maybe because we use different coordinate systems, or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance. Edit: One more note, my code is in C++.

    Read the article

  • How can I draw an arrow at the edge of the screen pointing to an object that is off screen?

    - by Adam Henderson
    I wish to do what is described in this topic: http://www.allegro.cc/forums/print-thread/283220 I have attempted a variety of the methods mentioned here. First I tried to use the method described by Carrus85: Just take the ratio of the two triangle hypotenuses (it doesn't matter which triangle you use for the other; I suggest point 1 and point 2 as the distance you calculate). This will give you the aspect ratio percentage of the triangle in the corner from the larger triangle. Then you simply multiply deltax by that value to get the x-coordinate offset, and deltay by that value to get the y-coordinate offset. But I could not find a way to calculate how far the object is away from the edge of the screen. I then tried using ray casting (which I have never done before), suggested by 23yrold3yrold: Fire a ray from the center of the screen to the offscreen object. Calculate where on the rectangle the ray intersects. There are your coordinates. I first calculated the hypotenuse of the triangle formed by the difference in x and y positions of the two points. I used this to create a unit vector along that line. I looped along that vector until either the x coordinate or the y coordinate was off the screen. The two current x and y values then form the x and y of the arrow. Here is the code for my ray casting method (written in C++ and Allegro 5):

        void renderArrows(Object* i)
        {
            float x1 = i->getX() + (i->getWidth() / 2);
            float y1 = i->getY() + (i->getHeight() / 2);
            float x2 = screenCentreX;
            float y2 = ScreenCentreY;

            float dx = x2 - x1;
            float dy = y2 - y1;
            float hypotSquared = (dx * dx) + (dy * dy);
            float hypot = sqrt(hypotSquared);

            float unitX = dx / hypot;
            float unitY = dy / hypot;

            float rayX = x2 - view->getViewportX();
            float rayY = y2 - view->getViewportY();
            float arrowX = 0;
            float arrowY = 0;

            bool posFound = false;
            while (posFound == false)
            {
                rayX += unitX;
                rayY += unitY;
                if (rayX <= 0 || rayX >= screenWidth || rayY <= 0 || rayY >= screenHeight)
                {
                    arrowX = rayX;
                    arrowY = rayY;
                    posFound = true;
                }
            }

            al_draw_bitmap(sprite, arrowX - spriteWidth, arrowY - spriteHeight, 0);
        }

    This was relatively successful. Arrows are displayed in the bottom right section of the screen when objects are located above and to the left of the screen, as if the locations where the arrows are drawn have been rotated 180 degrees around the center of the screen. I assumed this was due to the fact that when I was calculating the hypotenuse of the triangle, it would always be positive regardless of whether or not the difference in x or difference in y is negative. Thinking about it, ray casting does not seem like a good way of solving the problem (due to the fact that it involves using sqrt() and a large loop). Any help finding a suitable solution would be greatly appreciated, Thanks Adam
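    For reference, a loop-free alternative (a sketch of my own, not from the linked thread) is to intersect the centre-to-object direction with the screen rectangle directly. Because the signs of dx and dy are preserved, it avoids the 180-degree flip described above, and it needs no sqrt or stepping loop. The names here are illustrative:

        using System;

        static class EdgeArrow
        {
            // Returns the point on the screen border where an arrow pointing at
            // (targetX, targetY) should be drawn. (centerX, centerY) is assumed
            // to be the centre of the screen.
            public static void ArrowPosition(
                float centerX, float centerY,
                float targetX, float targetY,
                float screenWidth, float screenHeight,
                out float arrowX, out float arrowY)
            {
                float dx = targetX - centerX;
                float dy = targetY - centerY;

                // Scale factor that pushes (dx, dy) out to the nearest screen edge.
                // Using Abs only inside the ratio keeps the sign of dx/dy intact.
                float t = Math.Min(
                    (screenWidth / 2f) / Math.Max(Math.Abs(dx), 1e-6f),
                    (screenHeight / 2f) / Math.Max(Math.Abs(dy), 1e-6f));

                arrowX = centerX + dx * t;
                arrowY = centerY + dy * t;
            }
        }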

    Read the article

  • Detect multitouch (two fingers touch) on a sprite to apply pinch zoom behaviour

    - by Tahreem
    I am using AndEngine and want to move, zoom and rotate multiple sprites individually on a scene. I have achieved "move", but for pinch zoom I am unable to get a touch event for two fingers. Below is the code:

        public class Main extends SimpleBaseGameActivity {
            private Camera camera;
            private BitmapTextureAtlas mBitmapTextureAtlas;
            private ITextureRegion mFaceTextureRegion;
            private ITextureRegion mFaceTextureRegion2;
            Sprite face2;

            private static final int CAMERA_WIDTH = 800;
            private static final int CAMERA_HEIGHT = 480;

            @Override
            public EngineOptions onCreateEngineOptions() {
                camera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
                EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED,
                        new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), camera);
                return engineOptions;
            }

            @Override
            protected void onCreateResources() {
                BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/");
                this.mBitmapTextureAtlas = new BitmapTextureAtlas(this.getTextureManager(), 1024, 1600, TextureOptions.NEAREST);
                // BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(this.mBitmapTextureAtlas,
                //         this, "ui_ball_1.png", 0, 0, 1, 1), this.getVertexBufferObjectManager());
                this.mFaceTextureRegion = BitmapTextureAtlasTextureRegionFactory.createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0);
                this.mFaceTextureRegion2 = BitmapTextureAtlasTextureRegionFactory.createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0);
                this.mBitmapTextureAtlas.load();
                this.mEngine.getTextureManager().loadTexture(this.mBitmapTextureAtlas);
            }

            @Override
            protected Scene onCreateScene() {
                this.mEngine.registerUpdateHandler(new FPSLogger());
                final Scene scene = new Scene();
                scene.setBackground(new Background(0.09804f, 0.6274f, 0.8784f));

                final float centerX = (CAMERA_WIDTH - this.mFaceTextureRegion.getWidth()) / 2;
                final float centerY = (CAMERA_HEIGHT - this.mFaceTextureRegion.getHeight()) / 2;

                final Sprite face = new Sprite(centerX, centerY, this.mFaceTextureRegion, this.getVertexBufferObjectManager()) {
                    @Override
                    public boolean onAreaTouched(final TouchEvent pSceneTouchEvent, final float pTouchAreaLocalX, final float pTouchAreaLocalY) {
                        this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2, pSceneTouchEvent.getY() - this.getHeight() / 2);
                        return true;
                    }
                };
                face.setScale(2);
                scene.attachChild(face);
                scene.registerTouchArea(face);

                face2 = new Sprite(200, 200, this.mFaceTextureRegion2, this.getVertexBufferObjectManager()) {
                    @Override
                    public boolean onAreaTouched(final TouchEvent pSceneTouchEvent, final float pTouchAreaLocalX, final float pTouchAreaLocalY) {
                        switch (pSceneTouchEvent.getAction()) {
                            case TouchEvent.ACTION_DOWN:
                                int count = pSceneTouchEvent.getMotionEvent().getPointerCount();
                                for (int i = 0; i < count; i++) {
                                    int id = pSceneTouchEvent.getMotionEvent().getPointerId(i);
                                }
                                break;
                            case TouchEvent.ACTION_MOVE:
                                this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2, pSceneTouchEvent.getY() - this.getHeight() / 2);
                                break;
                            case TouchEvent.ACTION_UP:
                                break;
                        }
                        return true;
                    }
                };
                face2.setScale(2);
                scene.attachChild(face2);
                scene.registerTouchArea(face2);
                scene.setTouchAreaBindingOnActionDownEnabled(true);
                return scene;
            }
        }

    This line: int count = pSceneTouchEvent.getMotionEvent().getPointerCount(); should set count to 2 if I touch the sprite with two fingers; then I can apply zooming functionality (the setScale method) to the sprite by getting the distance between the coordinates of the two fingers. Can anyone help me? Why does it not detect two fingers?
    And without this, how can I zoom the sprite on a two-finger pinch? I am very new to game development; any help would be appreciated. Thanks in advance.

    Read the article

  • Texture errors in CubeMap

    - by shade4159
    I am trying to apply this texture as a cubemap. This is my result: Clearly I am doing something wrong with my texture coordinates, but I cannot for the life of me figure out what. I don't even see a pattern to the texture fragments. They just seem like a jumble of different faces. Can anyone shed some light on this?

    Vertex shader:

        #version 400

        in vec4 vPosition;
        in vec3 inTexCoord;
        smooth out vec3 texCoord;
        uniform mat4 projMatrix;

        void main()
        {
            texCoord = inTexCoord;
            gl_Position = projMatrix * vPosition;
        }

    My fragment shader:

        #version 400

        smooth in vec3 texCoord;
        out vec4 fColor;
        uniform samplerCube textures;

        void main()
        {
            fColor = texture(textures, texCoord);
        }

    Vertices of the cube:

        point4 worldVerts[8] = {
            vec4(  15,  15,  15, 1 ),
            vec4( -15,  15,  15, 1 ),
            vec4( -15,  15, -15, 1 ),
            vec4(  15,  15, -15, 1 ),
            vec4( -15, -15,  15, 1 ),
            vec4(  15, -15,  15, 1 ),
            vec4(  15, -15, -15, 1 ),
            vec4( -15, -15, -15, 1 )
        };

    Cube rendering:

        void worldCube(point4* verts, int& Index, point4* points, vec3* texVerts)
        {
            quadInv( verts[0], verts[1], verts[2], verts[3], 1, Index, points, texVerts);
            quadInv( verts[6], verts[3], verts[2], verts[7], 2, Index, points, texVerts);
            quadInv( verts[4], verts[5], verts[6], verts[7], 3, Index, points, texVerts);
            quadInv( verts[4], verts[1], verts[0], verts[5], 4, Index, points, texVerts);
            quadInv( verts[5], verts[0], verts[3], verts[6], 5, Index, points, texVerts);
            quadInv( verts[4], verts[7], verts[2], verts[1], 6, Index, points, texVerts);
        }

    Backface function (since this is the inside of the cube):

        void quadInv( const point4& a, const point4& b, const point4& c, const point4& d, int& Index, point4* points, vec3* texVerts)
        {
            quad( a, d, c, b, Index, points, texVerts, a.to_3(), b.to_3(), c.to_3(), d.to_3());
        }

    And the quad drawing function:

        void quad( const point4& a, const point4& b, const point4& c, const point4& d,
                   int& Index, point4* points, vec3* texVerts,
                   const vec3& tex_a, const vec3& tex_b, const vec3& tex_c, const vec3& tex_d)
        {
            texVerts[Index] = tex_a.normalized(); points[Index] = a; Index++;
            texVerts[Index] = tex_b.normalized(); points[Index] = b; Index++;
            texVerts[Index] = tex_c.normalized(); points[Index] = c; Index++;
            texVerts[Index] = tex_a.normalized(); points[Index] = a; Index++;
            texVerts[Index] = tex_c.normalized(); points[Index] = c; Index++;
            texVerts[Index] = tex_d.normalized(); points[Index] = d; Index++;
        }

    Edit: I forgot to mention that in the image, the camera is pointed directly at the back face of the cube. You can kind of see the diagonals leading out of the corners, if you squint.

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works. This is how things worked before extrapolation. However, after I implement my extrapolation, the collision routine breaks. I am assuming this is because it is acting upon the new coordinate that has been produced by the extrapolation routine (which is situated in my render call). How can I correct this behaviour? I've tried putting an extra collision check just after extrapolation - this does seem to clear up a lot of the problems, but I've ruled this out because putting logic into my rendering is out of the question. I've also tried making a copy of the sprite's X position, extrapolating that and drawing using that rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with it. I've found a couple of similar questions on here, but the answers haven't helped me. This is my extrapolation code:

        public void onDrawFrame(GL10 gl)
        {
            // Set/re-set loop counter back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip)
            {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            extrapolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks;
            render(extrapolation);
        }

    Applying extrapolation:

        render(float extrapolation)
        {
            // This example shows extrapolation for the X axis only (spriteScreenY is assumed to be valid).
            extrapolatedPosX = spriteGridX + (SpriteXVelocity * dt) * extrapolation;
            spriteScreenPosX = extrapolatedPosX * screenWidth;
            drawSprite(spriteScreenPosX, spriteScreenY);
        }

    Edit: As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with... this has its own problems. Firstly, regardless of the copying, when the sprite is moving it's super smooth; when it stops, it wobbles slightly left/right, as it's still extrapolating its position based on the time. Is this normal behavior, and can we 'turn it off' when the sprite stops? I've tried having flags for left/right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help. If the user is pressing, say, the right button and the sprite is moving right, when it hits a wall, if the user continues to hold the right button down, the sprite will keep animating to the right while being stopped by the wall (therefore not actually moving). However, because the right flag is still set, and also because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (not the player) that the sprite is still moving, and therefore extrapolation continues. So what the player sees is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do its thing... Hope this helps.
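    For what it's worth, one common pattern here (a minimal C# sketch of my own, assuming a fixed-timestep loop like the one above; none of these names are from the post) is to keep extrapolation strictly read-only for rendering and to zero the velocity when the logic step resolves a collision, so the drawn position can never be extrapolated into a wall and stops wobbling when the sprite stops:

        // Hypothetical sketch: the logic step owns position/velocity;
        // the render pass only reads them and never writes back.
        class SpriteState
        {
            public float X;          // authoritative position, updated at fixed ticks
            public float VelocityX;  // units per tick; zeroed by the collision step

            // Called from the fixed-timestep update.
            public void UpdateLogic(float wallX)
            {
                X += VelocityX;
                if (X > wallX)       // collision resolution against a wall at wallX
                {
                    X = wallX;
                    VelocityX = 0;   // stopped: the extrapolation below now adds nothing
                }
            }

            // Called from the render pass with the current extrapolation factor.
            public float DrawX(float extrapolation)
            {
                return X + VelocityX * extrapolation;
            }
        }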

    Read the article

  • Doubts about several best practices for rest api + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a restful api for business intelligence. It may not be limited to a restful api, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which has business logic local to the object). The api will likely have many calls, as it is a long-term project. While thinking about the design, I recalled a few best practices. 1) Use command objects at the controller layer (I'm using Spring MVC). 2) Use DTOs at the service layer. 3) Validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations. 1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation-based validation can be done using this approach, sure. But what if I have two requests that take the same parameters but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh. 2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (if, say, requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an api be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the apis that manipulate these objects. At this point, however, the number of api methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large api with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here? 3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller, if the service is going to do exactly the same (and more)? Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters, then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in Spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a rest api and not forms, the api parameters will probably map directly to service parameters.) I also have a question about unchecked vs checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used, they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just always use unchecked exceptions and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector View and Projection matrix:

        Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000);
        Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position, and Target is the projector's target. This seems to be OK. I am drawing a full screen quad with this shader:

        float4x4 InvViewProjection;

        texture2D DepthTexture;
        texture2D NormalTexture;
        texture2D ProjectorTexture;
        float4x4 ProjectorViewProjection;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D projectorSampler = sampler_state
        {
            texture = <ProjectorTexture>;
            AddressU = Clamp;
            AddressV = Clamp;
        };

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = input.Position;
            output.PositionCopy = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);
            //return float4(depth.r, 0, 0, 1);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // compute projection
            float3 projection = tex2D(projectorSampler, postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
            return float4(projection, 1);
        }

    In the first part of the pixel shader, the position is recovered from the G-buffer (I am using this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: The green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for advice and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.
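    Regarding the EDIT above: one hedged sketch (the names and numbers here are mine, not from the post) of sizing the projector's orthographic volume in world units, rather than multiplying the texture's pixel dimensions by a magic Scale factor. An orthographic frustum defined in world units keeps the projected footprint a fixed size on the geometry:

        // Hypothetical sketch: define the projection footprint in world units.
        Vector3 position = new Vector3(0, 50, 0);   // projector position (assumed)
        Vector3 target = Vector3.Zero;              // point being projected onto (assumed)
        float worldWidth = 20f;                     // footprint width in world units
        float worldHeight = 20f;                    // footprint height in world units

        Matrix view = Matrix.CreateLookAt(position, target, Vector3.Up);
        Matrix projection = Matrix.CreateOrthographicOffCenter(
            -worldWidth / 2f, worldWidth / 2f,
            -worldHeight / 2f, worldHeight / 2f,
            1f, 1000f);

        // Combined matrix passed to the ProjectorViewProjection shader parameter.
        Matrix projectorViewProjection = view * projection;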

    Read the article

  • Changes in the Maven Embedded GlassFish plugin

    - by Romain Grecourt
    The plugin changed its Maven coordinates (a.k.a. GAV) over time: version <= 3.1.1 is available under org.glassfish:maven-embedded-glassfish-plugin, and version >= 3.1.2 is available under org.glassfish.embedded:maven-embedded-glassfish-plugin. The goal "embedded-glassfish:run" has changed the way it reads the deployment configuration in the latest version, 4.0. Projects using previous versions of the plugin will stop working with this goal. Here is an example of the "old" behavior:

        <plugin>
          <groupId>org.glassfish.embedded</groupId>
          <artifactId>maven-embedded-glassfish-plugin</artifactId>
          <version>3.1.2.2</version>
          <configuration>
            <app>target/${project.build.finalName}.war</app>
            <contextRoot>/</contextRoot>
            <goalPrefix>embedded-glassfish</goalPrefix>
            <autoDelete>true</autoDelete>
            <port>8080</port>
          </configuration>
        </plugin>

    The new behavior is as follows:

        <plugin>
          <groupId>org.glassfish.embedded</groupId>
          <artifactId>maven-embedded-glassfish-plugin</artifactId>
          <version>4.0</version>
          <configuration>
            <goalPrefix>embedded-glassfish</goalPrefix>
            <autoDelete>true</autoDelete>
            <port>8080</port>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>deploy</goal>
              </goals>
              <configuration>
                <app>target/${project.build.finalName}.war</app>
                <contextRoot>/</contextRoot>
              </configuration>
            </execution>
          </executions>
        </plugin>

    The new version looks for an execution of the deploy goal and its associated configuration when running the goal 'run'. Both versions allow you to run the latest glassfish-embedded jar; you only need to add it as a plugin dependency:

        <plugin>
          [...]
          <dependencies>
            <dependency>
              <groupId>org.glassfish.main.extras</groupId>
              <artifactId>glassfish-embedded-all</artifactId>
              <version>4.0</version>
            </dependency>
          </dependencies>
        </plugin>

    Read the article

  • Memory is full with vertex buffer

    - by Christian Frantz
    I'm having a pretty strange problem that I didn't think I'd run into. I was finally able to store a 50x50 grid in one vertex buffer, in hopes of better performance. Before, each cube had an individual vertex buffer, and with four 50x50 grids this slowed down my game tremendously. But it still ran. With four 50x50 grids and my new code, that's only 4 vertex buffers. With those 4 vertex buffers, I get a memory error. When I load the game with 1 grid, it takes forever to load, whereas my previous version started up right away. So I don't know if I'm storing chunks wrong or what, but it has stumped me -.-

        for (int x = 0; x < 50; x++)
        {
            for (int z = 0; z < 50; z++)
            {
                for (int y = 0; y <= map[x, z]; y++)
                {
                    SetUpVertices();
                    SetUpIndices();
                    cubes.Add(new Cube(device, new Vector3(x, map[x, z] - y, z), grass));
                }
            }
        }

        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionTexture), vertices.Count(), BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        indexBuffer = new IndexBuffer(device, typeof(short), indices.Count(), BufferUsage.WriteOnly);
        indexBuffer.SetData(indices.ToArray());

    That's how they're stored. The array I'm reading from is a byte array which defines the coordinates of my map. Now with my old version, I used the same loading from an array, so that hasn't changed. The only difference is the one vertex buffer instead of 2500 for a 50x50 grid. cubes is just a normal list that holds all my cubes for the vertex buffer. Another thing that just came to mind would be my draw calls. If I'm setting an effect for each cube in my cube list, that's probably going to take a lot of memory. How can I avoid doing this? I need the foreach method to set my cubes to the right position:

        foreach (Cube block in cube.cubes)
        {
            effect.VertexColorEnabled = false;
            effect.TextureEnabled = true;

            Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
            Matrix scale = Matrix.CreateScale(1f);
            Matrix translate = Matrix.CreateTranslation(block.cubePosition);
            effect.World = center * scale * translate;

            effect.View = cam.view;
            effect.Projection = cam.proj;
            effect.FogEnabled = false;
            effect.FogColor = Color.CornflowerBlue.ToVector3();
            effect.FogStart = 1.0f;
            effect.FogEnd = 50.0f;

            cube.Draw(effect);
            noc++;
        }
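    If the world positions are already baked into the vertices when the combined buffer is built (which the per-cube SetUpVertices() calls above suggest, though the loop doesn't show it), then a single effect setup and a single draw call can replace the per-cube foreach entirely. A minimal sketch under that assumption, reusing the names from the post:

        // Hypothetical sketch: one effect setup and one draw call for the whole grid.
        // Assumes cube positions live in the vertex data, so World stays at identity.
        effect.TextureEnabled = true;
        effect.VertexColorEnabled = false;
        effect.World = Matrix.Identity;
        effect.View = cam.view;
        effect.Projection = cam.proj;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.SetVertexBuffer(vertexBuffer);
            device.Indices = indexBuffer;
            device.DrawIndexedPrimitives(
                PrimitiveType.TriangleList,
                0, 0,
                vertices.Count,       // total vertices in the combined buffer
                0,
                indices.Count / 3);   // one triangle per three indices
        }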

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (via DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as

    [Test]
    public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount()
    {
        ...
    }

    ... and I wound up with maybe a dozen permutations of this theme: the user already being logged in locally or not, the external login succeeding or failing, and so on. Not only were the method names unwieldy, but every test needed a setup that shared parts with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like

    act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); };

    context["The user is already logged in"] = () =>
    {
        before = () => identity.Setup(x => x.IsAuthenticated).Returns(true);

        context["The external login succeeds"] = () =>
        {
            before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>()))
                .Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>()));

            context["External login already exists for current user"] = () =>
            {
                before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                it["Should add 'login successful' alert"] = () =>
                {
                    var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                    alerts[0].Message.should_be_same("Login successful");
                    alerts[0].AlertType.should_be(AlertType.Success);
                };

                it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
            };

            context["External login already exists for another user"] = () =>
            {
                before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                it["Adds an error alert"] = () =>
                {
                    var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                    alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account");
                    alerts[0].AlertType.should_be(AlertType.Error);
                };

                it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
            };

    This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer:

    public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model)
    {
        throw new NotImplementedException();
    }

    public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId)
    {
        throw new NotImplementedException();
    }

    public string GetLocalUserName(string provider, string providerUserId)
    {
        throw new NotImplementedException();
    }

    I have no idea what in the world to name the test class or the test methods, or even whether I should fold the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering.
    I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointers in the right direction. I would even welcome "your question is invalid"-type answers, as long as some explanation is provided.
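    For reference, one common answer is to spec the service layer in its own class, named for the behavior under test rather than for a user story, and to keep the controller specs separate. The following is only a rough sketch of that idea; AuthService, IUserRepository, and AddExternalLogin are hypothetical names invented for illustration, not types from the post:

    using Moq;
    using NSpec;

    // Sketch only: an nspec class per service behavior, mocking the repository
    // layer underneath. All type and member names here are hypothetical.
    class describe_AuthService_external_logins : nspec
    {
        AuthService service;
        Mock<IUserRepository> repository;

        void before_each()
        {
            repository = new Mock<IUserRepository>();
            service = new AuthService(repository.Object);
        }

        void when_associating_an_external_login_with_a_user()
        {
            act = () => service.AssociateExternalLoginWithUser("bob", "google", "provider-user-id");

            it["persists the association through the repository"] = () =>
                repository.Verify(r => r.AddExternalLogin("bob", "google", "provider-user-id"));
        }
    }

    Under such a split, a single feature ends up covered twice, once against the service contract and once in the controller specs, and neither layer's setup leaks into the other.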

    Read the article

  • Raphael js text positioning: centering text in a circle

    - by j-man86
    Hey everyone, I am using Raphael js to draw circled numbers. The problem is that each number has a different width/height, so using one set of coordinates to center the text isn't working. The text displays differently between IE, FF, and Safari. Is there a dynamic way to find the height/width of the number and center it accordingly? Here is my test page: http://jesserosenfield.com/fluid/test.html and my code:

    function drawcircle(div, text) {
        var paper = Raphael(div, 26, 26); //<<
        var circle = paper.circle(13, 13, 10.5);
        circle.attr("stroke", "#f1f1f1");
        circle.attr("stroke-width", 2);
        var text = paper.text(12, 13, text); //<<
        text.attr({'font-size': 15, 'font-family': 'FranklinGothicFSCondensed-1, FranklinGothicFSCondensed-2'});
        text.attr("fill", "#f1f1f1");
    }

    window.onload = function () {
        drawcircle("c1", "1");
        drawcircle("c2", "2");
        drawcircle("c3", "3");
    };

    Thanks very much!

    Read the article

  • Can't create more than one overlay in Seadragon

    - by XGreen
    Hi everyone, I am trying to add overlays to a Seadragon map I am making, but for some reason that I cannot figure out, Seadragon ignores all my overlays except the first one. Any help with this is much appreciated.

    var viewer = null;

    function init() {
        Seadragon.Config.autoHideControls = false;
        viewer = new Seadragon.Viewer("container");
        viewer.addEventListener("open", addOverlays);
        viewer.addControl(makeControl(), Seadragon.ControlAnchor.TOP_RIGHT);
        $(viewer.getNavControl()).parent().parent().css({ 'top': 10, 'right': 10 });
        viewer.openDzi("_assets/Mapdata/dzc_output.xml");
    }

    function makeControl() {
        var control = document.createElement("a");
        var controlText = document.createTextNode("");
        control.href = "#"; // so browser shows it as link
        control.className = "control";
        control.appendChild(controlText);
        Seadragon.Utils.addEvent(control, "click", onControlClick);
        return control;
    }

    function onControlClick(event) {
        Seadragon.Utils.cancelEvent(event); // don't process link
        if (!viewer.isOpen()) {
            return;
        }
        // These are the coordinates of europe on this map
        var x = 0.5398693914203284;
        var y = 0.21155952391206562;
        var z = 5;
        viewer.viewport.panTo(new Seadragon.Point(x, y));
        viewer.viewport.zoomTo(z);
        viewer.viewport.ensureVisible();
    }

    function addOverlays(viewer) {
        drawer = viewer.drawer;
        var img = document.createElement("img");
        img.src = "_assets/Images/pushpin.png";
        $(img).addClass('pushPin');
        var overlays = [
            { elmt: img, point: new Seadragon.Point(0.51, 0.22) },
            { elmt: img, point: new Seadragon.Point(0.20, 0.13) }
        ];
        for (var i = 0; i < overlays.length; i++) {
            drawer.addOverlay(overlays[i].elmt, overlays[i].point);
        }
    }

    Seadragon.Utils.addEvent(window, "load", init);

    Read the article

  • Interpolation and Morphing of an image in labview and/or openCV

    - by Marc
    I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map

    proj_x, proj_y <-- cam_x, cam_y for scattered point pairs

    My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:

    [pgridx, pgridy] = meshgrid(allprojxpts, allprojypts);
    fitcx = griddata(proj_x, proj_y, cam_x, pgridx, pgridy);
    fitcy = griddata(proj_x, proj_y, cam_y, pgridx, pgridy);

    and the reverse for the camera-to-projector mapping. Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which apparently is too much for LabVIEW to handle). I then started looking through OpenCV and found the cvRemap function. Unfortunately, that function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick. So my question is one of the following:

    1) How can I interpolate from scattered points to a grid in OpenCV? That is, given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)?

    2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented either in LabVIEW or OpenCV.

    Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation".

    Read the article

  • iPhone SDK UIScrollView doesn't get touch events after moving it

    - by newbie
    Hi! I'm subclassing UIScrollView, and on start I fill this ShowsScrollView with some items. After filling it, I set up the frame and contentSize of the ShowsScrollView. Everything works fine at this point: I get touch events, and scrolling works. But after rotating to landscape, I change the x and y coordinates of the ShowsScrollView frame to move it from the bottom to the top right corner. Then I resize it (change the width and height of the ShowsScrollView frame) and reorder the items in the scroll view. At the end I set a new contentSize. Now I get touch events only on the first 1/4 of the scroll view; scrolling likewise works on only 1/4 of the scroll view, yet it scrolls all the items. After all these actions I write a log:

    NSLog(@"ViewController: setLandscape finished: size: %f, %f content: %f,%f", scrollView.frame.size.width, scrollView.frame.size.height, scrollView.contentSize.width, scrollView.contentSize.height);

    The values are correct:

    ViewController: setLandscape finished: size: 390.000000, 723.000000 content: 390.000000,950.000000

    On rotating back to portrait, I move and resize everything back, and everything works fine. Please help!

    Read the article

  • How do I MVVM-ize this MouseDown code in my WPF 3D app?

    - by DanM
    In my view, I have:

    <UserControl x:Class ... MouseDown="UserControl_MouseDown">
        <Viewport3D Name="Viewport" Grid.Column="0">
            ...
        </Viewport3D>
    </UserControl>

    In my code-behind, I have:

    private void UserControl_MouseDown(object sender, MouseButtonEventArgs e)
    {
        ((MapPanelViewModel)DataContext).OnMouseDown(e, Viewport);
    }

    And in my view-model, I have:

    public void OnMouseDown(MouseEventArgs e, Viewport3D viewport)
    {
        var range = new LineRange();
        var isValid = ViewportInfo.Point2DtoPoint3D(viewport, e.GetPosition(viewport), out range);
        if (!isValid)
            MouseCoordinates = "(no data)";
        else
        {
            var point3D = range.PointFromZ(0);
            var point = ViewportInfo.Point3DtoPoint2D(viewport, point3D);
            MouseCoordinates = e.GetPosition(viewport).ToString() + "\n" + point3D + "\n" + point;
        }
    }

    I really don't have a good sense of how to handle mouse events with MVVM. I always just end up putting them in the code-behind, casting the DataContext to SomeViewModel, and passing the MouseEventArgs on to a handler in my view-model. That's bad enough already, but in this case I'm actually passing in a control (a Viewport3D), which is necessary for translating coordinates between 2D and 3D. Any suggestions on how to make this more in tune with MVVM?
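    One pattern that keeps the viewmodel free of control references is to do the 2D-to-3D translation in the view, since it is inherently a view concern, and expose only a command on the viewmodel. A rough sketch of that idea follows, reusing the question's LineRange and ViewportInfo helpers; MouseDownCommand is a hypothetical ICommand added for illustration:

    // Sketch: the code-behind stays, but it only translates coordinates.
    // The viewmodel receives a Point3D through a command instead of the
    // MouseEventArgs and the Viewport3D. MouseDownCommand is hypothetical.
    private void UserControl_MouseDown(object sender, MouseButtonEventArgs e)
    {
        var range = new LineRange();
        if (!ViewportInfo.Point2DtoPoint3D(Viewport, e.GetPosition(Viewport), out range))
            return;

        var vm = (MapPanelViewModel)DataContext;
        var point3D = range.PointFromZ(0);
        if (vm.MouseDownCommand.CanExecute(point3D))
            vm.MouseDownCommand.Execute(point3D);
    }

    The viewmodel then works purely on Point3D data and stays testable without a Viewport3D, while the code-behind shrinks to a thin translation shim, which is generally considered acceptable under MVVM.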

    Read the article

  • Design approach, string table data, variables, stl memory usage

    - by howieh
    I have an old structure class like this:

    typedef vector<vector<string>> VARTYPE_T;

    which works as a single variable. This variable can hold anything from one value, to a list, to table-like data. Most values are long, double, string, or double[3] for coordinates (x, y, z); I just convert them as needed. The variables are managed in a map like this, where the string holds the variable name:

    map<string, VARTYPE_T *>

    Sure, they are wrapped in classes. I also have a tree of nodes, where each node can hold one of these variable maps. Using VS 2008 SP1 for this, I detect a lot of memory fragmentation. Checking against STLport, STLport seemed to be faster (20%) and to use less memory (30%, for my test cases). So the question is: what is the best implementation to meet this requirement with fast and properly used memory? Should I write my own allocator, such as a pool allocator? How would you do this? Thanks in advance, Howie

    Read the article

  • Manual drag-drop operations in Flex

    - by Yarin
    This is a two-part problem:

    A) I'm implementing several irregular drag-drop operations in Flex (e.g. a DataGrid ItemRenderer into a Tree). My preference was modifying DragManager operations to meet my needs, and in fact using DragManager allows me to do everything I need, but I'm having serious issues with performance. For example, dragging anything over a many-columned DataGrid, whether the drag was initiated with DragManager.doDrag or just using native ListBase drag-drop functionality, slows the drag movement to a crawl. This happens even if the DataGrid is disabled and not listening for any move/drag events. On the other hand, if the drag is initiated by calling .startDrag() on the Sprite, the drag is smooth and performs great over DataGrids and everything else. So part A is: is there a reason why .startDrag() operations work so well, while drags initiated through DragManager.doDrag suffer so badly when over certain components?

    B) If indeed the solution is to handle drag-drops using .startDrag(), how would I go about determining which component the mouse is over when the drag is released? In my example, the dragged object is brought up to the top level of the display list and is moved around in stage coordinates. mouseMove and mouseOver events don't fire on the components I'm dragging over, because the mouse is constantly over the dragged component, so I would need some sort of stage-coordinate to visible-component-at-that-coordinate conversion. Any thoughts on this? Thanks a lot! - Yarin

    Read the article
