Search Results

Search found 1909 results on 77 pages for 'adjacency matrix'.

Page 31 of 77

  • How do I draw a 2D plane and rotate the camera (to act as a board) in a 3D XNA game?

    - by Mech0z
    I am trying to create a simple board game, but the 3D part of this is really killing me. From what I can gather I have created a plane, but it never moves even though I turn the camera. That partially makes sense, since I only apply the camera matrices to one 3D model, yet in my head turning the camera should affect ALL my models. With this code the camera only "cares" about the 3D cylinder; the plane stays completely still: private void OnDraw(object sender, GameTimerEventArgs e) { SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue); foreach (ModelMesh mesh in cylinderModel.Meshes) { foreach (BasicEffect effect in mesh.Effects) { //effect.World = Matrix.CreateRotationX((float)e.TotalTime.TotalSeconds * 2); effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up); effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f); effect.EnableDefaultLighting(); } mesh.Draw(); } //cameraPosition.Z -= 5.0f; _effect.World = Matrix.CreateRotationZ((MathHelper.ToRadians(((float)e.TotalTime.Milliseconds / 2) % 360))); foreach (EffectPass pass in _effect.CurrentTechnique.Passes) { pass.Apply(); SharedGraphicsDeviceManager.Current.GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, _vertices, 0, 1, VertexPositionColor.VertexDeclaration); } } Is there a way to get the camera to affect all models?

    Read the article

  • WPF Blurry Images - Bitmap Class

    - by Luke
    I am using the following sample at http://blogs.msdn.com/dwayneneed/archive/2007/10/05/blurry-bitmaps.aspx within VB.NET. The code is shown below. I am having a problem when my application loads the CPU is pegging 50-70%. I have determined that the problem is with the Bitmap class. The OnLayoutUpdated() method is calling the InvalidateVisual() continously. This is because some points are not returning as equal but rather, Point(0.0,-0.5) Can anyone see any bugs within this code or know a better implmentation for pixel snapping a Bitmap image so it is not blurry? p.s. The sample code was in C#, however I believe that it was converted correctly. Imports System Imports System.Collections.Generic Imports System.Windows Imports System.Windows.Media Imports System.Windows.Media.Imaging Class Bitmap Inherits FrameworkElement ' Use FrameworkElement instead of UIElement so Data Binding works as expected Private _sourceDownloaded As EventHandler Private _sourceFailed As EventHandler(Of ExceptionEventArgs) Private _pixelOffset As Windows.Point Public Sub New() _sourceDownloaded = New EventHandler(AddressOf OnSourceDownloaded) _sourceFailed = New EventHandler(Of ExceptionEventArgs)(AddressOf OnSourceFailed) AddHandler LayoutUpdated, AddressOf OnLayoutUpdated End Sub Public Shared ReadOnly SourceProperty As DependencyProperty = DependencyProperty.Register("Source", GetType(BitmapSource), GetType(Bitmap), New FrameworkPropertyMetadata(Nothing, FrameworkPropertyMetadataOptions.AffectsRender Or FrameworkPropertyMetadataOptions.AffectsMeasure, New PropertyChangedCallback(AddressOf Bitmap.OnSourceChanged))) Public Property Source() As BitmapSource Get Return DirectCast(GetValue(SourceProperty), BitmapSource) End Get Set(ByVal value As BitmapSource) SetValue(SourceProperty, value) End Set End Property Public Shared Function FindParentWindow(ByVal child As DependencyObject) As Window Dim parent As DependencyObject = VisualTreeHelper.GetParent(child) 'Check if this is the end of the tree If parent Is Nothing Then Return Nothing End If Dim parentWindow As Window = TryCast(parent, Window) If parentWindow IsNot Nothing Then Return parentWindow Else ' Use recursion until it reaches a Window Return FindParentWindow(parent) End If End Function Public Event BitmapFailed As EventHandler(Of ExceptionEventArgs) ' Return our measure size to be the size needed to display the bitmap pixels. ' ' Use MeasureOverride instead of MeasureCore so Data Binding works as expected. 
' Protected Overloads Overrides Function MeasureCore(ByVal availableSize As Size) As Size Protected Overloads Overrides Function MeasureOverride(ByVal availableSize As Size) As Size Dim measureSize As New Size() Dim bitmapSource As BitmapSource = Source If bitmapSource IsNot Nothing Then Dim ps As PresentationSource = PresentationSource.FromVisual(Me) If Me.VisualParent IsNot Nothing Then Dim window As Window = window.GetWindow(Me.VisualParent) If window IsNot Nothing Then ps = PresentationSource.FromVisual(window.GetWindow(Me.VisualParent)) ElseIf FindParentWindow(Me) IsNot Nothing Then ps = PresentationSource.FromVisual(FindParentWindow(Me)) End If End If ' If ps IsNot Nothing Then Dim fromDevice As Matrix = ps.CompositionTarget.TransformFromDevice Dim pixelSize As New Vector(bitmapSource.PixelWidth, bitmapSource.PixelHeight) Dim measureSizeV As Vector = fromDevice.Transform(pixelSize) measureSize = New Size(measureSizeV.X, measureSizeV.Y) Else measureSize = New Size(bitmapSource.PixelWidth, bitmapSource.PixelHeight) End If End If Return measureSize End Function Protected Overloads Overrides Sub OnRender(ByVal dc As DrawingContext) Dim bitmapSource As BitmapSource = Me.Source If bitmapSource IsNot Nothing Then _pixelOffset = GetPixelOffset() ' Render the bitmap offset by the needed amount to align to pixels. dc.DrawImage(bitmapSource, New Rect(_pixelOffset, DesiredSize)) End If End Sub Private Shared Sub OnSourceChanged(ByVal d As DependencyObject, ByVal e As DependencyPropertyChangedEventArgs) Dim bitmap As Bitmap = DirectCast(d, Bitmap) Dim oldValue As BitmapSource = DirectCast(e.OldValue, BitmapSource) Dim newValue As BitmapSource = DirectCast(e.NewValue, BitmapSource) If ((oldValue IsNot Nothing) AndAlso (bitmap._sourceDownloaded IsNot Nothing)) AndAlso (Not oldValue.IsFrozen AndAlso (TypeOf oldValue Is BitmapSource)) Then RemoveHandler DirectCast(oldValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded RemoveHandler DirectCast(oldValue, BitmapSource).DownloadFailed, bitmap._sourceFailed ' ((BitmapSource)newValue).DecodeFailed -= bitmap._sourceFailed; // 3.5 End If If ((newValue IsNot Nothing) AndAlso (TypeOf newValue Is BitmapSource)) AndAlso Not newValue.IsFrozen Then AddHandler DirectCast(newValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded AddHandler DirectCast(newValue, BitmapSource).DownloadFailed, bitmap._sourceFailed ' ((BitmapSource)newValue).DecodeFailed += bitmap._sourceFailed; // 3.5 End If End Sub Private Sub OnSourceDownloaded(ByVal sender As Object, ByVal e As EventArgs) InvalidateMeasure() InvalidateVisual() End Sub Private Sub OnSourceFailed(ByVal sender As Object, ByVal e As ExceptionEventArgs) Source = Nothing ' setting a local value seems scetchy... RaiseEvent BitmapFailed(Me, e) End Sub Private Sub OnLayoutUpdated(ByVal sender As Object, ByVal e As EventArgs) ' This event just means that layout happened somewhere. However, this is ' what we need since layout anywhere could affect our pixel positioning. Dim pixelOffset As Windows.Point = GetPixelOffset() If Not AreClose(pixelOffset, _pixelOffset) Then InvalidateVisual() End If End Sub ' Gets the matrix that will convert a Windows.Point from "above" the ' coordinate space of a visual into the the coordinate space ' "below" the visual. 
Private Function GetVisualTransform(ByVal v As Visual) As Matrix If v IsNot Nothing Then Dim m As Matrix = Matrix.Identity Dim transform As Transform = VisualTreeHelper.GetTransform(v) If transform IsNot Nothing Then Dim cm As Matrix = transform.Value m = Matrix.Multiply(m, cm) End If Dim offset As Vector = VisualTreeHelper.GetOffset(v) m.Translate(offset.X, offset.Y) Return m End If Return Matrix.Identity End Function Private Function TryApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean, ByVal throwOnError As Boolean, ByRef success As Boolean) As Windows.Point success = True If v IsNot Nothing Then Dim visualTransform As Matrix = GetVisualTransform(v) If inverse Then If Not throwOnError AndAlso Not visualTransform.HasInverse Then success = False Return New Windows.Point(0, 0) End If visualTransform.Invert() End If Point = visualTransform.Transform(Point) End If Return Point End Function Private Function ApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean) As Windows.Point Dim success As Boolean = True Return TryApplyVisualTransform(Point, v, inverse, True, success) End Function Private Function GetPixelOffset() As Windows.Point Dim pixelOffset As New Windows.Point() Dim ps As PresentationSource = PresentationSource.FromVisual(Me) If ps IsNot Nothing Then Dim rootVisual As Visual = ps.RootVisual ' Transform (0,0) from this element up to pixels. pixelOffset = Me.TransformToAncestor(rootVisual).Transform(pixelOffset) pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, False) pixelOffset = ps.CompositionTarget.TransformToDevice.Transform(pixelOffset) ' Round the origin to the nearest whole pixel. pixelOffset.X = Math.Round(pixelOffset.X) pixelOffset.Y = Math.Round(pixelOffset.Y) ' Transform the whole-pixel back to this element. pixelOffset = ps.CompositionTarget.TransformFromDevice.Transform(pixelOffset) pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, True) pixelOffset = rootVisual.TransformToDescendant(Me).Transform(pixelOffset) End If Return pixelOffset End Function Private Function AreClose(ByVal Point1 As Windows.Point, ByVal Point2 As Windows.Point) As Boolean Return AreClose(Point1.X, Point2.X) AndAlso AreClose(Point1.Y, Point2.Y) End Function Private Function AreClose(ByVal value1 As Double, ByVal value2 As Double) As Boolean If value1 = value2 Then Return True End If Dim delta As Double = value1 - value2 Return ((delta < 0.00000153) AndAlso (delta > -0.00000153)) End Function End Class

    Read the article

  • Logic error in Gaussian elimination code

    - by iwanttoprogram
    Logic error problem with the Gaussian Elimination code...This code was from my Numerical Methods text in 1990's. The code is typed in from the book- not producing correct output... Sample Run: SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS USING GAUSSIAN ELIMINATION This program uses Gaussian Elimination to solve the system Ax = B, where A is the matrix of known coefficients, B is the vector of known constants and x is the column matrix of the unknowns. Number of equations: 3 Enter elements of matrix [A] A(1,1) = 0 A(1,2) = -6 A(1,3) = 9 A(2,1) = 7 A(2,2) = 0 A(2,3) = -5 A(3,1) = 5 A(3,2) = -8 A(3,3) = 6 Enter elements of [b] vector B(1) = -3 B(2) = 3 B(3) = -4 SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS The solution is x(1) = 0.000000 x(2) = -1.#IND00 x(3) = -1.#IND00 Determinant = -1.#IND00 Press any key to continue . . . The code as copied from the text... //Modified Code from C Numerical Methods Text- June 2009 #include <stdio.h> #include <math.h> #define MAXSIZE 20 //function prototype int gauss (double a[][MAXSIZE], double b[], int n, double *det); int main(void) { double a[MAXSIZE][MAXSIZE], b[MAXSIZE], det; int i, j, n, retval; printf("\n \t SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS"); printf("\n \t USING GAUSSIAN ELIMINATION \n"); printf("\n This program uses Gaussian Elimination to solve the"); printf("\n system Ax = B, where A is the matrix of known"); printf("\n coefficients, B is the vector of known constants"); printf("\n and x is the column matrix of the unknowns."); //get number of equations n = 0; while(n <= 0 || n > MAXSIZE) { printf("\n Number of equations: "); scanf ("%d", &n); } //read matrix A printf("\n Enter elements of matrix [A]\n"); for (i = 0; i < n; i++) for (j = 0; j < n; j++) { printf(" A(%d,%d) = ", i + 1, j + 1); scanf("%lf", &a[i][j]); } //read {B} vector printf("\n Enter elements of [b] vector\n"); for (i = 0; i < n; i++) { printf(" B(%d) = ", i + 1); scanf("%lf", &b[i]); } //call Gauss elimination function retval = gauss(a, b, n, &det); //print results if (retval == 0) { printf("\n\t SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS\n"); printf("\n\t The solution is"); for (i = 0; i < n; i++) printf("\n \t x(%d) = %lf", i + 1, b[i]); printf("\n \t Determinant = %lf \n", det); } else printf("\n \t SINGULAR MATRIX \n"); return 0; } /* Solves the system of equations [A]{x} = {B} using */ /* the Gaussian elimination method with partial pivoting. 
*/ /* Parameters: */ /* n - number of equations */ /* a[n][n] - coefficient matrix */ /* b[n] - right-hand side vector */ /* *det - determinant of [A] */ int gauss (double a[][MAXSIZE], double b[], int n, double *det) { double tol, temp, mult; int npivot, i, j, l, k, flag; //initialization *det = 1.0; tol = 1e-30; //initial tolerance value npivot = 0; //mult = 0; //forward elimination for (k = 0; k < n; k++) { //search for max coefficient in pivot row- a[k][k] pivot element for (i = k + 1; i < n; i++) { if (fabs(a[i][k]) > fabs(a[k][k])) { //interchange row with maxium element with pivot row npivot++; for (l = 0; l < n; l++) { temp = a[i][l]; a[i][l] = a[k][l]; a[k][l] = temp; } temp = b[i]; b[i] = b[k]; b[k] = temp; } } //test for singularity if (fabs(a[k][k]) < tol) { //matrix is singular- terminate flag = 1; return flag; } //compute determinant- the product of the pivot elements *det = *det * a[k][k]; //eliminate the coefficients of X(I) for (i = k; i < n; i++) { mult = a[i][k] / a[k][k]; b[i] = b[i] - b[k] * mult; //compute constants for (j = k; j < n; j++) //compute coefficients a[i][j] = a[i][j] - a[k][j] * mult; } } //adjust the sign of the determinant if(npivot % 2 == 1) *det = *det * (-1.0); //backsubstitution b[n] = b[n] / a[n][n]; for(i = n - 1; i > 1; i--) { for(j = n; j > i + 1; j--) b[i] = b[i] - a[i][j] * b[j]; b[i] = b[i] / a[i - 1][i]; } flag = 0; return flag; } The solution should be: 1.058824, 1.823529, 0.882353 with det as -102.000000 Any insight is appreciated...
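
    Two spots in the listing above are inconsistent with its 0-based C arrays: the elimination loop starts at i = k, so the pivot row is subtracted from itself (which zeroes a[k][k] and produces the -1.#IND values), and the back substitution indexes b[n] and a[n][n], one past the last row, and divides by a[i - 1][i] instead of the diagonal. A minimal back-substitution sketch that matches the 0-based layout used elsewhere in the program (an illustration, not the textbook's code) is:

        /* Back substitution for the upper-triangular system left by elimination. */
        /* 0-based indexing; MAXSIZE is the constant from the listing above.      */
        void back_substitute(double a[][MAXSIZE], double b[], int n)
        {
            int i, j;
            for (i = n - 1; i >= 0; i--) {
                double sum = b[i];
                for (j = i + 1; j < n; j++)
                    sum -= a[i][j] * b[j];   /* subtract the already-solved terms */
                b[i] = sum / a[i][i];        /* divide by the diagonal element    */
            }
        }

    The forward elimination would likewise run for (i = k + 1; i < n; i++) so the pivot row itself is left intact.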

    Read the article

  • How do I define a light's field of view that covers the entire map for shadow mapping?

    - by Mehdi Bugnard
    I recently added "Shadow Mapping" in my XNA games to include shadows. I followed the nice and famous tutorial from "Riemers" : http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shadow_map.php . This code work nice and I can see my source of light and shadow. But the problem is that my light source does not match the field of view that I created. I want the light covers the entire map of my game. I don't know why , but the light only affect 2-3 cubes of my map. ScreenShot: (the emission of light illuminates only 2-3 blocks and not the full map) Here is my code i create the fieldOfView for LightviewProjection Matrix: Vector3 lightDir = new Vector3(10, 52, 10); lightPos = new Vector3(10, 52, 10); Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(105, 50, 105), new Vector3(0, 1, 0)); Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 20f, 1000f); lightsViewProjectionMatrix = lightsView * lightsProjection; As you can see , my nearPlane and FarPlane are set to 20f and 100f . So i don't know why the light stop after 2 cubes. it's should be bigger Here is set the value to my custom effect HLSL in the shader file /* SHADOW VALUE */ effectWorld.Parameters["LightDirection"].SetValue(lightDir); effectWorld.Parameters["xLightsWorldViewProjection"].SetValue(Matrix.Identity * .lightsViewProjectionMatrix); effectWorld.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * arcadia.camera.View * arcadia.camera.Projection); effectWorld.Parameters["xLightPower"].SetValue(1f); effectWorld.Parameters["xAmbient"].SetValue(0.3f); Here is my custom HLSL shader effect file "*.fx" // This sample uses a simple Lambert lighting model. float3 LightDirection = normalize(float3(-1, -1, -1)); float3 DiffuseLight = 1.25; float3 AmbientLight = 0.25; uniform const float3 DiffuseColor = 1; uniform const float Alpha = 1; uniform const float3 EmissiveColor = 0; uniform const float3 SpecularColor = 1; uniform const float SpecularPower = 16; uniform const float3 EyePosition; // FOG attribut uniform const float FogEnabled ; uniform const float FogStart ; uniform const float FogEnd ; uniform const float3 FogColor ; float3 cameraPos : CAMERAPOS; texture Texture; sampler Sampler = sampler_state { Texture = (Texture); magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; }; texture xShadowMap; sampler ShadowMapSampler = sampler_state { Texture = <xShadowMap>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = clamp; AddressV = clamp; }; /* *************** */ /* SHADOW MAP CODE */ /* *************** */ struct SMapVertexToPixel { float4 Position : POSITION; float4 Position2D : TEXCOORD0; }; struct SMapPixelToFrame { float4 Color : COLOR0; }; struct SSceneVertexToPixel { float4 Position : POSITION; float4 Pos2DAsSeenByLight : TEXCOORD0; float2 TexCoords : TEXCOORD1; float3 Normal : TEXCOORD2; float4 Position3D : TEXCOORD3; }; struct SScenePixelToFrame { float4 Color : COLOR0; }; float DotProduct(float3 lightPos, float3 pos3D, float3 normal) { float3 lightDir = normalize(pos3D - lightPos); return dot(-lightDir, normal); } SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL) { SSceneVertexToPixel Output = (SSceneVertexToPixel)0; Output.Position = mul(inPos, xWorldViewProjection); Output.Pos2DAsSeenByLight = mul(inPos, xLightsWorldViewProjection); Output.Normal = normalize(mul(inNormal, (float3x3)World)); Output.Position3D = mul(inPos, 
World); Output.TexCoords = inTexCoords; return Output; } SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn) { SScenePixelToFrame Output = (SScenePixelToFrame)0; float2 ProjectedTexCoords; ProjectedTexCoords[0] = PSIn.Pos2DAsSeenByLight.x / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f; ProjectedTexCoords[1] = -PSIn.Pos2DAsSeenByLight.y / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f; float diffuseLightingFactor = 0; if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y)) { float depthStoredInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).r; float realDistance = PSIn.Pos2DAsSeenByLight.z / PSIn.Pos2DAsSeenByLight.w; if ((realDistance - 1.0f / 100.0f) <= depthStoredInShadowMap) { diffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal); diffuseLightingFactor = saturate(diffuseLightingFactor); diffuseLightingFactor *= xLightPower; } } float4 baseColor = tex2D(Sampler, PSIn.TexCoords); Output.Color = baseColor*(diffuseLightingFactor + xAmbient); return Output; } SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION) { SMapVertexToPixel Output = (SMapVertexToPixel)0; Output.Position = mul(inPos, xLightsWorldViewProjection); Output.Position2D = Output.Position; return Output; } SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn) { SMapPixelToFrame Output = (SMapPixelToFrame)0; Output.Color = PSIn.Position2D.z / PSIn.Position2D.w; return Output; } /* ******************* */ /* END SHADOW MAP CODE */ /* ******************* */ / For rendering without instancing. technique ShadowMap { pass Pass0 { VertexShader = compile vs_2_0 ShadowMapVertexShader(); PixelShader = compile ps_2_0 ShadowMapPixelShader(); } } technique ShadowedScene { /* pass Pass0 { VertexShader = compile vs_2_0 VSBasicTx(); PixelShader = compile ps_2_0 PSBasicTx(); } */ pass Pass1 { VertexShader = compile vs_2_0 ShadowedSceneVertexShader(); PixelShader = compile ps_2_0 ShadowedScenePixelShader(); } } technique SimpleFog { pass Pass0 { VertexShader = compile vs_2_0 VSBasicTx(); PixelShader = compile ps_2_0 PSBasicTx(); } } I edited my fx file , for show you only information and functions about the shadow ;-)

    Read the article

  • How to draw textures on a model

    - by marc wellman
    The following code is a complete XNA 3.1 program almost unaltered to that code skeleton Visual Studio is creating when creating a new project. The only things I have changed are imported a .x model to the content folder of the VS solution. (the model is a simple square with a texture spanning over it - made in Google Sketchup and exported with several .x exporters) in the Load() method I am loading the .x model into the game. The Draw() method uses a BasicEffect to render the model. Except these three things I haven't added any code. Why does the model does not show the texture ? What can I do to make the texture visible ? This is the texture file (a standard SketchUp texture from the palette): And this is what my program looks like - as you can see: No texture! Find below the complete source code of the program AND the complete .x file: namespace WindowsGame1 { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); } Model newModel; /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: usse this.Content to load your game content here newModel = Content.Load<Model>(@"aau3d"); foreach (ModelMesh mesh in newModel.Meshes) { foreach (ModelMeshPart meshPart in mesh.MeshParts) { meshPart.Effect = new BasicEffect(this.GraphicsDevice, null); } } } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { if (newModel != null) { GraphicsDevice.Clear(Color.CornflowerBlue); Matrix[] transforms = new Matrix[newModel.Bones.Count]; newModel.CopyAbsoluteBoneTransformsTo(transforms); foreach (ModelMesh mesh in newModel.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting(); effect.TextureEnabled = true; effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(0) * Matrix.CreateTranslation(new Vector3(0, 0, 0)); effect.View = Matrix.CreateLookAt(new Vector3(200, 1000, 200), Vector3.Zero, Vector3.Up); effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), 0.75f, 1.0f, 10000.0f); } mesh.Draw(); } } base.Draw(gameTime); } } } This is the model I am using (.x): xof 0303txt 0032 // SketchUp 6 -> DirectX (c)2008 edecadoudal, supports: faces, normals and textures Material Default_Material{ 1.0;1.0;1.0;1.0;; 3.2; 0.000000;0.000000;0.000000;; 0.000000;0.000000;0.000000;; } Material _Groundcover_RiverRock_4inch_{ 0.568627450980392;0.494117647058824;0.427450980392157;1.0;; 3.2; 0.000000;0.000000;0.000000;; 0.000000;0.000000;0.000000;; TextureFilename { "aau3d.xGroundcover_RiverRock_4inch.jpg"; } } Mesh mesh_0{ 4; -81.6535;0.0000;74.8031;, -0.0000;0.0000;0.0000;, -81.6535;0.0000;0.0000;, -0.0000;0.0000;74.8031;; 2; 3;0,1,2, 3;1,0,3;; MeshMaterialList { 2; 2; 1, 1; { Default_Material } { _Groundcover_RiverRock_4inch_ } } MeshTextureCoords { 4; -2.1168,-3.4022; 1.0000,-0.0000; 1.0000,-3.4022; -2.1168,-0.0000;; } MeshNormals { 4; 0.0000;1.0000;-0.0000; 0.0000;1.0000;-0.0000; 0.0000;1.0000;-0.0000; 0.0000;1.0000;-0.0000;; 2; 3;0,1,2; 3;1,0,3;; } }

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-buffers, storing position (R32F), normal (G16R16F) and albedo (ARGB8), and I use the sphere-map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate each pixel's position from the stored depth value. First, I want to avoid the per-pixel matrix multiplication when calculating the position: is there another way to store and calculate position in the G-buffer without needing a matrix multiplication? Second, I want to store the normal in view space: all lighting in my engine is currently in world space, and I want to do the lighting in view space to speed up my lighting pass. In short, I want an optimized lighting pass for my deferred engine.
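
    One common alternative for the first point, sketched under the assumption of a standard perspective projection: store linear view-space depth d = z_view / z_far in the G-buffer instead of a full position, and give each vertex of the full-screen quad the view-space vector to its far-plane corner. The interpolated corner vector r_far then yields the view-space position with a single multiply,

        \mathbf{p}_{view} = \frac{z_{view}}{z_{far}} \, \mathbf{r}_{far}

    and if the lighting itself moves into view space no further transform is needed per pixel. For the second point, the view matrix of a rigid camera is a rotation plus a translation, so multiplying the world-space normal by the upper-left 3x3 of the view matrix before the sphere-map encoding is enough; no inverse transpose is required.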

    Read the article

  • HLSL problem with divide by homogeneous component

    - by Berend
    When I try to divide my position.z by my position.w in HLSL, the result is always 1.0f or higher. Is this a common problem for some reason? Dividing position.x or position.y by w works fine, but the divide for z gives a wrong result. I use my camera's view matrix and the same projection matrix as in the game, because I want to create a depth map from the camera position. Can anybody explain what I'm doing wrong? Do I need another view matrix?
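
    Some context on why z/w behaves this way, assuming a standard D3D-style perspective projection with near plane n and far plane f:

        \frac{z}{w} = \frac{f}{f - n} - \frac{f\,n}{(f - n)\,z_{view}}

    This maps z_view = n to 0 and z_view = f to 1, but very non-linearly: most of the visible range lands extremely close to 1, so a sampled or visualised z/w looks like a constant 1.0 even when it is technically correct, and values above 1 only occur for geometry beyond the far plane. x/w and y/w do not show this because they stay spread across [-1, 1]. For a depth map rendered from the camera it is common to output a linear quantity instead, for example z_view / f or the distance to the camera position.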

    Read the article

  • Opinion for my recruitment portal idea [closed]

    - by user1498503
    I am creating a recruitment portal for IT professionals. Recruiters, while creating a job post, would be asked to build a skills-requirement matrix. Essential skills: ASP.NET MVC, Entity Framework. Desired skills: SQL Server 2008, IIS 7.0. Job seekers would also have their own skills matrix. Jobseeker #1 - core skills: ASP.NET MVC, Entity Framework, MongoDB; secondary skills: SQL Server 2008, IIS 7.0. Jobseeker #2 - core skills: ASP.NET Web Forms; secondary skills: SQL Server 2008, IIS 7.0. So when both job seekers apply for the same job, would it be a good idea for them to see each other's skills matrix for comparison? No personal details or CVs are shared. I think comparisons would help job seekers understand their areas for improvement and could motivate them to fill the skills gap. Your opinion would be appreciated. Regards

    Read the article

  • Collision Detection problems in Voxel Engine (XNA)

    - by Darestium
    I am creating a minecraft like terrain engine in XNA and have had some collision problems for quite some time. I have checked and changed my code based on other peoples collision code and I still have the same problem. It always seems to be off by about a block. for instance, if I walk across a bridge which is one block high I fall through it. Also, if you walk towards a "row" of blocks like this: You are able to stand "inside" the left most one, and you collide with nothing in the right most side (where there is no block and is not visible on this image). Here is all my collision code: private void Move(GameTime gameTime, Vector3 direction) { float speed = playermovespeed * (float)gameTime.ElapsedGameTime.TotalSeconds; Matrix rotationMatrix = Matrix.CreateRotationY(player.Camera.LeftRightRotation); Vector3 rotatedVector = Vector3.Transform(direction, rotationMatrix); rotatedVector.Normalize(); Vector3 testVector = rotatedVector; testVector.Normalize(); Vector3 movePosition = player.position + testVector * speed; Vector3 midBodyPoint = movePosition + new Vector3(0, -0.7f, 0); Vector3 headPosition = movePosition + new Vector3(0, 0.1f, 0); if (!world.GetBlock(movePosition).IsSolid && !world.GetBlock(midBodyPoint).IsSolid && !world.GetBlock(headPosition).IsSolid) { player.position += rotatedVector * speed; } //player.position += rotatedVector * speed; } ... public void UpdatePosition(GameTime gameTime) { player.velocity.Y += playergravity * (float)gameTime.ElapsedGameTime.TotalSeconds; Vector3 footPosition = player.Position + new Vector3(0f, -1.5f, 0f); Vector3 headPosition = player.Position + new Vector3(0f, 0.1f, 0f); // If the block below the player is solid the Y velocity should be zero if (world.GetBlock(footPosition).IsSolid || world.GetBlock(headPosition).IsSolid) { player.velocity.Y = 0; } UpdateJump(gameTime); UpdateCounter(gameTime); ProcessInput(gameTime); player.Position = player.Position + player.velocity * (float)gameTime.ElapsedGameTime.TotalSeconds; velocity = Vector3.Zero; } and the one and only function in the camera class: protected void CalculateView() { Matrix rotationMatrix = Matrix.CreateRotationX(upDownRotation) * Matrix.CreateRotationY(leftRightRotation); lookVector = Vector3.Transform(Vector3.Forward, rotationMatrix); cameraFinalTarget = Position + lookVector; Vector3 cameraRotatedUpVector = Vector3.Transform(Vector3.Up, rotationMatrix); viewMatrix = Matrix.CreateLookAt(Position, cameraFinalTarget, cameraRotatedUpVector); } which is called when the rotation variables are changed: public float LeftRightRotation { get { return leftRightRotation; } set { leftRightRotation = value; CalculateView(); } } public float UpDownRotation { get { return upDownRotation; } set { upDownRotation = value; CalculateView(); } } World class: public Block GetBlock(int x, int y, int z) { if (InBounds(x, y, z)) { Vector3i regionalPosition = GetRegionalPosition(x, y, z); Vector3i region = GetRegionPosition(x, y, z); return regions[region.X, region.Y, region.Z].Blocks[regionalPosition.X, regionalPosition.Y, regionalPosition.Z]; } return new Block(BlockType.none); } public Vector3i GetRegionPosition(int x, int y, int z) { int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X; int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y; int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z; return new Vector3i(regionx, regiony, regionz); } public Vector3i GetRegionalPosition(int x, int y, int z) { int regionx = x == 0 ? 
0 : x / Variables.REGION_SIZE_X; int X = x % Variables.REGION_SIZE_X; int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y; int Y = y % Variables.REGION_SIZE_Y; int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z; int Z = z % Variables.REGION_SIZE_Z; return new Vector3i(X, Y, Z); } Any ideas how to fix this problem? EDIT 1: Graphic of the problem: EDIT 2 GetBlock, Vector3 version: public Block GetBlock(Vector3 position) { int x = (int)Math.Floor(position.X); int y = (int)Math.Floor(position.Y); int z = (int)Math.Ceiling(position.Z); Block block = GetBlock(x, y, z); return block; } Now, the thing is I tested the theroy that the Z is always "off by one" and by ceiling the value it actually works as intended. Altough it still could be greatly more accurate (when you go down holes you can see through the sides, and I doubt it will work with negitive positions). I also does not feel clean Flooring the X and Y values and just Ceiling the Z. I am surely not doing something correctly still.

    Read the article

  • Confusion about Rotation matrices from Euler Angles

    - by xEnOn
    I am trying to learn more about Euler angles to help me understand how to control my camera better in the game. I came across a formula that converts Euler angles to a rotation matrix; in it, the first matrix from the left is the rotation about the x-axis, the second is about the y-axis and the third is about the z-axis. From my understanding of ordinary matrix transformations, the later transformation is always applied on the right-hand side. If I'm right about this, then the equation should have a rotation order of z-axis first, then y-axis, then finally x-axis. But from the symbols it seems that the rotation order starts about the x-axis, then the y-axis, then finally the z-axis. What should the actual order of the rotations be? Also, I am confused about whether the input vector would be a row vector on the left, or a column vector on the right?
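
    A worked version of the convention question, assuming the formula is the usual product R = R_x(\alpha) R_y(\beta) R_z(\gamma): with column vectors the factor nearest the vector acts first,

        \mathbf{v}' = R_x(\alpha)\,R_y(\beta)\,R_z(\gamma)\,\mathbf{v} \qquad \text{(rotate about z, then y, then x)}

    while the same matrices applied to a row vector reverse the effective order,

        \mathbf{v}' = \mathbf{v}\,R_x(\alpha)\,R_y(\beta)\,R_z(\gamma) \qquad \text{(rotate about x, then y, then z)}

    Both readings are self-consistent; which one applies depends on whether the source multiplies column vectors on the right of the matrix product or row vectors on the left.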

    Read the article

  • View space lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-buffers, storing position (R32F), normal (G16R16F) and albedo (ARGB8), and I use the sphere-map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate each pixel's position from the stored depth value. First, I want to avoid the per-pixel matrix multiplication when calculating the position: is there another way to store and calculate position in the G-buffer without needing a matrix multiplication? Second, I want to store the normal in view space: all lighting in my engine is currently in world space, and I want to do the lighting in view space to speed up my lighting pass. In short, I want an optimized lighting pass for my deferred engine.

    Read the article

  • How do I convert matrices intended for OpenGL to be compatible for DirectX?

    - by gardian06
    I have finished working through the book "Game Physics Engine Development 2nd Ed" by Millington and have got it working, but I want to adapt it to work with DirectX. I understand that D3D9+ has the option to use either a left-handed or a right-handed convention, but I am unsure how to return my matrices so they are usable by D3D. The source code returns OpenGL-style column-major matrices (the transpose of the working transform matrix shown below), but DirectX is row major. For those unfamiliar with the organization of the matrices used in the book:

        [r11 r12 r13 t1]
        [r21 r22 r23 t2]
        [r31 r32 r33 t3]
        [ 0   0   0   1]

    where r## is the value of that element in the rotation matrix and t# is the translation value. So the question, in short, is: how do I convert the matrix above to be easily usable by D3D? All of the documentation I have found simply states that D3D is row major, but not where to put which elements so that it is usable by D3D in terms of the rotation and translation elements.
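
    A sketch of the index shuffle (handedness is a separate choice from storage and multiplication order): a matrix written for column vectors, v' = M v, does the same job under the row-vector convention as its transpose, v' = v M^T. Transposing the book's transform therefore moves the translation into the bottom row, which is where the D3D row-major, row-vector convention expects it:

        [r11 r21 r31 0]
        [r12 r22 r32 0]
        [r13 r23 r33 0]
        [t1  t2  t3  1]

    In other words, the rotation elements are mirrored across the diagonal and t1, t2, t3 become the fourth row.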

    Read the article

  • Report Builder 3.0: Adding Matrices to Your Reports

    It is easy to create a basic matrix in Report Builder. However, it takes some practice to format and display the matrix exactly how you want it. There are a large number of options available to enhance the matrix, and Robert Sheldon provides enough information to get you to the point where you can experiment easily.

    Read the article

  • How do I correctly multiply an XMMATRIX by a scalar?

    - by user43129
    Using DirectXMath and its XMMATRIX structure in C++ and DirectX 11, how does one multiply that matrix structure by a single float scalar? I want to implement the operation B = A * f; where A and B are XMMATRIX and f is a float. I found all sorts of functions to multiply a matrix by another matrix or a vector, and all sorts of functions to construct matrices, but I could find no scalar multiplication. Why is there no such function? Is there no use case? Did I miss something? How do I implement scalar multiplication?
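
    Depending on the DirectXMath version in use, XMMATRIX may already provide operator* with a float; if it does not, or an explicit helper is preferred, scaling the four row vectors does the same job. A minimal sketch (the function name is mine, not part of the library):

        #include <DirectXMath.h>
        using namespace DirectX;

        // B = A * f: multiply every element by f, one row vector at a time.
        inline XMMATRIX MatrixScale(FXMMATRIX A, float f)
        {
            XMMATRIX B;
            B.r[0] = XMVectorScale(A.r[0], f);
            B.r[1] = XMVectorScale(A.r[1], f);
            B.r[2] = XMVectorScale(A.r[2], f);
            B.r[3] = XMVectorScale(A.r[3], f);
            return B;
        }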

    Read the article

  • How do I get FEATURE_LEVEL_9_3 to work with shaders in Direct3D11?

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the the vertex/pixel shader files themselves being incompatible for the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below: Vertex shader: cbuffer MatrixBuffer { matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; PixelInputType LightVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Calculate the normal vector against the world matrix only. output.normal = mul(input.normal, (float3x3)worldMatrix); // Normalize the normal vector. output.normal = normalize(output.normal); return output; } Pixel Shader: Texture2D shaderTexture; SamplerState SampleType; cbuffer LightBuffer { float4 ambientColor; float4 diffuseColor; float3 lightDirection; float padding; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; float4 LightPixelShader(PixelInputType input) : SV_TARGET { float4 textureColor; float3 lightDir; float lightIntensity; float4 color; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor = shaderTexture.Sample(SampleType, input.tex); // Set the default output color to the ambient light value for all pixels. color = ambientColor; // Invert the light direction for calculations. lightDir = -lightDirection; // Calculate the amount of light on this pixel. lightIntensity = saturate(dot(input.normal, lightDir)); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. color += (diffuseColor * lightIntensity); } // Saturate the final light color. color = saturate(color); // Multiply the texture pixel and the final diffuse color to get the final pixel color result. color = color * textureColor; return color; }

    Read the article

  • Getting FEATURE_LEVEL_9_3 to work in DX11

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the the vertex/pixel shader files themselves being incompatible for the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below: Vertex shader: cbuffer MatrixBuffer { matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; PixelInputType LightVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Calculate the normal vector against the world matrix only. output.normal = mul(input.normal, (float3x3)worldMatrix); // Normalize the normal vector. output.normal = normalize(output.normal); return output; } Pixel Shader: Texture2D shaderTexture; SamplerState SampleType; cbuffer LightBuffer { float4 ambientColor; float4 diffuseColor; float3 lightDirection; float padding; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; float4 LightPixelShader(PixelInputType input) : SV_TARGET { float4 textureColor; float3 lightDir; float lightIntensity; float4 color; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor = shaderTexture.Sample(SampleType, input.tex); // Set the default output color to the ambient light value for all pixels. color = ambientColor; // Invert the light direction for calculations. lightDir = -lightDirection; // Calculate the amount of light on this pixel. lightIntensity = saturate(dot(input.normal, lightDir)); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. color += (diffuseColor * lightIntensity); } // Saturate the final light color. color = saturate(color); // Multiply the texture pixel and the final diffuse color to get the final pixel color result. color = color * textureColor; return color; }

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in light pass. (my projector doesn't affect albedo). I have this projector View a Projection matrix: Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000); Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up); Where halfWidth and halfHeight is are half of the texture's width and height, Position is the Projector's position and target is the projector's target. This seems to be ok. I am drawing full screen quad with this shader: float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; texture2D ProjectorTexture; float4x4 ProjectorViewProjection; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D projectorSampler = sampler_state { texture = <ProjectorTexture>; AddressU = Clamp; AddressV = Clamp; }; float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position :POSITION0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = input.Position; output.PositionCopy=output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 texCoord =postProjToScreen(input.PositionCopy) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); //return float4(depth.r,0,0,1); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; //compute projection float3 projection=tex2D(projectorSampler,postProjToScreen(mul(position,ProjectorViewProjection)) + halfPixel()); return float4(projection,1); } In first part of pixel shader is recovered position from G-buffer (this code I am using in other shaders without any problem) and then is tranformed to projector viewprojection space. Problem is that projection doesn't appear. Here is an image of my situation: The green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for advice and sorry for my English. EDIT: Shader above is working but projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can bee seen on this YouTube video.

    Read the article

  • OpenCV Mat creation memory leak

    - by Royi Freifeld
    My memory is getting full fairly quick once using the next piece of code. Valgrind shows a memory leak, but everything is allocated on stack and (supposed to be) freed once the function ends. void mult_run_time(int rows, int cols) { Mat matrix(rows,cols,CV_32SC1); Mat row_vec(cols,1,CV_32SC1); /* initialize vector and matrix */ for (int col = 0; col < cols; ++col) { for (int row = 0; row < rows; ++row) { matrix.at<unsigned long>(row,col) = rand() % ULONG_MAX; } row_vec.at<unsigned long>(1,col) = rand() % ULONG_MAX; } /* end initialization of vector and matrix*/ matrix*row_vec; } int main() { for (int row = 0; row < 20; ++row) { for (int col = 0; col < 20; ++col) { mult_run_time(row,col); } } return 0; } Valgrind shows that there is a memory leak in line Mat row_vec(cols,1,CV_32CS1): ==9201== 24,320 bytes in 380 blocks are definitely lost in loss record 50 of 50 ==9201== at 0x4026864: malloc (vg_replace_malloc.c:236) ==9201== by 0x40C0A8B: cv::fastMalloc(unsigned int) (in /usr/local/lib/libopencv_core.so.2.3.1) ==9201== by 0x41914E3: cv::Mat::create(int, int const*, int) (in /usr/local/lib/libopencv_core.so.2.3.1) ==9201== by 0x8048BE4: cv::Mat::create(int, int, int) (mat.hpp:368) ==9201== by 0x8048B2A: cv::Mat::Mat(int, int, int) (mat.hpp:68) ==9201== by 0x80488B0: mult_run_time(int, int) (mat_by_vec_mult.cpp:26) ==9201== by 0x80489F5: main (mat_by_vec_mult.cpp:59) Is it a known bug in OpenCV or am I missing something?
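
    Independent of what Valgrind reports, two details in the listing are worth checking: CV_32SC1 elements are 32-bit ints, so at<unsigned long> only happens to match their size on 32-bit builds, and row_vec is a cols x 1 matrix, so at<...>(1, col) addresses row 1, column col and runs out of bounds for every col > 0. A type- and bounds-consistent sketch of the same routine (it also converts to CV_64F before multiplying, since OpenCV's matrix product expects floating-point elements) might look like:

        #include <opencv2/core/core.hpp>
        #include <cstdlib>

        void mult_run_time(int rows, int cols)
        {
            cv::Mat matrix(rows, cols, CV_32SC1);
            cv::Mat col_vec(cols, 1, CV_32SC1);

            for (int c = 0; c < cols; ++c) {
                for (int r = 0; r < rows; ++r)
                    matrix.at<int>(r, c) = std::rand();   // int matches CV_32SC1
                col_vec.at<int>(c, 0) = std::rand();      // element c of the column vector
            }

            cv::Mat mf, vf;
            matrix.convertTo(mf, CV_64F);                 // matrix product needs float/double
            col_vec.convertTo(vf, CV_64F);
            cv::Mat product = mf * vf;                    // rows x 1 result
        }

    Whether that also explains the Valgrind record is hard to say from the excerpt alone, but it removes the undefined behaviour from the test.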

    Read the article

  • Binding navigation property to RadGrid while using EntityDataSource control

    - by Matrix
    I'm new to Entity Framework and I got stuck in an issue while trying to bind a navigation property (foreign key reference) to a dropdownlist. I have Telerik RadGrid control which gets the data using a EntityDataSource control. Here is the model description: Applications: AppId, AppName, ServerId Servers: ServerId, ServerName The Applicaitons.ServerId is a foreign key reference to Servers.ServerId. The RadGrid lists the applications and allows the user to insert/update/delete an application. I want to show the server names as a dropdownlist in edit mode which I'm not able to. . Here is my aspx code: <telerik:RadGrid ID="gridApplications" runat="server" Skin="Sunset" AllowAutomaticInserts="True" AllowAutomaticDeletes="True" AllowPaging="True" AllowAutomaticUpdates="True" AutoGenerateColumns="False" OnItemCreated="gridApplications_ItemCreated" DataSourceID="applicationsEntityDataSource" Width="50%" OnItemInserted="gridApplications_ItemInserted" OnItemUpdated="gridApplications_ItemUpdated" OnItemDeleted="gridApplications_ItemDeleted" GridLines="None"> <MasterTableView CommandItemDisplay="Top" AutoGenerateColumns="False" DataKeyNames="AppId" DataSourceID="applicationsEntityDataSource"> <RowIndicatorColumn> <HeaderStyle Width="20px" /> </RowIndicatorColumn> <ExpandCollapseColumn> <HeaderStyle Width="20px" /> </ExpandCollapseColumn> <Columns> <telerik:GridEditCommandColumn ButtonType="ImageButton" UniqueName="EditCommandColumn" HeaderText="Edit" ItemStyle-Width="10%"> </telerik:GridEditCommandColumn> <telerik:GridButtonColumn CommandName="Delete" Text="Delete" UniqueName="DeleteColumn" ConfirmText="Are you sure you want to delete this application?" ConfirmTitle="Confirm Delete" ConfirmDialogType="Classic" ItemStyle-Width="10%" HeaderText="Delete"> </telerik:GridButtonColumn> <telerik:GridBoundColumn DataField="AppId" UniqueName="AppId" Visible="false" HeaderText="Application Id" ReadOnly="true"> </telerik:GridBoundColumn> <telerik:GridBoundColumn DataField="AppName" UniqueName="AppName" HeaderText="Application Name" MaxLength="30" ItemStyle-Width="40%"> </telerik:GridBoundColumn> <telerik:GridTemplateColumn DataField="ServerId" UniqueName="ServerId" HeaderText="Server Hosted" EditFormColumnIndex="1"> <EditItemTemplate> <asp:DropDownList ID="ddlServerHosted" runat="server" DataTextField="Servers.ServerName" DataValueField="ServerId" Width="40%"> </asp:DropDownList> </EditItemTemplate> </telerik:GridTemplateColumn> </Columns> <EditFormSettings ColumnNumber="2" CaptionDataField="AppId" InsertCaption="Insert New Application" EditFormType="AutoGenerated"> <EditColumn InsertText="Insert record" EditText="Edit application id #:" EditFormColumnIndex="0" UpdateText="Application updated" UniqueName="InsertCommandColumn1" CancelText="Cancel insert" ButtonType="ImageButton"></EditColumn> <FormTableItemStyle Wrap="false" /> <FormTableStyle GridLines="Horizontal" CellPadding="2" CellSpacing="0" Height="110px" Width="110px" /> <FormTableAlternatingItemStyle Wrap="false" /> <FormStyle Width="100%" BackColor="#EEF2EA" /> <FormTableButtonRowStyle HorizontalAlign="Right" /> </EditFormSettings> </MasterTableView> </telerik:RadGrid> <asp:EntityDataSource ID="applicationsEntityDataSource" runat="server" ConnectionString="name=AnalyticsEntities" EnableDelete="True" EntityTypeFilter="Applications" EnableInsert="True" EnableUpdate="True" EntitySetName="Applications" DefaultContainerName="AnalyticsEntities" Include="Servers"> </asp:EntityDataSource> I tried another approach where I replaced the GridTemplateColumn with 
the following code <telerik:RadComboBox ID="RadComboBox1" DataSourceID="serversEntityDataSource" DataTextField="ServerName" DataValueField="ServerId" AppendDataBoundItems="true" runat="server" > <Items> <telerik:RadComboBoxItem /> </Items> and using a separate EntityDataSource control as follows: <asp:EntityDataSource ID="serversEntityDataSource" runat="server" ConnectionString="name=AnalyticsEntities" EnableDelete="True" EntityTypeFilter="Servers" EnableInsert="True" EnableUpdate="True" EntitySetName="Servers" DefaultContainerName="AnalyticsEntities"> </asp:EntityDataSource> but, I get the following error. Application cannot be inserted. Reason: Entities in 'AnalyticsEntities.Applications' participate in the 'FK_Servers_Applications' relationship. 0 related 'Servers' were found. 1 'Servers' is expected. My question is, how do you bind the navigation property and load the values in the DropDownList/RadComboBox control?

    Read the article

  • Sorting eigenvectors by their eigenvalues (associated sorting)

    - by fbrereto
    I have an unsorted vector of eigenvalues and a related matrix of eigenvectors. I'd like to sort the columns of the matrix with respect to the sorted set of eigenvalues. (e.g., if eigenvalue[3] moves to eigenvalue[2], I want column 3 of the eigenvector matrix to move over to column 2.) I know I can sort the eigenvalues in O(N log N) via std::sort. Without rolling my own sorting algorithm, how do I make sure the matrix's columns (the associated eigenvectors) follow along with their eigenvalues as the latter are sorted?
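
    One way to do this without a hand-rolled sort is to sort an index array by eigenvalue and then gather the columns in that order. A sketch, assuming the eigenvectors are held column-by-column (adapt the gather loop to whatever matrix type is actually in use; the lambda needs C++11, a comparison functor works the same way otherwise):

        #include <algorithm>
        #include <numeric>
        #include <vector>

        // vals[k] is the k-th eigenvalue; cols[k] is the k-th eigenvector (one column).
        void sort_eigenpairs(std::vector<double>& vals,
                             std::vector<std::vector<double> >& cols)
        {
            const std::size_t n = vals.size();
            std::vector<std::size_t> order(n);
            std::iota(order.begin(), order.end(), 0);
            std::sort(order.begin(), order.end(),
                      [&](std::size_t a, std::size_t b) { return vals[a] < vals[b]; });

            std::vector<double> sortedVals(n);
            std::vector<std::vector<double> > sortedCols(n);
            for (std::size_t k = 0; k < n; ++k) {          // gather in sorted order
                sortedVals[k] = vals[order[k]];
                sortedCols[k] = cols[order[k]];
            }
            vals.swap(sortedVals);
            cols.swap(sortedCols);
        }

    The sort stays O(N log N); the gather adds the unavoidable O(N^2) copy of the matrix columns.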

    Read the article

  • How do I translate this Matlab bsxfun call to R?

    - by claytontstanley
    I would also (fingers crossed) like the solution to work with R Sparse Matrices in the Matrix package. >> A = [1,2,3,4,5] A = 1 2 3 4 5 >> B = [1;2;3;4;5] B = 1 2 3 4 5 >> bsxfun(@times, A, B) ans = 1 2 3 4 5 2 4 6 8 10 3 6 9 12 15 4 8 12 16 20 5 10 15 20 25 >> EDIT: I would like to do a matrix multiplication of these sparse vectors, and return a sparse array: > class(NRowSums) [1] "dsparseVector" attr(,"package") [1] "Matrix" > class(NColSums) [1] "dsparseVector" attr(,"package") [1] "Matrix" > NRowSums * NColSums (I think) w/o using a non-sparse variable to temporarily store data.
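
    For reference, what the Matlab call computes here: broadcasting the 1-by-n row A against the n-by-1 column B with @times is just the outer product,

        C_{ij} = B_i \, A_j, \qquad C = B A \quad (n\times 1 \text{ times } 1\times n)

    so any sparse matrix product of the column vector with the row vector reproduces the result, and the Matrix package's sparse classes should keep such a product sparse.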

    Read the article

  • Best and simple data structure

    - by anshu
    I am trying to create a letter-scoring matrix in VB.NET so that during processing I can get the match score for any pair of letters. For example: what is the match for A and N? I look into my built-in matrix and return -2. Similarly, what is the match for P and L? I look into my built-in matrix and return -3. Please suggest how to go about it. I was trying to use a nested dictionary like this: Dim myNestedDictionary As New Dictionary(Of String, Dictionary(Of String, Integer))() Dim lTempDict As New Dictionary(Of String, Integer) lTempDict.Add("A", 4) myNestedDictionary.Add("A", lTempDict) The other way could be to read the matrix from a text-based file and then fill a two-dimensional array. Thanks.
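
    Since the keys are single letters, a plain two-dimensional integer array indexed by letter offset is usually the simplest structure; the same shape carries over directly to a VB.NET Integer(25, 25) array filled from a text file. A sketch (in C++ only to keep it short; the contents of score are placeholders, not a real substitution matrix):

        #include <cctype>

        // score[i][j] holds the match score for letters 'A'+i and 'A'+j,
        // filled once at start-up (for example from a text file).
        static int score[26][26];

        int match(char a, char b)
        {
            return score[std::toupper(static_cast<unsigned char>(a)) - 'A']
                        [std::toupper(static_cast<unsigned char>(b)) - 'A'];
        }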

    Read the article

  • Const operator overloading problems in C++

    - by steigers
    Hello everybody, I'm having trouble with overloading operator() with a const version: #include <iostream> #include <vector> using namespace std; class Matrix { public: Matrix(int m, int n) { vector<double> tmp(m, 0.0); data.resize(n, tmp); } ~Matrix() { } const double & operator()(int ii, int jj) const { cout << " - const-version was called - "; return data[ii][jj]; } double & operator()(int ii, int jj) { cout << " - NONconst-version was called - "; if (ii!=1) { throw "Error: you may only alter the first row of the matrix."; } return data[ii][jj]; } protected: vector< vector<double> > data; }; int main() { try { Matrix A(10,10); A(1,1) = 8.8; cout << "A(1,1)=" << A(1,1) << endl; cout << "A(2,2)=" << A(2,2) << endl; double tmp = A(3,3); } catch (const char* c) { cout << c << endl; } } This gives me the following output: NONconst-version was called - - NONconst-version was called - A(1,1)=8.8 NONconst-version was called - Error: you may only alter the first row of the matrix. How can I achieve that C++ call the const-version of operator()? I am using GCC 4.4.0. Thanks for your help! Sebastian
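
    On the overload-resolution part of the question: the compiler chooses between the two operator() overloads from the constness of the object expression alone, never from how the returned reference is used, so every call through the non-const A picks the non-const version. A small illustration reusing the Matrix class above:

        // Read access through a const path selects the const operator().
        void demo(Matrix& A, const Matrix& constA)
        {
            double r = constA(2, 2); // const overload: the object expression is const
            double s = A(2, 2);      // non-const overload, even though only read
            (void)r; (void)s;
        }

    If reads through a non-const Matrix must not throw, the usual alternatives are a separate named read accessor or a proxy type that performs the row check only when the element is assigned to.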

    Read the article

  • Code Golf: Zigzag pattern scanning

    - by fbrereto
    The Challenge: the shortest code by character count that takes a single input integer N (N >= 3) and returns an array of indices that, when iterated, traverses an NxN matrix according to the JPEG "zigzag" scan pattern. (An example traversal over an 8x8 matrix is illustrated in the original post.) Examples follow; the matrix shown under each input is not part of the input or output, just a representation of the NxN matrix the input describes.

    Input: 3

        1 2 3
        4 5 6
        7 8 9

    Output: 1 2 4 7 5 3 6 8 9

    Input: 4

         1  2  3  4
         5  6  7  8
         9 10 11 12
        13 14 15 16

    Output: 1 2 5 9 6 3 4 7 10 13 14 11 8 12 15 16

    Notes: The resulting array's base should be appropriate for your language (e.g., Matlab arrays are 1-based, C++ arrays are 0-based). This is related to this question.
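
    For readers who want a reference implementation to test golfed answers against (not itself golfed): walking the anti-diagonals i + j = s and alternating direction on each one reproduces the pattern. A C++ sketch, emitting 1-based indices; drop the + 1 for 0-based languages:

        #include <algorithm>
        #include <vector>

        // Row-major, 1-based indices of an N x N matrix in JPEG zigzag order.
        std::vector<int> zigzag(int n)
        {
            std::vector<int> out;
            out.reserve(n * n);
            for (int s = 0; s <= 2 * (n - 1); ++s) {
                int lo = std::max(0, s - (n - 1));
                int hi = std::min(s, n - 1);
                if (s % 2 == 0)                              // even diagonal: walk up-right
                    for (int i = hi; i >= lo; --i)
                        out.push_back(i * n + (s - i) + 1);
                else                                         // odd diagonal: walk down-left
                    for (int i = lo; i <= hi; ++i)
                        out.push_back(i * n + (s - i) + 1);
            }
            return out;
        }

    zigzag(3) gives 1 2 4 7 5 3 6 8 9 and zigzag(4) gives 1 2 5 9 6 3 4 7 10 13 14 11 8 12 15 16, matching the examples above.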

    Read the article

  • Wrong getBounds() on LineScaleMode.NONE

    - by ghalex
    I have written a simple example that adds a canvas and draws a rectangle with stroke size 20 and scale mode NONE. The problem: the first time I call getBounds() I get a correct result, but after I call scale() the getBounds() function gives me a wrong result. It takes the stroke into consideration even though the stroke's scale mode is NONE; nothing changes on screen, but the result has a smaller x value. Can somebody tell me how I can fix this? protected var display :Canvas; protected function addCanvas():void { display = new Canvas(); display.x = display.y = 50; display.width = 100; display.height = 100; display.graphics.clear(); display.graphics.lineStyle( 20, 0x000000, 0.5, true, LineScaleMode.NONE ); display.graphics.beginFill( 0xff0000, 1 ); display.graphics.drawRect(0, 0, 100, 100); display.graphics.endFill(); area.addChild( display ); traceBounce(); } protected function scale():void { var m :Matrix = display.transform.matrix; var apply :Matrix = new Matrix(); apply.scale( 2, 1 ); apply.concat( m ); display.transform.matrix = apply; traceBounce(); } protected function traceBounce():void { trace( display.getBounds( this ) ); }

    Read the article
