Search Results

Search found 10707 results on 429 pages for 'scroll position'.

  • How to scroll LI items in a fixed height UL?

    - by Tahir Akram
    Here is my example HTML. I want my LI items, which are nested two levels deep, to scroll; in other words, I want to apply a class to every UL. How can I do that, using jQuery or by tweaking the CSS? PS: I am using this example. <ul id="nav" class="dropdown"> <li class="dir"> Item_Root <ul> <li class="dir"> Item_1_Level <ul> <li>Item_Level_2</li> <li>Item_Level_2</li> <li>Item_Level_2</li> <li>.... up to N items</li> </ul> </li> <li>Item_Level_1</li> <li>Item_Level_1</li> <li>Item_Level_1</li> <li>Item_Level_1</li> <li>.... up to N items</li> </ul> </li> </ul>

    Read the article

  • Sorted exsl:node-set: return a node by its position

    - by kalininew
    Good afternoon, gentlemen. Help me solve a very simple task. I have a set of nodes <menuList> <mode name="aasdf"/> <mode name="vfssdd"/> <mode name="aswer"/> <mode name="ddffe"/> <mode name="ffrthjhj"/> <mode name="dfdf"/> <mode name="vbdg"/> <mode name="wewer"/> <mode name="mkiiu"/> <mode name="yhtyh"/> and so on... </menuList> I currently sort it this way: <xsl:variable name="rtf"> <xsl:for-each select="//menuList/mode"> <xsl:sort data-type="text" order="ascending" select="@name"/> <xsl:value-of select="@name"/> </xsl:for-each> </xsl:variable> Now I need to retrieve an arbitrary element from the sorted result by its position number. I write the code <xsl:value-of select="exsl:node-set($rtf)[position() = 3]"/> and get an error in response. How do I do this correctly?

    Read the article

  • jQuery tooltip: absolute position above a link inside paragraph text?

    - by BerggreenDK
    I am trying to retrieve the position of an HTML element inside a paragraph, e.g. a span or anchor. I would also like the width of the element, so that when I hover over it I can activate/build/show a sort of toolbar/tooltip above the element dynamically. I need it to be added dynamically to existing content, so some kind of "search-replace" jQuery routine that scans the elements within, e.g., a DIV and then does this for every element that matches this "feature". The main question is: how do I retrieve the "current absolute" position of the element I am hovering over with the mouse? I don't want the toolbar/tooltip to follow the mouse; instead it must "snap" to the element it is hovering over. So I was thinking: place the box -20px from the current element and match its width. Is that possible? Is there a jQuery plugin for this already? Sample code: <div class="helper"> <h1>headline</h1> <p>Here is some sample text. But <a href="somewhere.htm" class="help help45">this is with an explanation you can hover</a>. <a href="somewhereelse.htm">And this isn't.</a></p> <ul> <li>We could also do it <a href="somewhere.htm" class="help help32">inside a bullet list</a></li> </ul> </div> The .help class triggers the "help", and .help45 or .help32 identifies the help section to be shown (but that's a later task; I am hoping to retrieve the id from "help45" so a server lookup for id=45 can load the text to be shown). NB: if the page scrolls etc., the help tip still needs to follow the item on the page until closed/hidden.

    Read the article

  • How can I scroll my custom view? I want to see the shapes drawn beyond the bounds of the screen

    - by antonio Musella
    I have a custom view: package nan.salsa.goal.customview; import android.R; import android.content.Context; import android.graphics.Canvas; import android.graphics.drawable.ShapeDrawable; import android.graphics.drawable.shapes.RectShape; import android.util.AttributeSet; import android.util.Log; import android.view.View; public class DayView extends View { private static String TAG="DayView"; private ShapeDrawable mDrawable; public DayView(Context context) { super(context); } public DayView(Context context, AttributeSet attrs) { super(context, attrs); init(); } public DayView(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); init(); } public void init() { int x = 10; int y = 10; mDrawable = new ShapeDrawable(new RectShape()); mDrawable.getPaint().setColor(Color.GREEN); mDrawable.setBounds(x, y, x + (width - (x * 2)), y + (height - (y*2))); mDrawable.draw(canvas); for (int i = 1; i < 30; i++) { boxDrawable = new ShapeDrawable(new RectShape()); boxDrawable.setBounds(x + x , y + (100 * i) , x + (width - ((x + x) * 2)), y + (100 * i) + 50); boxDrawable.getPaint().setColor(Color.RED); boxDrawable.draw(canvas); } } @Override protected void onDraw(Canvas canvas) { // TODO Auto-generated method stub super.onDraw(canvas); setBackgroundColor(R.color.black); mDrawable.draw(canvas); } } with this simple configuration file: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="#E06F00"> <nan.salsa.goal.customview.DayView android:id="@+id/dayView" android:layout_height="match_parent" android:layout_width="fill_parent" /> </LinearLayout> In my view I want to scroll so that I can see the shapes drawn beyond the bounds of the screen. How can I do it? Regards, Antonio Musella

    Read the article

  • How do I scroll in the physical world (AndEngine)?

    - by Esteban Quintero
    I am using AndEngine to make a game where a sprite (the player) moves up across the stage. This is my world. final Rectangle ground = new Rectangle(0, CAMERA_HEIGHT - 2, CAMERA_WIDTH, 2, vertexBufferObjectManager); final Rectangle roof = new Rectangle(0, 0, CAMERA_WIDTH, 2, vertexBufferObjectManager); final Rectangle left = new Rectangle(0, 0, 2, CAMERA_HEIGHT, vertexBufferObjectManager); final Rectangle right = new Rectangle(CAMERA_WIDTH - 2, 0, 2, CAMERA_HEIGHT, vertexBufferObjectManager); final FixtureDef wallFixtureDef = PhysicsFactory.createFixtureDef(0, 0.5f, 0.5f); PhysicsFactory.createBoxBody(this.mPhysicsWorld, ground, BodyType.StaticBody, wallFixtureDef); PhysicsFactory.createBoxBody(this.mPhysicsWorld, roof, BodyType.StaticBody, wallFixtureDef); PhysicsFactory.createBoxBody(this.mPhysicsWorld, left, BodyType.StaticBody, wallFixtureDef); PhysicsFactory.createBoxBody(this.mPhysicsWorld, right, BodyType.StaticBody, wallFixtureDef); /* Create two sprites and add them to the scene. */ this.mScene.setBackground(autoParallaxBackground); this.mScene.attachChild(ground); this.mScene.attachChild(roof); this.mScene.attachChild(left); this.mScene.attachChild(right); this.mScene.registerUpdateHandler(this.mPhysicsWorld); The problem is that when the sprite moves up it eventually hits the wall; how do I scroll here?

    Read the article

  • Fixed JavaScript Warning - Pin to Top of Page Using CSS Position [migrated]

    - by nicorellius
    I am new to this site, but it seems like the right place to ask this question. I am working on a noscript chunk of code that includes a <p> at the top of the page alerting users that they have JavaScript disabled. The end result should look like the Stack Exchange sites when JavaScript is disabled (here is a screenshot of mine - SE looks similar except the bar is at the very top of the page): I have it working OK, but I would love it if the red bar stayed fixed along the top when scrolling. I tried using the position: fixed; method, but it ends up moving the p element and I can't get it to look exactly the same as it does without the position: fixed; modification. I tried fiddling with CSS top and left and other positioning, but it never looks the way I want it to. Here is a CSS snippet: <noscript> <style type="text/css"> p.noscript_warning { position: fixed; } </style> </noscript>

    Read the article

  • How to move the object around the screen

    - by Abhishek
    I am trying to move an object around the screen. I tried this code: -(void) move { CGFloat upperLimit = mWinSize.height - (mGunda.contentSize.height / 2.0); CGFloat upperLimit1 = mWinSize.height; CGFloat lowerLimit = (mGunda.contentSize.height / 2.0); CGFloat RightLimit = mWinSize.width - (mGunda.contentSize.width/2.0); CGFloat Right = (mGunda.contentSize.width/2.0); if ( mImageGoingUpward ) { mGunda.position = ccp( mGunda.position.x, mGunda.position.y + 5); if ( mGunda.position.y >= upperLimit ) { mImageGoingUpward = NO; mHori = NO; } } else { mGunda.position = ccp( mGunda.position.x, mGunda.position.y - 5); if ( mGunda.position.y <= lowerLimit ) { mGunda.position = ccp(mGunda.position.x +5, lowerLimit); } if(mGunda.position.x >= RightLimit) { mGunda.position = ccp(mGunda.position.x, mGunda.position.y+10); mHori = YES; } if(mHori) { if(mGunda.position.y >= upperLimit) { mGunda.position = ccp(mGunda.position.x - 5,mGunda.position.y); } } } } It moves the object from bottom to top, top to bottom, bottom to right, and from the right up to the top right of the screen. Here is the problem I have: it does not move from the top right across to the left side of the screen, so that part of the circuit never happens. How can I do this?

    Read the article

  • Why doesn't my cube hold a position?

    - by Christian Frantz
    I gave up a previous method of creating cubes so I went with a list to hold my cube objects. The list is being populated from an array like so: #region MAP float[,] map = { {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0} }; #endregion MAP for (int x = 0; x < mapWidth; x++) { for (int z = 0; z < mapHeight; z++) { cubes.Add(new Cube(device, new Vector3(x, map[x,z], z), Color.Green)); } } The cube follows all the parameters of what I had before. This is just easier to deal with. But when I debug, every cube has a position of (0, 0, 0) and there's just one black cube in the middle of my screen. What could I be doing wrong here? public Vector3 cubePosition { get; set; } public Cube(GraphicsDevice graphicsDevice, Vector3 Position, Color color) { device = graphicsDevice; color = Color.Green; Position = cubePosition; SetUpIndices(); SetUpVerticesArray(); } That's the cube constructor. All variables are being passed correctly I think
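
    One likely explanation, sketched below on the assumption that the constructor is meant to store the incoming position: the line Position = cubePosition; copies the (still default) cubePosition property into the constructor parameter instead of the other way around, so every cube keeps cubePosition at (0, 0, 0). A corrected constructor would look roughly like this (the parameter is renamed to lowercase position for clarity):

        public Cube(GraphicsDevice graphicsDevice, Vector3 position, Color color)
        {
            device = graphicsDevice;
            color = Color.Green;        // unchanged from the original question
            cubePosition = position;    // store the parameter in the property, not the reverse
            SetUpIndices();
            SetUpVerticesArray();       // vertex setup can then offset each vertex by cubePosition
        }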

    Read the article

  • How to scroll hex tiles?

    - by Chris Evans
    I don't seem to be able to find an answer to this one. I have a map of hex tiles and I wish to implement scrolling. Code at present: drawTilemap = function() { actualX = Math.floor(viewportX / hexWidth); actualY = Math.floor(viewportY / hexHeight); offsetX = -(viewportX - (actualX * hexWidth)); offsetY = -(viewportY - (actualY * hexHeight)); for(i = 0; i < (10); i++) { for(j = 0; j < 10; j++) { if(i % 2 == 0) { x = (hexOffsetX * i) + offsetX; y = j * sourceHeight; } else { x = (hexOffsetX * i) + offsetX; y = hexOffsetY + (j * sourceHeight); } var tileselected = mapone[actualX + i][j]; drawTile(x, y, tileselected); } } } The code I've written so far only handles X movement, and it doesn't yet work the way it should. If you look at my example on jsfiddle.net below, you will see that when moving to the right, as you reach the next hex tile along, there is a problem with the X position calculations. It seems a simple bit of maths is missing. Unfortunately I've been unable to find an example that includes scrolling yet. http://jsfiddle.net/hd87E/1/ Make sure there is no horizontal scroll bar, then try moving right using the right arrow key. You will see the problem as you reach the end of the first tile. Apologies for the horrid code, I'm learning! Cheers

    Read the article

  • Android HorizontalScrollView scroll by page

    - by Ionic Walrus
    Hi all, I have implemented a slideshow in my Android app using a HorizontalScrollView. This works well except that I want to scroll to the next image on a scroll gesture (right now it just scrolls past a few images before decelerating). I couldn't find an appropriate way to do this; should I be using a FrameLayout instead? How do I scroll to the next (or previous) image on a scroll gesture? Any help is appreciated, thanks.

    Read the article

  • My vertex shader doesn't affect texture coords or diffuse info but works for position

    - by tina nyaa
    I am new to 3D and DirectX - in the past I have only used abstractions for 2D drawing. Over the past month I've been studying really hard and I'm trying to modify and adapt some of the shaders as part of my personal 'study project'. Below I have a shader, modified from one of the Microsoft samples. I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? // // Skinned Mesh Effect file // Copyright (c) 2000-2002 Microsoft Corporation. All rights reserved. // float4 lhtDir = {0.0f, 0.0f, -1.0f, 1.0f}; //light Direction float4 lightDiffuse = {0.6f, 0.6f, 0.6f, 1.0f}; // Light Diffuse float4 MaterialAmbient : MATERIALAMBIENT = {0.1f, 0.1f, 0.1f, 1.0f}; float4 MaterialDiffuse : MATERIALDIFFUSE = {0.8f, 0.8f, 0.8f, 1.0f}; // Matrix Pallette static const int MAX_MATRICES = 100; float4x3 mWorldMatrixArray[MAX_MATRICES] : WORLDMATRIXARRAY; float4x4 mViewProj : VIEWPROJECTION; /////////////////////////////////////////////////////// struct VS_INPUT { float4 Pos : POSITION; float4 BlendWeights : BLENDWEIGHT; float4 BlendIndices : BLENDINDICES; float3 Normal : NORMAL; float3 Tex0 : TEXCOORD0; }; struct VS_OUTPUT { float4 Pos : POSITION; float4 Diffuse : COLOR; float2 Tex0 : TEXCOORD0; }; float3 Diffuse(float3 Normal) { float CosTheta; // N.L Clamped CosTheta = max(0.0f, dot(Normal, lhtDir.xyz)); // propogate scalar result to vector return (CosTheta); } VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones) { VS_OUTPUT o; float3 Pos = 0.0f; float3 Normal = 0.0f; float LastWeight = 0.0f; // Compensate for lack of UBYTE4 on Geforce3 int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices); // cast the vectors to arrays for use in the for loop below float BlendWeightsArray[4] = (float[4])i.BlendWeights; int IndexArray[4] = (int[4])IndexVector; // calculate the pos/normal using the "normal" weights // and accumulate the weights to calculate the last weight for (int iBone = 0; iBone < NumBones-1; iBone++) { LastWeight = LastWeight + BlendWeightsArray[iBone]; Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; } LastWeight = 1.0f - LastWeight; // Now that we have the calculated weight, add in the final influence Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); // transform position from world space into view and then projection space //o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Diffuse.x = 0.0f; o.Diffuse.y = 0.0f; o.Diffuse.z = 0.0f; o.Diffuse.w = 0.0f; o.Tex0 = float2(0,0); return o; } technique t0 { pass p0 { VertexShader = compile vs_3_0 VShade(4); } } I am currently using the SlimDX .NET wrapper around DirectX, but the API is extremely similar: public void Draw() { var device = vertexBuffer.Device; device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0); device.SetRenderState(RenderState.Lighting, true); device.SetRenderState(RenderState.DitherEnable, true); device.SetRenderState(RenderState.ZEnable, true); device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise); device.SetRenderState(RenderState.NormalizeNormals, true); device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Anisotropic); 
device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Anisotropic); device.SetTransform(TransformState.World, Matrix.Identity * Matrix.Translation(0, -50, 0)); device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(-200, 0, 0), Vector3.Zero, Vector3.UnitY)); device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); var material = new Material(); material.Ambient = material.Diffuse = material.Emissive = material.Specular = new Color4(Color.White); material.Power = 1f; device.SetStreamSource(0, vertexBuffer, 0, vertexSize); device.VertexDeclaration = vertexDeclaration; device.Indices = indexBuffer; device.Material = material; device.SetTexture(0, texture); var param = effect.GetParameter(null, "mWorldMatrixArray"); var boneWorldTransforms = bones.OrderedBones.OrderBy(x => x.Id).Select(x => x.CombinedTransformation).ToArray(); effect.SetValue(param, boneWorldTransforms); effect.SetValue(effect.GetParameter(null, "mViewProj"), Matrix.Identity);// Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); effect.SetValue(effect.GetParameter(null, "MaterialDiffuse"), material.Diffuse); effect.SetValue(effect.GetParameter(null, "MaterialAmbient"), material.Ambient); effect.Technique = effect.GetTechnique(0); var passes = effect.Begin(FX.DoNotSaveState); for (var i = 0; i < passes; i++) { effect.BeginPass(i); device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, skin.Vertices.Length, 0, skin.Indicies.Length / 3); effect.EndPass(); } effect.End(); } Again, I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? Also, whatever I set in the bone transformation matrices doesn't seem to have an effect on my model. If I set every bone transformation to a zero matrix, the model still shows up as if nothing had happened, but changing the Pos field in shader output makes the model disappear. I don't understand why I'm getting this kind of behaviour. Thank you!

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load<Effect>(@"Effects\PPModel"); models[0] = new CModel(Content.Load<Model>(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load<Model>(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List<CModel>(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List<PPPointLight>() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = <BasicTexture>; addressU = wrap; addressV = wrap; minfilter = anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = <LightTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I have no idea what's wrong... Googling around, I found that this tutorial may have a bug, but I don't know whether the fault lies with the light mesh (the sphere), with a shader, or with the PrelightingRenderer class. Any help is very much appreciated; thank you for reading!

    Read the article

  • How to fix MonoGame WP8 Touch Position bug?

    - by Moses Aprico
    Normally, the code below results in X:Infinity, Y:Infinity. TouchCollection touchState = TouchPanel.GetState(); foreach (TouchLocation t in touchState) { if (t.State == TouchLocationState.Pressed) { vb.ButtonTouched((int)t.Position.X, (int)t.Position.Y); } } Then I followed https://github.com/mono/MonoGame/issues/1046 and added the code below as the first thing in the Update method. (I still don't know how it works, but it fixed that problem.) if (_firstUpdate) { typeof(Microsoft.Xna.Framework.Input.Touch.TouchPanel).GetField("_touchScale",System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static).SetValue(null, Vector2.One); _firstUpdate = false; } Then, while testing at random, I found several areas that won't register the user's touch. The tile with the purple dude is the area that won't receive user input (it doesn't even detect "Pressed"; TouchCollection.Count is 0). Any idea how to fix this? UPDATE 1: On a second recompile the difference is weird; I don't know why the consistently clickable area is just the left 2/3 of the screen. UPDATE 2: After rotating to landscape and back to portrait while testing, the outcome becomes:
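
    One further thing that may be worth checking, offered as an assumption rather than a confirmed fix: the _touchScale hack above works around a mismatch between the touch panel's idea of the display size and the actual back buffer, and XNA/MonoGame expose that mapping through TouchPanel.DisplayWidth/DisplayHeight. Keeping them in sync with the back buffer (graphics here is assumed to be the game's GraphicsDeviceManager) would look roughly like this:

        // Hypothetical workaround: keep TouchPanel's display size in sync with the
        // back buffer so touch coordinates are not rescaled to a stale resolution.
        TouchPanel.DisplayWidth = graphics.PreferredBackBufferWidth;
        TouchPanel.DisplayHeight = graphics.PreferredBackBufferHeight;
        TouchPanel.DisplayOrientation = DisplayOrientation.Portrait;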

    Read the article

  • Google Webmaster Tools search query position

    - by user1592845
    In my website's account on Google Webmaster Tools, some search queries show an average position of 1.0. I take this to mean my page should be displayed as the first result. But when I search for such a query, I cannot find my website's page in the results; in some cases I navigate to the third or fourth results page and still cannot find it. What factors make my website lose its average position for a search query, and when does Google Webmaster Tools update these values?

    Read the article

  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate for each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code: // vertices is a Vertex[] // indices is a ushort[] // VertexDefs stores the vertex size (sizeof(float) * 5) // vertex data numVertices = vertices.Length; DataStream data = new DataStream(VertexDefs.size * numVertices, true, true); data.WriteRange<Vertex>(vertices); data.Position = 0; // vertex buffer parameters BufferDescription vbDesc = new BufferDescription() { BindFlags = BindFlags.VertexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = VertexDefs.size * numVertices, StructureByteStride = VertexDefs.size, Usage = ResourceUsage.Default }; // create vertex buffer vertexBuffer = new Buffer(Graphics.device, data, vbDesc); vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0); data.Dispose(); // index data numIndices = indices.Length; data = new DataStream(sizeof(ushort) * numIndices, true, true); data.WriteRange<ushort>(indices); data.Position = 0; // index buffer parameters BufferDescription ibDesc = new BufferDescription() { BindFlags = BindFlags.IndexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = sizeof(ushort) * numIndices, StructureByteStride = sizeof(ushort), Usage = ResourceUsage.Default }; // create index buffer indexBuffer = new Buffer(Graphics.device, data, ibDesc); data.Dispose(); Engine.Log(MessageType.Success, string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices)); And my drawing code: // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data // e is of type ShaderEffect // get the technique ShaderTechnique t; if(!e.techniques.TryGetValue(techniqueName, out t)) return; // effect variables e.SetMatrix("worldView", worldView); e.SetMatrix("projection", projection); e.SetResource("diffuseMap", texture); e.SetSampler("textureSampler", sampler); // set per-mesh/technique settings Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding); Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0); Graphics.context.PixelShader.SetSampler(sampler, 0); // render for each pass foreach(ShaderPass p in t.passes) { Graphics.context.InputAssembler.InputLayout = p.layout; p.pass.Apply(Graphics.context); Graphics.context.DrawIndexed(numIndices, 0, 0); } How can I do this?
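
    Direct3D fetches every attribute of a vertex through the same index, so a second, independent index stream for UVs is not directly supported; the usual workaround is to expand the mesh so that each unique (position, UV) pair becomes its own vertex. A rough sketch under that assumption, reusing the question's Vertex type and using hypothetical positions/uvs/posIndices/uvIndices arrays (the index arrays have one entry per triangle corner):

        // Expand indexed geometry so each triangle corner carries its own UV.
        Vertex[] expanded = new Vertex[posIndices.Length];
        ushort[] newIndices = new ushort[posIndices.Length];
        for (int i = 0; i < posIndices.Length; i++)
        {
            expanded[i] = new Vertex
            {
                Position = positions[posIndices[i]], // per-vertex position
                UV = uvs[uvIndices[i]]               // per-corner texture coordinate
            };
            newIndices[i] = (ushort)i;               // the index buffer becomes trivial
        }
        // 'expanded' and 'newIndices' then feed the existing buffer-creation code above.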

    Read the article

  • matrix 4x4 position data

    - by freefallr
    I understand that a 4x4 matrix holds rotation and position data. The rotation data is held in the 3x3 sub-matrix at the top left of the matrix, and the position data is held in the last column of the matrix, e.g. glm::vec3 vParentPos( mParent[3][0], mParent[3][1], mParent[3][2] ); My question is: am I accessing the parent matrix correctly in the example above? I know that OpenGL uses a different matrix ordering than DirectX (row order instead of column order, or something), so should mParent be accessed as follows instead? glm::vec3 vParentPos( mParent[0][3], mParent[1][3], mParent[2][3] ); Thanks!

    Read the article

  • How do I repeat part of an image using background-position and CSS sprites?

    - by thor
    I would like to create some buttons with dynamic width using CSS sprites and background-position, but I'm not sure if what I want is possible. I would like the button to have a left side, middle, and right side, with the middle repeating as required. Ideally I would like this to be made up of one image 11px wide, so the left and right sides are both 5px wide and the middle is 1px, repeated. Is there some way in CSS to use just the one centre pixel of the image and repeat it for the required (unknown) width? Normally I've used two images to achieve similar results - one for the sides and a second image of 1px width for the middle - but if there's some way of combining them into one image I would prefer to use that.

    Read the article

  • OpenGL position from depth is wrong

    - by CoffeeandCode
    My engine is currently implemented using a deferred rendering technique, and today I decided to change it up a bit. First I was storing 5 textures as so: DEPTH24_STENCIL8 - Depth and stencil RGBA32F - Position RGBA10_A2 - Normals RGBA8 x 2 - Specular & Diffuse I decided to minimize it and reconstruct positions from the depth buffer. Trying to figure out what is wrong with my method currently has not been fun :/ Currently I get this: which changes whenever I move the camera... weird Vertex shader really simple #version 150 layout(location = 0) in vec3 position; layout(location = 1) in vec2 uv; out vec2 uv_f; void main(){ uv_f = uv; gl_Position = vec4(position, 1.0); } Fragment shader Where the fun (and not so fun) stuff happens #version 150 uniform sampler2D depth_tex; uniform sampler2D normal_tex; uniform sampler2D diffuse_tex; uniform sampler2D specular_tex; uniform mat4 inv_proj_mat; uniform vec2 nearz_farz; in vec2 uv_f; ... other uniforms and such ... layout(location = 3) out vec4 PostProcess; vec3 reconstruct_pos(){ float z = texture(depth_tex, uv_f).x; vec4 sPos = vec4(uv_f * 2.0 - 1.0, z, 1.0); sPos = inv_proj_mat * sPos; return (sPos.xyz / sPos.w); } void main(){ vec3 pos = reconstruct_pos(); vec3 normal = texture(normal_tex, uv_f).rgb; vec3 diffuse = texture(diffuse_tex, uv_f).rgb; vec4 specular = texture(specular_tex, uv_f); ... do lighting ... PostProcess = vec4(pos, 1.0); // Just for testing } Rendering code probably nothing wrong here, seeing as though it always worked before this->gbuffer->bind(); gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT); gl::Enable(gl::DEPTH_TEST); gl::Enable(gl::CULL_FACE); ... bind geometry shader and draw models and shiz ... gl::Disable(gl::DEPTH_TEST); gl::Disable(gl::CULL_FACE); gl::Enable(gl::BLEND); ... bind textures and lighting shaders shown above then draw each light ... gl::BindFramebuffer(gl::FRAMEBUFFER, 0); gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT); gl::Disable(gl::BLEND); ... bind screen shaders and draw quad with PostProcess texture ... Rinse_and_repeat(); // not actually a function ;) Why are my positions being output like they are?

    Read the article

  • JCarousel scroll method does not always fire

    - by Scott Faisal
    var carousel = jQuery('#mycarousel').data('jcarousel'); var index = carousel.size() + 1; carousel.size(index); var html = '<li> some html </li>'; carousel.add(index, html); carousel.scroll(index, 1); The final call to the scroll method fires, but not always. Is this a bug in JCarousel? The following is the code for the scroll method in JCarousel: /** * Scrolls the carousel to a certain position. * * @method scroll * @return undefined * @param i {Number} The index of the element to scroll to. * @param a {Boolean} Flag indicating whether to perform animation. */ scroll: function(i, a) { if (this.locked || this.animating) return; this.animate(this.pos(i), a); }

    Read the article

  • Rotate 3D Model from a custom position

    - by Nipuna Silva
    I have a 3D model like the one above that I want to rotate around a given location (marked in red), but I can only rotate it around its middle. How can I rotate it around a custom point? Edit: I managed to rotate the model around the position below by getting the radius of the model and applying it to the world matrix: Vector3 point = new Vector3(-radius, 0, 0); world = Matrix.CreateTranslation(-radius, 0, 0); But now I cannot change the position of the object and it is always centered in the middle of the screen. I think that's because I applied the code above. How can I place it anywhere I want?
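
    A small sketch of the usual pivot trick, assuming XNA-style matrices and a rotation about the Y axis: translate the pivot point to the origin, rotate, translate back, and only then apply the model's world position, so the pivot offset no longer dictates where the model sits on screen. Here pivot, angle and modelPosition are illustrative names, not values from the question:

        // pivot: the point on the model to rotate around (e.g. new Vector3(-radius, 0, 0))
        // modelPosition: where the model should be placed in the world
        world = Matrix.CreateTranslation(-pivot)         // move the pivot to the origin
              * Matrix.CreateRotationY(angle)            // rotate about that point
              * Matrix.CreateTranslation(pivot)          // undo the pivot shift
              * Matrix.CreateTranslation(modelPosition); // place the model anywhere you want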

    Read the article

  • Oracle VM Moves into Challenger Position in the Latest Gartner Magic Quadrant

    - by Monica Kumar
    Oracle Innovations boost Oracle VM into Challenger Position in Gartner x86 Server Virtualization Infrastructure Magic Quadrant Oracle VM's placement in the just published Gartner x86 Server Virtualization Infrastructure Magic Quadrant affirms the Oracle strategy and is also supported by strong customer momentum gains. Optimizations delivered in Oracle VM releases during this last year along with easy software access and low cost licensing have moved Oracle’s placement into the Challenger quadrant in a very short time. Oracle continues to focus on delivering a strong integrated virtualization with Oracle VM and the managed stack in the following areas: Integrated management with Oracle VM and all layers of the Oracle stack from hardware to virtualization to cloud Application-Driven virtualization with Oracle VM templates for rapid enterprise application deployment Certified Oracle applications on Oracle VM Complete stack solution offering more values to customers Get a copy of the Magic Quadrant for x86 Server Virtualization Infrastructure report to read more about how Oracle VM rapidly moved up in its new position.

    Read the article
