Search Results

Search found 5155 results on 207 pages for 'render to texture'.


  • How to hide some nodes in a RichFaces tree (do not render nodes based on a condition)?

    - by VestniK
    I have a tree of categories and courses in my SEAM application. Courses may be active or inactive. I want to be able to show either only active courses or all courses in my tree. I've decided to always build the complete tree in my PAGE-scoped component, since building this tree is quite an expensive operation. I have a boolean flag courseActive in the data wrapped by TreeNode<T>. Now I can't find a way to show a course node only if this flag is true. The best result I've achieved is with the following code:

      <h:outputLabel for="showInactiveCheckbox" value="show all courses: "/>
      <h:selectBooleanCheckbox id="showInactiveCheckbox"
          value="#{categoryTreeEditorModel.showAllCoursesInTree}">
        <a4j:support event="onchange" reRender="categoryTree"/>
      </h:selectBooleanCheckbox>
      <rich:tree id="categoryTree" value="#{categoryTree}" var="item" switchType="ajax"
          ajaxSubmitSelection="true" reRender="categoryTree,controls"
          adviseNodeOpened="#{categoryTreeActions.adviseRootOpened}"
          nodeSelectListener="#{categoryTreeActions.processSelection}"
          nodeFace="#{item.typeName}">
        <rich:treeNode type="Category" icon="..." iconLeaf="...">
          <h:outputText value="#{item.title}"/>
        </rich:treeNode>
        <rich:treeNode type="Course" icon="..." iconLeaf="..."
            rendered="#{item.courseActive or categoryTreeEditorModel.showAllCoursesInTree}">
          <h:outputText rendered="#{item.courseActive}" value="#{item.title}"/>
          <h:outputText rendered="#{not item.courseActive}" value="#{item.title}"
              style="color:#{a4jSkin.inactiveTextColor}"/>
        </rich:treeNode>
      </rich:tree>

    The only problem is that if a node is not matched by any rich:treeNode, it is still shown, with a title obtained from the Object.toString() method, instead of being hidden. Does anybody know how to hide some nodes in a RichFaces tree according to a condition?

    Read the article
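
    A minimal sketch of one possible model-level workaround (plain Java, with a hypothetical NodeData type standing in for the data wrapped by TreeNode<T>): derive the tree the view binds to from the cached complete tree and drop inactive course nodes entirely, so nothing falls through to the default toString() rendering.

      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical stand-in for the data wrapped by TreeNode<T>.
      final class NodeData {
          final String title;
          final boolean category;      // true for a category, false for a course
          final boolean courseActive;
          final List<NodeData> children = new ArrayList<>();

          NodeData(String title, boolean category, boolean courseActive) {
              this.title = title;
              this.category = category;
              this.courseActive = courseActive;
          }
      }

      final class TreeFilter {

          // Build the tree the view actually binds to. Inactive courses are simply
          // absent, so no node can fall back to the default toString() rendering.
          static NodeData filtered(NodeData cached, boolean showAllCourses) {
              NodeData copy = new NodeData(cached.title, cached.category, cached.courseActive);
              for (NodeData child : cached.children) {
                  if (child.category || child.courseActive || showAllCourses) {
                      copy.children.add(filtered(child, showAllCourses));
                  }
              }
              return copy;
          }
      }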

  • How can I render a custom view in a UIScrollView at varying zoom levels without distorting the view?

    - by eczarny
    The scenario: I have a custom view (a subclass of UIView) that draws a game board. To enable zooming into, and panning around, the board, I added my view as a subview of a UIScrollView. This kind of works, but the game board is being rendered incorrectly: everything is fuzzy, and nothing looks right. The question: How can I force my view to be redrawn correctly at varying scales? I'm providing my view with the current scale and sending it a setNeedsDisplay message after the scroll view is done zooming in/out, but the game board is still being rendered incorrectly. My view should be redrawing the game board depending on the zoom level, but this isn't happening. Does the scroll view perform a generic transformation on subviews? Is there a way to disable this behavior?

    Read the article

  • How to render the properties of the view model's base class first when using ViewData.ModelMetadata.

    - by Martin R-L
    When I use ViewData.ModelMetadata.Properties to loop over the properties (with an additional Where(modelMetadata => modelMetadata.ShowForEdit && !ViewData.TemplateInfo.Visited(modelMetadata))), and thereby create a generic edit view, the properties of the view model's base class are rendered last. Is it possible to use a clever OrderBy(), or is there another way to first get the properties of the base class, and then the subclass's? Reverse won't do the trick, since the ordering of each class's own properties is perfectly fine. A workaround would of course be composition + delegation, but since we don't have mixins, it's too un-DRY IMHO, which is why I'm looking for a better solution if possible.

    Read the article
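
    The underlying idea, ordering the properties by how far each one's declaring type sits from the root of the inheritance chain so that base-class members sort first, is language-agnostic. A rough sketch in Java reflection terms (a hypothetical helper, not the MVC API; with ModelMetadata the same key could feed an OrderBy over each property's declaring type):

      import java.lang.reflect.Field;
      import java.util.ArrayList;
      import java.util.Comparator;
      import java.util.List;

      final class BaseFirstProperties {

          // Distance of a class from the top of its inheritance chain;
          // base-class members get a smaller depth and therefore sort first.
          static int depth(Class<?> type) {
              int d = 0;
              for (Class<?> c = type; c != null; c = c.getSuperclass()) {
                  d++;
              }
              return d;
          }

          // Collect the instance fields of a view-model hierarchy, base class first.
          static List<Field> propertiesBaseFirst(Class<?> viewModel) {
              List<Field> fields = new ArrayList<>();
              for (Class<?> c = viewModel; c != null && c != Object.class; c = c.getSuperclass()) {
                  for (Field f : c.getDeclaredFields()) {
                      fields.add(f);
                  }
              }
              fields.sort(Comparator.comparingInt((Field f) -> depth(f.getDeclaringClass())));
              return fields;
          }
      }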

  • Multithreading problem with Nvidia PhysX

    - by xcrypt
    I'm having a multithreading problem with Nvidia PhysX. The SDK requires that you call Simulate() (which starts computing new physics positions on a new thread) and FetchResults() (which waits until the physics computations are done). In between Simulate() and FetchResults() you may not 'compute new physics'. It is proposed (in a sample) that we create a game loop as such: Logic (you may calculate physics here and other stuff), then Render, with Simulate() at the start of the Render() call and FetchResults() at the end of the Render() call. However, this has given me various little errors that stack up, since you actually render the scene that was computed in the previous iteration of the game loop. I wonder if there's a way around this? I've been trying and trying, but I can't think of a solution...

    Read the article

  • How do I properly use multithreading with Nvidia PhysX?

    - by xcrypt
    I'm having a multithreading problem with Nvidia PhysX. The SDK requires that you call Simulate() (which starts computing new physics positions on a new thread) and FetchResults() (which waits until the physics computations are done). In between Simulate() and FetchResults() you may not "compute new physics". It is proposed (in a sample) that we create a game loop as such: Logic (you may calculate physics here and other stuff), then Render, with Simulate() at the start of the Render() call and FetchResults() at the end of the Render() call. However, this has given me various little errors that stack up, since you actually render the scene that was computed in the previous iteration of the game loop. Does anyone have a solution to this?

    Read the article
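
    One common rearrangement of such a loop, shown here only as a sketch in Java with a hypothetical Physics wrapper standing in for the PhysX calls: start the simulation right after the game logic, overlap it with work that does not depend on the new physics state, and fetch the results just before drawing, so the frame you draw already contains this iteration's physics.

      public final class GameLoop {

          // Hypothetical wrapper around an asynchronous physics SDK:
          // simulate() kicks off the step on a worker thread,
          // fetchResults() blocks until that step has finished.
          interface Physics {
              void simulate(float dt);
              void fetchResults();
          }

          private final Physics physics;

          GameLoop(Physics physics) {
              this.physics = physics;
          }

          void runFrame(float dt) {
              updateGameLogic(dt);         // decide forces, impulses, spawns, ...
              physics.simulate(dt);        // physics now runs on its own thread

              doWorkThatIgnoresPhysics();  // culling, audio, UI, building draw lists

              physics.fetchResults();      // wait for the new transforms
              render();                    // draw this frame's physics, not last frame's
          }

          private void updateGameLogic(float dt) { /* ... */ }
          private void doWorkThatIgnoresPhysics() { /* ... */ }
          private void render() { /* ... */ }
      }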

  • How to re-render a page from a Google Chrome extension?

    - by Dexter
    I'm new to writing extensions for Google Chrome. I want to make an extension that only runs on a few pages (that I'll choose) and re-renders their CSS after the page has loaded (ideally I would like something similar to what you can do with GM_addStyle in Greasemonkey scripts). How can I accomplish this in a Chrome extension?

    Read the article

  • Ruby ROXML - how to get an array to render its XML?

    - by Brian D.
    I'm trying to create a Messages object that inherits from Array. It will gather a group of Message objects. I'm trying to create XML output with ROXML that looks like this:

      <messages>
        <message>
          <type></type>
          <code></code>
          <body></body>
        </message>
        ...
      </messages>

    However, I can't figure out how to get the message objects inside the Messages object to show up in the XML. Here is the code I've been working with:

      require 'roxml'

      class Message
        include ROXML
        xml_accessor :type
        xml_accessor :code
        xml_accessor :body
      end

      class Messages < Array
        include ROXML
        # I think this is the problem - but how do I tell ROXML that
        # the messages are in this instance of array?
        xml_accessor :messages, :as => [Message]

        def add(message)
          self << message
        end
      end

      message = Message.new
      message.type = "error"
      message.code = "1234"
      message.body = "This is a test message."

      messages = Messages.new
      messages.add message

      puts messages.length
      puts messages.to_xml

    This outputs:

      1
      <messages/>

    So the message object I added to messages isn't getting displayed. Anyone have any ideas? Or am I going about this the wrong way? Thanks for any help.

    Read the article

  • 3D Graphics with XNA Game Studio 4.0: bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • How do I increase the moving speed of a body?

    - by Siddharth
    How to move ball speedily on the screen using box2d in libGDX? package com.badlogic.box2ddemo; import com.badlogic.gdx.ApplicationListener; import com.badlogic.gdx.Gdx; import com.badlogic.gdx.graphics.GL10; import com.badlogic.gdx.graphics.Texture; import com.badlogic.gdx.graphics.g2d.Sprite; import com.badlogic.gdx.graphics.g2d.SpriteBatch; import com.badlogic.gdx.graphics.g2d.TextureRegion; import com.badlogic.gdx.math.Matrix4; import com.badlogic.gdx.math.Vector2; import com.badlogic.gdx.physics.box2d.Body; import com.badlogic.gdx.physics.box2d.BodyDef; import com.badlogic.gdx.physics.box2d.BodyDef.BodyType; import com.badlogic.gdx.physics.box2d.Box2DDebugRenderer; import com.badlogic.gdx.physics.box2d.CircleShape; import com.badlogic.gdx.physics.box2d.Fixture; import com.badlogic.gdx.physics.box2d.FixtureDef; import com.badlogic.gdx.physics.box2d.PolygonShape; import com.badlogic.gdx.physics.box2d.World; public class Box2DDemo implements ApplicationListener { private SpriteBatch batch; private TextureRegion texture; private World world; private Body groundDownBody, groundUpBody, groundLeftBody, groundRightBody, ballBody; private BodyDef groundBodyDef1, groundBodyDef2, groundBodyDef3, groundBodyDef4, ballBodyDef; private PolygonShape groundDownPoly, groundUpPoly, groundLeftPoly, groundRightPoly; private CircleShape ballPoly; private Sprite sprite; private FixtureDef fixtureDef; private Vector2 ballPosition; private Box2DDebugRenderer renderer; Vector2 vector2; @Override public void create() { texture = new TextureRegion(new Texture( Gdx.files.internal("img/red_ring.png"))); sprite = new Sprite(texture); sprite.setOrigin(sprite.getWidth() / 2, sprite.getHeight() / 2); batch = new SpriteBatch(); world = new World(new Vector2(0.0f, 0.0f), false); groundBodyDef1 = new BodyDef(); groundBodyDef1.type = BodyType.StaticBody; groundBodyDef1.position.x = 0.0f; groundBodyDef1.position.y = 0.0f; groundDownBody = world.createBody(groundBodyDef1); groundBodyDef2 = new BodyDef(); groundBodyDef2.type = BodyType.StaticBody; groundBodyDef2.position.x = 0f; groundBodyDef2.position.y = Gdx.graphics.getHeight(); groundUpBody = world.createBody(groundBodyDef2); groundBodyDef3 = new BodyDef(); groundBodyDef3.type = BodyType.StaticBody; groundBodyDef3.position.x = 0f; groundBodyDef3.position.y = 0f; groundLeftBody = world.createBody(groundBodyDef3); groundBodyDef4 = new BodyDef(); groundBodyDef4.type = BodyType.StaticBody; groundBodyDef4.position.x = Gdx.graphics.getWidth(); groundBodyDef4.position.y = 0f; groundRightBody = world.createBody(groundBodyDef4); groundDownPoly = new PolygonShape(); groundDownPoly.setAsBox(480.0f, 10f); fixtureDef = new FixtureDef(); fixtureDef.density = 0f; fixtureDef.restitution = 1f; fixtureDef.friction = 0f; fixtureDef.shape = groundDownPoly; fixtureDef.filter.groupIndex = 0; groundDownBody.createFixture(fixtureDef); groundUpPoly = new PolygonShape(); groundUpPoly.setAsBox(480.0f, 10f); fixtureDef = new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundUpPoly; fixtureDef.filter.groupIndex = 0; groundUpBody.createFixture(fixtureDef); groundLeftPoly = new PolygonShape(); groundLeftPoly.setAsBox(10f, 320f); fixtureDef = new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundLeftPoly; fixtureDef.filter.groupIndex = 0; groundLeftBody.createFixture(fixtureDef); groundRightPoly = new PolygonShape(); groundRightPoly.setAsBox(10f, 320f); fixtureDef 
= new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundRightPoly; fixtureDef.filter.groupIndex = 0; groundRightBody.createFixture(fixtureDef); ballPoly = new CircleShape(); ballPoly.setRadius(16f); fixtureDef = new FixtureDef(); fixtureDef.shape = ballPoly; fixtureDef.density = 1f; fixtureDef.friction = 1f; fixtureDef.restitution = 1f; ballBodyDef = new BodyDef(); ballBodyDef.type = BodyType.DynamicBody; ballBodyDef.position.x = (int) 200; ballBodyDef.position.y = (int) 200; ballBody = world.createBody(ballBodyDef); ballBody.setLinearVelocity(200f, 200f); // ballBody.applyLinearImpulse(new Vector2(250f, 250f), // ballBody.getLocalCenter()); ballBody.createFixture(fixtureDef); renderer = new Box2DDebugRenderer(true, false, false); } @Override public void dispose() { ballPoly.dispose(); groundLeftPoly.dispose(); groundUpPoly.dispose(); groundDownPoly.dispose(); groundRightPoly.dispose(); world.destroyBody(ballBody); world.dispose(); } @Override public void pause() { } @Override public void render() { world.step(1f/30f, 3, 3); Gdx.gl.glClearColor(1f, 1f, 1f, 1f); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); batch.begin(); vector2 = ballBody.getLinearVelocity(); System.out.println("X=" + vector2.x + " Y=" + vector2.y); ballPosition = ballBody.getPosition(); renderer.render(world,batch.getProjectionMatrix()); // int preX = (int) (vector2.x / Math.abs(vector2.x)); // int preY = (int) (vector2.y / Math.abs(vector2.y)); // // if (Math.abs(vector2.x) == 0.0f) // ballBody1.setLinearVelocity(1.4142137f, vector2.y); // else if (Math.abs(vector2.x) < 1.4142137f) // ballBody1.setLinearVelocity(preX * 5, vector2.y); // // if (Math.abs(vector2.y) == 0.0f) // ballBody1.setLinearVelocity(vector2.x, 1.4142137f); // else if (Math.abs(vector2.y) < 1.4142137f) // ballBody1.setLinearVelocity(vector2.x, preY * 5); batch.draw(sprite, (ballPosition.x - (texture.getRegionWidth() / 2)), (ballPosition.y - (texture.getRegionHeight() / 2))); batch.end(); } @Override public void resize(int arg0, int arg1) { } @Override public void resume() { } } I implement above code but I can not achieve higher moving speed of the ball

    Read the article
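
    One frequent cause of a slow-looking ball (an assumption here, since it cannot be confirmed from the excerpt alone) is treating pixel coordinates as Box2D world units; Box2D limits how far a body may travel per step (roughly two world units), so a pixel-sized world makes everything crawl. A sketch of the usual pixels-per-meter conversion, using the same libGDX Box2D classes as the code above (the scale constant and factory are hypothetical):

      import com.badlogic.gdx.math.Vector2;
      import com.badlogic.gdx.physics.box2d.Body;
      import com.badlogic.gdx.physics.box2d.BodyDef;
      import com.badlogic.gdx.physics.box2d.BodyDef.BodyType;
      import com.badlogic.gdx.physics.box2d.CircleShape;
      import com.badlogic.gdx.physics.box2d.FixtureDef;
      import com.badlogic.gdx.physics.box2d.World;

      public final class BallFactory {

          // Assumed conversion factor: how many screen pixels make up one Box2D meter.
          public static final float PIXELS_PER_METER = 32f;

          public static Body createBall(World world, float pixelX, float pixelY) {
              BodyDef def = new BodyDef();
              def.type = BodyType.DynamicBody;
              // Keep Box2D working in meters; convert only when positioning and drawing.
              def.position.set(pixelX / PIXELS_PER_METER, pixelY / PIXELS_PER_METER);
              Body ball = world.createBody(def);

              CircleShape shape = new CircleShape();
              shape.setRadius(16f / PIXELS_PER_METER); // a 16 px radius becomes 0.5 m

              FixtureDef fixture = new FixtureDef();
              fixture.shape = shape;
              fixture.density = 1f;
              fixture.friction = 0f;
              fixture.restitution = 1f;
              ball.createFixture(fixture);
              shape.dispose();

              // A few meters per second already looks fast at this scale.
              ball.setLinearVelocity(new Vector2(8f, 8f));
              return ball;
          }
          // When drawing, convert back to pixels: sprite position =
          // body.getPosition() multiplied by PIXELS_PER_METER.
      }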

  • DX9 Deferred Rendering, GBuffer displays as clear color only

    - by Fire31
    I'm trying to implement Catalin Zima's deferred renderer in a very lightweight C++ DirectX 9 app (it only renders a skydome and a model). At the moment I'm trying to render the G-buffer, but I'm having a problem: the screen shows only the clear color, no matter how much I move the camera around. However, removing all the render target operations lets the app render the scene normally, even though the models still have the renderGBuffer effect applied. Any ideas of what I'm doing wrong?

    Read the article

  • XSLT string with HTML entities - How can I get it to render as HTML?

    - by Kache4
    I'm completely new to using XSL, so if there's any information that I'm neglecting to include, just let me know. I have a string in my XSLT file that I can display like this:

      <xsl:value-of select="@Description"/>

    It shows up, rendered in a browser, like this:

      <div>My text has html entities within it</div>
      <div>This includes quotes, like &quot;Hello World&quot; and sometimes whitespaces.&nbsp;</div>

    What can I do to get this string rendered as HTML, so that <div></div> results in new lines, &quot; gives me ", and &nbsp; gives me a space? I could elaborate on things I've already tried that haven't worked, but I don't know if that's relevant.

    Read the article

  • Does margin-left:2px; render faster than margin:0 0 0 2px;?

    - by Christopher Altman
    Douglas Crockford describes the consequences of JavaScript querying a node's style: simply asking for the margin of a div can cause the browser to 'reflow' the div in its rendering engine four times. That made me wonder: during the initial rendering of a page (or, in Crockford's jargon, a "web scroll"), is it faster to write CSS that defines only the non-zero/non-default values? To provide an example:

      div { margin-left: 2px; }

    rather than

      div { margin: 0 0 0 2px; }

    I know the consequence of this 'saving' is insignificant, but I think it is still important to understand how the technologies are implemented. Also, this is not a question about formatting CSS -- this is a question about how browsers implement CSS rendering. Reference: http://developer.yahoo.com/yui/theater/video.php?v=crockonjs-4

    Read the article

  • The Backbone router isn't working properly

    - by user2473588
    I'm building a simple backbone app that have 4 routes: home, about, privacy and terms. But after setting the routes I have 3 problems: The "terms" view isn't rendering; When I refresh the #about or the #privacy page, the home view renders after the #about/#privacy view When I hit the back button the home view never renders. For example, if I'm in the #about page, and I hit the back button to the homepage, the about view stays in the page I don't know what I'm doing wrong about the 1st problem. I think that the 2nd and 3rd problem are related with something missing in the home router, but I don't know what is. Here is my code: HTML <section class="feed"> <script id="homeTemplate" type="text/template"> <div class="home"> </div> </script> <script id="termsTemplate" type="text/template"> <div class="terms"> Bla bla bla bla </div> </script> <script id="privacyTemplate" type="text/template"> <div class="privacy"> Bla bla bla bla </div> </script> <script id="aboutTemplate" type="text/template"> <div class="about"> Bla bla bla bla </div> </script> </section> The views app.HomeListView = Backbone.View.extend({ el: '.feed', initialize: function ( initialbooks ) { this.collection = new app.BookList (initialbooks); this.render(); }, render: function() { this.collection.each(function( item ){ this.renderHome( item ); }, this); }, renderHome: function ( item ) { var bookview = new app.BookView ({ model: item }) this.$el.append( bookview.render().el ); } }); app.BookView = Backbone.View.extend ({ tagName: 'div', className: 'home', template: _.template( $( '#homeTemplate' ).html()), render: function() { this.$el.html(this.template(this.model.toJSON())); return this; } }); app.AboutView = Backbone.View.extend({ tagName: 'div', className: 'about', initialize:function () { this.render(); }, template: _.template( $( '#aboutTemplate' ).html()), render: function () { this.$el.html(this.template()); return this; } }); app.PrivacyView = Backbone.View.extend ({ tagName: 'div', className: 'privacy', initialize: function() { this.render(); }, template: _.template( $('#privacyTemplate').html() ), render: function () { this.$el.html(this.template()); return this; } }); app.TermsView = Backbone.View.extend ({ tagName: 'div', className: 'terms', initialize: function () { this.render(); }, template: _.template ( $( '#termsTemplate' ).html() ), render: function () { this.$el.html(this.template()), return this; } }); And the router: var AppRouter = Backbone.Router.extend({ routes: { '' : 'home', 'about' : 'about', 'privacy' : 'privacy', 'terms' : 'terms' }, home: function () { if (!this.homeListView) { this.homeListView = new app.HomeListView(); }; }, about: function () { if (!this.aboutView) { this.aboutView = new app.AboutView(); }; $('.feed').html(this.aboutView.el); }, privacy: function () { if (!this.privacyView) { this.privacyView = new app.PrivacyView(); }; $('.feed').html(this.privacyView.el); }, terms: function () { if (!this.termsView) { this.termsView = new app.TermsView(); }; $('.feed').html(this.termsView.el); } }) app.Router = new AppRouter(); Backbone.history.start(); I'm missing something but I don't know what. Thanks

    Read the article

  • Is it possible to render a PDF (fPDF) via JavaScript?

    - by J. LaRosee
    So, I'm passing some values via jQuery to the server, which generates PDF garble. It goes something like this:

      $.post('/admin/printBatch', data, // Some vars and such
        function(data){
          if(data) {
            var batch = window.open('','Batch Print','width=600,height=600,location=_newtab');
            var html = data; // Perhaps some header info here?!
            batch.document.open();
            batch.document.write(html);
            batch.document.close();
            $( this ).dialog( "close" ); // jQuery UI
          } else {
            alert("Something went wrong, dawg.");
          }
          return false;
        });

    The output file looks roughly like so:

      $pdf->AddPage(null, null, 'A PDF Page');
      //....
      $pdf->Output('', 'I'); // 'I' sends the file inline to the browser (http://fpdf.org/en/doc/output.htm)

    What gets rendered to the browser window:

      %PDF-1.3 3 0 obj <> endobj 4 0 obj <> stream ...

    I'm missing something major, I just know it... thoughts? Thanks, guys.

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If I have a scene managed by an arbitrary structure (e.g. an octree, BSP tree, quadtree, kd-tree, etc.), what is the best way to pass it to the render system? The obvious problem is that if it is simply given the root node of the structure, the render system needs intrinsic knowledge of that structure in order to traverse it. My solution is to cull all objects outside the frustum in the scene manager, build a list of the objects that are left, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (This would be a structure required by the render system as a means of knowing which objects should be rendered.) The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations. I have been thinking a lot about this, searching various terms on here, and following any additional information/links, but I have not really found a definitive answer. The case may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?

    Read the article
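
    A minimal sketch of the flat-list handoff described above (plain Java, hypothetical types): the scene manager culls against the frustum and submits render items keyed by state, so the renderer needs no knowledge of the spatial structure and can sort to minimise state changes.

      import java.util.ArrayList;
      import java.util.Comparator;
      import java.util.List;

      // Everything the renderer needs for one draw, nothing about the scene structure.
      final class RenderItem {
          final int stateKey;      // shader/material/texture ids packed into a sort key
          final Object mesh;       // placeholder for the geometry handle
          final float[] transform; // world matrix

          RenderItem(int stateKey, Object mesh, float[] transform) {
              this.stateKey = stateKey;
              this.mesh = mesh;
              this.transform = transform;
          }
      }

      final class RenderQueue {
          private final List<RenderItem> items = new ArrayList<>();

          // Scene-manager side: called for every object that survives frustum culling.
          void submit(RenderItem item) {
              items.add(item);
          }

          // Renderer side: sort by state key to minimise state changes, then draw.
          void flush() {
              items.sort(Comparator.comparingInt((RenderItem i) -> i.stateKey));
              for (RenderItem item : items) {
                  // bind the state for item.stateKey only if it differs from the previous
                  // item, then issue the draw call for item.mesh with item.transform
              }
              items.clear();
          }
      }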

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking...

    The framebuffer objects:
      - FBO 1 has a color attachment and a depth attachment.
      - FBO 2 has a color attachment.

    The render passes:
      1. Render the g-buffer: normals and depth (used by the outline & DoF blur shaders); output to FBO no. 1.
      2. Render solid geometry, bold outlines (as in a toon shader), and fog; output to FBO no. 2. (Can all render via a single fragment shader -- I think.)
      3. (optional) DoF-blur the scene; output to the default framebuffer. OR ELSE render FBO 2 directly to the default framebuffer.
      4. (optional) Mesh wireframes; composite over what's already in the default framebuffer.

    Does this order seem viable? Any obvious mistakes?

    Read the article
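
    Purely to illustrate that pass ordering, here is a sketch in Java; the wrapper types (Framebuffer, Scene, Shader) are hypothetical, not a real GL binding.

      final class DeferredPipeline {

          // Hypothetical collaborator types, included only so the sketch is self-contained.
          interface Framebuffer {
              void bindAsTarget();
              int colorTexture();
              int depthTexture();
              static void bindDefaultTarget() { /* bind the window framebuffer */ }
          }
          interface Scene { void draw(Shader shader, int... inputTextures); }
          enum Shader { G_BUFFER, SHADE_AND_OUTLINE, DOF_BLUR, BLIT, WIREFRAME }

          private final Framebuffer fbo1; // color (normals) + depth attachment
          private final Framebuffer fbo2; // color attachment only
          private final boolean depthOfField;
          private final boolean wireframes;

          DeferredPipeline(Framebuffer fbo1, Framebuffer fbo2,
                           boolean depthOfField, boolean wireframes) {
              this.fbo1 = fbo1;
              this.fbo2 = fbo2;
              this.depthOfField = depthOfField;
              this.wireframes = wireframes;
          }

          void renderFrame(Scene scene) {
              // Pass 1: g-buffer (normals + depth) into FBO 1.
              fbo1.bindAsTarget();
              scene.draw(Shader.G_BUFFER);

              // Pass 2: solid geometry, toon outlines and fog into FBO 2,
              // sampling FBO 1's normal and depth textures.
              fbo2.bindAsTarget();
              scene.draw(Shader.SHADE_AND_OUTLINE, fbo1.colorTexture(), fbo1.depthTexture());

              // Pass 3: DoF-blur FBO 2 into the default framebuffer, or blit it straight there.
              Framebuffer.bindDefaultTarget();
              if (depthOfField) {
                  fullscreenQuad(Shader.DOF_BLUR, fbo2.colorTexture(), fbo1.depthTexture());
              } else {
                  fullscreenQuad(Shader.BLIT, fbo2.colorTexture());
              }

              // Pass 4 (optional): composite wireframes over the default framebuffer.
              if (wireframes) {
                  scene.draw(Shader.WIREFRAME);
              }
          }

          private void fullscreenQuad(Shader shader, int... inputTextures) { /* ... */ }
      }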

  • How do I design a game framework for fast reaction to user input?

    - by Miro
    I've played some games at circa 30 fps, and some of them had a low reaction time, around 0.1 s. I didn't know why. Now that I'm designing my framework for a cross-platform game, I know why: they were probably preparing the next frame while rendering the previous one.

      RENDER 1  | RENDER 2  | RENDER 3  | RENDER 4
      PREPARE 2 | PREPARE 3 | PREPARE 4 | PREPARE 5

    I see the first frame while the second frame is being rendered and the third frame is being prepared. If I react to the 1st frame at that moment, the result only shows up in the fourth frame. So it takes 3/FPS seconds for the results to appear. At 30 fps that would be 100 ms, which is quite bad. So I'm wondering: how should I design my framework so that it responds to user interaction quickly?

    Read the article
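
    A minimal single-threaded loop sketch in plain Java (the helper methods are hypothetical): input is sampled immediately before the update that produces the frame being drawn, and that freshly updated state is rendered and presented, so a reaction shows up in the next presented frame rather than three frames later.

      public final class ResponsiveLoop {

          private static final double STEP = 1.0 / 60.0; // fixed simulation step, in seconds

          public void run() {
              double previous = System.nanoTime() / 1e9;
              double lag = 0.0;

              while (isRunning()) {
                  double now = System.nanoTime() / 1e9;
                  lag += now - previous;
                  previous = now;

                  while (lag >= STEP) {
                      pollInput();   // sample input as late as possible
                      update(STEP);  // simulate using that fresh input
                      lag -= STEP;
                  }

                  render();          // draw the state that was just simulated
                  present();         // swap buffers; avoid queuing several frames ahead
              }
          }

          private boolean isRunning() { return true; }
          private void pollInput() { /* read keyboard/mouse/gamepad state */ }
          private void update(double dt) { /* advance the game state by dt */ }
          private void render() { /* build and submit draw calls */ }
          private void present() { /* swap buffers */ }
      }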

  • How to render an HTML attribute from a Razor view.

    - by ProfK
    I would like to use the same partial view for the create, edit, and details views, to avoid duplicating the fieldset structure for an entity. Then, depending on which view renders the partial view, I would like to add a class of "read-only" to a div surrounding my fieldset and handle making the actual input fields read-only on the client, using CSS or jQuery, or whatever. How can I specify from my Razor view that I need this class added to the "item-details" div?

      <div class="item-details">
        <fieldset>
          <legend>Product Details</legend>
          @Html.HiddenFor(model => model.DetailItem.ProductId)
          <div class="editor-label">
            @Html.LabelFor(model => model.DetailItem.Name)
          </div>
          <div class="editor-field">
            @Html.EditorFor(model => model.DetailItem.Name)
            @Html.ValidationMessageFor(model => model.DetailItem.Name)
          </div>
          <p>
            <input type="submit" value="Save" />
          </p>
        </fieldset>
      </div>

    Read the article

  • Does style="color: #FFF;" render as #F0F0F0 or #FFFFFF?

    - by Dolph Mathews
    When defining colors using "shorthand hexadecimal" (style="color: #FFF;"), is there a defined method for expanding the shorthand (to style="color: #F0F0F0;" or style="color: #FFFFFF;")? Do all browsers use the same expansion method? Is this behavior by specification (and if so, where is it defined)? Does the expansion method perhaps vary between CSS 1/2/3? I've observed that "most browsers" expand to #FFFFFF. Are there any other places where this shorthand notation is allowed but the expansion method is different? I've always avoided using shorthand hex because I've never known the answers to these questions...

    Read the article
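
    For reference, the CSS specifications define the three-digit form as shorthand that expands by replicating each digit, so #FFF becomes #FFFFFF and #F0F becomes #FF00FF (never #F0F0F0). The rule is small enough to write down; a sketch in Java:

      public final class CssHexColor {

          // Expand CSS shorthand hex notation (#rgb, or the newer #rgba) by replicating
          // each digit, as the CSS color specifications define: #FFF -> #FFFFFF.
          public static String expand(String color) {
              if (color == null || !color.startsWith("#")) {
                  throw new IllegalArgumentException("expected a #-prefixed hex color");
              }
              String digits = color.substring(1);
              if (digits.length() != 3 && digits.length() != 4) {
                  return color; // already in long form (or invalid); return unchanged
              }
              StringBuilder out = new StringBuilder("#");
              for (char d : digits.toCharArray()) {
                  out.append(d).append(d);
              }
              return out.toString();
          }

          public static void main(String[] args) {
              System.out.println(expand("#FFF")); // #FFFFFF, not #F0F0F0
              System.out.println(expand("#F0F")); // #FF00FF
          }
      }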

  • Is there an optimal way to render images in Cocoa? I'm using setNeedsDisplay

    - by Edward An
    Currently, any time I manually move a UIImage (by handling the touchesMoved event), the last thing I call in that event is [self setNeedsDisplay], which effectively redraws the entire view. My images are also being animated, so every time a frame of animation changes I have to call setNeedsDisplay. I find this to be horrific, since I don't expect the iPhone/Cocoa to be able to perform such frequent screen redraws very quickly. Is there an optimal, more efficient way that I could be doing this? Perhaps somehow telling Cocoa to update only a particular region of the screen (the rect region of the image)?

    Read the article

  • Ruby on Rails: how to render a string as HTML?

    - by Tim
    I have @str = "<b>Hi</b>" and in my erb view: <%= @str > What will display on the page is: <b>Hi</b> when what I really want is Hi. What's the ruby way to "interpret" a string as HTML markup? Edit: the case where @str = "<span class=\"classname\">hello</span>" If in my view I do <%raw @str %> The HTML source code is <span class=\"classname\">hello</span where what I really want is <span class="classname">hello</span> (without the backslashes that were escaping the double quotes). What's the best way to "unescape" those double quotes?

    Read the article

  • How to hide some nodes in a RichFaces tree (do not render nodes based on a condition)?

    - by VestniK
    I have a tree of categories and courses in my SEAM application. Courses may be active or inactive. I want to be able to show either only active courses or all courses in my tree. I've decided to always build the complete tree in my PAGE-scoped component, since building this tree is quite an expensive operation. I have a boolean flag courseActive in the data wrapped by TreeNode<T>. Now I can't find a way to show a course node only if this flag is true. The best result I've achieved is with the following code:

      <h:outputLabel for="showInactiveCheckbox" value="show all courses: "/>
      <h:selectBooleanCheckbox id="showInactiveCheckbox"
          value="#{categoryTreeEditorModel.showAllCoursesInTree}">
        <a4j:support event="onchange" reRender="categoryTree"/>
      </h:selectBooleanCheckbox>
      <rich:tree id="categoryTree" value="#{categoryTree}" var="item" switchType="ajax"
          ajaxSubmitSelection="true" reRender="categoryTree,controls"
          adviseNodeOpened="#{categoryTreeActions.adviseRootOpened}"
          nodeSelectListener="#{categoryTreeActions.processSelection}"
          nodeFace="#{item.typeName}">
        <rich:treeNode type="Category" icon="..." iconLeaf="...">
          <h:outputText value="#{item.title}"/>
        </rich:treeNode>
        <rich:treeNode type="Course" icon="..." iconLeaf="..."
            rendered="#{item.courseActive or categoryTreeEditorModel.showAllCoursesInTree}">
          <h:outputText rendered="#{item.courseActive}" value="#{item.title}"/>
          <h:outputText rendered="#{not item.courseActive}" value="#{item.title}"
              style="color:#{a4jSkin.inactiveTextColor}"/>
        </rich:treeNode>
      </rich:tree>

    The only problem is that if a node is not matched by any rich:treeNode, it is still shown, with a title obtained from the Object.toString() method, instead of being hidden. Does anybody know how to hide some nodes in a RichFaces tree according to a condition?

    Read the article
