Search Results

Search found 6772 results on 271 pages for 'rob effect'.

Page 18 of 271

  • Two network adapters on Ubuntu Server 9.10 - Can't have both working at once?

    - by Rob
    I'm trying to set up two network adapters in Ubuntu (server edition) 9.10. One for the public internet, the other a private LAN. During the install, I was asked to pick a primary network adapter (eth0 or eth1). I chose eth0, gave the installer the details listed below in the contents of /etc/network/interfaces, and carried on. I've been using this adapter with these settings for the last few days, and everything's been fine. Today, I decide it's time to set up the local adapter. I edit /etc/network/interfaces to add the details for eth1 (see below), and restart networking with sudo /etc/init.d/networking restart. After this, attempting to ping the machine using its external IP address fails, but I can ping its local IP address. If I bring eth1 down using sudo ifdown eth1, I can successfully ping the machine via its external IP address again (but obviously not its internal IP address). Bringing eth1 back up returns us to the original problem state: external IP not working, internal IP working. Here's my /etc/network/interfaces (I've removed the external IP information, but these settings are unchanged from when it worked) rob@rhea:~$ cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary (public) network interface auto eth0 iface eth0 inet static address xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx network xxx.xxx.xxx.xxx broadcast xxx.xxx.xxx.xxx gateway xxx.xxx.xxx.xxx # The secondary (private) network interface auto eth1 iface eth1 inet static address 192.168.99.4 netmask 255.255.255.0 network 192.168.99.0 broadcast 192.168.99.255 gateway 192.168.99.254 I then do this: rob@rhea:~$ sudo /etc/init.d/networking restart * Reconfiguring network interfaces... [ OK ] rob@rhea:~$ sudo ifup eth0 ifup: interface eth0 already configured rob@rhea:~$ sudo ifup eth1 ifup: interface eth1 already configured Then, from another machine: C:\Documents and Settings\Rob>ping [external ip] Pinging [external ip] with 32 bytes of data: Request timed out. Request timed out. Request timed out. Request timed out. Ping statistics for [external ip]: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), Back on the Ubuntu server in question: rob@rhea:~$ sudo ifdown eth1 ... and again on the other machine: C:\Documents and Settings\Rob>ping [external ip] Pinging [external ip] with 32 bytes of data: Reply from [external ip]: bytes=32 time<1ms TTL=63 Reply from [external ip]: bytes=32 time<1ms TTL=63 Reply from [external ip]: bytes=32 time<1ms TTL=63 Reply from [external ip]: bytes=32 time<1ms TTL=63 Ping statistics for [external ip]: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms So... what am I doing wrong?
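
    A hedged guess at what is going on (not something the post itself confirms): the eth1 stanza above declares a second default gateway (192.168.99.254), so once eth1 comes up the kernel has two default routes and replies to external traffic can leave via the wrong interface. A minimal sketch of the eth1 stanza without its own gateway, assuming the private LAN does not need to provide the default route:

      # secondary (private) interface - sketch only; addresses copied from the question
      auto eth1
      iface eth1 inet static
          address 192.168.99.4
          netmask 255.255.255.0
          network 192.168.99.0
          broadcast 192.168.99.255
          # no "gateway" line here: keep a single default gateway, on eth0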

    Read the article

  • How do I make my rain effect look more like rain and less like snowfall?

    - by Nikhil Lamba
    I am making a game, and in that game I want a rain effect. I am a little bit far from this right now. I am creating the rain effect like below: particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1)); particleSystem.addParticleInitializer(new AlphaInitializer(0)); particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE); particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10)); particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f)); particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150)); particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3)); particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6)); particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3)); particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125)); particleSystem.addParticleModifier(new ExpireModifier(50, 50)); scene.attachChild(particleSystem); But it looks like snowfall! What changes can I make for it to look more like rain? EDIT: Here is a screenshot:
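
    As a sketch only (assuming the VelocityInitializer arguments are min/max X velocity followed by min/max Y velocity, with positive Y pointing down the screen, as the code above suggests): rain usually reads better with a narrow horizontal spread, a much larger downward speed, no rotation, a thin constant scale, and a short particle lifetime. Hypothetical values, not tested against this scene:

      particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
      particleSystem.addParticleInitializer(new VelocityInitializer(-5, 5, 200, 300)); // small sideways drift, fast fall
      particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 0.0f));      // drops don't tumble like flakes
      particleSystem.addParticleModifier(new ScaleModifier(0.3f, 0.3f, 0, 2));         // thin, constant size
      particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1.5f, 2));            // fade out just before expiring
      particleSystem.addParticleModifier(new ExpireModifier(2, 2));                    // short lifetime in seconds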

    Read the article

  • My vertex shader doesn't affect texture coords or diffuse info but works for position

    - by tina nyaa
    I am new to 3D and DirectX - in the past I have only used abstractions for 2D drawing. Over the past month I've been studying really hard and I'm trying to modify and adapt some of the shaders as part of my personal 'study project'. Below I have a shader, modified from one of the Microsoft samples. I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? // // Skinned Mesh Effect file // Copyright (c) 2000-2002 Microsoft Corporation. All rights reserved. // float4 lhtDir = {0.0f, 0.0f, -1.0f, 1.0f}; //light Direction float4 lightDiffuse = {0.6f, 0.6f, 0.6f, 1.0f}; // Light Diffuse float4 MaterialAmbient : MATERIALAMBIENT = {0.1f, 0.1f, 0.1f, 1.0f}; float4 MaterialDiffuse : MATERIALDIFFUSE = {0.8f, 0.8f, 0.8f, 1.0f}; // Matrix Pallette static const int MAX_MATRICES = 100; float4x3 mWorldMatrixArray[MAX_MATRICES] : WORLDMATRIXARRAY; float4x4 mViewProj : VIEWPROJECTION; /////////////////////////////////////////////////////// struct VS_INPUT { float4 Pos : POSITION; float4 BlendWeights : BLENDWEIGHT; float4 BlendIndices : BLENDINDICES; float3 Normal : NORMAL; float3 Tex0 : TEXCOORD0; }; struct VS_OUTPUT { float4 Pos : POSITION; float4 Diffuse : COLOR; float2 Tex0 : TEXCOORD0; }; float3 Diffuse(float3 Normal) { float CosTheta; // N.L Clamped CosTheta = max(0.0f, dot(Normal, lhtDir.xyz)); // propogate scalar result to vector return (CosTheta); } VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones) { VS_OUTPUT o; float3 Pos = 0.0f; float3 Normal = 0.0f; float LastWeight = 0.0f; // Compensate for lack of UBYTE4 on Geforce3 int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices); // cast the vectors to arrays for use in the for loop below float BlendWeightsArray[4] = (float[4])i.BlendWeights; int IndexArray[4] = (int[4])IndexVector; // calculate the pos/normal using the "normal" weights // and accumulate the weights to calculate the last weight for (int iBone = 0; iBone < NumBones-1; iBone++) { LastWeight = LastWeight + BlendWeightsArray[iBone]; Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; } LastWeight = 1.0f - LastWeight; // Now that we have the calculated weight, add in the final influence Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); // transform position from world space into view and then projection space //o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Diffuse.x = 0.0f; o.Diffuse.y = 0.0f; o.Diffuse.z = 0.0f; o.Diffuse.w = 0.0f; o.Tex0 = float2(0,0); return o; } technique t0 { pass p0 { VertexShader = compile vs_3_0 VShade(4); } } I am currently using the SlimDX .NET wrapper around DirectX, but the API is extremely similar: public void Draw() { var device = vertexBuffer.Device; device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0); device.SetRenderState(RenderState.Lighting, true); device.SetRenderState(RenderState.DitherEnable, true); device.SetRenderState(RenderState.ZEnable, true); device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise); device.SetRenderState(RenderState.NormalizeNormals, true); device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Anisotropic); 
device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Anisotropic); device.SetTransform(TransformState.World, Matrix.Identity * Matrix.Translation(0, -50, 0)); device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(-200, 0, 0), Vector3.Zero, Vector3.UnitY)); device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); var material = new Material(); material.Ambient = material.Diffuse = material.Emissive = material.Specular = new Color4(Color.White); material.Power = 1f; device.SetStreamSource(0, vertexBuffer, 0, vertexSize); device.VertexDeclaration = vertexDeclaration; device.Indices = indexBuffer; device.Material = material; device.SetTexture(0, texture); var param = effect.GetParameter(null, "mWorldMatrixArray"); var boneWorldTransforms = bones.OrderedBones.OrderBy(x => x.Id).Select(x => x.CombinedTransformation).ToArray(); effect.SetValue(param, boneWorldTransforms); effect.SetValue(effect.GetParameter(null, "mViewProj"), Matrix.Identity);// Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); effect.SetValue(effect.GetParameter(null, "MaterialDiffuse"), material.Diffuse); effect.SetValue(effect.GetParameter(null, "MaterialAmbient"), material.Ambient); effect.Technique = effect.GetTechnique(0); var passes = effect.Begin(FX.DoNotSaveState); for (var i = 0; i < passes; i++) { effect.BeginPass(i); device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, skin.Vertices.Length, 0, skin.Indicies.Length / 3); effect.EndPass(); } effect.End(); } Again, I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? Also, whatever I set in the bone transformation matrices doesn't seem to have an effect on my model. If I set every bone transformation to a zero matrix, the model still shows up as if nothing had happened, but changing the Pos field in shader output makes the model disappear. I don't understand why I'm getting this kind of behaviour. Thank you!
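
    One thing that may be worth checking (an assumption drawn from the code above, not something the post states): technique t0 compiles only a vertex shader, so the pixel stage can still be taking its colour from the fixed-function texture and lighting setup (Lighting is enabled in the draw code) rather than from the zeroed Diffuse/Tex0 outputs; vs_3_0 vertex shaders in particular are meant to be paired with a ps_3_0 pixel shader. A minimal pixel shader sketch that actually consumes the vertex shader outputs:

      // Hypothetical pixel shader - pairs with VShade so its Diffuse/Tex0 outputs are what gets drawn.
      sampler MeshTexture : register(s0);

      float4 PShade(float4 Diffuse : COLOR, float2 Tex0 : TEXCOORD0) : COLOR
      {
          // With Diffuse and Tex0 forced to zero in VShade, this should render black.
          return tex2D(MeshTexture, Tex0) * Diffuse;
      }

      technique t0
      {
          pass p0
          {
              VertexShader = compile vs_3_0 VShade(4);
              PixelShader  = compile ps_3_0 PShade();
          }
      }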

    Read the article

  • How do I display a "mucus spreading" effect in a 2D environment?

    - by nathan
    Here is an example of such mucus spreading. The substance is spread around the source (in this example, the source would be the main alien building). The game is StarCraft; the purple substance is called creep. How would this kind of substance spreading be achieved in a top-down 2D environment? By recalculating the substance progression and regenerating the effect on the fly each frame, by using a large collection of tiles, or by something else?
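
    As a sketch of the tile-based option mentioned above (all names hypothetical): keep a grid of per-tile creep levels, spread it on a fixed timer rather than every frame, and let the renderer simply draw the tiles whose level passes a threshold.

      // Hypothetical sketch of the tile-grid approach: creep stored as a float per tile,
      // spread on a fixed tick rather than recomputed every frame.
      public class CreepGrid {
          private static final float FALLOFF = 0.1f;   // how much coverage drops per tile from the source
          private final int width, height;
          private float[][] creep;                     // 0 = bare ground, 1 = fully covered

          public CreepGrid(int width, int height, int sourceX, int sourceY) {
              this.width = width;
              this.height = height;
              this.creep = new float[width][height];
              this.creep[sourceX][sourceY] = 1f;       // seed under the main alien building
          }

          // Called on a timer (e.g. a few times per second), not every frame.
          public void spreadTick() {
              float[][] next = new float[width][height];
              for (int x = 0; x < width; x++) {
                  for (int y = 0; y < height; y++) {
                      float best = creep[x][y];
                      // each tile creeps up toward its strongest neighbour, minus a per-tile falloff
                      if (x > 0)          best = Math.max(best, creep[x - 1][y] - FALLOFF);
                      if (x < width - 1)  best = Math.max(best, creep[x + 1][y] - FALLOFF);
                      if (y > 0)          best = Math.max(best, creep[x][y - 1] - FALLOFF);
                      if (y < height - 1) best = Math.max(best, creep[x][y + 1] - FALLOFF);
                      next[x][y] = best;
                  }
              }
              creep = next;
          }

          // The renderer draws a creep tile (or an edge variant) wherever coverage passes a threshold.
          public boolean isCovered(int x, int y) {
              return creep[x][y] > 0.5f;
          }
      }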

    Read the article

  • How to auto-apply a drag and drop effect to dynamically added elements?

    - by Relax
    I use jQuery UI to apply a drag and drop effect to a series of DIVs, for example: <div class="draggable">...</div> <div class="draggable">...</div> <div class="draggable">...</div> <div class="draggable"> this DIV was dynamically added, not draggable </div> The problem is that dynamically added DIVs won't have this effect applied; how can I apply this effect to the new elements too?
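
    A sketch of the usual fix (assuming jQuery UI's .draggable() is what applies the effect above): .draggable() only binds to elements that exist when it runs, so it has to be called again for each DIV that is appended - for example from a small helper used wherever new DIVs are created.

      // Hypothetical helper: make a newly created DIV draggable the moment it is added.
      function addDraggableDiv(html) {
          var $div = $(html).addClass('draggable');
          $('#container').append($div);   // '#container' is an assumed parent element
          $div.draggable();               // bind jQuery UI dragging to the new element
          return $div;
      }

      // usage
      addDraggableDiv('<div>this DIV was dynamically added, and is now draggable</div>');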

    Read the article

  • Where is Rebol's fill-pen documented (to get a glow effect on a rounded rectangle)?

    - by Rebol Tutorial
    There is some discussion about fill-pen here: http://www.mail-archive.com/[email protected]/msg02019.html But I can't see documentation for the cubic, diamond, etc. fill-pen effects in Rebol's official docs. I'm trying to draw a rounded rectangle with a glowing effect but don't really understand the parameters I'm playing with, so I can't get exactly what I'd like (I'd like the glow effect to start from the center, not from the dark top-left corner): view layout [ box 278x185 effect [ ; default box face size is 100x100 draw [ anti-alias on ; information for the next draw element (not required) line-width 2.5 ; number of pixels in width of the border pen black ; color of the edge of the next draw element ; fill pen is a little complex: ;fill-pen 10x10 0 90 0 1 1 0.0.0 255.0.0 255.0.255 fill-pen radial 20x20 5 55 5 5 10 0.0.0 55.0.5 55.0.5 ; the draw element box ; another box drawn as an effect 15 ; size of rounding in pixels 0x0 ; upper left corner 278x170 ; lower right corner ] ] ]

    Read the article

  • Will a Wubi install of Ubuntu affect my Windows installation in any way? [closed]

    - by Oddysee
    Possible Duplicate: What are the benefits of a disk install vs. Wubi? And can I migrate my settings easily? Having never tried a Linux OS before, I want to dabble and take a look at one. Ubuntu seems like a good distro to go with (due to how popular it is) and I want to use Wubi to try it out. I just wanted to know if anything I do while trying out Ubuntu will affect my Windows installation or the files pertaining to it in any way? If so, is it potentially Windows-breaking?

    Read the article

  • How do I implement a "sliding out of / into" effect on a settings menu similar to that in Angry Birds?

    - by VictorB
    I'm trying to implement a settings menu component similar to that in Angry Birds - a button control that makes an options menu slide out of it and back into it when clicked on. I use scene2d.ui to build the UI components: a Button in a Table to implement the button control, a Table to implement the options menu, and a Stack to lay these out one on top of the other. At the moment I have the following behavior: when the user hits the button control for the first time, the alpha of the table component is set to 1; when the user hits the button control the second time, the alpha of the table component is set to 0; and so on. Any ideas how I can get the slide-out / slide-in effect on user clicks with libgdx, similar to what Angry Birds provides? Maybe using the TweenEngine, actions, interpolations, or combinations of these? Thanks in advance.
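
    One possible sketch using plain scene2d actions (com.badlogic.gdx.scenes.scene2d.actions.Actions and Interpolation; the coordinates and field names are hypothetical): instead of toggling the table's alpha, move the options table between a "closed" position behind the button and an "open" position with a MoveToAction, reversing it on the next click.

      // Hypothetical sketch, e.g. inside the settings button's ClickListener:
      if (!menuOpen) {
          optionsTable.setVisible(true);
          optionsTable.addAction(Actions.moveTo(openX, openY, 0.3f, Interpolation.pow2Out));   // slide out
      } else {
          optionsTable.addAction(Actions.sequence(
                  Actions.moveTo(closedX, closedY, 0.3f, Interpolation.pow2In),                // slide back in
                  Actions.visible(false)));
      }
      menuOpen = !menuOpen;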

    Read the article

  • What is the effect of creating unit tests during development on time to develop as well as time spent in maintenance activities?

    - by jgauffin
    I'm a consultant and I am going to introduce unit tests to all developers at my client site. My goal is to ensure that all new applications should have unit tests for all classes created. The client has a problem with high maintenance costs from fixing bugs in their existing applications. Their applications have a life span from between 5-15 years in which they continuously add new features. I'm quite confident that they will benefit greatly from starting with unit tests. I'm interested in the effect of unit tests on the time and cost of development: How much time will writing unit tests as part of the development process add? How much time will be saved in maintenance activities (testing and debugging) by having good unit tests?

    Read the article

  • Does using a country domain (TLD) negatively affect a site's search ranking if the site content isn't country specific?

    - by Alexizamerican
    I have a website in which the content is not country specific. The TLD is currently .it but the company is hosted and based in the United States. I'm wondering if a .it domain will negatively affect the site's search rankings. Is it better to use a .com TLD? For example (I don't actually own these domains), would the domain love.it have a worse ranking than loveit.com? (Assuming the content of the sites is the same) I've been searching everywhere for an answer with not much luck. Any advice would be very helpful.

    Read the article

  • Why is a fully transparent pixel still rendered?

    - by Mr Bell
    I am trying to make a pixel shader that achieves an effect similar to this video: http://www.youtube.com/watch?v=f1uZvurrhig&feature=related My basic idea is to render the scene to a temp render target, then: render the previously rendered image with a slight fade onto another temp render target; draw the current scene on top of that; draw the results onto a render target that persists between draws; draw the results onto the screen. But I am having problems with the fading portion. If I have my pixel shader return a color with its A component set to 0, shouldn't that basically amount to drawing nothing? (Assuming that the sprite batch blend mode is set to AlphaBlend) To test this I have my pixel shader return a transparent red color. Instead of nothing being drawn, it draws a partially transparent red box. I hope that my question makes sense, but if it doesn't, please ask me to clarify. Here is the drawing code: public override void Draw(GameTime gameTime) { GraphicsDevice.SamplerStates[1] = SamplerState.PointWrap; drawImageOnClearedRenderTarget(presentationTarget, tempRenderTarget, fadeEffect); drawImageOnRenderTarget(sceneRenderTarget, tempRenderTarget); drawImageOnClearedRenderTarget(tempRenderTarget, presentationTarget); GraphicsDevice.SetRenderTarget(null); drawImage(backgroundTexture); drawImage(presentationTarget); base.Draw(gameTime); } private void drawImage(Texture2D image, Effect effect = null) { spriteBatch.Begin(0, BlendState.AlphaBlend, SamplerState.PointWrap, null, null, effect); spriteBatch.Draw(image, new Rectangle(0, 0, width, height), Color.White); spriteBatch.End(); } private void drawImageOnRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null) { GraphicsDevice.SetRenderTarget(target); drawImage(image, effect); } private void drawImageOnClearedRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null) { GraphicsDevice.SetRenderTarget(target); GraphicsDevice.Clear(Color.Transparent); drawImage(image, effect); } Here is the fade pixel shader: sampler TextureSampler : register(s0); float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0 { float4 c = 0; c = tex2D(TextureSampler, texCoord); //c.a = clamp(c.a - 0.05, 0, 1); c.r = 1; c.g = 0; c.b = 0; c.a = 0; return c; } technique Fade { pass Pass1 { PixelShader = compile ps_2_0 PixelShaderFunction(); } }
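
    A hedged explanation worth checking: in XNA 4.0, BlendState.AlphaBlend blends as source + dest * (1 - source.a), i.e. it expects premultiplied alpha, so returning (1, 0, 0, 0) still adds full-strength red even though its alpha is zero. Either premultiply in the shader or switch the SpriteBatch to BlendState.NonPremultiplied. A sketch of the fade shader with premultiplied output:

      // Sketch: output premultiplied alpha so a fully faded pixel contributes nothing.
      sampler TextureSampler : register(s0);

      float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
      {
          float4 c = tex2D(TextureSampler, texCoord);
          float fade = clamp(c.a - 0.05, 0, 1);   // same fade step the original comment suggests
          return float4(c.rgb * fade, fade);      // rgb scaled by alpha => alpha 0 really draws nothing
      }

      technique Fade
      {
          pass Pass1
          {
              PixelShader = compile ps_2_0 PixelShaderFunction();
          }
      }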

    Read the article

  • How to make unit selection circles merge?

    - by MaT
    I would like to know how to make this effect of merged circle selection. Here are images to illustrate: Basically I'm looking for this effect: How can the merge effect of the circles be achieved? I didn't find any explanation concerning this effect. I know that to project those textures I can develop a decal system, but I don't know how to create the merging effect. If possible, I'm looking for a purely shader-based solution.
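
    One shader-oriented sketch (an assumption, not a recipe from the post): render every unit's selection circle as a soft radial blob into an off-screen mask so that overlapping blobs add up, then draw a pass that outlines the region where the accumulated value crosses a threshold - overlapping circles then share one merged, metaball-style outline.

      // Sketch of the second pass: turn the accumulated blob mask into a merged outline.
      sampler BlobMask : register(s0);   // off-screen target the circles were accumulated into

      float4 OutlinePS(float2 uv : TEXCOORD0) : COLOR0
      {
          float v = tex2D(BlobMask, uv).r;                // accumulated "selection" intensity
          // only pixels just inside the merged shape's edge get the ring colour
          float ring = smoothstep(0.45, 0.50, v) * (1.0 - smoothstep(0.55, 0.60, v));
          return float4(0.2, 1.0, 0.2, ring);             // greenish selection ring
      }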

    Read the article

  • Windows 7 search not finding files

    - by Rob Nicholson
    Can anyone please explain this quirk in Windows 7 search (not a big fan of it - I preferred the XP method, or at least having both). With Outlook, you sometimes have to find and delete your OST file. It resides in the user's profile folder. How come searching the entire C: drive for *.ost files works - they are in c:\Users\rob.nicholson\appdata somewhere - but starting the search from c:\Users\rob.nicholson fails to find the files? Cheers, Rob.

    Read the article

  • Why can't I load a simple pixel shader effect (.fx) file in XNA?

    - by Mehdi Bugnard
    I just want to load a simple *.fx file into my project to make a (pixel shader) effect. But whenever I try to compile my project, I get the following error in the Visual Studio Error List: Errors compiling .. ID3DXEffectCompiler: There were no techniques ID3DXEffectCompiler: Compilation failed I already searched on Google and found many people with the same problem, and I thought it might be a problem of encoding, with the line endings ('\n') not being recognized. I tried copying and pasting into Notepad and saving with ASCII or UTF-8 encoding, but the result is always the same. Do you have an idea, please? Thanks a lot :-) Here is my .fx file: sampler BaseTexture : register(s0); sampler MaskTexture : register(s1) { addressU = Clamp; addressV = Clamp; }; //All of these variables are pixel values //Feel free to replace with float2 variables float MaskLocationX; float MaskLocationY; float MaskWidth; float MaskHeight; float BaseTextureLocationX; //This is where your texture is to be drawn float BaseTextureLocationY; //texCoord is different, it is the current pixel float BaseTextureWidth; float BaseTextureHeight; float4 main(float2 texCoord : TEXCOORD0) : COLOR0 { //We need to calculate where in terms of percentage to sample from the MaskTexture float maskPixelX = texCoord.x * BaseTextureWidth + BaseTextureLocationX; float maskPixelY = texCoord.y * BaseTextureHeight + BaseTextureLocationY; float2 maskCoord = float2((maskPixelX - MaskLocationX) / MaskWidth, (maskPixelY - MaskLocationY) / MaskHeight); float4 bitMask = tex2D(MaskTexture, maskCoord); float4 tex = tex2D(BaseTexture, texCoord); //It is a good idea to avoid conditional statements in a pixel shader if you can use math instead. return tex * (bitMask.a); //Alternate calculation to invert the mask, you could make this a parameter too if you wanted //return tex * (1.0 - bitMask.a); }
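
    Before chasing the encoding, note what the first message literally says: the effect compiler did not find a technique in the file, and the .fx above only defines main(). A hedged first thing to try is appending a technique block that points at that function, for example:

      // Sketch: give the effect compiler a technique to compile (the name is arbitrary).
      technique MaskTechnique
      {
          pass Pass1
          {
              PixelShader = compile ps_2_0 main();
          }
      }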

    Read the article

  • Image 1 becomes image 2 with sliding effect from left to right?

    - by Paul
    I would like to show a second image appearing while a "door" is closing on my character. I've got my character in the middle of the screen and a door coming from the left. When the door passes my character, I would like to have this second image appearing little by little. So far, I've gotten by with fadingOut the character and then fadingIn my second image of the character at the same position when the door is completely closed, but I would like to have both of them at the same time. (the effect that image 1 becomes image 2 when the door is sliding from left to right). Would you know how to do this with Cocos2d? Here are the images : at first, the character is blue, and the door is coming from the left : Then, behind the black door, the character becomes red, but only behind this door, so it stays blue when the door is not on him, and will become completely red when the door passes the character : EDIT : with this code, the black door hides the red and blue rectangles : (And if i add each of my layers at a different depth, and only use GL_LESS, same thing) blue.position = ccp( size.width*0.5 , size.height/2 ); red.position = ccp( size.width*0.46 , size.height/2 ); black.position = ccp( size.width*0.1 , size.height/2 ); glEnable(GL_DEPTH_TEST); [batch addChild:red z:0]; [batch addChild:black z:2]; glDepthFunc(GL_GREATER); [batch addChild:blue z:1]; glDepthFunc(GL_LESS); id action1 = [CCMoveTo actionWithDuration:3 position:ccp(size.width,size.height/2)]; [black runAction: [CCSequence actions:action1, nil]];

    Read the article

  • Does anybody know of any resources to achieve this particular "2.5D" isometric engine effect?

    - by Craig Whitley
    I understand this is a little vague, but I was hoping somebody might be able to describe a high-level workflow or link to a resource to be able to achieve a specific isometric "2.5D" tile engine effect. I fell in love with this engine: http://www.youtube.com/watch?v=-Q6ISVaM5Ww - especially with the lighting and the shaders! He has a brief description of how he achieved what he did, but I could really use a brief flow of where you would start, what you would read up on and learn, and the logical order to implement these things. A few specific questions: 1) Is there a heightmap on the ground texture that lets the light reflect brighter on certain parts of it? 2) "..using a special material which calculates the world-space normal vectors of every pixel.." - is this some "magic" special material he has created himself, or can you hazard a guess at what he means? 3) In relation to the above quote - what does he mean by 'world-space normal vectors of every pixel'? 4) I'm guessing I'm being a little bit optimistic when I ask if there's any 'all-in-one' tutorial out there? :)

    Read the article

  • Memory is full with vertex buffer

    - by Christian Frantz
    I'm having a pretty strange problem that I didn't think I'd run into. I was able to store a 50x50 grid in one vertex buffer finally, in hopes of better performance. Before I had each cube have an individual vertex buffer and with 4 50x50 grids, this slowed down my game tremendously. But it still ran. With 4 50x50 grids with my new code, that's only 4 vertex buffers. With the 4 vertex buffers, I get a memory error. When I load the game with 1 grid, it takes forever to load and with my previous version, it started up right away. So I don't know if I'm storing chunks wrong or what but it stumped me -.- for (int x = 0; x < 50; x++) { for (int z = 0; z < 50; z++) { for (int y = 0; y <= map[x, z]; y++) { SetUpVertices(); SetUpIndices(); cubes.Add(new Cube(device, new Vector3(x, map[x, z] - y, z), grass)); } } } vertexBuffer = new VertexBuffer(device, typeof(VertexPositionTexture), vertices.Count(), BufferUsage.WriteOnly); vertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray()); indexBuffer = new IndexBuffer(device, typeof(short), indices.Count(), BufferUsage.WriteOnly); indexBuffer.SetData(indices.ToArray()); Thats how theyre stored. The array I'm reading from is a byte array which defines the coordinates of my map. Now with my old version, I used the same loading from an array so that hasn't changed. The only difference is the one vertex buffer instead of 2500 for a 50x50 grid. cubes is just a normal list that holds all my cubes for the vertex buffer. Another thing that just came to mind would be my draw calls. If I'm setting an effect for each cube in my cube list, that's probably going to take a lot of memory. How can I avoid doing this? I need the foreach method to set my cubes to the right position foreach (Cube block in cube.cubes) { effect.VertexColorEnabled = false; effect.TextureEnabled = true; Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f)); Matrix scale = Matrix.CreateScale(1f); Matrix translate = Matrix.CreateTranslation(block.cubePosition); effect.World = center * scale * translate; effect.View = cam.view; effect.Projection = cam.proj; effect.FogEnabled = false; effect.FogColor = Color.CornflowerBlue.ToVector3(); effect.FogStart = 1.0f; effect.FogEnd = 50.0f; cube.Draw(effect); noc++; }
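
    On the draw-call worry at the end: since all cubes in a chunk now share one vertex buffer, a hedged sketch (assuming a BasicEffect-style effect and that each cube's position is baked into its vertices when the shared buffer is filled) is to set the effect parameters once per chunk and issue a single draw, instead of looping over every cube:

      // Sketch: one draw per chunk; per-cube world matrices are no longer needed because
      // the cube positions were written into the vertices when the buffer was built.
      effect.TextureEnabled = true;
      effect.World = Matrix.Identity;
      effect.View = cam.view;
      effect.Projection = cam.proj;

      device.SetVertexBuffer(vertexBuffer);
      device.Indices = indexBuffer;

      foreach (EffectPass pass in effect.CurrentTechnique.Passes)
      {
          pass.Apply();
          device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
              0, 0, vertices.Count, 0, indices.Count / 3);
      }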

    Read the article

  • simulate backspace key with java.awt.Robot

    - by Tyler
    There seems to be an issue simulating the backspace key with java.awt.Robot. This thread seems to confirm this but it does not propose a solution. This works: Robot rob = new Robot(); rob.keyPress(KeyEvent.VK_A); rob.keyRelease(KeyEvent.VK_A); This doesn't: Robot rob = new Robot(); rob.keyPress(KeyEvent.VK_BACK_SPACE); rob.keyRelease(KeyEvent.VK_BACK_SPACE); Any ideas? Thanks!
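
    For what it's worth, a hedged sketch of the usual things to try when a synthesized key event appears to be swallowed - give the Robot an auto-delay, wait for the event queue between press and release, and make sure a text field has focus before the keystroke fires. (These are assumptions; the linked thread doesn't confirm a definitive fix.)

      import java.awt.Robot;
      import java.awt.event.KeyEvent;

      public class BackspaceDemo {
          public static void main(String[] args) throws Exception {
              Robot rob = new Robot();
              rob.setAutoDelay(50);          // small pause after every generated event
              Thread.sleep(2000);            // time to click into a text field first

              rob.keyPress(KeyEvent.VK_A);
              rob.keyRelease(KeyEvent.VK_A);

              rob.keyPress(KeyEvent.VK_BACK_SPACE);
              rob.waitForIdle();             // let the press be processed before releasing
              rob.keyRelease(KeyEvent.VK_BACK_SPACE);
          }
      }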

    Read the article

  • How to apply effects that occur (or change) over time to characters in a game?

    - by Joshua Harris
    So assume that I have a system that applies Effects to Characters like so: public class Character { private Collection<Effect> _effects; public void AddEffect (Effect e) { e.ApplyTo(this); _effects.Add(e); } public void RemoveEffect (Effect e) { e.RemoveFrom(this); _effects.Remove(e); } } public interface Effect { public void ApplyTo (Character character); public void RemoveFrom (Character character); } Example Effect: Armor Buff for 5 seconds. void someFunction() { // Do Stuff ... Timer armorTimer = new Timer(5 seconds); ArmorBuff armorbuff = new ArmorBuff(); character.AddEffect(armorBuff); armorTimer.Start(); // Do more stuff ... } // Some where else in code public void ArmorTimer_Complete() { character.RemoveEffect(armorBuff); } public class ArmorBuff implements Effect { public void applyTo(Character character) { character.changeArmor(20); } public void removeFrom(Character character) { character.changeArmor(-20); } } Ok, so this example would buff the Characters armor for 5 seconds. Easy to get working. But what about effects that change over the duration of the effect being applied. Two examples come to mind: Damage Over Time: 200 damage every second for 3 seconds. I could mimic this by applying an Effect that lasts for 1 second and has a counter set to 3, then when it is removed it could deal 200 damage, clone itself, decrement the counter of the clone, and apply the clone to the character. If it repeats this until the counter is 0, then you got a damage over time ability. I'm not a huge fan of this approach, but it does describe the behavior exactly. Degenerating Speed Boost: Gain a speed boost that degrades over 3 seconds until you return to your normal speed. This is a bit harder. I can basically do the same thing as above except having timers set to some portion of a second, such that they occur fast enough to give the appearance of degenerating smoothly over time (even though they are really just stepping down incrementally). I feel like you could get away with only 12 steps over a second (maybe less, I would have to test it and see), but this doesn't seem very elegant to me. The only other way to implement this effect would be to change the system so that the Character checks the _effects collection for effects that alter any of the properties any time that they are being used. I could handle this in functions like getCurrentSpeed() and getCurrentArmor(), but you can imagine how much of a hassle it would be to have that kind of overhead every time you want to do a calculation with movement speed (which would be every time you move your character). Is there a better way to deal with these kinds of effects or events?
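
    One common alternative (a sketch building on the interfaces above, not a drop-in answer): give Effect an update method driven by the game loop's delta time and have the Character tick its effects once per frame; damage over time and a degenerating speed boost then become ordinary per-frame updates instead of chains of timers or per-query lookups.

      // Sketch: effects are ticked by the character each frame; takeDamage() is a hypothetical method.
      public interface Effect {
          void applyTo(Character character);
          void removeFrom(Character character);
          boolean update(Character character, float dt);   // advance by dt seconds; false once expired
      }

      public class DamageOverTime implements Effect {
          private float timeLeft = 3f, tickAccumulator = 0f;
          public void applyTo(Character character) { }
          public void removeFrom(Character character) { }
          public boolean update(Character character, float dt) {
              timeLeft -= dt;
              tickAccumulator += dt;
              while (tickAccumulator >= 1f) {       // 200 damage every full second
                  tickAccumulator -= 1f;
                  character.takeDamage(200);
              }
              return timeLeft > 0f;
          }
      }

      // Inside Character, called once per frame by the game loop (uses java.util.Iterator):
      public void updateEffects(float dt) {
          for (Iterator<Effect> it = _effects.iterator(); it.hasNext(); ) {
              Effect e = it.next();
              if (!e.update(this, dt)) {
                  e.removeFrom(this);
                  it.remove();
              }
          }
      }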

    Read the article

  • How to create a hover effect in a newsletter?

    - by Moak
    I'm creating a newsletter, and I want to have panels that change background-color on mouse over. Seeing as the newsletter won't have a head, I am defining all styles inline. I'm pretty sure most popular mail clients will block JS, so I was wondering if I can define a hover effect in the style attribute. Or is there any other solution to achieve this effect? Peace

    Read the article

  • Collation errors in business

    - by Rob Farley
    At the PASS Summit last month, I did a set (Lightning Talk) about collation, and in particular, the difference between the “English” spoken by people from the US, Australia and the UK. One of the examples I gave was that in the US drivers might stop for gas, whereas in Australia, they just open the window a little. This is what’s known as a paraprosdokian, where you suddenly realise you misunderstood the first part of the sentence, based on what was said in the second. My current favourite is Emo Phillip’s line “I like to play chess with old men in the park, but it can be hard to find thirty-two of them.” Essentially, this a collation error, one that good comedians can get mileage from. Unfortunately, collation is at its worst when we have a computer comparing two things in different collations. They might look the same, and sound the same, but if one of the things is in SQL English, and the other one is in Windows English, the poor database server (with no sense of humour) will get suspicious of developers (who all have senses of humour, obviously), and declare a collation error, worried that it might not realise some nuance of the language. One example is the common scenario of a case-sensitive collation and a case-insensitive one. One may think that “Rob” and “rob” are the same, but the other might not. Clearly one of them is my name, and the other is a verb which means to steal (people called “Nick” have the same problem, of course), but I have no idea whether “Rob” and “rob” should be considered the same or not – it depends on the collation. I told a lie before – collation isn’t at its worst in the computer world, because the computer has the sense to complain about the collation issue. People don’t. People will say something, with their own understanding of what they mean. Other people will listen, and apply their own collation to it. I remember when someone was asking me about a situation which had annoyed me. They asked if I was ‘pissed’, and I said yes. I meant that I was annoyed, but they were asking if I’d been drinking. It took a moment for us to realise the misunderstanding. In business, the problem is escalated. A business user may explain something in a particular way, using terminology that they understand, but using words that mean something else to a technical person. I remember a situation with a checkbox on a form (back in VB6 days from memory). It was used to indicate that something was approved, and indicated whether a particular database field should store True or False – nothing more. However, the client understood it to mean that an entire workflow system would be implemented, with different users have permission to approve items and more. The project manager I’d just taken over from clearly hadn’t appreciated that, and I faced a situation of explaining the misunderstanding to the client. Lots of fun... Collation errors aren’t just a database setting that you can ignore. You need to remember that Americans speak a different type of English to Aussies and Poms, and techies speak a different language to their clients.
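
    For readers who want the database version of the joke, a small T-SQL sketch of the "Rob" vs "rob" case - the same comparison flips depending on whether a case-sensitive or case-insensitive collation is applied, and comparing columns whose collations disagree is what produces the familiar "cannot resolve the collation conflict" error:

      -- Case-insensitive collation: 'Rob' and 'rob' are the same value.
      SELECT CASE WHEN 'Rob' = 'rob' COLLATE Latin1_General_CI_AS THEN 'same' ELSE 'different' END;  -- same

      -- Case-sensitive collation: now they are different.
      SELECT CASE WHEN 'Rob' = 'rob' COLLATE Latin1_General_CS_AS THEN 'same' ELSE 'different' END;  -- different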

    Read the article

  • Why / how does XNA's right-handed coordinate system affect anything if you can specify near/far Z values?

    - by vargonian
    I am told repeatedly that XNA Game Studio uses a right-handed coordinate system, and I understand the difference between a right-handed and left-handed coordinate system. But given that you can use a method like Matrix.CreateOrthographicOffCenter to create your own custom projection matrix, specifying the left, right, top, bottom, zNear and zFar values, when does XNA's coordinate system come into play? For example, I'm told that in a right-handed coordinate system, increasingly negative Z values go "into" the screen. But I can easily create my projection matrix like this: Matrix.CreateOrthographicOffCenter(left, right, bottom, top, 0.1f, 10000f); I've now specified a lower value for the near Z than the far Z, which, as I understand it, means that positive Z now goes into the screen. I can similarly tweak the values of left/right/top/bottom to achieve similar results. If specifying a lower zNear than zFar value doesn't affect the Z direction of the coordinate system, what does it do? And when is the right-handed coordinate system enforced? The reason I ask is that I'm trying to implement a 2.5D camera that supports zooming and rotation, and I've spent two full days encountering one unexpected result after another.

    Read the article
