Search Results

Search found 25550 results on 1022 pages for 'mere development'.


  • DirectX11 CreateWICTextureFromMemory Using PNG

    - by seethru
    I currently have textures loading with CreateWICTextureFromFile, but I'd like a little more control over the process, and I'd like to store images in byte form in a resource loader. Below are two sets of test code that return two different errors, and I'm looking for any insight into a possible solution.

        ID3D11ShaderResourceView* srv;
        std::basic_ifstream<unsigned char> file("image.png", std::ios::binary);
        file.seekg(0, std::ios::end);
        int length = file.tellg();
        file.seekg(0, std::ios::beg);
        unsigned char* buffer = new unsigned char[length];
        file.read(&buffer[0], length);
        file.close();

        HRESULT hr;
        hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(),
            &buffer[0], sizeof(buffer), nullptr, &srv, NULL);

    The above code returns "Component not found."

        std::ifstream file;
        ID3D11ShaderResourceView* srv;
        file.open("../Assets/Textures/osg.png", std::ios::binary);
        file.seekg(0, std::ios::end);
        int length = file.tellg();
        file.seekg(0, std::ios::beg);
        std::vector<char> buffer(length);
        file.read(&buffer[0], length);
        file.close();

        HRESULT hr;
        hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(),
            (const uint8_t*)&buffer[0], sizeof(buffer), nullptr, &srv, NULL);

    This version returns that the image format is unknown. I'm clearly doing something wrong here; any help is greatly appreciated. I've tried to find anything similar on Stack Overflow and Google, to no avail.
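    In both snippets the likely culprit is the size argument: sizeof(buffer) evaluates to the size of the pointer in the first case and of the std::vector object in the second, not the number of bytes read, so WIC never receives a complete PNG and cannot match a decoder. A minimal sketch of the corrected call, assuming the surrounding code from the question:

        // Pass the byte count that was actually read, not sizeof(buffer).
        hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(),
            reinterpret_cast<const uint8_t*>(&buffer[0]),
            static_cast<size_t>(length),   // actual data size in bytes
            nullptr, &srv);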

  • Hue, saturation, brightness, contrast effect in hlsl

    - by Vibhore Tanwer
    I am new to pixel shaders, and I am trying to write a simple brightness/contrast/hue/saturation effect. Brightness, contrast and saturation are working fine; the problem is hue. If I apply a hue between -1 and 1 it seems to work, but I need to apply hue values between -180 and 180, the way Paint.NET does. Here is my code:

        // Amount to shift the Hue, range 0 to 6
        float Hue;
        float Brightness;
        float Contrast;
        float Saturation;
        float Alpha;
        sampler Samp : register(S0);

        // Converts the rgb value to hsv, where H's range is -1 to 5
        float3 rgb_to_hsv(float3 RGB)
        {
            float r = RGB.x;
            float g = RGB.y;
            float b = RGB.z;
            float minChannel = min(r, min(g, b));
            float maxChannel = max(r, max(g, b));
            float h = 0;
            float s = 0;
            float v = maxChannel;
            float delta = maxChannel - minChannel;
            if (delta != 0)
            {
                s = delta / v;
                if (r == v) h = (g - b) / delta;
                else if (g == v) h = 2 + (b - r) / delta;
                else if (b == v) h = 4 + (r - g) / delta;
            }
            return float3(h, s, v);
        }

        float3 hsv_to_rgb(float3 HSV)
        {
            float3 RGB = HSV.z;
            float h = HSV.x;
            float s = HSV.y;
            float v = HSV.z;
            float i = floor(h);
            float f = h - i;
            float p = (1.0 - s);
            float q = (1.0 - s * f);
            float t = (1.0 - s * (1 - f));
            if (i == 0) { RGB = float3(1, t, p); }
            else if (i == 1) { RGB = float3(q, 1, p); }
            else if (i == 2) { RGB = float3(p, 1, t); }
            else if (i == 3) { RGB = float3(p, q, 1); }
            else if (i == 4) { RGB = float3(t, p, 1); }
            else /* i == -1 */ { RGB = float3(1, p, q); }
            RGB *= v;
            return RGB;
        }

        float4 mainPS(float2 uv : TEXCOORD) : COLOR
        {
            float4 col = tex2D(Samp, uv);
            float3 hsv = rgb_to_hsv(col.xyz);
            hsv.x += Hue;
            // Put the hue back to the -1 to 5 range
            //if (hsv.x > 5) { hsv.x -= 6.0; }
            hsv = hsv_to_rgb(hsv);
            float4 newColor = float4(hsv, col.w);
            float4 colorWithBrightnessAndContrast = newColor;
            colorWithBrightnessAndContrast.rgb /= colorWithBrightnessAndContrast.a;
            colorWithBrightnessAndContrast.rgb = colorWithBrightnessAndContrast.rgb + Brightness;
            colorWithBrightnessAndContrast.rgb = ((colorWithBrightnessAndContrast.rgb - 0.5f) * max(Contrast + 1.0, 0)) + 0.5f;
            colorWithBrightnessAndContrast.rgb *= colorWithBrightnessAndContrast.a;
            float greyscale = dot(colorWithBrightnessAndContrast.rgb, float3(0.3, 0.59, 0.11));
            colorWithBrightnessAndContrast.rgb = lerp(greyscale, colorWithBrightnessAndContrast.rgb, col.a * (Saturation + 1.0));
            return colorWithBrightnessAndContrast;
        }

        technique TransformTexture
        {
            pass p0
            {
                PixelShader = compile ps_2_0 mainPS();
            }
        }

    Can anyone tell me what I am doing wrong, or offer any suggestions? Any help will be of great value. EDIT: Comparing images of the effect at hue 180: on the left-hand side, the effect I got with @teodron's answer; on the right-hand side, the effect Paint.NET gives, which I'm trying to reproduce.
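    One thing worth noting about the code above: rgb_to_hsv returns hue in sixths of the colour wheel (the -1 to 5 range), so a shift expressed in degrees has to be rescaled and wrapped before hsv_to_rgb runs; otherwise only shifts in roughly the -1 to 1 range behave sensibly. A minimal sketch of the adjustment, assuming Hue is set from the application in degrees (-180 to 180):

        // 360 degrees == 6 hue sectors, so divide by 60 to convert.
        hsv.x += Hue / 60.0;
        // Wrap into [0, 6) so floor(h) stays a valid sector index.
        hsv.x = fmod(hsv.x + 6.0, 6.0);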

  • OpenTK C# Loading Model (3DS?) for display and animation

    - by Randomman159
    For the last few days I have been searching for a model importer that supports skeletal animation, for use with OpenTK (C#, OpenGL). I am avoiding writing it myself, firstly because I am only just learning OpenGL, but also because I don't see the necessity of reinventing the wheel. What do you use to import your models? I find it interesting that 90% of OpenTK users import models for various projects, yet I haven't found a single working importer. Could any of you share your code or point me in the direction of a good loader? 3DS files would be best, but anything that can be exported from an Autodesk program would be fine, as long as it supports skeletal animation. Thanks for any help :)
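    One route that comes up often for this is AssimpNet, the C# binding of the Open Asset Import Library, which reads 3DS among many other formats and exposes bone weights and offset matrices for skinning. A minimal sketch of pulling mesh and bone data out of it, assuming the AssimpNet package (API names may differ between versions, so treat this as a starting point to verify):

        using Assimp;

        var importer = new AssimpContext();
        Scene scene = importer.ImportFile("model.3ds",
            PostProcessSteps.Triangulate | PostProcessSteps.GenerateSmoothNormals);

        foreach (Mesh mesh in scene.Meshes)
        {
            // mesh.Vertices / mesh.Normals / mesh.GetIndices() feed OpenTK VBOs directly.
            foreach (Bone bone in mesh.Bones)
            {
                // bone.OffsetMatrix and bone.VertexWeights drive the skinning pass.
            }
        }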

  • Updating resources in SharpDX - why can I not map a dynamic texture?

    - by sebf
    I am trying to map a Texture2D resource in DirectX11 via SharpDX. The resource is declared as a ShaderResource, with Default usage and the 'Write' CPU flag specified. My call, however, fails with a generic exception from SharpDX:

        _Parent.Context.MapSubresource(_Resource, 0, SharpDX.Direct3D11.MapMode.Write,
            SharpDX.Direct3D11.MapFlags.None, out stream);

    I see from this question that mapping is supported. The MSDN docs and another question hint that instead of using Context.MapSubresource() I should be using Texture2D.Map(); however, the DirectX11 Texture2D class does not define Map() (though it does exist on the DX10 equivalent). If I call the above with MapMode.WriteDiscard, the call succeeds, but in that case the previous content of the texture is lost, which is no good when I only want to update a section of it. Has the Map() method been removed in DirectX11, or am I looking in the wrong place? Is the MapSubresource() method unsuitable, or am I using it wrong?
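    For what it's worth, Direct3D 11 tightened the mapping rules: Default-usage textures generally cannot be mapped at all, and Dynamic usage only allows WriteDiscard/WriteNoOverwrite. The usual way to update part of a Default-usage texture is UpdateSubresource with a ResourceRegion, which leaves the rest of the texture intact. A SharpDX-flavoured sketch, assuming 32-bit RGBA pixel data in an array (overload shapes vary a little between SharpDX versions):

        // Update a 64x64 region whose top-left corner is at (16, 16) in mip 0.
        // ResourceRegion arguments: left, top, front, right, bottom, back.
        var region = new SharpDX.Direct3D11.ResourceRegion(16, 16, 0, 80, 80, 1);
        // rowPitch = region width * bytes per pixel (64 * 4 here).
        _Parent.Context.UpdateSubresource(pixelData, _Resource, 0, 64 * 4, 0, region);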

  • Cutting objects and applying texture to cut. Unity3d/C#

    - by Timothy Williams
    Basically what I'm trying to do is figure out how to calculate real-time cutting of objects, and how to apply a texture to the cut. I found some good scripts, but most of them have been abandoned and aren't fully working yet. Applying textures: http://forum.unity3d.com/threads/75949-Mesh-Real-Cutting?highlight=mesh+real+cutting Cutting: http://forum.unity3d.com/threads/78594-Object-Cutter Another (free) cutter (I'm not entirely sure how this one will handle cutting complex meshes): http://forum.unity3d.com/threads/69992-fake-slicer?p=449114&viewfull=1#post449114 My plan as of right now is to combine links 1 & 2 or 1 & 3 programming-wise. What I'm asking for here is any advice on how to proceed: links to asset store packages, or other code that shows how to accomplish something complex like this.

  • Collision issues with quadtree [on hold]

    - by QuantumGamer
    So I implemented a quadtree in Java for my 2D game, and everything works fine except my collision detection algorithm, which checks whether an object has hit another object and which side it hit. My problem: about 80% of the time the collision algorithm works, but sometimes the objects just go through each other. Here is my method:

        private void checkBulletCollision(ArrayList<GameObject> object) {
            quad.clear(); // quad is the quadtree object
            for (int i = 0; i < object.size(); i++) {
                if (object.get(i).getId() == ObjectId.Bullet)
                    // inserts the object into the quadtree
                    quad.insert((Bullet) object.get(i));
            }
            ArrayList<GameObject> returnObjects = new ArrayList<>();
            // Uses the quadtree to determine how many other
            // bullets this one can collide with
            for (int i = 0; i < object.size(); i++) {
                returnObjects.clear();
                if (object.get(i).getId() == ObjectId.Bullet) {
                    quad.retrieve(returnObjects, object.get(i).getBoundsAll());
                    for (int k = 0; k < returnObjects.size(); k++) {
                        Bullet bullet = (Bullet) returnObjects.get(k);
                        if (getBoundsTop().intersects(bullet.getBoundsBottom())) {
                            vy = speed;
                            bullet.vy = -speed;
                        }
                        if (getBoundsBottom().intersects(bullet.getBoundsTop())) {
                            vy = -speed;
                            bullet.vy = speed;
                        }
                        if (getBoundsLeft().intersects(bullet.getBoundsRight())) {
                            vx = speed;
                            bullet.vx = -speed;
                        }
                        if (getBoundsRight().intersects(bullet.getBoundsLeft())) {
                            vx = -speed;
                            bullet.vx = speed;
                        }
                    }
                }
            }
        }

    Any help would be appreciated. Thanks in advance.
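    Intermittent pass-through like this is usually tunnelling rather than a quadtree fault: with discrete checks, a bullet that travels more than its own width in one update can jump clean over the overlap test. One common fix, sketched under the assumption that vx/vy are per-update velocities and bullets are roughly bulletSize pixels across (names here are illustrative), is to substep the movement:

        // Never move farther than one bullet-width before re-testing.
        int steps = Math.max(1, (int) Math.ceil(
                Math.max(Math.abs(vx), Math.abs(vy)) / (double) bulletSize));
        for (int s = 0; s < steps; s++) {
            x += vx / steps;
            y += vy / steps;
            checkBulletCollision(objects); // re-run the quadtree pass per substep
        }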

  • Working on a Bluetooth multiplayer game in Unity for Windows Phone 8

    - by Seven 007
    I created a small racing game and I am trying to add Bluetooth multiplayer to it. Using multiplayer plugins for Android, I completed the Android version, but I can't find any plugins for Windows Phone 8, so I tried to do it myself. The MSDN example "Bluetooth App to App Communication" wasn't much use. If anyone has made a Bluetooth multiplayer game for Windows Phone, please guide me toward the target, or else give a brief explanation or links on making Bluetooth multiplayer games for Windows Phone 8. Thanks, Seven 007
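    The MSDN sample mentioned above is built on Windows.Networking.Proximity; the heart of WP8 app-to-app Bluetooth is PeerFinder plus a StreamSocket, and in a Unity title that code has to live in a native WP8 plugin rather than a game script. A minimal sketch of the connect side, assuming both phones are already paired and run the same app (error handling omitted):

        using Windows.Networking.Proximity;
        using Windows.Networking.Sockets;

        async Task<StreamSocket> ConnectToPeerAsync()
        {
            // Browse paired devices instead of advertising/discovering.
            PeerFinder.AlternateIdentities["Bluetooth:Paired"] = "";
            var peers = await PeerFinder.FindAllPeersAsync();
            // socket.InputStream / socket.OutputStream carry the game's messages.
            return await PeerFinder.ConnectAsync(peers[0]);
        }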

  • How do I make an on-screen HUD in libgdx?

    - by Devin Carless
    I'm new to libgdx, and I find I get stumped by the simplest of things. It seems to want me to do things a specific way, but the documentation won't tell me what that is. I want to make a very simple 2D game in which the player controls a spaceship. The mouse wheel zooms in and out, and information and controls are displayed on the screen. But I can't seem to make the mouse wheel NOT zoom the UI. I've tried futzing with the projection matrices in between draws. Here's my (current) code:

        public class PlayStage extends Stage {
            ...
            public void draw() {
                // tell the camera to update its matrices.
                camera.update();
                // tell the SpriteBatch to render in the
                // coordinate system specified by the camera.
                spriteBatch.setProjectionMatrix(camera.combined);
                spriteBatch.begin();
                aButton.draw(spriteBatch, 1F);
                playerShip.draw(spriteBatch, 1F);
                spriteBatch.end();
            }
        }

    camera.zoom is set by scrolled(int amount). I've tried about a dozen variations on the theme of changing the camera's projection matrix after the button is drawn but before the ship is, but no matter what I do, the same things happen to both the button and the ship. So: what is the usual libgdx way of implementing an on-screen UI that isn't transformed by the camera's projection matrix/zoom?
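    The usual libgdx answer is two cameras: the zoomable world camera, and a second OrthographicCamera for the HUD that scrolled() never touches, with the batch's projection switched between two begin/end passes. A sketch against the code above, assuming a hudCamera field set up once at screen resolution:

        // World pass: affected by zoom.
        camera.update();
        spriteBatch.setProjectionMatrix(camera.combined);
        spriteBatch.begin();
        playerShip.draw(spriteBatch, 1F);
        spriteBatch.end();

        // HUD pass: fixed screen-space camera, untouched by scrolled().
        hudCamera.update();
        spriteBatch.setProjectionMatrix(hudCamera.combined);
        spriteBatch.begin();
        aButton.draw(spriteBatch, 1F);
        spriteBatch.end();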

  • Why is mesh baking causing huge performance spikes?

    - by jellyfication
    A couple of seconds into gameplay on my Android device, I see huge performance spikes caused by "Mesh.Bake Scaled Mesh PhysX CollisionData". In my game, a whole level is a parent object containing multiple rigidbodies with mesh colliders. Every FixedUpdate(), the parent object rotates around the player, and rotating the world causes mesh scaling. Here is the code that handles world rotation:

        private void Update() {
            input.update();
            Vector3 currentInput = input.GetDirection();
            worldParent.rotation = initialRotation;
            worldParent.DetachChildren();
            worldParent.position = transform.position;
            world.parent = worldParent;
            worldParent.Rotate(Vector3.right, currentInput.x * 50f);
            worldParent.Rotate(Vector3.forward, currentInput.z * 50f);
        }

    How can I get rid of the mesh scaling? Mesh.Bake PhysX seems to kick in after some time; is it possible to disable it? In the profiler, the bottom-left panel shows data before the spikes, the right one after.
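    "Mesh.Bake Scaled Mesh PhysX CollisionData" shows up whenever a scaled mesh collider's transform changes, because PhysX must re-cook the collision mesh; rotating the whole level every physics step makes that a recurring cost. There is no switch to turn the bake off, so the usual cure is to leave the level static and rotate the player's frame of reference instead. A sketch of that idea, with cameraRig standing in for whatever transform carries the camera (an assumption, not the poster's code):

        private void FixedUpdate()
        {
            Vector3 dir = input.GetDirection();
            Quaternion tilt = Quaternion.AngleAxis(dir.x * 50f, Vector3.right)
                            * Quaternion.AngleAxis(dir.z * 50f, Vector3.forward);
            // Tilt gravity and the camera instead of the level geometry.
            Physics.gravity = tilt * (Vector3.down * 9.81f);
            cameraRig.rotation = tilt;
        }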

  • SharpDX: best practice for multiple RenderForms?

    - by Rob Jellinghaus
    I have an XNA app, but I really need to add multiple render windows, which XNA doesn't do. I'm looking at SharpDX (both for multi-window support and for DX11 / Metro / many other reasons). I decided to hack up the SharpDX DX11 MiniCubeTexture sample to see if I could make it work. My changes are pretty trivial. The original sample had:

        [STAThread]
        private static void Main()
        {
            var form = new RenderForm("SharpDX - MiniCubeTexture Direct3D11 Sample");
            ...

    I changed this to:

        struct RenderFormWithActions
        {
            internal readonly RenderForm Form;
            // should just be Action but it's not in System namespace?!
            internal readonly Action<int> RenderAction;
            internal readonly Action<int> DisposeAction;
            internal RenderFormWithActions(RenderForm form, Action<int> renderAction, Action<int> disposeAction)
            {
                Form = form;
                RenderAction = renderAction;
                DisposeAction = disposeAction;
            }
        }

        [STAThread]
        private static void Main()
        {
            // hackity hack
            new Thread(new ThreadStart(() =>
            {
                RenderFormWithActions form1 = CreateRenderForm();
                RenderLoop.Run(form1.Form, () => form1.RenderAction(0));
                form1.DisposeAction(0);
            })).Start();
            new Thread(new ThreadStart(() =>
            {
                RenderFormWithActions form2 = CreateRenderForm();
                RenderLoop.Run(form2.Form, () => form2.RenderAction(0));
                form2.DisposeAction(0);
            })).Start();
        }

        private static RenderFormWithActions CreateRenderForm()
        {
            var form = new RenderForm("SharpDX - MiniCubeTexture Direct3D11 Sample");
            ...

    Basically, I split out all the Main() code into a separate method which creates a RenderForm and two delegates (a render delegate and a dispose delegate) and bundles them all together into a struct. I call this method twice, each time from a separate new thread, then just run one RenderLoop on each new thread. I was thinking this wouldn't work because of the [STAThread] declaration -- I thought I would need to create the RenderForm on the main (STA) thread and run only a single RenderLoop on that thread. Fortunately, it seems I was wrong. This works quite well -- if you drag one of the forms around, it stops rendering while being dragged, but starts again when you drop it; and the other form keeps chugging away. My questions are pretty basic: Is this a reasonable approach, or is there some lurking threading issue that might make trouble? My code simply duplicates all the setup code -- it makes a duplicate SwapChain, Device, Texture2D, vertex buffer, everything. I don't have a problem with this level of duplication -- my app is not intensive enough to suffer resource issues -- but nonetheless, is there a better practice? Is there any good reference for which DirectX structures can safely be shared, and which can't? It appears that RenderLoop.Run calls the render delegate in a tight loop. Is there any standard way to limit the frame rate of RenderLoop.Run, if you don't want a 400 FPS app eating 100% of your CPU? Should I just Thread.Sleep(30) in the render delegate? (I asked on the sharpdx.org forums as well, but Alexandre is on vacation for two weeks, and my sister wants me to do a performance with my app at her wedding in three and a half weeks, so I'm mighty incented here! http://robjsoftware.org for details of what I'm building....)
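    On the frame-rate question: the sleep the poster guesses at is the simple, standard trick for a windowed tool, and measuring the frame before sleeping keeps the cadence steadier than a fixed Sleep(30); alternatively, passing a sync interval of 1 to SwapChain.Present throttles each window to the display refresh. A sketch for inside the render delegate, assuming a roughly 60 FPS target:

        var frameTimer = System.Diagnostics.Stopwatch.StartNew();
        // ... draw the frame ...
        int remainingMs = 16 - (int)frameTimer.ElapsedMilliseconds; // 16 ms budget ~= 60 FPS
        if (remainingMs > 0)
            System.Threading.Thread.Sleep(remainingMs);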

  • Is a text file with names/pixel locations something a graphic artist can/should produce? [on hold]

    - by edA-qa mort-ora-y
    I have an artist working on 2D graphics for a game UI. He has no problem producing a screenshot showing all the pieces, but we're having some trouble exporting it all into an easy-to-use format. For example, take the game HUD, which is a bunch of elements laid out around the screen. He exports the individual graphics for each one, but how should he communicate the positioning of each of them? My desire is to have a YAML file (or some other simple markup file) that contains the name of each asset and the pixel position of that element. For example:

        fire_icon:
            pos: 20, 30
        fire_bar:
            pos: 30, 80

    Is producing such files a common task for a graphic artist? Is it reasonable to request that they produce such files as part of their graphic work?

  • Raycasting "fisheye effect" question

    - by mattboy
    Continuing my exploration of raycasting, I am very confused about how the correction of the fisheye effect works. Going by the screenshots in the tutorial at permadi.com: the way I understand it, the cause of the fisheye effect is that the cast rays measure distances from the player, rather than distances perpendicular to the screen (or camera plane), which is what really needs to be displayed. The distance perpendicular to the screen, in my world, should then simply be the difference in Y coordinates (Py - Dy), assuming the player is facing straight up. Continuing the tutorial, this is exactly how it seems to be according to the next screenshot. From my point of view, the "distorted distance" in that figure is the same as the distance PD calculated earlier, and what's labelled the "correct distance" should be the same as Py - Dy. Yet this clearly isn't the case according to the tutorial. My question is: WHY are these not the same? How could they not be? What am I understanding and visualizing wrong here?
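    For reference, the two quantities should in fact coincide. With beta the angle between a given ray and the viewing direction, basic trigonometry for a player facing straight up gives:

        correct = PD * cos(beta)    // the standard fisheye correction
                = Py - Dy           // the perpendicular (vertical) component of PD

    so the confusion usually comes from reading the diagram's labels rather than from the math: the tutorial applies cos(beta) per ray precisely because the raw PD overestimates the perpendicular distance for every ray that isn't dead centre.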

  • Sampler referencing in HLSL - Sampler parameter must come from a literal expression

    - by user1423893
    The following code works fine when referencing a sampler in HLSL:

        float3 P = lightScreenPos;
        sampler ShadowSampler = DPFrontShadowSampler;
        float depth;
        if (alpha >= 0.5f)
        {
            // Reference the correct sampler
            ShadowSampler = DPFrontShadowSampler;
            // Front hemisphere 'P0'
            P.z = P.z + 1.0;
            P.x = P.x / P.z;
            P.y = P.y / P.z;
            P.z = lightLength / LightAttenuation.z;
            // Rescale viewport to be [0, 1] (texture coordinate space)
            P.x = 0.5f * P.x + 0.5f;
            P.y = -0.5f * P.y + 0.5f;
            depth = tex2D(ShadowSampler, P.xy).x;
            depth = 1.0 - depth;
        }
        else
        {
            // Reference the correct sampler
            ShadowSampler = DPBackShadowSampler;
            // Back hemisphere 'P1'
            P.z = 1.0 - P.z;
            P.x = P.x / P.z;
            P.y = P.y / P.z;
            P.z = lightLength / LightAttenuation.z;
            // Rescale viewport to be [0, 1] (texture coordinate space)
            P.x = 0.5f * P.x + 0.5f;
            P.y = -0.5f * P.y + 0.5f;
            depth = tex2D(ShadowSampler, P.xy).x;
            depth = 1.0 - depth;
        }
        // [Standard Depth Calculation]
        float mydepth = P.z;
        shadow = depth + Bias.x < mydepth ? 0.0f : 1.0f;

    If I try to do anything with the sampler reference outside the if statement, I get the error "Sampler parameter must come from a literal expression". This code demonstrates the problem:

        float3 P = lightScreenPos;
        sampler ShadowSampler = DPFrontShadowSampler;
        if (alpha >= 0.5f)
        {
            // Reference the correct sampler
            ShadowSampler = DPFrontShadowSampler;
            // Front hemisphere 'P0'
            P.z = P.z + 1.0;
            P.x = P.x / P.z;
            P.y = P.y / P.z;
            P.z = lightLength / LightAttenuation.z;
        }
        else
        {
            // Reference the correct sampler
            ShadowSampler = DPBackShadowSampler;
            // Back hemisphere 'P1'
            P.z = 1.0 - P.z;
            P.x = P.x / P.z;
            P.y = P.y / P.z;
            P.z = lightLength / LightAttenuation.z;
        }
        // Rescale viewport to be [0, 1] (texture coordinate space)
        P.x = 0.5f * P.x + 0.5f;
        P.y = -0.5f * P.y + 0.5f;
        // [Standard Depth Calculation]
        float depth = tex2D(ShadowSampler, P.xy).x;
        depth = 1.0 - depth;
        float mydepth = P.z;
        shadow = depth + Bias.x < mydepth ? 0.0f : 1.0f;

    How can I reference the sampler in this manner without triggering the error?
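    The compiler's complaint is structural: in this shader model every tex2D must resolve to one specific sampler register at compile time. In the first version the compiler can prove which sampler each branch uses; in the second, the sampler genuinely varies at runtime. The usual workaround is to select the sampled value instead of the sampler. A sketch of the tail of the second version rewritten that way:

        // Sample both hemispheres' maps with the final P, then pick one.
        float frontDepth = tex2D(DPFrontShadowSampler, P.xy).x;
        float backDepth  = tex2D(DPBackShadowSampler,  P.xy).x;
        float depth = 1.0 - ((alpha >= 0.5f) ? frontDepth : backDepth);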

  • SFML: Generate a background image

    - by BlackMamba
    I want to generate the background used in the game on every instance of the game, based on certain conditions. To do so, I'm using a sf::RenderTexture and a sf::Texture like this:

        sf::RenderTexture image;
        std::vector<sf::Texture> textures;
        sf::Texture texture;
        // instantiating the vector of textures and the image not shown here
        for (int i = 0; i < certainSize; ++i)
        {
            if (certainContition)
            {
                texture.setTexture("file");
                texture.setPosition(pos1, pos2);
            }
            else
            {
                ...
            }
            image.draw(texture);
        }

    The point here is that I draw single textures onto a sf::RenderTexture, but because textures always live in the graphics card's memory, I can't exceed a certain map size, which I need to. I also considered using an sf::Image, but I can't find a way to draw an image (i.e. a texture) onto it. The third way I found was an sf::VertexArray, but that seems a bit too low-level for my rather simple purposes. So is there a common way to dynamically generate a background image based on other existing images?
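    On the sf::Image idea: an image can't be a render target, but images can be pasted into one another on the CPU, which sidesteps the GPU texture-size limit entirely; the finished image can then be split into screen-sized textures for drawing. A sketch under the assumption that the tiles come from files:

        // CPU-side composition: no GPU texture size limit applies here.
        sf::Image background;
        background.create(mapWidth, mapHeight, sf::Color::Black);

        sf::Image tile;
        tile.loadFromFile("file.png");
        background.copy(tile, pos1, pos2);   // paste the tile at (pos1, pos2)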

  • Pixel Shader, YUV-RGB Conversion failing

    - by TomTom
    I am tasked with playing back a video that comes in in a YUV format, as an overlay in a larger game. I am not a specialist in Direct3D, so I am struggling. I managed to get a shader working and am rendering three textures (Y, V, U). Sadly, I am totally unable to get anything like a decent image, and the documentation is failing me. I am currently loading the different data planes (Y, V, U) into three different textures:

        m_Textures = new Texture[3];
        // Y Plane
        m_Textures[0] = new Texture(m_Device, w, h, 1, Usage.None, Format.L8, Pool.Managed);
        // V Plane
        m_Textures[1] = new Texture(m_Device, w2, h2, 1, Usage.None, Format.L8, Pool.Managed);
        // U Plane
        m_Textures[2] = new Texture(m_Device, w2, h2, 1, Usage.None, Format.L8, Pool.Managed);

    I render them with the following pixel shader:

        float4 Pixel(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float y = tex2D(ytexture, texCoord);
            float v = tex2D(vtexture, texCoord);
            float u = tex2D(utexture, texCoord);
            // R = Y + 1.140 (V - 128)
            // G = Y - 0.395 (U - 128) - 0.581 (V - 128)
            // B = Y + 2.028 (U - 128)
            float r = y; // y + 1.140 * v;
            float g = v; // y - 0.395 * u - 0.581 * v;
            float b = u; // y + 2.028 * u;
            float4 result;
            result.a = 255;
            result.r = r; // clamp(r, 0, 255);
            result.g = g; // clamp(g, 0, 255);
            result.b = b; // clamp(b, 0, 255);
            return result;
        }

    When I map Y, V and U straight to R, G and B like this, the resulting image is quite funny: I can see the image, but the colors are totally distorted, as they should be. The formula I should apply shows up in the comments of the pixel shader, but when I enable it, the resulting image is pretty much magenta only. This gets me to the question: when I read an L8 texture into a float with tex2D, what is the range of values? The "origin" values are one byte, 0 to 255, and my formula assumes this. Naturally I am totally off if the returned values are somehow normalized. My clamp at the end will also fail if colors in a pixel shader are normalized 0 to 1. Anyone have an idea how this works? Please also point me to documentation; I have not found anything in this regard.
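    To the core question: sampling an L8 texture in a pixel shader yields values normalized to 0..1, and every output channel (alpha included) is normalized the same way, so the byte-oriented constants are the problem: 128 becomes 0.5 and the 0..255 clamps become saturate(). A sketch of the conversion rewritten for normalized values, assuming full-range YUV data:

        float y = tex2D(ytexture, texCoord).r;
        float u = tex2D(utexture, texCoord).r - 0.5;   // 128/255, roughly 0.5
        float v = tex2D(vtexture, texCoord).r - 0.5;

        float4 result;
        result.r = saturate(y + 1.140 * v);
        result.g = saturate(y - 0.395 * u - 0.581 * v);
        result.b = saturate(y + 2.028 * u);
        result.a = 1.0;   // not 255: outputs are normalized too
        return result;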

  • Color sprite tint with opacity in MonoGame/XNA

    - by Piotr Walat
    In MonoGame I am using SpriteBatch to draw sprites. I want to create a semi-transparent overlay that 'tints' a sprite with a given color. SpriteBatch.Draw accepts a Color parameter that specifies the tint; however, the alpha channel seems to make the whole sprite transparent, not just the tint. To address the problem I overlay my sprites with another white, semi-transparent sprite tinted to the given color. It works as expected, but I am not sure it is the correct (i.e. most optimal) approach. Can you suggest a better/faster technique?
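    One cheaper alternative is to encode the tint strength in the colour itself rather than in its alpha: interpolate the tint between white (no tint) and the target colour, leaving alpha at full so the sprite stays opaque. A sketch:

        // 0f = untinted, 1f = fully tinted; no second overlay sprite needed.
        Color tint = Color.Lerp(Color.White, Color.Red, 0.5f);
        spriteBatch.Draw(texture, position, tint);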

  • How to fix Monogame WP8 Touch Position bug?

    - by Moses Aprico
    Normally, the code below results in X:Infinity, Y:Infinity:

        TouchCollection touchState = TouchPanel.GetState();
        foreach (TouchLocation t in touchState)
        {
            if (t.State == TouchLocationState.Pressed)
            {
                vb.ButtonTouched((int)t.Position.X, (int)t.Position.Y);
            }
        }

    So I followed https://github.com/mono/MonoGame/issues/1046 and added the code below as the first thing in the update method. (I still don't know how it works, but it fixed that problem.)

        if (_firstUpdate)
        {
            typeof(Microsoft.Xna.Framework.Input.Touch.TouchPanel)
                .GetField("_touchScale", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static)
                .SetValue(null, Vector2.One);
            _firstUpdate = false;
        }

    Then, while testing, I found several areas of the screen that won't read touches at all. The tile with the purple dude is an area that doesn't even register "Pressed" (the TouchCollection.Count is 0). Any idea how to fix this? UPDATE 1: On a second recompile the difference is weird; the consistently clickable area is just the left two-thirds of the screen. UPDATE 2: After rotating to landscape and back to portrait while testing, the outcome changes yet again.

  • OpenGL or OpenGL ES

    - by zxspectrum
    What should I learn: OpenGL 4.1 or OpenGL ES 2.0? I will be developing desktop applications using Qt, but I may start developing mobile applications in a few months too. I don't know anything about 3D, 3D math, etc., and I'd rather spend 100 bucks on a good book than a week digging through websites and going through trial and error. One problem I see with OpenGL 4.1 is that, as far as I know, there is no book yet (the most recent ones are for OpenGL 3.3 or 4.0), while there are books on OpenGL ES 2.0. On the other hand, from my naive point of view, OpenGL 4.1 looks like OpenGL ES 2.0 plus additions, so it seems easier/better to first learn OpenGL ES 2.0, then go on to the shader language, etc. Please don't tell me to use NeHe (it's generally agreed to be full of bad/old practices), the Durian tutorial, etc. Thanks

  • Unity: how to apply programmatical changes to the Terrain SplatPrototype?

    - by Shivan Dragon
    I have a script to which I add a Terrain object (I drag and drop the terrain onto the public Terrain field). The Terrain is already set up in Unity with two PaintTextures: the first is a square (given a tile size that makes it form a checkered pattern), and the second is a grass image. The Target Strength of the first PaintTexture is lowered so that the checkered pattern also reveals some of the grass underneath. Now I want, at run time, to change the Tile Size of the first PaintTexture, i.e. have more or fewer checkers depending on run-time conditions. Looking through Unity's documentation, there is the Terrain.terrainData.splatPrototypes array which allows this, plus a RefreshPrototypes() method on the terrainData object and a Flush() method on the Terrain object. So I made a script like this:

        public class AStarTerrain : MonoBehaviour
        {
            public int aStarCellColumns, aStarCellRows;
            public GameObject aStarCellHighlightPrefab;
            public GameObject aStarPathMarkerPrefab;
            public GameObject utilityRobotPrefab;
            public Terrain aStarTerrain;

            void Start()
            {
                // I've also tried NOT drag-and-dropping the Terrain onto the public field
                // and instead just using the commented line below, but I get the same results
                //aStarTerrain = this.GetComponents<Terrain>()[0];
                Debug.Log("Got terrain " + aStarTerrain.name);
                SplatPrototype[] splatPrototypes = aStarTerrain.terrainData.splatPrototypes;
                Debug.Log("Terrain has " + splatPrototypes.Length + " splat prototypes");
                SplatPrototype aStarCellSplat = splatPrototypes[0];
                Debug.Log("Re-tiling splat prototype " + aStarCellSplat.texture.name);
                aStarCellSplat.tileSize = new Vector2(2000, 2000);
                Debug.Log("Tiling is now " + aStarCellSplat.tileSize.x + "/" + aStarCellSplat.tileSize.y);
                aStarTerrain.terrainData.RefreshPrototypes();
                aStarTerrain.Flush();
            }
            //...

    Problem is, nothing changes; the checker map is not re-tiled. The console correctly tells me that I've got the Terrain object with the right name, that it has the right number of splat prototypes, and that I'm modifying the tileSize on the SplatPrototype corresponding to the right texture. It also tells me the value has changed. But nothing gets updated in the actual graphical view. So please, what am I missing?
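    The likely missing piece is a well-known Unity quirk: TerrainData.splatPrototypes returns a copy of the internal array, so mutating the returned SplatPrototype objects never reaches the terrain; the modified array has to be assigned back through the property setter. A sketch against the question's code, treating the reassignment as the fix to verify:

        SplatPrototype[] splats = aStarTerrain.terrainData.splatPrototypes;
        splats[0].tileSize = new Vector2(2000, 2000);
        aStarTerrain.terrainData.splatPrototypes = splats; // the setter applies the change
        aStarTerrain.Flush();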

  • Send regular keyboard samples OR keyboard state changes over network

    - by Ciaran
    Building a multiplayer asteroids game where ships compete with each other, using UDP. I want to minimize traffic sent to the server. Which would you do?

    1. Send periodic keyboard-state samples from the client at the server's physics update rate, e.g. 50 times per second. Highly resilient to packet loss and other reliability problems; out-of-date packets are discarded by the server. Generates a lot of unnecessary traffic.

    2. Only send the keyboard state when it changes (key up, key down). Radically less traffic from client to server. However, UDP can lose packets without you being informed, so this method could result in a vital packet never being resent unless I detect and resend it in a timely manner.
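    A common middle ground between the two, sketched below, is event-driven sends plus a low-rate redundant snapshot, so a lost key-change packet heals within a bounded time instead of never. The client-side send decision, assuming a 50 Hz client tick (names like SendKeyboardState are hypothetical helpers):

        bool stateChanged = currentKeys != lastSentKeys;
        bool heartbeatDue = ticksSinceSend >= 10;        // full snapshot every ~200 ms
        if (stateChanged || heartbeatDue)
        {
            SendKeyboardState(currentKeys, sequence++);  // server drops stale sequence numbers
            lastSentKeys = currentKeys;
            ticksSinceSend = 0;
        }
        else
        {
            ticksSinceSend++;
        }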

  • SDL2 sprite batching and texture atlases

    - by jms
    I have been programming a 2D game in C++, using the SDL2 graphics API for rendering. My game concept currently features effects that could result in even tens of thousands of sprites being drawn to the screen simultaneously. I'd like to know what can be done to increase rendering efficiency if the need arises, preferably using the SDL2 API only. I have previously taken a quick look at OpenGL-based 2D rendering, and noticed that SDL2 lacks a command like:

        int SDL_RenderCopyMulti(SDL_Renderer* renderer, SDL_Texture* texture,
                                const SDL_Rect* srcrects, SDL_Rect* dstrects, int count);

    which would permit SDL to benefit from two common techniques for efficient 2D graphics:

    Texture batching: sorting sprites by the texture used, then simultaneously rendering as many sprites that use the same texture as possible, changing only the source area on the texture and the destination area on the render target between sprites. This allows the whole operation to be encapsulated in a single GPU command, reducing the overhead drastically compared to multiple distinct calls.

    Texture atlases: instead of creating one texture for each frame of each animation of each sprite, combining multiple animations and even multiple sprites into a single large texture. This lessens the impact of changing the current texture when switching between sprites, as the correct texture is often already in use from the previous draw call. Furthermore, the GPU is optimized for handling large textures, in contrast to the many tiny textures typically used for sprites.

    My question: would SDL2 still get somewhat faster from rudimentary sprite sorting, or from combining multiple images into one texture, thanks to automatic video-driver optimizations? If I encounter performance issues related to 2D rendering in the future, will I be forced to switch to OpenGL for lower-level control over the GPU? Edit: are there any plans to include such functionality in the near future?
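    The atlas half of this is already expressible in stock SDL2, since SDL_RenderCopy takes a source rectangle: keep all frames in one large texture and vary only srcrect per draw, so the renderer never has to switch textures between copies. A small C sketch, assuming 32x32 frames laid out in a row:

        // One atlas texture for every frame; only the source rect changes per draw.
        SDL_Rect src = { frame * 32, 0, 32, 32 };
        SDL_Rect dst = { x, y, 32, 32 };
        SDL_RenderCopy(renderer, atlas, &src, &dst);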

  • C# graph library to be used from Unity3D

    - by Heisenbug
    I'm looking for a C# graph library to be used inside Unity3D scripts. I'm not looking for pathfinding libraries (I know there are good ones available); I could consider a pathfinding library only if it gives me direct access to the underlying graph classes (I need nodes and edges, and the classic graph algorithms). The only product I've seen that seems interesting is QuickGraph, so I have the following questions: Is it possible to use QuickGraph inside Unity3D? If yes, is this a good idea, and does it have any drawbacks? Is it a reasonably fast and well-written/supported library? Has anyone ever used it? Are there other C# graph libraries that can easily be integrated into Unity3D?
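    As a first compatibility probe: QuickGraph's core is plain C# classes with no engine dependencies, which is the main thing Unity's Mono runtime cares about, so dropping the assembly into Assets/Plugins and building a small graph is enough to find out. A sketch of the kind of usage involved (API names per the QuickGraph docs; verify against the version you pick):

        using System.Collections.Generic;
        using QuickGraph;
        using QuickGraph.Algorithms;

        var graph = new AdjacencyGraph<string, Edge<string>>();
        graph.AddVerticesAndEdge(new Edge<string>("a", "b"));
        graph.AddVerticesAndEdge(new Edge<string>("b", "c"));

        // Shortest paths from "a" with unit edge weights.
        var tryGetPath = graph.ShortestPathsDijkstra(e => 1.0, "a");
        IEnumerable<Edge<string>> path;
        if (tryGetPath("c", out path))
        {
            // walk the edges of the path here
        }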

  • LWJGL - Eclipse error [on hold]

    - by Zarkopafilis
    When I try to run my LWJGL project, an error pops up. The relevant parts of the hs_err log file:

        # A fatal error has been detected by the Java Runtime Environment:
        #  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6d8fcc0a, pid=5612, tid=900
        # JRE version: 6.0_16-b01
        # Java VM: Java HotSpot(TM) Client VM (14.2-b01 mixed mode windows-x86)
        # Problematic frame:
        # V  [jvm.dll+0xfcc0a]

        Current thread (0x016b9000): JavaThread "main" [_thread_in_vm, id=900, stack(0x00160000,0x001b0000)]

        Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
        V  [jvm.dll+0xfcc0a]
        C  [lwjgl.dll+0x8c85]
        C  [USER32.dll+0x18876]
        ...
        C  [lwjgl.dll+0x1cf0]

        Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
        j  org.lwjgl.opengl.WindowsDisplay.nCreateWindow(IIIIZZJ)J+0
        j  org.lwjgl.opengl.WindowsDisplay.createWindow(Lorg/lwjgl/opengl/DrawableLWJGL;Lorg/lwjgl/opengl/DisplayMode;Ljava/awt/Canvas;II)V+102
        j  org.lwjgl.opengl.Display.createWindow()V+71
        j  org.lwjgl.opengl.Display.create(Lorg/lwjgl/opengl/PixelFormat;Lorg/lwjgl/opengl/Drawable;Lorg/lwjgl/opengl/ContextAttribs;)V+72
        j  org.lwjgl.opengl.Display.create(Lorg/lwjgl/opengl/PixelFormat;)V+12
        j  org.lwjgl.opengl.Display.create()V+7
        j  zarkopafilis.koding.io.javafx.Main.main([Ljava/lang/String;)V+16
        v  ~StubRoutines::call_stub

        ...

        VM Arguments:
        jvm_args: -Djava.library.path=C:\Users\theo\Desktop\workspace\JavaFX1\lib\natives\windows -Dfile.encoding=Cp1253
        java_command: zarkopafilis.koding.io.javafx.Main

        OS: Windows 7 Build 7600
        vm_info: Java HotSpot(TM) Client VM (14.2-b01) for windows-x86 JRE (1.6.0_16-b01)

    The code that triggers it:

        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create(); // Error here

    I am using JDK 6.

  • Sharing a texture resource from DX11 to DX9 to WPF, need to wait for DeviceContext.Flush() to finish

    - by Rei Miyasaka
    I'm following these instructions on The Code Project for rendering from DirectX to WPF using D3DImage. The trouble is that I now have no swap chain to call Present() on -- which, according to the article, shouldn't be a problem, but it definitely wasn't copying my back buffer. An additional step I have to take before I can copy the texture to WPF is to share it with a second D3D9Ex device, since D3DImage only works with DX9 (which is understandable, as WPF is built on DX9). To that end, I've modified some SlimDX code to work with DirectX 11. I tried calling DeviceContext.Flush() (on the immediate context) at the end of each render cycle, which kind of works -- most of the time it shows my renderings, but for maybe 3 or 4 out of 60 frames each second it draws my clear color instead. This makes sense -- Flush() is non-blocking; it doesn't wait for the GPU to finish its work the way SwapChain.Present() does. Any idea what the proper solution is? I have a feeling it has something to do with my texture parameters for the back buffer, but I don't know.
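    The symptom described is a straight CPU/GPU race: Flush() queues the work but returns immediately, so some frames hand WPF a surface the GPU hasn't finished. One way to close the gap is an event query issued after the frame, waiting until the GPU reports completion before the surface goes to the D3D9Ex side. A SharpDX-flavoured sketch of that idea; exact GetData overloads vary between SharpDX versions, so treat the names as assumptions:

        var desc = new SharpDX.Direct3D11.QueryDescription
        {
            Type = SharpDX.Direct3D11.QueryType.Event
        };
        using (var query = new SharpDX.Direct3D11.Query(device, desc))
        {
            context.End(query);   // mark the end of this frame's work
            context.Flush();
            bool done;
            while (!context.GetData(query, out done) || !done)
                System.Threading.Thread.Yield();   // spin until the GPU has finished
            // The shared surface is now safe to hand to the D3D9Ex/D3DImage side.
        }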

  • How do I blend 2 lightmaps for day/night cycle in Unity?

    - by Timothy Williams
    Before I say anything else: I'm using dual lightmaps, meaning I need to blend both a near and a far lightmap. I've been working on this for a while now; I have a whole day/night cycle set up for renderers and lighting, and everything works fine without being process-intensive. The only problem is figuring out how to blend two lightmaps together. I've worked out how to switch lightmaps, but that looks abrupt and interrupts the experience. I've done hours of research, tried all kinds of shaders and pixel-by-pixel blending, and everything else, to no real avail. Pixel-by-pixel blending in C# turned out to be a bit too process-intensive for my liking, though I'm still working on cleaning it up and making it run more smoothly. Shaders looked promising, but I couldn't find one that could properly blend two lightmaps. Does anyone have any leads on how I could accomplish this? I just need some sort of smooth transition between my daytime and nighttime lightmaps. Perhaps I could overlay the two textures and use an alpha channel? Or something like that?
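    The shader route is usually the right one here, and it reduces to a single lerp between the two lightmap samples, driven by a global time-of-day factor that the C# side updates once per frame. A fragment-level sketch of the idea in Unity shader code (property names are illustrative; the same line would be applied separately to the near and far maps):

        sampler2D _LightmapDay;
        sampler2D _LightmapNight;
        float _Blend;   // 0 = day, 1 = night; set from script, e.g. via Shader.SetGlobalFloat

        fixed4 frag(v2f i) : COLOR
        {
            fixed4 day   = tex2D(_LightmapDay,   i.uv2);  // uv2 = the lightmap UV set
            fixed4 night = tex2D(_LightmapNight, i.uv2);
            return lerp(day, night, _Blend);
        }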
