Search Results

Search found 779 results on 32 pages for 'textures'.

Page 17/32 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Improving SpriteBatch performance for tiles

    - by Richard Rast
    I realize this is a variation on what has got to be a common question, but after reading several (good) answers I'm no closer to a solution here. So here's my situation: I'm making a 2D game which has (among some other things) a tiled world, so drawing this world means drawing a jillion tiles each frame (each tile is roughly 64x32 pixels with some transparency, so the exact count depends on resolution). Now I want the user to be able to maximize the game (or use fullscreen mode, actually, as it's a bit more efficient), and instead of scaling textures (bleagh) this will just allow lots and lots of tiles to be shown at once. Which is great! But it turns out this puts upward of 2000 tiles on the screen each frame, and this is framerate-limiting (I've commented out enough other parts of the game to make sure this is the bottleneck). It gets worse if I use multiple source rectangles on the same texture (I use a tilesheet; I believe changing textures entirely makes things even worse), or if I tint the tiles, or whatever. So, the general question is this: what are some general methods for improving the drawing of thousands of repetitive sprites? Answers pertaining to XNA's SpriteBatch would be helpful, but I'm equally happy with general theory. Also, any tricks pertaining to this situation in particular (drawing a tiled world efficiently) are also welcome. I really do want to draw all of them, though, and I need SpriteSortMode.BackToFront to be active, because
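
    A common first step, whatever the API, is to cull to the visible tile range instead of submitting every tile each frame. A minimal C++ sketch of the index math, assuming a camera rectangle in world pixels; the map dimensions and the drawTile() helper are hypothetical:

        #include <algorithm>

        const int TILE_W = 64, TILE_H = 32;   // tile size from the question
        const int MAP_W = 512, MAP_H = 512;   // map size in tiles (assumed)

        void drawTile(int col, int row, int screenX, int screenY); // hypothetical

        // Draw only the tiles that intersect the camera rectangle.
        void drawVisibleTiles(int camX, int camY, int viewW, int viewH) {
            int firstCol = std::max(0, camX / TILE_W);
            int firstRow = std::max(0, camY / TILE_H);
            int lastCol  = std::min(MAP_W - 1, (camX + viewW) / TILE_W);
            int lastRow  = std::min(MAP_H - 1, (camY + viewH) / TILE_H);
            for (int row = firstRow; row <= lastRow; ++row)
                for (int col = firstCol; col <= lastCol; ++col)
                    drawTile(col, row, col * TILE_W - camX, row * TILE_H - camY);
        }

    Beyond culling, the usual batching advice applies: one texture atlas (already done here) and a single Begin/End pair per layer, so the batch isn't broken up by state changes.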

    Read the article

  • How to make Unity 3D work with Bumblebee using the Intel chipset

    - by EboMike
    I have a Sony VAIO S laptop with the dreaded Optimus and finally managed to get Bumblebee to work fully on Ubuntu 12.04, so that I can utilize both the hardware acceleration of the Intel chipset as well as the Nvidia one via optirun and/or bumble-app-settings. However, the desktop effects don't work. But they should; I vaguely remember that they worked for a while before I had Bumblebee installed. This is what I get with the support test:

        :~$ /usr/lib/nux/unity_support_test -p
        Xlib: extension "NV-GLX" missing on display ":0".
        OpenGL vendor string: Tungsten Graphics, Inc
        OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
        OpenGL version string: 1.4 (2.1 Mesa 8.0.2)
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: no
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    First of all, I kind of doubt that the chipset doesn't support VBOs (essentially a standard feature in GL). Neither Xorg.0.log nor Xorg.8.log shows any particular errors. As for the Nvidia drivers: in order to get them to work, I had to install the 304.22 drivers (older ones wouldn't work). They clobbered libglx.so, so I reinstated the xserver-xorg-core libglx.so in its original place, moved Nvidia's libglx.so to an nvidia-specific folder and specified that folder in the bumblebee.config. That seems to work and shouldn't cause the problem I see here. For fun, I tried to use the Nvidia chipset for Unity, but that didn't fly either:

        ~$ optirun /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce GT 640M LE/PCIe/SSE2
        OpenGL version string: 4.2.0 NVIDIA 304.22
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: no
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    Read the article

  • Can I set my Optimus Nvidia card to run Unity3D with bumblebee?

    - by manuhalo
    I'd like to know whether I can run compiz on my Nvidia card to speed things up. It's a Dell XPS15 laptop but I'm mostly using it as a desktop, so battery life is not important. Apparently my Intel integrated card is able to run Unity 3D, but my Nvidia GT 420M is not. Here's the output of unity_support_test, both with optirun and without it:

        manuhalo@Ubuntu-XPS-L501X:~$ optirun /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce GT 420M/PCI/SSE2
        OpenGL version string: 4.1.0 NVIDIA 280.13
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: no
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

        manuhalo@Ubuntu-XPS-L501X:~$ /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: Tungsten Graphics, Inc
        OpenGL renderer string: Mesa DRI Intel(R) Ironlake Mobile
        OpenGL version string: 2.1 Mesa 7.11
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: yes

    Any ideas of why this is happening? Thanks in advance to anyone able to shed some light on this. What I have tried:

        - Installed the v290 drivers from the x-stable PPA.
        - Tried forcing Unity 3D to work by telling Unity to ignore the unity_support_test results, i.e. gksudo gedit /etc/environment and add UNITY_FORCE_START=1 to the end of the file.

    Read the article

  • How to create an Orthographic display in OpenGL (ES) that handles different screen sizes and orientations?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display/GUI overlaid on top. However, this problem would also apply if I were to port my game to a computer and run it in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work, I've made the assumption that all I'm really doing is drawing a load of 2D textured 'quads' on the screen, and I'm trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,screenHeight in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL coords 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustumf() calls.
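
    One way to keep pixel coordinates while handling any screen size is to rebuild the orthographic matrix from the current drawable dimensions whenever they change. A minimal C++ sketch, assuming column-major storage as in OpenGL conventions (plug the result into your own matrix library):

        // Build an orthographic projection mapping (0,0) top-left to
        // (width,height) bottom-right, like a classic 2D display.
        // Equivalent to glOrthof(0, width, height, 0, -1, 1).
        void ortho2D(float m[16], float width, float height) {
            for (int i = 0; i < 16; ++i) m[i] = 0.0f;
            m[0]  =  2.0f / width;   // x: 0..width  -> -1..+1
            m[5]  = -2.0f / height;  // y: 0..height -> +1..-1 (y grows downward)
            m[10] = -1.0f;           // z: near=-1, far=1
            m[12] = -1.0f;           // translate x
            m[13] =  1.0f;           // translate y
            m[15] =  1.0f;
        }

    On rotation, query the new drawable size and rebuild the matrix; the quads keep their pixel dimensions and nothing needs to be flipped by hand.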

    Read the article

  • cocos2d/OpenGL multitexturing problem

    - by Gajoo
    I've got a simple shader to test multitexturing; the problem is that both samplers are using the same image as their reference. The shader code is basically just this:

        vec4 mid = texture2D(u_texture, v_texCoord);
        float g = texture2D(u_guide, v_guideCoord);
        gl_FragColor = vec4(g, mid.g, 0, 1);

    and this is how I'm calling the draw function:

        int last_State;
        glGetIntegerv(GL_ACTIVE_TEXTURE, &last_State);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, getTexture()->getName());
        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mGuideTexture->getName());
        ccGLEnableVertexAttribs(kCCVertexAttribFlag_TexCoords | kCCVertexAttribFlag_Position);
        glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
        glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, texCoord);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisable(GL_TEXTURE_2D);

    I've already checked that mGuideTexture->getName() and getTexture()->getName() are returning the correct textures, but looking at the result I can tell both samplers are reading from getTexture()->getName(). Here are some screenshots showing what is happening: the image rendered using the above code, and the image rendered when I swap the textures passed to the samplers. I'm expecting to see green objects from the first picture with red objects hanging from the top.
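
    One thing the snippet never does is tell the program which texture unit each sampler should read from; sampler uniforms default to unit 0, which would produce exactly this symptom. A hedged sketch of the usual setup, assuming a GLES2 program handle prog and the uniform names from the shader above:

        #include <GLES2/gl2.h>

        // Point each sampler uniform at its texture unit once after linking;
        // both default to unit 0 otherwise, making them read the same texture.
        void bindSamplerUnits(GLuint prog) {
            glUseProgram(prog);
            glUniform1i(glGetUniformLocation(prog, "u_texture"), 0); // GL_TEXTURE0
            glUniform1i(glGetUniformLocation(prog, "u_guide"),   1); // GL_TEXTURE1
        }

    (Incidentally, glEnable(GL_TEXTURE_2D) is fixed-function state and has no effect on shader sampling in ES2.)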

    Read the article

  • How can I improve the "smoothness" of a 2D side-scrolling iPhone game?

    - by MrDatabase
    I'm working on a relatively simple 2D side-scrolling iPhone game. The controls are tilt-based. I use OpenGL ES 1.1 for the graphics. The game state is updated at a rate of 30 Hz, and the drawing is updated at a rate of 30 fps (via NSTimer). The smoothness of the drawing is OK, but not quite as smooth as a game like iFighter. What can I do to improve the smoothness of the game? Here are the potential issues I've briefly considered:

        - I'm varying the opacity of up to 15 "small" (20x20 pixel) textures at a time. Apparently varying the opacity in this manner can degrade drawing performance.
        - I'm rendering at only 30 fps (via NSTimer). Perhaps 2D games like iFighter are rendered at a higher frame rate?
        - Perhaps the game state could be updated at a faster rate? Note the acceleration values are updated at 100 Hz, so I could potentially update part of the game state at 100 Hz.
        - All of my textures are PNG24. Perhaps PNG8 would help (due to smaller size etc.)?
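
    One general pattern that smooths this kind of setup is to keep the 30 Hz simulation but render as fast as the display allows, interpolating between the last two simulation states. A rough C++ sketch; updateGame, drawGame and secondsSinceLastFrame are hypothetical stand-ins for the game's own routines:

        struct State { float x, y; };

        const double DT = 1.0 / 30.0;           // keep the 30 Hz simulation step

        void   updateGame(State& s, double dt); // hypothetical: advance simulation
        void   drawGame(const State& s);        // hypothetical: render one frame
        double secondsSinceLastFrame();         // hypothetical: frame timer

        // Called once per display refresh (e.g. 60 Hz): run 0..n fixed
        // simulation steps, then render an interpolated in-between state.
        void frame(State& prev, State& curr, double& acc) {
            acc += secondsSinceLastFrame();
            while (acc >= DT) {
                prev = curr;
                updateGame(curr, DT);
                acc -= DT;
            }
            float a = float(acc / DT);          // 0..1 between the two states
            State s{ prev.x + (curr.x - prev.x) * a,
                     prev.y + (curr.y - prev.y) * a };
            drawGame(s);
        }

    On iOS, driving the render from CADisplayLink rather than NSTimer also ties redraws to the display refresh, which by itself removes a lot of NSTimer scheduling jitter.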

    Read the article

  • GPU-based procedural terrain borders?

    - by OnePie
    I'm working on a game that preferably should feature a combination of designed and procedurally generated terrain, where the designer specifies in somewhat detailed terms what type of terrain a given area will have (grassland, forest etc...) and then a procedural algorithm takes care of the rest. I'm not talking about Minecraft-style biomes, but rather the game map for a strategy game. Each 'area' will not take up that much of the screen, and is thus more akin to a tile whose texture is procedurally generated. While procedurally generating terrain textures on the GPU is not that difficult, the hard part is making the borders between them look good. Currently, the 'tiles' are large enough to be visible (due mainly to memory constraints; we are talking planetary-sized textures for a game taking place in space and on a continental ground view, with seamless transitions between them), and creating good borders between them with an algorithm fast enough to be useful has proven difficult. Sampling the n surrounding pixels and using the combined result did not yield very good borders and was fairly slow on the GPU to boot (ca. 12 ms for me, and that is without any lighting or shading and with very simple terrain texture shaders). So are there any practical known methods to solve this problem?
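
    A widely used alternative to averaging neighbours is texture splatting: sample a low-resolution control map storing per-terrain-type weights, and blend the candidate terrain textures by those weights in one pass. The control-map scheme is an assumption here; this C++ reference just shows the per-pixel blend, which maps directly to a fragment shader:

        struct RGB { float r, g, b; };

        // Blend two terrain colours by a control weight in [0,1]. The weight
        // comes from a bilinearly filtered low-res control map, so hardware
        // filtering itself produces a soft border between areas.
        RGB blendTerrain(RGB grass, RGB forest, float w) {
            float t = w * w * (3.0f - 2.0f * w);  // smoothstep(0,1,w): tighter band
            return { grass.r + (forest.r - grass.r) * t,
                     grass.g + (forest.g - grass.g) * t,
                     grass.b + (forest.b - grass.b) * t };
        }

    Because the blend reads one extra low-res texture instead of n neighbouring pixels, the per-pixel cost stays constant regardless of border quality.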

    Read the article

  • Techniques for displaying vehicle damage

    - by norca
    I wonder how I can display vehicle damage; I am talking about a good way to show damage on screen. Which kinds of models are common in games, and what are the benefits of each? What is the state of the art? One way I can imagine is to save a set of textures (normal/color/lightmaps, etc.) for each state of the car (normal, damaged, burnt out) and switch or blend between them. But is this really good without changing the model? Another way I was thinking about is preparing animations for different locations on my car, something like damage on the front, on the left/right side or on the back, and blending in the specific animation. But does this work with good textures? What about physics engines? Is it useful to use one for deforming vertex data? I think losing parts of my car (doors, sirens, weapons) could look really nice. My game is a kind of RTS in a top-down view; vehicles are not the most important units (it's no racing game), but I have quite a lot of them in. Thanks for the help.

    Read the article

  • How can I create an orthographic display that handles different screen dimensions?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display/GUI overlaid on top. However, this problem would also apply if I were to port my game to a computer and run it in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work, I've made the assumption that all I'm really doing is drawing a load of 2D textured 'quads' on the screen, and I'm trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,screenHeight in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL coords 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustumf() calls.

    Read the article

  • OpenGL depth texture wrong

    - by CoffeeandCode
    I have been writing a game engine for a while now and have decided to reconstruct my positions from depth... but how I read the depth seems to be wrong :/ What is wrong in my rendering? This is how I init my depth texture in the FBO:

        gl::BindTexture(gl::TEXTURE_2D, this->textures[0]); // Depth
        gl::TexImage2D(
            gl::TEXTURE_2D,
            0,
            gl::DEPTH32F_STENCIL8,
            width, height,
            0,
            gl::DEPTH_STENCIL,
            gl::FLOAT_32_UNSIGNED_INT_24_8_REV,
            nullptr
        );
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_S, gl::CLAMP_TO_EDGE);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_T, gl::CLAMP_TO_EDGE);
        gl::FramebufferTexture2D(
            gl::FRAMEBUFFER,
            gl::DEPTH_STENCIL_ATTACHMENT,
            gl::TEXTURE_2D,
            this->textures[0],
            0
        );

    Linear depth readings in my shader. Vertex:

        #version 150
        layout(location = 0) in vec3 position;
        layout(location = 1) in vec2 uv;
        out vec2 uv_f;
        void main(){
            uv_f = uv;
            gl_Position = vec4(position, 1.0);
        }

    Fragment (where the issue probably is):

        #version 150
        uniform sampler2D depth_texture;
        in vec2 uv_f;
        out vec4 Screen;
        void main(){
            float n = 0.00001;
            float f = 100.0;
            float z = texture(depth_texture, uv_f).x;
            float linear_depth = (n * z)/(f - z * (f - n));
            Screen = vec4(linear_depth); // It ISN'T because I don't separate alpha
        }

    When rendered it looks wrong (see screenshot). So gamedev.stackexchange, what's wrong with my rendering/GLSL?
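
    For reference, the usual way to linearize a [0,1] depth-buffer value back to eye-space depth, written as a C++ sketch using the question's n and f:

        // z_buf in [0,1] from the depth texture; returns eye-space depth in [n,f].
        float linearizeDepth(float z_buf, float n, float f) {
            float z_ndc = z_buf * 2.0f - 1.0f;               // back to NDC [-1,1]
            return (2.0f * n * f) / (f + n - z_ndc * (f - n));
        }

    Note also that a near plane of 0.00001 concentrates nearly all depth precision into a microscopic range next to the camera; pulling n up to something like 0.1 usually matters more than the exact formula.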

    Read the article

  • glTexImage2D not loading my data

    - by Clyde
    Can anyone suggest why this code doesn't work? When I draw using this texture all I get is black. If I use GLUtils.texImage2D() to load a png file, it works correctly.

        ByteBuffer bb = ByteBuffer.allocateDirect(128*128*4).order(ByteOrder.nativeOrder());
        bb.position(0);
        for(int row = 0; row != 128; row++) {
            for(int i = 0 ; i != 128 ; i++) {
                bb.put((byte)0x80);
                bb.put((byte)0xFF);
                bb.put((byte)0xFF);
                bb.put((byte)i);
            }
        }
        int[] handle = new int[1];
        GLES20.glEnable(GLES20.GL_TEXTURE_2D);
        GLES20.glGenTextures(1, handle, 0);
        DrawAdapter.checkGlError("Gen textures");
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        DrawAdapter.checkGlError("Bind textures");
        bb.position(0);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        DrawAdapter.checkGlError("glTexImage2D");
        return handle[0];
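
    One detail worth checking when a raw glTexImage2D upload renders black: a new texture's minification filter defaults to a mipmapping mode, so without mipmap levels the texture is incomplete and samples as black. A sketch of the missing parameters in C-style GL calls (the GLES20 Java equivalents take the same arguments); whether this is the actual cause here is an assumption:

        #include <GLES2/gl2.h>

        void makeTextureComplete() {
            // Assumes the texture is currently bound to GL_TEXTURE_2D.
            // Either stop the sampler from expecting mipmaps...
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            // ...or generate the mipmap chain instead:
            // glGenerateMipmap(GL_TEXTURE_2D);
        }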

    Read the article

  • Render string to texture in Android and OpenGL ES

    - by Eddie Ringle
    I've googled around everywhere, but cannot find much about rendering strings to textures and then displaying that texture on a quad on the screen. Can someone provide a run-down of the process, or point to good resources that describe it? Is rendering strings to textures even the best method for displaying text in an Android OpenGL ES app? EDIT: Okay, so LabelMaker interferes with alpha blending: the texture (created from a PNG with a transparent background) now has a solid black background, rather than a transparent one. If I comment out all the LabelMaker-related code, it works fine.
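
    One possible culprit for the black background, offered as an assumption rather than a diagnosis: Android's Bitmap/GLUtils route uploads premultiplied alpha, and premultiplied textures need a matching blend function; with the usual glBlendFunc(GL_SRC_ALPHA, ...) the transparent region comes out dark. A sketch in C-style GL calls:

        #include <GLES/gl.h>

        // For premultiplied-alpha textures (what GLUtils.texImage2D produces),
        // blend with GL_ONE instead of GL_SRC_ALPHA.
        void enablePremultipliedBlend() {
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        }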

    Read the article

  • What are good sites that provide free media resources for hobby game development?

    - by m_oLogin
    Please redirect me if this is a duplicate; I haven't been able to find a suitable question. I really suck at graphics / music / 3D modeling / animation, and it's a must-have when you have a hundred hobby game development projects you're working on. I'm looking for different quality sources on the web that provide free resources. [EDIT] Some resources given by the answers (I'll complete it with time):

        MUSIC: Jamendo (need to ask for permission for uses), OpSound
        SOUND EFFECTS: FreeSound, StoneWashed
        SPRITES: LostGarden, The protagonist domain, Reiner's Tilesets (also contains a couple of 3D models), OpenGameArt (beta, not many resources but promising), Flying Yogi
        ANIMATED SPRITES: The Spriters Resource
        MODELS: archive3d, TurboSquid, 3Dvia, Google Sketchup, ShareCG, Gamasutraexchange.com
        ANIMATED MODELS: TurboSquid
        TEXTURES: CG Textures, OpenFrag
        OTHER PRECOMPILED LISTS: FreeGameDev.net

    Read the article

  • From a 3D modeler to an iPhone app - what are best practices?

    - by bonkey
    I am quite new to 3D programming on iPhone and I would like to ask for hints about organizing work between designers and programmers on that platform, most of all: what kinds of tools, libraries or plugins cooperate best on both sides. Although I consider the question as looking for general best-practices advice, I would also like to find a solution for my current situation, which I describe further below. I've already done some research and found the following libraries:

        - SIO2
        - Khronos OpenGL ES 1.x SDK for PowerVR MBX
        - Unity3D
        - Oolong Game Engine

    I've also checked modellers, or plugins for them, that give output formats readable by those tools:

        - obj2opengl (Wavefront OBJ to plain header file converter)
        - Blender with SIO2 exporter
        - iphonewavefrontloader
        - Cheetah3D
        - PVRGeoPOD for 3DS / Maya

    Unfortunately I still have no clear vision of how to combine any of these tools to get a designer's work into an application. I'm looking for a way of getting it in the most complete way possible: models, lights, scenes, textures, maybe some simple animations (but rather no game-like physics), but I still have nothing. And here comes my situation: I would like to find the right way to present a few (but quite complicated) models from a single scene. The designers mostly use 3DS Max 9, sometimes 10 (which partly prevents using PVRGeoPOD), and are rather reluctant to switch to something else, but if there's no other choice I suppose it would be possible. The basic rule I've already found in some places, "use Wavefront OBJ", doesn't always work; I haven't got any acceptable results with production files, actually. The only things that worked fine were some mere examples. Some of my models imported incomplete, sometimes exporters hung or generated enormous files not really useful on an iPhone, sometimes enabling textures (with GL_TEXTURE_2D) just crashed the app. I know it might be a problem with too-complicated models or mistakes coming from my inexperience, but I am not able to find any guidelines for the process that would streamline cooperation with designers. I am even willing to write some things from scratch in pure OpenGL ES if necessary, but I would like to avoid what can be avoided and get the most from the model files. The best would be the effect I saw in some SIO2 tutorials: export, build & go. But at the moment I've got only "import, wrong", "import, where are the textures?", "import, that almost looks fine, export, hang" and so on... Is it really so frustrating, or have I just missed something obvious? Can anybody share his/her experience in this field and tell what kind of software he or she uses for "making things happen"?

    Read the article

  • Resolve Maya crash

    - by knishua
    Hi, I have a file which crashes while rendering. The file contains 50+ reference nodes; polycount: 2,59,49,150 (lakhs grouping); textures: 2,628. It is being rendered in mental ray, and all textures are .map. The attached image has the machine configuration with page file. Here is the log from when the file crashes; what does this line in the log file mean, and how can I solve it?

        Exception code: C0000006: IN_PAGE_ERROR
        Fault address: 7814E420 in C:\WINDOWS\WinSxS\amd64_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.762_x-ww_9D1C6CE0\MSVCR80.dll

    Brgds, kNish

    Read the article

  • How to add texture to fill colors in ggplot2?

    - by rhh
    I'm currently using scale_brewer for fill and these look beautiful in color (on screen and via color printer) but print relatively uniformly as greys when using a black and white printer. I searched the online ggplot2 documentation but didn't see anything about adding textures to fill colors. Is there an official ggplot2 way to do this or does anyone have a hack that they use? By textures I mean things like diagonal bars, reverse diagonal bars, dot patterns, etc that would differentiate fill colors when printed in black and white. Thanks for thoughts! Robert

    Read the article

  • Modifying a model and texture mid-game

    - by MicroPirate
    Just a question for anyone out there who knows some sort of game engine pretty well. What I am trying to implement is some sort of script or code that will allow me to modify a custom game character and its textures mid-game. A few examples would be along the lines of changing facial expressions and body part positions in the game Second Life. I don't really need a particular language, feel free to use your favorite; I'm just really looking for an example of how to go about this. Also, I was wondering if there is any way to combine textures for optimization; for example, if I wanted to add a tattoo to a character mid-game, is there any code that could combine his body texture and the tattoo texture into one texture to use (this way I can simply render one texture per body)? Any tips would be appreciated; sorry if the question is a wee bit too vague.
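
    For the texture-combining part, one common approach is to render both textures into an offscreen target once, then use the result as the character's single texture. A rough GL sketch in C++; the FBO setup and the two draw helpers are hypothetical stand-ins:

        #include <GLES2/gl2.h>

        void drawFullscreenQuad(GLuint tex);                      // hypothetical
        void drawQuadAt(GLuint tex, int x, int y, int w, int h);  // hypothetical

        // Composite 'tattooTex' over 'bodyTex' into the texture attached to
        // 'fbo', so the character can then be drawn with one combined texture.
        void compositeTattoo(GLuint fbo, GLuint bodyTex, GLuint tattooTex,
                             int texW, int texH, int x, int y, int w, int h) {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glViewport(0, 0, texW, texH);
            drawFullscreenQuad(bodyTex);                // opaque base layer
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            drawQuadAt(tattooTex, x, y, w, h);          // blended decal
            glDisable(GL_BLEND);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);       // result stays in fbo's texture
        }

    The composite is paid once when the tattoo is applied, not every frame, which is what makes the single-texture-per-body rendering cheap afterwards.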

    Read the article

  • How do I make lines scale when using glOrtho?

    - by Mason Wheeler
    I'm using glOrtho to set up a 2D view that I can render textures onto. It works really well, up until I try to zoom in on the image. If I pass half the width and half the height of the viewport to glOrtho, I end up with all my textures displayed twice as big as normal, which is exactly what I expect. But then I try to draw a box around part of the image and it all falls apart. I call glBegin(GL_LINE_LOOP), place the four vertices, and call glEnd, and I expect to see the same thing I would see if I drew it at normal zoom level, doubled. Instead, I get lines that are all the right length, but they all come out one pixel wide instead of two, and it looks really bad. What am I missing?
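
    This is expected GL behaviour: line width is specified in pixels at rasterization time and is not affected by the projection matrix, so zooming the ortho view scales the endpoints but not the stroke. The usual workaround is to scale the width yourself; a sketch, with zoom standing in for the view's scale factor:

        #include <GL/gl.h>

        // Lines rasterize at a fixed pixel width regardless of glOrtho,
        // so widen them by hand to match the current zoom level.
        void drawZoomedBox(float zoom) {
            glLineWidth(1.0f * zoom);   // e.g. zoom = 2.0 -> two-pixel-wide lines
            glBegin(GL_LINE_LOOP);
            // ... place the four corner vertices as before ...
            glEnd();
        }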

    Read the article

  • Gradients and memory

    - by user146780
    I'm creating a drawing application with OpenGL. I've created an algorithm that generates gradient textures. I then map these to my polygons, and this works quite well. What I realized is how much memory this requires: creating 1000 gradients takes about 800 MB, and that's way too much. Is there an alternative to textures, a way to compress them, or another way to map gradients to polygons that doesn't use up as much memory? Thanks. My polygons are concave (I use GLUTessellator), and they are multicolored and point-to-point.
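
    Since a gradient only varies along one axis, one memory-saving option is to store each one as a 256x1 strip and let texture coordinates stretch it across the polygon: 1000 gradients then cost about 1000 x 256 x 4 bytes, roughly 1 MB. A sketch, with gradientPixels as a hypothetical 256-sample RGBA ramp:

        #include <GL/gl.h>

        extern const unsigned char gradientPixels[256 * 4]; // hypothetical ramp

        // Upload one gradient as a 256x1 strip instead of a full 2D texture;
        // linear filtering interpolates smoothly between the 256 samples.
        GLuint uploadGradientStrip() {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, gradientPixels);
            return tex;
        }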

    Read the article

  • 3D / 2.5D game library for iPhone and PC

    - by Aaron Qian
    I'm trying to make a 2D shooter game with a 3D background. The player and enemies are essentially just quads with textures. The background will be simple 3D polygons with textures and some fog and light. Therefore, I don't need a really powerful 3D library. I tried Unity3D and Torque2D, but I don't like using their GUI editors; I prefer to work with code. So, is there a cross-platform (mainly Windows and iPhone) 3D / 2.5D game library, commercial or open source? I assume it will be limited to C, C++, and Objective-C due to Apple's new ToS.

    Read the article

  • Android Graphics Memory Limits

    - by Gordon
    I am creating an Android game using OpenGL and a cocos2d port (http://code.google.com/p/cocos2d-android-1). I am targeting a wide range of devices and want to ensure that it performs well. I only test on a Nexus One and am hoping to get some input from people with experience on slower devices. Currently the game uses two 1024x1024 textures as well as two 256x256 textures. Is this within the limits of most devices? Does anyone have any rule of thumb or experience with graphics memory limits in these cases? If graphics memory is exceeded, does it page to normal memory?

    Read the article

  • Unity3D draw call optimization : static batching VS manually draw mesh with MaterialPropertyBlock

    - by Heisenbug
    I've read the Unity3D draw call batching documentation. I understood it, and I want to use it (or something similar) in order to optimize my application. My situation is the following:

        - I'm drawing hundreds of 3D buildings.
        - Each building can be represented using a Mesh (or a SubMesh for each building, but I don't think this will affect performance).
        - Each building can be textured with several combinations of texture patterns (walls, windows, ...). Textures are stored in an atlas for optimization (see Texture2D.PackTextures).
        - Texture mapping and facade pattern generation are done in the fragment shader. The shader can be the same (except for a few values) for all buildings, so I'd like to use a sharedMaterial in order to optimize parameters passed to the GPU.

    The main problem is that, even if I use an atlas, share the material, and declare the objects as static to use static batching, there are a few parameters (very few, it could even be just a float) that have to be different for every draw call. I don't know exactly how to manage this situation in Unity3D. I'm trying two different solutions, neither of them completely implemented.

    Solution 1:
        1. Build a GameObject for each building (I don't much like the overhead of a GameObject, anyway...).
        2. Prepare each GameObject to be statically batched with StaticBatchingUtility.Combine.
        3. Pack all textures into an atlas.
        4. Assign the parent GameObject of the combined batched objects the Material (basically the shader and the atlas).
        5. Change some properties in the material before drawing an object.

    The problem is point 5. Let's say I have to assign a different id to an object before drawing it; how can I do this? If I use a different material for each object, I can't benefit from static batching. If I use a sharedMaterial and modify a material property, all GameObjects will reference the same modified variable.

    Solution 2:
        1. Build a Mesh for every building (sounds better: no GameObject overhead).
        2. Pack all textures into an atlas.
        3. Draw each mesh manually using Graphics.DrawMesh.
        4. Customize each DrawMesh call using a MaterialPropertyBlock.

    This would solve the issue of slightly modifying material properties for each draw call, but the documentation isn't clear on the following point: would several consecutive calls to Graphics.DrawMesh with a different MaterialPropertyBlock cause a new material to be instanced? Or can Unity understand that I'm modifying just a few parameters while using the same material, and optimize that (in such a way that the big atlas is passed just once to the GPU)?

    Read the article

  • Pixel Shader, YUV-RGB Conversion failing

    - by TomTom
    I am tasked with playing back a video that comes in in a YUV format, as an overlay in a larger game. I am not a specialist in Direct3D, so I am struggling. I managed to get a shader working and am rendering 3 textures (Y, V, U). Sadly, I am totally unable to get anything like a decent image, and documentation is also failing me. I am currently loading the different data planes (Y, V, U) into three different textures:

        m_Textures = new Texture[3];
        // Y Plane
        m_Textures[0] = new Texture(m_Device, w, h, 1, Usage.None, Format.L8, Pool.Managed);
        // V Plane
        m_Textures[1] = new Texture(m_Device, w2, h2, 1, Usage.None, Format.L8, Pool.Managed);
        // U Plane
        m_Textures[2] = new Texture(m_Device, w2, h2, 1, Usage.None, Format.L8, Pool.Managed);

    When I am rendering them as R, G and B respectively with the following code:

        float4 Pixel(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float y = tex2D(ytexture, texCoord);
            float v = tex2D(vtexture, texCoord);
            float u = tex2D(utexture, texCoord);
            // R = Y + 1.140 (V - 128)
            // G = Y - 0.395 (U - 128) - 0.581 (V - 128)
            // B = Y + 2.028 (U - 128)
            float r = y; // y + 1.140 * v;
            float g = v; // y - 0.395 * u - 0.581 * v;
            float b = u; // y + 2.028 * u;
            float4 result;
            result.a = 255;
            result.r = r; // clamp(r, 0, 255);
            result.g = g; // clamp(g, 0, 255);
            result.b = b; // clamp(b, 0, 255);
            return result;
        }

    then the resulting image is quite funny: I can see the image, but the colors are totally distorted, as they should be. The formula I should apply shows up in the comments of the pixel shader, but when I apply it, the resulting image is pretty brutally magenta-only. This gets me to the question: when I read an L8 texture into a float, with float y = tex2D(ytexture, texCoord);, what is the range of values? The "origin" values are 1 byte, 0 to 255, and the formula I have assumes this. Naturally I am totally off if the values returned are somehow normalized. My clamp operation at the end will also fail if, for example, colors in a pixel shader are normalized 0 to 1. Anyone have an idea how this works? Please also point me to documentation; I have not found anything in this regard.
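
    For what it's worth, Direct3D samplers do return L8 data normalized to [0,1], and shader outputs are likewise expected in [0,1], so the byte-oriented constants need rescaling: 128 becomes 0.5 and the clamps become saturates to [0,1]. A C++ reference of the same conversion in normalized units, using the coefficients from the question's comments:

        // YUV -> RGB with all channels normalized to [0,1]
        // (coefficients as given in the shader comments above).
        struct RGBf { float r, g, b; };

        static float clamp01(float x) { return x < 0 ? 0 : (x > 1 ? 1 : x); }

        RGBf yuvToRgb(float y, float u, float v) {
            float r = y + 1.140f * (v - 0.5f);
            float g = y - 0.395f * (u - 0.5f) - 0.581f * (v - 0.5f);
            float b = y + 2.028f * (u - 0.5f);
            return { clamp01(r), clamp01(g), clamp01(b) };
        }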

    Read the article

  • Sharing data between graphics and physics engine in the game?

    - by PolGraphic
    I'm writing a game engine that consists of a few modules. Two of them are the graphics engine and the physics engine. I wonder if it's a good solution to share data between them? The two ways (sharing or not) look like this. Without sharing data:

        GraphicsModel{
            //some data common to graphics and physics, like position
            //some graphics-only data,
            //like textures and detailed model vertices that physics doesn't need
        };
        PhysicsModel{
            //some data common to graphics and physics, like position
            //some physics-only data
            //usually my physics data contains A LOT more information than graphics data
        };
        engine3D->createModel3D(...);
        physicsEngine->createModel3D(...);
        //connect graphics and physics data
        //e.g. update the graphics model's position when the physics model's position changes

    I see two main problems:
        1. A lot of redundant data (like two positions for both physics and graphics data).
        2. Problems with updating data (I have to manually update graphics data when physics data changes).

    With sharing data:

        Model{
            //some data common to graphics and physics, like position
        };
        GraphicModel : public Model{
            //some graphics-only data,
            //like textures and detailed model vertices that physics doesn't need
        };
        PhysicsModel : public Model{
            //some physics-only data
            //usually my physics data contains A LOT more information than graphics data
        };
        model = engine3D->createModel3D(...);
        physicsEngine->assignModel3D(&model);
        //will cast to PhysicsModel for its purposes??
        //when physics changes anything (like position) in the model
        //(which it treats as a PhysicsModel), the position for graphics data
        //will change as well (because it's the same model)

    Problems here:
        1. physicsEngine cannot create new objects, it can just "assign" existing ones from engine3D (which somehow looks less independent to me).
        2. Casting data in the assignModel3D function.
        3. physicsEngine and graphicsEngine must be careful: neither can delete data the other may still need. But that's a rare situation; moreover, they can just delete the pointer, not the object. Or we can assume that graphicsEngine deletes objects and physicsEngine just the pointers to them.

    Which way is better? Which will produce more problems in the future? I like the second solution more, but I wonder why most graphics and physics engines prefer the first one (maybe because they normally make only a graphics or only a physics engine, and somebody else connects them in the game?). Do they have any more hidden pros & cons?

    Read the article

  • Loading SpriteFont through a different class than Game.cs

    - by MintyAnt
    I am trying to load up a single SpriteFont to print some debug information. In our current game, we load up both textures and music through a ResourceManager. They are both loaded with a filestream and thus do not require Content.Load:

        SoundEffect soundEffect = SoundEffect.FromStream( fs );

    Since this ResourceManager does not inherit from Game, nor works like Game.cs, I cannot use the usual method:

        SpriteFont spriteFont = Content.Load<SpriteFont>(resource.Key.Item2);

    Does anyone have any idea how I can either load the SpriteFont a different way, or create my own ContentManager?

    Read the article
