Search Results

Search found 1308 results on 53 pages for 'texture'.

Page 39/53 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • What is the easiest and fastest way to display an SDL_Surface in a window with SDL2?

    - by Semmu
    I would like to have an SDL_Surface representing the contents of the window, just like in the old days with SDL 1.2. What is the best and fastest way to do it in SDL2? What I found is that I need an SDL_Window, an SDL_Renderer for that window, an SDL_Texture to render, and an SDL_Surface to create the texture from. This seems a bit too much to me, since I just want to display a single image on the screen. Not to mention the impact on performance. On my machine (Lenovo Y510p laptop) this whole procedure takes 9ms, without any memory allocation, only using pre-allocated variables and a totally black SDL_Surface. Is there a way I could speed things up?
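
    One way that skips the renderer/texture path entirely (a minimal sketch in C against the SDL2 API, not a benchmarked answer) is to blit straight onto the window's own surface, which behaves much like the old SDL 1.2 screen surface:

        #include <SDL.h>

        /* Minimal sketch: show an SDL_Surface without any SDL_Renderer or SDL_Texture.
           Assumes `image` is a valid SDL_Surface* loaded elsewhere. */
        int show_surface(SDL_Surface *image)
        {
            SDL_Window *window = SDL_CreateWindow("demo",
                SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                image->w, image->h, 0);
            if (!window)
                return -1;

            SDL_Surface *screen = SDL_GetWindowSurface(window);
            SDL_BlitSurface(image, NULL, screen, NULL);
            SDL_UpdateWindowSurface(window);  /* the SDL2 counterpart of SDL_Flip */
            return 0;
        }

    Whether this is faster than the renderer path depends on the platform: it avoids the texture upload, but it also bypasses whatever GPU acceleration the renderer would provide.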

    Read the article

  • devIL causes program to be unable to start correctly

    - by Mark
    I just tried to use DevIL and ILUT to help me with OpenGL texture loading. However, whenever the program starts, I get the error: "The application was unable to start correctly (0xc000007b). Click OK to close the application." What happened? I'm using the Visual C++ 2010 RC, Windows 7 64-bit.

    Read the article

  • iPhone cocos2d CCTMXTiledMap tile transitions

    - by Jeff Johnson
    I am using cocos2d on the iPhone and am wondering if it is possible to use a texture mask in order to create tile transitions / fringe layer. For example, a grass tile and a dirt tile, I would want a tile that had both grass and dirt in it... Has anyone done this, or is the only way to create one tile for every possible transition?

    Read the article

  • Unloading vertex buffers in OpenGL

    - by Jeremy Statz
    I have an Android live wallpaper that I suspect is leaking memory, probably either textures or vertex arrays. I'm calling glDeleteTextures on my texture IDs, but I don't see any sort of equivalent for my vertex buffers. I'd like to be able to be sure both my textures and buffers are getting unloaded by OpenGL. Am I missing something? The documentation I've found seems to suggest OpenGL just works it out on its own, but that's not giving me a lot of comfort.
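
    For buffer objects the counterpart does exist; a minimal sketch (plain C against the GL API, with the vertex data and a current context assumed to exist elsewhere):

        /* Sketch: buffer objects are created and destroyed symmetrically to textures. */
        GLuint uploadVertices(const void *data, GLsizeiptr size)
        {
            GLuint vbo;
            glGenBuffers(1, &vbo);                 /* analogous to glGenTextures    */
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
            return vbo;
        }

        void releaseVertices(GLuint vbo)
        {
            glDeleteBuffers(1, &vbo);              /* analogous to glDeleteTextures */
        }

    Note that this only applies to vertex buffer objects; plain client-side vertex arrays are ordinary application memory and are freed by the app itself, not by OpenGL.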

    Read the article

  • Using Effect For Fog of War

    - by Qua
    I'm trying to apply fog of war to areas on the screen not currently visible to the player. I do this by rendering the game content into one RenderTarget and the fog of war into another, and then I merge them with an effect file that takes the color from the game RenderTarget and the alpha from the fog of war render target. The FOW RenderTarget is black where the FOW appears, and white where it doesn't. This does work, but it colors the fog of war (the unrevealed locations) white instead of the intended color of black. Before applying the effect I clear the backbuffer of the device to white. When I try to clear it to black, none of the fog of war appears at all, which I assume is a product of alpha blending with black. It works for all other colors, however, giving the resulting screen a tint of that color. How do I achieve a black fog while still being able to do alpha blending between the two render targets? The rendering code for applying the FOW:

        private RenderTarget2D mainTarget;
        private RenderTarget2D lightTarget;

        private void CombineRenderTargetsAndDraw()
        {
            batch.GraphicsDevice.SetRenderTarget(null);
            batch.GraphicsDevice.Clear(Color.White);
            fogOfWar.Parameters["LightsTexture"].SetValue(lightTarget);
            batch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
            fogOfWar.CurrentTechnique.Passes[0].Apply();
            batch.Draw(
                mainTarget,
                new Rectangle(0, 0,
                    batch.GraphicsDevice.PresentationParameters.BackBufferWidth,
                    batch.GraphicsDevice.PresentationParameters.BackBufferHeight),
                Color.White
            );
            batch.End();
        }

    The effect file I'm using to apply the FOW:

        texture LightsTexture;
        sampler ColorSampler : register(s0);
        sampler LightsSampler = sampler_state{ Texture = <LightsTexture>; };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 TexCoord : TEXCOORD0;
        };

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 tex = input.TexCoord;
            float4 color = tex2D(ColorSampler, tex);
            float4 alpha = tex2D(LightsSampler, tex);
            return float4(color.r, color.g, color.b, alpha.r);
        }

        technique Technique1
        {
            pass Pass1
            {
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }
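
    One possible adjustment, sketched here as an assumption rather than a confirmed fix: instead of writing the mask into the alpha channel and depending on what the backbuffer was cleared to, multiply the scene color by the mask inside the pixel shader, so unrevealed pixels are darkened toward black regardless of the clear color:

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float4 color = tex2D(ColorSampler, input.TexCoord);
            float mask = tex2D(LightsSampler, input.TexCoord).r; // 1 = revealed, 0 = fogged
            // Darken toward black in the shader instead of blending against the backbuffer.
            return float4(color.rgb * mask, color.a);
        }

    With this variant the batch can be drawn with BlendState.Opaque, since the fog is baked into the color rather than produced by the blend against the cleared backbuffer.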

    Read the article

  • Flash10 Triangle Rendering

    - by anon
    I know about Papervision 3D. However, a lot of the realism there comes from textures. Does anyone know of a benchmark that shows how many single-color, flat-shaded 3D triangles Flash 10 can reasonably render? I can't find such a benchmark online, or an engine for this (most seem to really revolve around bitmaps / textures). Thanks!

    Read the article

  • Freetype2 failing under WoW64

    - by Necrolis
    I built a TTF-to-D3D texture function using FreeType2 (2.3.9) to generate grayscale maps from the fonts. It works great under native Win32; however, on WoW64 it just explodes (well, FT_Done and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free. I know it should work, as games like WCIII, which to the best of my knowledge use FreeType2, run fine. This is my code, stripped of the D3D code (which causes no problems on its own):

        FT_Face pFace = NULL;
        FT_Error nError = 0;
        FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer,&nSize));
        if((nError = FT_New_Memory_Face(pLibrary,pFont,nSize,0,&pFace)) == 0)
        {
            FT_Set_Char_Size(pFace,nSize << 6,nSize << 6,96,96);
            for(unsigned char c = 0; c < 95; c++)
            {
                if(!FT_Load_Glyph(pFace,FT_Get_Char_Index(pFace,c + 32),FT_LOAD_RENDER))
                {
                    FT_Glyph pGlyph;
                    if(!FT_Get_Glyph(pFace->glyph,&pGlyph))
                    {
                        LOG("GET: %c",c + 32);
                        FT_Glyph_To_Bitmap(&pGlyph,FT_RENDER_MODE_NORMAL,0,1);
                        FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph);
                        FT_Bitmap* pBitmap = &pGlyphMap->bitmap;
                        const size_t nWidth = pBitmap->width;
                        const size_t nHeight = pBitmap->rows;
                        //add to texture atlas
                    }
                }
            }
        }
        else
        {
            FT_Done_Face(pFace);
            delete pFont;
            return FALSE;
        }
        FT_Done_Face(pFace);
        delete pFont;
        return TRUE;
        }

    ARCHIVE_LoadFile returns blocks allocated with new. As a secondary question, I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure as to whether this stretches the font to fit the size, or bounds it to a size. What I would like to do is render all the glyphs at, say, 24px (MS Word size here), then turn it into a signed distance field in a 32px area.

    Update: After much fiddling, I got a test app to work, which leads me to think the problems are arising from threading, as my code is running in a secondary thread. I have compiled FreeType into a static lib using the multithreaded DLL runtime; my app uses the multithreaded libs. Going to see if I can set up a multithreaded test. Also updated to 2.4.4 to see if the problem was a known but fixed bug; didn't help, however.

    Update 2: After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4 -.- After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in the memory heap management of Windows. Is it possible that there is a bug in FreeType2 that makes it blow up under user threads?
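
    On the secondary question, a small sketch using the same FreeType API and the face from the code above: FT_Set_Pixel_Sizes sets the nominal size of the EM square in pixels rather than stretching individual glyphs, and passing 0 for one dimension makes it equal to the other; individual glyph bitmaps can still be smaller or larger than that nominal box.

        /* Sketch: request glyphs at a nominal 24 px size; 0 width means "same as height". */
        FT_Set_Pixel_Sizes(pFace, 0, 24);
        if(!FT_Load_Char(pFace, 'A', FT_LOAD_RENDER))
        {
            FT_Bitmap* pBmp = &pFace->glyph->bitmap;
            /* pBmp->width and pBmp->rows are the actual glyph extents, not necessarily 24. */
        }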

    Read the article

  • How do I destruct data associated with an object after the object no longer exists?

    - by Phineas
    I'm creating a class (say, C) that associates data (say, D) with an object (say, O). When O is destructed, O will notify C that it soon will no longer exist :( ... Later, when C feels it is the right time, C will let go of what belonged to O, namely D. If D can be any type of object, what's the best way for C to be able to execute "delete D;"? And what if D is an array of objects? My solution is to have D derive from a base class that C has knowledge of. When the time comes, C calls delete on a pointer to the base class. I've also considered storing void pointers and calling delete on those, but I found out that's undefined behavior and doesn't call D's destructor. I considered that templates could be a novel solution, but I couldn't work that idea out. Here's what I have so far for C, minus some details:

        // This class is C in the above description. There may be many instances of C.
        class Context {
        public:
            // D will inherit from this class
            class Data {
            public:
                virtual ~Data() {}
            };

            Context();
            ~Context();

            // Associates an owner (O) with its data (D)
            void add(const void* owner, Data* data);

            // O calls this when he knows it's the end (O's destructor).
            // All instances of C are now aware that O is gone and it's time to get rid
            // of all associated instances of D.
            static void purge (const void* owner);

            // This is called periodically in the application. It checks whether
            // O has called purge, and calls "delete D;"
            void refresh();

            // Side note: sometimes O needs access to D
            Data *get (const void *owner);

        private:
            // Used for mapping owners (O) to data (D)
            std::map<const void*, Data*> _data;
        };

        // Here's an example of O
        class Mesh {
        public:
            ~Mesh() { Context::purge(this); }

            void init(Context& c) const {
                Data* data = new Data;
                // GL initialization here
                c.add(this, new Data);
            }

            void render(Context& c) const {
                Data* data = c.get(this);
            }

        private:
            // And here's an example of D
            struct Data : public Context::Data {
                ~Data() {
                    glDeleteBuffers(1, &vbo);
                    glDeleteTextures(1, &texture);
                }
                GLint vbo;
                GLint texture;
            };
        };

    P.S. If you're familiar with computer graphics and VR, I'm creating a class that separates an object's per-context data (e.g. OpenGL VBO IDs) from its per-application data (e.g. an array of vertices) and frees the per-context data at the appropriate time (when the matching rendering context is current).
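
    One alternative worth sketching (an assumption on my part, not the author's chosen design, and it presumes C++11 is available): type-erase the destruction instead of requiring D to derive from a common base, by capturing the concrete pointer type in a deleter at the point where the association is made. Arrays work the same way because delete[] is chosen at the call site:

        #include <functional>
        #include <map>

        // Minimal sketch: add<T>() remembers how to delete the exact type it was given.
        class ContextSketch {
        public:
            template <typename T>
            void add(const void* owner, T* data) {
                _data[owner] = Entry{ data, [data] { delete data; } };
            }

            template <typename T>
            void addArray(const void* owner, T* data) {
                _data[owner] = Entry{ data, [data] { delete[] data; } };
            }

            void purge(const void* owner) {
                auto it = _data.find(owner);
                if (it != _data.end()) {
                    it->second.destroy();   // runs the correct destructor(s)
                    _data.erase(it);
                }
            }

        private:
            struct Entry {
                void* ptr;
                std::function<void()> destroy;
            };
            std::map<const void*, Entry> _data;
        };

    The deferred-deletion behavior of refresh() could keep the same Entry around until the matching rendering context is current; only the "how to delete" part changes.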

    Read the article

  • Why is this OpenGL ES code slow on iPhone?

    - by f3r3nc
    I've slightly modified the iPhone SDK's GLSprite example while learning OpenGL ES, and it turns out to be quite slow. Even in the simulator (and it's worse on the hardware), so I must be doing something wrong, since it's only 400 textured triangles.

        const GLfloat spriteVertices[] = {
            0.0f, 0.0f,
            100.0f, 0.0f,
            0.0f, 100.0f,
            100.0f, 100.0f
        };

        const GLshort spriteTexcoords[] = {
            0,0,
            1,0,
            0,1,
            1,1
        };

        - (void)setupView {
            glViewport(0, 0, backingWidth, backingHeight);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(0.0f, backingWidth, backingHeight, 0.0f, -10.0f, 10.0f);
            glMatrixMode(GL_MODELVIEW);
            glClearColor(0.3f, 0.0f, 0.0f, 1.0f);
            glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
            glEnableClientState(GL_VERTEX_ARRAY);
            glTexCoordPointer(2, GL_SHORT, 0, spriteTexcoords);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            // sprite data is preloaded. 512x512 rgba8888
            glGenTextures(1, &spriteTexture);
            glBindTexture(GL_TEXTURE_2D, spriteTexture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
            free(spriteData);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glEnable(GL_TEXTURE_2D);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
            glEnable(GL_BLEND);
        }

        - (void)drawView {
            ..
            glClear(GL_COLOR_BUFFER_BIT);
            glLoadIdentity();
            glTranslatef(tx-100, ty-100, 10);
            for (int i=0; i<200; i++) {
                glTranslatef(1, 1, 0);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
            }
            ..
        }

    drawView is called every time the screen is touched or the finger on the screen is moved, and tx, ty are set to the x, y coordinates where that touch happened. I've also tried using a GL buffer, where the translation was pre-generated and there was only one DrawArrays call, but that gave the same performance (~4 FPS).

    ===EDIT===

    Meanwhile I've modified this so that much smaller quads are used (sized 34x20) and much less overlapping is done. There are ~400 quads (800 triangles) spread over the whole screen. The texture is a 512x512 RGBA_8888 atlas and the texture coordinates are floats. The code is very ugly in terms of API efficiency: there are two MatrixMode changes along with two loads and two translations, then a DrawArrays for a triangle strip (quad). Now this produces ~45 FPS.

    Read the article

  • Different function returns from command line and within function

    - by Myx
    Hello: I have an extremely bizarre situation: I have a function in MATLAB which calls three other main functions and produces two figures for me. The function reads in an input jpeg image, crops it, segments it using kmeans clustering, and outputs 2 figures to the screen - the original image and the clustered image with the cluster centers indicated. Here is the function in MATLAB:

        function [textured_avg_x photo_avg_x] = process_database_images()
        clear all
        warning off %#ok

        type_num_max = 3; % type is 1='texture', 2='graph', or 3='photo'
        type_num_max = 1;

        img_max_num_photo = 100; % 400 photo images
        img_max_num_other = 100; % 100 textured, and graph images

        for type_num = 1:2:type_num_max
            if(type_num == 3)
                img_num_max = img_max_num_photo;
            else
                img_num_max = img_max_num_other;
            end
            img_num_max = 1;

            for img_num = 1:img_num_max
                [type img] = load_image(type_num, img_num);
                %img = imread('..\images\445.jpg');
                img = crop_image(img);
                [IDX k block_bounds features] = segment_image(img);
            end
        end
        end

    The function segment_image first shows me the color image that was passed in, performs kmeans clustering, and outputs the clustered image. When I run this function on a particular image, I get 3 clusters (which is not what I expect to get). When I run the following commands from the MATLAB command prompt:

        >> img = imread('..\images\texture\1.jpg');
        >> img = crop_image(img);
        >> segment_image(img);

    then the first image that is displayed by segment_image is the same as when I run the function (so I know that the clustering is done on the same image), but the number of clusters is 16 (which is what I expect). In fact, when I run my process_database_images() function on my entire image database, EVERY image is evaluated to have 3 clusters (this is a problem), whereas when I test some images individually, I get in the range of 12-16 clusters, which is what I prefer and expect. Why is there such a discrepancy? Am I having some syntax bug in my process_database_images() function? If more code is required from me (i.e. the segment_image function, or the crop_image function), please let me know. Thanks.

    Read the article

  • Procedural modeling of Robots?

    - by anon
    Procedural techniques are common for texture synthesis, plant modeling, and terrain modeling. However, I've seen very little work on algorithmic construction of robots, which is a bit surprising given how mechanical these systems are. Does anyone have a good resource on the algorithmic construction of robots / robotic humanoids? Thanks!

    Read the article

  • user defined Copy ctor, and copy-ctors further down the chain - compiler bug ? programmers brainbug

    - by J.Colmsee
    Hi. I have a little problem, and I am not sure if it's a compiler bug or stupidity on my side. I have this struct:

        struct BulletFXData
        {
            int time_next_fx_counter;
            int next_fx_steps;
            Particle particles[2]; //this is the interesting one
            ParticleManager::ParticleId particle_id[2];
        };

    The member "Particle particles[2]" has a self-made kind of smart pointer in it (a resource-counted texture class). This smart pointer has a default constructor that initializes the pointer to 0 (but that is not important). I also have another struct, containing the BulletFXData struct:

        struct BulletFX
        {
            BulletFXData data;
            BulletFXRenderFunPtr render_fun_ptr;
            BulletFXUpdateFunPtr update_fun_ptr;
            BulletFXExplosionFunPtr explode_fun_ptr;
            BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr;

            BulletFX(
                BulletFXData data,
                BulletFXRenderFunPtr render_fun_ptr,
                BulletFXUpdateFunPtr update_fun_ptr,
                BulletFXExplosionFunPtr explode_fun_ptr,
                BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr)
                : data(data),
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }

            /*
            //USER DEFINED copy-ctor. if it's defined things go crazy
            BulletFX(const BulletFX& rhs)
                : data(data), //this line of code seems to do a plain memory-copy without calling the right ctors
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }
            */
        };

    If I use the user-defined copy-ctor, my smart-pointer class goes crazy, and it seems that the copy-ctor / assignment operator aren't called as they should be. So, does this all make sense? It seems as if my own copy-ctor of struct BulletFX should do exactly what the compiler-generated one would, but it seems to forget to call the right constructors down the chain. Compiler bug? Me being stupid? Sorry about the big code; some small example could have illustrated it too, but often you guys ask for the real code, so well, here it is :D

    EDIT: more info: typedef ParticleId unsigned int; Particle has no user-defined copy-ctor, but has a member of type:

        Particle
        {
            ....
            Resource<Texture> tex_res;
            ...
        }

    Resource is a smart-pointer class and has all ctors defined (also the assignment operator), and it seems that Resource is copied bitwise.

    EDIT: henrik solved it... data(data) is stupid of course! It should of course be rhs.data!!! Sorry for the huge amount of code with a very little bug in it!!! (Guess you shouldn't code at 1 in the morning :D)
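
    For reference, the fix mentioned in the last edit amounts to initializing every member from rhs rather than from itself, roughly:

        // Corrected copy constructor: each member is copied from rhs, not from itself.
        BulletFX(const BulletFX& rhs)
            : data(rhs.data),
              render_fun_ptr(rhs.render_fun_ptr),
              update_fun_ptr(rhs.update_fun_ptr),
              explode_fun_ptr(rhs.explode_fun_ptr),
              lifetime_over_fun_ptr(rhs.lifetime_over_fun_ptr)
        {
        }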

    Read the article

  • Does OpenGL ES support environment shaders?

    - by Soviut
    I want to make a metallic 3D object that appears to be reflective. I want to accomplish this using an environment shader that uses either a sphere map or a cube map to which I can assign an image or texture as the "reflection" source. Does OpenGL ES on the iPhone support this in any version?
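
    Under OpenGL ES 2.0, where custom shaders are available, the cube-map route can be written by hand; a minimal fragment-shader sketch (the varyings are assumed to be supplied by a matching vertex shader):

        // Sketch: sample an environment cube map along the reflection vector.
        precision mediump float;

        uniform samplerCube u_envMap;   // the image/texture used as the "reflection" source
        varying vec3 v_normal;          // surface normal (from the vertex shader)
        varying vec3 v_viewDir;         // direction from the surface point toward the eye

        void main()
        {
            vec3 r = reflect(-normalize(v_viewDir), normalize(v_normal));
            gl_FragColor = textureCube(u_envMap, r);
        }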

    Read the article

  • How to detect OpenGL capabilities without creating a GLSurfaceView (Android)

    - by ADB
    I am trying to access the OpenGL capabilities of the phone before deciding whether to use OpenGL or Canvas for graphics purposes. However, all the functions I can find documentation on require you to already have a valid OpenGL context (namely, create a GLSurfaceView, assign it a renderer, and then check the OpenGL parameters in onSurfaceCreated). So, is there a way to check the extensions, renderer name, and max texture size capability of the phone BEFORE having to create any OpenGL views?
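
    A partial answer, sketched in Java: the supported GL ES version can be read with no view or context at all, although the extension string, renderer name, and max texture size still require a context (for example a tiny off-screen EGL pbuffer created and torn down before the real UI exists):

        import android.app.ActivityManager;
        import android.content.Context;
        import android.content.pm.ConfigurationInfo;

        /** Sketch: query the supported OpenGL ES version without creating any GL view. */
        public final class GlCapabilityCheck {
            public static boolean supportsEs2(Context context) {
                ActivityManager am =
                        (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
                ConfigurationInfo info = am.getDeviceConfigurationInfo();
                return info.reqGlEsVersion >= 0x20000; // 0x20000 corresponds to ES 2.0
            }
        }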

    Read the article

  • Help with C# program design implementation: multiple array of lists or a better way?

    - by Bob
    I'm creating a 2D tile-based RPG in XNA and am in the initial design phase. I was thinking about how I want my tile engine to work and came up with a rough sketch. Basically I want a grid of tiles, but at each tile location I want to be able to add more than one tile and have an offset. I'd like this so that I could do something like add individual trees on the world map to give more flair, or set bottles on a bar in some town without having to draw a bunch of different bar tiles with varying bottles. But maybe my reach is greater than my grasp. I went to implement the idea and had something like this in my Map object:

        List<Tile>[,] Grid;

    But then I thought about it. Let's say I had a world map of 200x200, which would actually be pretty small as far as RPGs go. That would amount to 40,000 Lists. To my mind there has to be a better way. Now this IS premature optimization. I don't know if the way I happen to design my maps and game will be able to handle this, but it seems needlessly inefficient and something that could creep up if my game gets more complex. One idea I have is to make the offset and the multiple tiles optional so that I'm only paying for them when needed. But I'm not sure how I'd do this. A multidimensional array of objects?

        object[,] Grid;

    So here are my criteria:

    - A 2D grid of tile locations
    - Each tile location has a minimum of 1 tile, but can optionally have more
    - Each extra tile can optionally have an x and y offset for pinpoint placement

    Can anyone help with some ideas for implementing such a design (don't need it done for me, just ideas) while keeping memory usage to a minimum? If you need more background, here's roughly what my Map and Tile objects amount to:

        public struct Map
        {
            public Texture2D Texture;
            public List<Rectangle> Sources; // Source Rectangles for where in Texture to get the sprite
            public List<Tile>[,] Grid;
        }

        public struct Tile
        {
            public int Index; // Where in Sources to find the source Rectangle
            public int X, Y;  // Optional offsets
        }
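
    One possible shape for this, sketched in C# under the assumption that decorated cells are rare (the type names here are placeholders, not part of the original design): keep the mandatory base tile in a dense 2D array and put the optional extras in a dictionary keyed by cell, so only cells that actually carry decorations allocate a list:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        // Sketch: dense base layer plus a sparse overlay for optional extra tiles.
        public struct PlacedTile
        {
            public int Index;    // which source rectangle to draw
            public int OffsetX;  // optional pixel offset within the cell
            public int OffsetY;
        }

        public class TileGrid
        {
            private readonly PlacedTile[,] baseLayer;                     // always one tile per cell
            private readonly Dictionary<Point, List<PlacedTile>> extras;  // only for decorated cells

            public TileGrid(int width, int height)
            {
                baseLayer = new PlacedTile[width, height];
                extras = new Dictionary<Point, List<PlacedTile>>();
            }

            public void SetBase(int x, int y, PlacedTile tile)
            {
                baseLayer[x, y] = tile;
            }

            public void AddExtra(int x, int y, PlacedTile tile)
            {
                Point key = new Point(x, y);
                List<PlacedTile> list;
                if (!extras.TryGetValue(key, out list))
                {
                    list = new List<PlacedTile>();
                    extras[key] = list;
                }
                list.Add(tile);
            }

            // Returns null when a cell has no extras, so empty cells cost nothing.
            public List<PlacedTile> GetExtras(int x, int y)
            {
                List<PlacedTile> list;
                return extras.TryGetValue(new Point(x, y), out list) ? list : null;
            }
        }

    A 200x200 map then costs one flat array of structs plus a handful of list allocations, instead of 40,000 lists.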

    Read the article

  • When to call glEnable(GL_FRAMEBUFFER_SRGB)?

    - by Steven Lu
    I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to another FBO with a texture in order to resolve the samples in order to read off the texture to perform post-processing shading while drawing to the backbuffer (FBO index 0). Now I'd like to get some correct sRGB output... The problem is the behavior of the program is rather inconsistent between when I run it on OS X and Windows and this also changes depending on the machine: On Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity but on my other machine with a Nvidia GTX 670 it does. On the Intel HD 3000 in OS X it will also apply it. So this probably means that I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However I can't seem to find any tutorials that actually tell me when I ought to enable it, they only ever mention that it's dead easy and comes at no performance cost. I am currently not loading in any textures so I haven't had a need to deal with linearizing their colors yet. To force the program to not simply spit back out the linear color values, what I have tried is simply comment out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively means this setting is enabled for the entire pipeline, and I actually redundantly force it back on every frame. I don't know if this is correct or not. It certainly does apply a nonlinearization to the colors but I can't tell if this is getting applied twice (which would be bad). It could apply the gamma as I render to my first FBO. It could do it when I blit the first FBO to the second FBO. Why not? I've gone so far as to take screen shots of my final frame and compare raw pixel color values to the colors I set them to in the program: I set the input color to RGB(1,2,3) and the output is RGB(13,22,28). That seems like quite a lot of color compression at the low end and leads me to question if the gamma is getting applied multiple times. I have just now gone through the sRGB equation and I can verify that the conversion seems to be only applied once as linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4)+0.055. Given that the expansion is so large for these low color values it really should be obvious if the sRGB color transform is getting applied more than once. So, I still haven't determined what the right thing to do is. does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set this during my GL init routine and forget about it hereafter?
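
    A sketch of one common arrangement (an assumption about intent, not a confirmed fix: all intermediate targets hold linear values and only the final output should be encoded): keep GL_FRAMEBUFFER_SRGB disabled while rendering and resolving the intermediate FBOs, and enable it only around the final pass into the default framebuffer, so the encoding is applied exactly once. Note that the enable only has an effect when the destination actually has an sRGB-capable format, which is one way the same code can behave differently across drivers.

        /* Sketch: encode linear -> sRGB only on the final write.
           sceneFbo, resolveFbo, w, h and the draw helpers are placeholders. */
        glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);          /* multisampled, linear */
        glDisable(GL_FRAMEBUFFER_SRGB);
        drawScene();

        glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);     /* resolve samples into the texture FBO */
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
        glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);

        glBindFramebuffer(GL_FRAMEBUFFER, 0);                 /* default framebuffer */
        glEnable(GL_FRAMEBUFFER_SRGB);
        drawPostProcessQuad();                                /* shader outputs linear values */
        glDisable(GL_FRAMEBUFFER_SRGB);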

    Read the article

  • Navigating MainMenu with arrow keys or controller

    - by Phil Royer
    I'm attempting to make my menu navigable with the arrow keys or via the d-pad on a controller. So Far I've had no luck. The question is: Can someone walk me through how to make my current menu or any libgdx menu keyboard accessible? I'm a bit noobish with some stuff and I come from a Javascript background. Here's an example of what I'm trying to do: http://dl.dropboxusercontent.com/u/39448/webgl/qb/qb.html For a simple menu that you can just add a few buttons to and it run out of the box use this: http://www.sadafnoor.com/blog/how-to-create-simple-menu-in-libgdx/ Or you can use my code but I use a lot of custom styles. And here's an example of my code: import aurelienribon.tweenengine.Timeline; import aurelienribon.tweenengine.Tween; import aurelienribon.tweenengine.TweenManager; import com.badlogic.gdx.Game; import com.badlogic.gdx.Gdx; import com.badlogic.gdx.Screen; import com.badlogic.gdx.graphics.GL20; import com.badlogic.gdx.graphics.Texture; import com.badlogic.gdx.graphics.g2d.Sprite; import com.badlogic.gdx.graphics.g2d.SpriteBatch; import com.badlogic.gdx.graphics.g2d.TextureAtlas; import com.badlogic.gdx.math.Vector2; import com.badlogic.gdx.scenes.scene2d.Actor; import com.badlogic.gdx.scenes.scene2d.InputEvent; import com.badlogic.gdx.scenes.scene2d.InputListener; import com.badlogic.gdx.scenes.scene2d.Stage; import com.badlogic.gdx.scenes.scene2d.ui.Skin; import com.badlogic.gdx.scenes.scene2d.ui.Table; import com.badlogic.gdx.scenes.scene2d.ui.TextButton; import com.badlogic.gdx.scenes.scene2d.utils.Align; import com.badlogic.gdx.scenes.scene2d.utils.ClickListener; import com.project.game.tween.ActorAccessor; public class MainMenu implements Screen { private SpriteBatch batch; private Sprite menuBG; private Stage stage; private TextureAtlas atlas; private Skin skin; private Table table; private TweenManager tweenManager; @Override public void render(float delta) { Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); batch.begin(); menuBG.draw(batch); batch.end(); //table.debug(); stage.act(delta); stage.draw(); //Table.drawDebug(stage); tweenManager.update(delta); } @Override public void resize(int width, int height) { menuBG.setSize(width, height); stage.setViewport(width, height, false); table.invalidateHierarchy(); } @Override public void resume() { } @Override public void show() { stage = new Stage(); Gdx.input.setInputProcessor(stage); batch = new SpriteBatch(); atlas = new TextureAtlas("ui/atlas.pack"); skin = new Skin(Gdx.files.internal("ui/menuSkin.json"), atlas); table = new Table(skin); table.setBounds(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); // Set Background Texture menuBackgroundTexture = new Texture("images/mainMenuBackground.png"); menuBG = new Sprite(menuBackgroundTexture); menuBG.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); // Create Main Menu Buttons // Button Play TextButton buttonPlay = new TextButton("START", skin, "inactive"); buttonPlay.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { ((Game) Gdx.app.getApplicationListener()).setScreen(new LevelMenu()); } }); buttonPlay.addListener(new InputListener() { public boolean keyDown (InputEvent event, int keycode) { System.out.println("down"); return true; } }); buttonPlay.padBottom(12); buttonPlay.padLeft(20); buttonPlay.getLabel().setAlignment(Align.left); // Button EXTRAS TextButton buttonExtras = new TextButton("EXTRAS", skin, "inactive"); buttonExtras.addListener(new ClickListener() { @Override 
public void clicked(InputEvent event, float x, float y) { ((Game) Gdx.app.getApplicationListener()).setScreen(new ExtrasMenu()); } }); buttonExtras.padBottom(12); buttonExtras.padLeft(20); buttonExtras.getLabel().setAlignment(Align.left); // Button Credits TextButton buttonCredits = new TextButton("CREDITS", skin, "inactive"); buttonCredits.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { ((Game) Gdx.app.getApplicationListener()).setScreen(new Credits()); } }); buttonCredits.padBottom(12); buttonCredits.padLeft(20); buttonCredits.getLabel().setAlignment(Align.left); // Button Settings TextButton buttonSettings = new TextButton("SETTINGS", skin, "inactive"); buttonSettings.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { ((Game) Gdx.app.getApplicationListener()).setScreen(new Settings()); } }); buttonSettings.padBottom(12); buttonSettings.padLeft(20); buttonSettings.getLabel().setAlignment(Align.left); // Button Exit TextButton buttonExit = new TextButton("EXIT", skin, "inactive"); buttonExit.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { Gdx.app.exit(); } }); buttonExit.padBottom(12); buttonExit.padLeft(20); buttonExit.getLabel().setAlignment(Align.left); // Adding Heading-Buttons to the cue table.add().width(190); table.add().width((table.getWidth() / 10) * 3); table.add().width((table.getWidth() / 10) * 5).height(140).spaceBottom(50); table.add().width(190).row(); table.add().width(190); table.add(buttonPlay).spaceBottom(20).width(460).height(110); table.add().row(); table.add().width(190); table.add(buttonExtras).spaceBottom(20).width(460).height(110); table.add().row(); table.add().width(190); table.add(buttonCredits).spaceBottom(20).width(460).height(110); table.add().row(); table.add().width(190); table.add(buttonSettings).spaceBottom(20).width(460).height(110); table.add().row(); table.add().width(190); table.add(buttonExit).width(460).height(110); table.add().row(); stage.addActor(table); // Animation Settings tweenManager = new TweenManager(); Tween.registerAccessor(Actor.class, new ActorAccessor()); // Heading and Buttons Fade In Timeline.createSequence().beginSequence() .push(Tween.set(buttonPlay, ActorAccessor.ALPHA).target(0)) .push(Tween.set(buttonExtras, ActorAccessor.ALPHA).target(0)) .push(Tween.set(buttonCredits, ActorAccessor.ALPHA).target(0)) .push(Tween.set(buttonSettings, ActorAccessor.ALPHA).target(0)) .push(Tween.set(buttonExit, ActorAccessor.ALPHA).target(0)) .push(Tween.to(buttonPlay, ActorAccessor.ALPHA, .5f).target(1)) .push(Tween.to(buttonExtras, ActorAccessor.ALPHA, .5f).target(1)) .push(Tween.to(buttonCredits, ActorAccessor.ALPHA, .5f).target(1)) .push(Tween.to(buttonSettings, ActorAccessor.ALPHA, .5f).target(1)) .push(Tween.to(buttonExit, ActorAccessor.ALPHA, .5f).target(1)) .end().start(tweenManager); tweenManager.update(Gdx.graphics.getDeltaTime()); } public static Vector2 getStageLocation(Actor actor) { return actor.localToStageCoordinates(new Vector2(0, 0)); } @Override public void dispose() { stage.dispose(); atlas.dispose(); skin.dispose(); menuBG.getTexture().dispose(); } @Override public void hide() { dispose(); } @Override public void pause() { } }

    Read the article

  • GLKit Memory Leak copywithZone

    - by TommyT39
    Running the Instruments utility against the game I'm writing shows a bunch of memory leaks related to copyWithZone when I cycle through an array and draw some simple cube objects. I'm not sure of the best way to track this down, as I'm new to OpenGL programming. My program is using ARC and is set to build for iOS 5. I am initializing GLKit to use OpenGL ES 2.0 and using the GLKBaseEffect so I don't have to write my own shaders, etc. This shouldn't be rocket science. I'm guessing that I must not be releasing something within the draw function. Below is the code for my draw function. Could you guys take a look and see if anything stands out as the problem? One other thing to note is that I'm using 15 different textures; each cube can be 1 of 15 different ones. I have a property set on the cube class for the texture, and I set it as I create the cubes in their array. But I do load all 15 when my program's viewDidLoad starts. They are small .jpg files that are less than 75k each, and each cube uses the same texture all the way around, so that shouldn't be too big of an issue. Here is the code for my draw function:

        - (void)draw {
            GLKMatrix4 xRotationMatrix = GLKMatrix4MakeXRotation(rotation.x);
            GLKMatrix4 yRotationMatrix = GLKMatrix4MakeYRotation(rotation.y);
            GLKMatrix4 zRotationMatrix = GLKMatrix4MakeZRotation(rotation.z);
            GLKMatrix4 scaleMatrix = GLKMatrix4MakeScale(scale.x, scale.y, scale.z);
            GLKMatrix4 translateMatrix = GLKMatrix4MakeTranslation(position.x, position.y, position.z);
            GLKMatrix4 modelMatrix = GLKMatrix4Multiply(translateMatrix,
                GLKMatrix4Multiply(scaleMatrix,
                    GLKMatrix4Multiply(zRotationMatrix,
                        GLKMatrix4Multiply(yRotationMatrix, xRotationMatrix))));
            GLKMatrix4 viewMatrix = GLKMatrix4MakeLookAt(0, 0, 1, 0, 0, -5, 0, 1, 0);

            effect.transform.modelviewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix);
            effect.transform.projectionMatrix = GLKMatrix4MakePerspective(0.125*M_TAU, 1.0, 2, 0);
            effect.texture2d0.name = wallTexture.name;
            [effect prepareToDraw];

            glEnable(GL_DEPTH_TEST);
            glEnable(GL_CULL_FACE);
            glEnableVertexAttribArray(GLKVertexAttribPosition);
            glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, triangleVertices);
            glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
            glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates);
            glDrawArrays(GL_TRIANGLES, 0, 18);
            glDisableVertexAttribArray(GLKVertexAttribPosition);
            glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
        }

    Read the article

  • Stuttering animation in iPhone OpenGL ES although fps is high

    - by guymic
    I am building a 2D OpenGL ES application for iPad. It displays a background texture and numerous textures on top of it which are always in motion. Every frame their locations are recalculated based on the time delta and speed, and the entire thing is rendered at 60 fps successfully, but still, as the movement speed of the sprites rises, the motion looks stuttery. Any ideas? Are there inherent problems with what I'm doing? Are there known design patterns for smooth animation?
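
    One pattern that often helps with exactly this symptom (offered as a sketch and an assumption about the cause, not a diagnosis): decouple the simulation from rendering with a fixed timestep and interpolate the drawn position between the last two simulated states, so jitter in the measured frame delta does not feed straight into the motion. Sketched in C, with drawSpriteAt standing in for the real draw call:

        /* Sketch: fixed-timestep simulation with render-time interpolation. */
        #define SIM_DT (1.0 / 60.0)

        extern void drawSpriteAt(double x);   /* placeholder for the actual rendering */

        static double accumulator = 0.0;
        static double prevX = 0.0, currX = 0.0;

        void tick(double frameDelta, double speed)
        {
            accumulator += frameDelta;
            while (accumulator >= SIM_DT) {
                prevX = currX;
                currX += speed * SIM_DT;      /* simulation advances in equal steps */
                accumulator -= SIM_DT;
            }
            double alpha = accumulator / SIM_DT;   /* 0..1 between the two sim states */
            drawSpriteAt(prevX + (currX - prevX) * alpha);
        }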

    Read the article
