Search Results


  • Why are trees shining in the background?

    - by Kinected
    Currently I am creating a forest scene in the dark, and the trees shine when they are far away but look fine when I get close. I have the shaders set to "Nature/Tree Soft Occlusion [bark/leaves]", but the trees still render strangely at a distance. I tried placing the trees in a folder named "Ambient-Occlusion", as suggested here, but no luck. Fog is also turned off. Thanks in advance.

    Read the article

  • How are these bullets done?

    - by Mike
    I really want to know how the bullets in Radiangames Inferno are done. The bullets seem like they are just billboard particles, but I am curious about how their tails are implemented. They can curve, so they are not just a single billboard. They also appear continuous, which implies (I think) that the tails are not made of a bunch of smaller particles. Can anyone shed some light on this for me?
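    If it helps, a common technique for curved, continuous-looking tails is to keep a short history of each bullet's recent positions and stretch a tapered strip (or a chain of stretched billboards) along it. Below is a minimal XNA/C# sketch of that idea; the class and all names are hypothetical, not taken from Radiangames Inferno.

        // Hypothetical sketch: build a tapered triangle strip along a bullet's recent path.
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class BulletTrail
        {
            private readonly Queue<Vector2> points = new Queue<Vector2>();
            private const int MaxPoints = 16;

            public void Update(Vector2 bulletPosition)
            {
                points.Enqueue(bulletPosition);
                if (points.Count > MaxPoints)
                    points.Dequeue(); // drop the oldest sample so the tail stays short
            }

            // Two vertices per sample, straddling the path; width tapers to zero at the tail.
            // Draw the result with PrimitiveType.TriangleStrip (vertex colors only in this sketch).
            public VertexPositionColor[] BuildStrip(float headWidth)
            {
                Vector2[] pts = points.ToArray();
                if (pts.Length < 2)
                    return new VertexPositionColor[0];

                var verts = new VertexPositionColor[pts.Length * 2];
                for (int i = 0; i < pts.Length; i++)
                {
                    int j = (i == 0) ? 1 : i; // reuse the first segment's direction at the tail end
                    Vector2 dir = Vector2.Normalize(pts[j] - pts[j - 1]);
                    Vector2 side = new Vector2(-dir.Y, dir.X); // perpendicular to the travel direction
                    float w = headWidth * i / (pts.Length - 1);
                    verts[i * 2] = new VertexPositionColor(new Vector3(pts[i] + side * w, 0f), Color.White);
                    verts[i * 2 + 1] = new VertexPositionColor(new Vector3(pts[i] - side * w, 0f), Color.White);
                }
                return verts;
            }
        }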

    Read the article

  • Arcball Problems with UDK

    - by opdude
    I'm trying to re-create an arcball example from a NeHe tutorial, where an object floating in the air can be rotated in a more realistic way (in my game the object is attached to the player at a distance, like the Physics Gun, for example). However, I'm having trouble getting this to work with UDK. I have created an LGArcBall that follows the NeHe example, and I've compared its outputs with the example code. I think my problem lies in what I do with the quaternion returned from the LGArcBall. Currently I take the returned quaternion, convert it to a rotation matrix, take the product of that and the last rotation (saved when the object is first clicked), convert the result to a Rotator, and set that as the object's rotation. If you could point me in the right direction that would be great; my code can be found below.

        class LGArcBall extends Object;

        var Quat StartRotation;
        var Vector StartVector;
        var float AdjustWidth, AdjustHeight, Epsilon;

        function SetBounds(float NewWidth, float NewHeight)
        {
            AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f);
            AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f);
        }

        function StartDrag(Vector2D startPoint, Quat rotation)
        {
            StartVector = MapToSphere(startPoint);
        }

        function Quat Update(Vector2D currentPoint)
        {
            local Vector currentVector, perp;
            local Quat newRot;

            // Map the new point to the sphere
            currentVector = MapToSphere(currentPoint);

            // Compute the vector perpendicular to the start and current vectors
            perp = StartVector cross currentVector;

            // Make sure our length is larger than Epsilon
            if (VSize(perp) > Epsilon)
            {
                // Return the perpendicular vector as the transform
                newRot.X = perp.X;
                newRot.Y = perp.Y;
                newRot.Z = perp.Z;
                // In the quaternion values, w is cosine(theta / 2),
                // where theta is the rotation angle
                newRot.W = StartVector dot currentVector;
            }
            else
            {
                // The two vectors coincide, so return an identity transform
                newRot.X = 0.0f;
                newRot.Y = 0.0f;
                newRot.Z = 0.0f;
                newRot.W = 0.0f;
            }
            return newRot;
        }

        function Vector MapToSphere(Vector2D point)
        {
            local float x, y, length, norm;
            local Vector result;

            // Transform the mouse coords to [-1..1] and invert the Y coord
            x = (point.X * AdjustWidth) - 1.0f;
            y = 1.0f - (point.Y * AdjustHeight);

            length = (x * x) + (y * y);

            // If the point is mapped outside of the sphere (length > radius squared)
            if (length > 1.0f)
            {
                norm = 1.0f / Sqrt(length);
                // Return the "normalized" vector, a point on the sphere
                result.X = x * norm;
                result.Y = y * norm;
                result.Z = 0.0f;
            }
            else // It's inside the sphere
            {
                // Return a vector to the point mapped inside the sphere:
                // z = sqrt(radius squared - length)
                result.X = x;
                result.Y = y;
                result.Z = Sqrt(1.0f - length);
            }
            return result;
        }

        DefaultProperties
        {
            Epsilon = 0.000001f
        }

    I'm then attempting to rotate the object when the mouse is dragged, with the following update code in my PlayerController:

        // Get the mouse position
        MousePosition.X = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.X;
        MousePosition.Y = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.Y;

        newQuat = ArcBall.Update(MousePosition);
        rotMatrix = MakeRotationMatrix(QuatToRotator(newQuat));
        rotMatrix = rotMatrix * LastRot;

        LGMoveableActor(movingPawn.CurrentUseableObject).SetPhysics(EPhysics.PHYS_Rotating);
        LGMoveableActor(movingPawn.CurrentUseableObject).SetRotation(MatrixGetRotator(rotMatrix));

    Read the article

  • Kinect joint coordinates and XNA animation

    - by Sweta Dwivedi
    I have written a program to record the x, y, z coordinates of the hand joint, and I want to animate my 2D or 3D models according to these coordinates. However, the output x, y, z coordinates only fluctuate from -0 to 1, never more than that, so I assume I need to multiply them back by the screen width and height. It still doesn't animate according to the original x, y, z points, though. Are there any transformations I might be missing?

        while ((line = r.ReadLine()) != null)
        {
            string[] temp = line.Split(',');
            int x = (int)(float.Parse(temp[0]) * maxWidth);
            int y = (int)(float.Parse(temp[1]) * maxHeight);
        }
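    One transformation that may be missing, assuming the recorded values really are normalized to roughly [-1, 1]: remap each axis into pixel space (and flip Y, since Kinect's Y axis points up while the screen's points down) rather than multiplying the raw value by the screen size. A minimal sketch under that assumption, reusing r, maxWidth, and maxHeight from the question:

        string line;
        while ((line = r.ReadLine()) != null)
        {
            string[] temp = line.Split(',');
            float nx = float.Parse(temp[0]); // roughly -1..1
            float ny = float.Parse(temp[1]);

            int x = (int)((nx + 1f) * 0.5f * maxWidth);         // [-1, 1] -> [0, maxWidth]
            int y = (int)((1f - (ny + 1f) * 0.5f) * maxHeight); // remap and flip Y
        }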

    Read the article

  • Rendering scaled-down card images

    - by user1065145
    I have high-quality SVG card images, but they drastically lose quality when I downsize them. I have tried two ways of rendering the cards (using Inkscape and ImageMagick):

    1) Render the SVG to a high-resolution PNG and resize that:

        inkscape -D --export-png=QS1024.png --export-width=1024 QS.svg
        convert QS1024.png -filter Lanczos -sampling-factor 1x1 -resize 71x QS71.png

    2) Render the SVG to an image of the proper size in one step:

        inkscape -D --export-png=QS71.png --export-width=71 QS.svg

    Both approaches generate blurry card images that look even worse than the old Windows cards. What is the best way to generate smaller card images from the SVG sources without losing so much quality?

    UPDATE: I am using Inkscape to render the SVG to PNG, and ImageMagick to downsize the PNG. I've tried convert -resize with a couple of filters (Lanczos/Mitchell/etc.), but the result was pretty much the same. (The original and the 71-px raster images were attached for comparison.)

    Read the article

  • Algorithms for rainfall + river creation in procedurally generated terrain

    - by Peck
    I've recently become fascinated by the things that can be done with procedurally generated terrain, and I have started experimenting with world building a bit. I'd like to make worlds something like Dwarf Fortress, with biomes created by meshing together various maps. The first step is done: using the diamond-square algorithm I've created some nice heightmaps. Next, I would like to add some water features and have them generated somewhat realistically from rainfall. I've read about a few approaches, such as starting at the high points of the map and "stepping" down to the lowest neighboring point, pooling/eroding as the water works its way down to sea level. Are there any documented algorithms for this, or is it more off the cuff? Would love any advice/thoughts.
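    For what it's worth, a minimal sketch of the "step down to the lowest neighbor" idea described above might look like this (assumptions: a square heightmap, 4-connected neighbors, and a simple stop at local minima where pooling would begin; all names are illustrative):

        static class Rainfall
        {
            // Trace one droplet downhill from (x, y) until it reaches sea level or a pit.
            public static void TraceDroplet(float[,] height, int x, int y, float seaLevel)
            {
                int w = height.GetLength(0), h = height.GetLength(1);
                while (height[x, y] > seaLevel)
                {
                    int bestX = x, bestY = y;
                    // Find the lowest 4-connected neighbor.
                    foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (height[nx, ny] < height[bestX, bestY]) { bestX = nx; bestY = ny; }
                    }
                    if (bestX == x && bestY == y)
                        break; // local minimum: pool here (or raise the water level and keep flowing)

                    // Mark (x, y) as river and/or apply erosion here, then step downhill.
                    x = bestX; y = bestY;
                }
            }
        }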

    Read the article

  • Tiled Editor: How is this Map Handling Collision?

    - by user2736286
    BrowserQuest map in question. From what I understand, with Tiled there are two main ways to specify collision: (1) create an object layer and interpret its shapes in the engine as collision objects, or (2) create a tile layer, give every tile in it a collision property, and interpret all tiles in that layer as collision objects. I'm using BrowserQuest as a big source of inspiration for my project, and I want to know how they handled collision on the level-editing side. I've checked through all their layers, expecting an object layer to be handling collision for things like cliffs, but there are no such object layers to be found. Furthermore, the tile layers containing the cliff tiles have no properties at all, meaning they didn't just mark those tile layers as "collision" either. I especially need to know how they handled less rectangular shapes (the original post showed a small apple as an example). I could imagine that they don't use explicit collision layers at all and instead determine collision in the engine, based on the presence of specific tile-layer sprites. BrowserQuest has whole-tile movement, so it wouldn't look too odd if a small apple, taking up only a fraction of the tile size, prevents movement over that entire tile. But I'm creating a game with more precise movement, so collision has to be tight to the apple, and I really want to know how BrowserQuest approached defining collision. If anyone knowledgeable with Tiled could take a quick look at the map, I'd appreciate it! I'm tearing my hair out here :). Thanks

    Read the article

  • In an MMO game, how do you handle player characters who are offline?

    - by Deele
    In my medieval MMO game, players have their own character that represents them inside the game, like a King. Players can have cities and armies, but the King acts as the main driving force. Then it comes to a player going offline, on vacation, or disconnecting. How do I deal with an "offline King" in a way that keeps some sense of reality in the game without ruining everything for the player? I have never liked unrealistic stuff in games, like characters appearing and disappearing out of thin air on connect/disconnect, as in WoW or other MMORPGs; it's like in the Matrix movie, where your "avatar" inside the system just vanishes when you are disconnected. OK, so if the player's character stays where it was left, can other players who are online kick his ass while the offline character stands frozen? I see only one solution: give the player's character some sort of AI that controls it while the player is offline. Are there any other solutions? Maybe some sort of legend/story could cast the user as only an inner voice, leaving the King passively controlled by the user, or something similar. Please help! I hope you understand my question.

    Read the article

  • Cocos2d v2.0 and OpenGL 2.0/1.0: where to start

    - by mm24
    I started developing my very first game three months ago, using Cocos2d 2.0 for iPhone. I am now at the stage where I'd like to add some cool effects to the bullets and some special weapons (see my waveforms question here). I got a good answer in the cocos2d-iphone forum (see this one). Unfortunately, I am a bit paralyzed now: I don't know whether I would be overdoing it by learning OpenGL 2.0, or whether I should just stick to the old 1.0. There is a good intro to the various tutorials on Steffen Itterheim's blog (see this post). I would like to add to my game:

      - a blur effect on the bullets (here is a tutorial for OpenGL 1.0)
      - a waveform (see above)
      - some realistic water ripples (here is a nice sample code)

    So, given that I don't want to overdo things but at the same time want to achieve those effects, where should I start? Should I discard the OpenGL 1.0 tutorials, or should I use only OpenGL 1.0 code? How can I avoid confusion? The compiler seems to accept both, but there are conflicting calls in some circumstances. I am fairly sure this has some explanation; is there a reference to it somewhere?

    Read the article

  • The right way to add images to Monogame/Windows

    - by ashes999
    I'm starting out with MonoGame. For now, I'm only targeting Windows (desktop, not Windows 8 specifically). I've used a couple of XNA products in the past (raw XNA, FlatRedBall, SilverSprite), so I may have a misunderstanding about how I should add images to my content. How do I add images to my project? Currently, I created a new MonoGame project and added a folder called "Content" with images under it; the only caveat is that I need to set the Copy to Output Directory action to one of the Copy options. It seems strange, because my "raw" XNA project just last week had a Content project in it (an XNA Framework Content Pipeline project, according to VS2010), which compiled my images to XNB (I think). It seems like MonoGame doesn't use the same content pipeline, but I'm not sure. Edit: My question is not "how do I get the XNA content pipeline to work with MonoGame"; it is "why would I want to use the XNA content pipeline in MonoGame?" There are (at least) two solutions that I can see today: (1) add the images to the MonoGame project and set their Copy to Output Directory options to copy, or (2) add an XNA content pipeline project, add my images to that instead, and reference it from my MonoGame project. Which solution should I use, and why? I currently have a working version using the first option.

    Read the article

  • Libgdx 2D game, randomly generated world of random size: how do I get mouse coordinates?

    - by Solom
    I'm a noob and English is not my mother tongue, so please bear with me! I'm generating a map for a side-scroller out of a 2D array. That is, the array holds different values, and I create blocks based on those values. Now, my problem is matching the mouse coordinates on screen with the actual block the mouse is pointing at.

        public class GameScreen implements Screen {
            private static final int WIDTH = 100;
            private static final int HEIGHT = 70;

            private OrthographicCamera camera;
            private Rectangle glViewport;
            private SpriteBatch spriteBatch;
            private Map map;
            private Block block;
            ...

            @Override
            public void show() {
                camera = new OrthographicCamera(WIDTH, HEIGHT);
                camera.position.set(WIDTH / 2, HEIGHT / 2, 0);
                glViewport = new Rectangle(0, 0, WIDTH, HEIGHT);
                map = new Map(16384, 256);
                map.printTileMap(); // Debugging only
                spriteBatch = new SpriteBatch();
            }

            @Override
            public void render(float delta) {
                // Clear previous frame
                Gdx.gl.glClearColor(1, 1, 1, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                GL30 gl = Gdx.graphics.getGL30();
                // gl.glViewport((int) glViewport.x, (int) glViewport.y, (int) glViewport.width, (int) glViewport.height);

                spriteBatch.setProjectionMatrix(camera.combined);
                camera.update();

                spriteBatch.begin();
                // Draw the map
                this.drawMap();
                // spriteBatch.flush();
                spriteBatch.end();
            }

            private void drawMap() {
                for (int a = 0; a < map.getHeight(); a++) {
                    // Bounds check (y)
                    if (camera.position.y + camera.viewportHeight < a) // || camera.position.y - camera.viewportHeight > a)
                        break;
                    for (int b = 0; b < map.getWidth(); b++) {
                        // Bounds check (x)
                        if (camera.position.x + camera.viewportWidth < b) // || camera.position.x > b)
                            break;
                        // Dynamic rendering via BlockManager
                        int id = map.getTileMap()[a][b];
                        Block block = BlockManager.map.get(id);
                        if (block != null) // Check for air
                        {
                            block.setPosition(b, a);
                            spriteBatch.draw(block.getTexture(), b, a, 1, 1);
                        }
                    }
                }
            }

    As you can see, I don't use the viewport anywhere; I'm not sure if I'll need it somewhere down the road. The map is 16384 blocks wide, and one block is 16 pixels in size. One of my naive approaches was this:

        if (Gdx.input.isButtonPressed(Input.Buttons.LEFT)) {
            Vector3 mousePos = new Vector3();
            mousePos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(mousePos);
            System.out.println(Math.round(mousePos.x)); // *16); // Debugging
            // TODO: round
            // map.getTileMap()[mousePos.x][mousePos.y] = 2; // Draw at mouse position
        }

    I fear I confused myself somewhere down the road. What I want to do is update the "block" (or rather the information in the Map / 2D array) so that in the next render() there is another block there; basically, drawing onto the map at the mouse position. If anyone could point me in the right direction it would be highly appreciated. Thanks!

    Read the article

  • I am looking to create realistic car movement using vectors

    - by bobthemac
    I have googled how to do this and found this: http://www.helixsoft.nl/articles/circle/sincos.htm. I have had a go at it, but most of the functions shown didn't work; I just got errors because they didn't exist. I have looked at the cos and sin functions but don't understand how to use them, or how to get the car movement working correctly using vectors. I have no code because I am not sure what to do, sorry. Any help is appreciated. EDIT: I am restricted to using the TL engine for my game and am not allowed to add any sort of physics engine; it must be programmed in C++. Here is a sample of what I got from trying to follow what was done in the link I provided:

        if (myEngine->KeyHeld(Key_W))
        {
            length += carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_S))
        {
            length -= carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_A))
        {
            angle -= carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_D))
        {
            angle += carSpeedIncrement;
        }

        carVolocityX = cos(angle);
        carVolocityZ = cos(angle);

        car->MoveX(carVolocityX * frameTime);
        car->MoveZ(carVolocityZ * frameTime);
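    For reference: the standard polar-to-vector mapping uses cosine on one axis and sine on the other, i.e. velocityX = speed * cos(angle) and velocityZ = speed * sin(angle). Note that the sample above takes the cosine for both components, so X and Z always move in lockstep.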

    Read the article

  • SSAO implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for the shading calculations. When filling the G-buffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation I render the scene as a full-screen quad and save the occlusion factor to a texture. (Vertex positions in world space: link. Normals in world space: link.) The SSAO implementation:

        subroutine (RenderPassType)
        void ssao()
        {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);

            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;

            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);

            float occlusion = 0.0;
            float radius = 4.0;

            for (int i = 0; i < kernelSize; ++i)
            {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;

                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }

            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results: camera looking towards -z in world space: link; camera looking towards +z in world space: link. I wonder whether it is possible to use world coordinates in the above code at all: when I move the camera I get different results, because the world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas where there is occlusion, so the ground should have white areas only near the object.

    Read the article

  • The underlying mechanism in 'yield return www' of Unity3D Game Engine

    - by thyandrecardoso
    In the Unity3D game engine, a common code sequence for getting remote data is this:

        WWW www = new WWW("http://remote.com/data/location/with/texture.png");
        yield return www;

    What is the underlying mechanism here? I know we use the yield mechanism to allow the next frame to be processed while the download completes. But what is going on under the hood when we do the yield return www? What method is being called (if any) on the WWW class? Is Unity using threads? Is the "upper" Unity layer getting hold of the www instance and doing something with it?
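    To sketch the polling model (an illustrative toy only; Unity's real coroutine scheduler is engine-native code, and the names below are hypothetical): a coroutine is a C# IEnumerator, and each frame the engine looks at the object the coroutine last yielded. If it is a WWW that is not yet done, the coroutine stays suspended; otherwise MoveNext() is called to resume the code after the yield. The download itself proceeds inside the engine off the main thread, but your coroutine code only ever runs on the main thread.

        using System.Collections;
        using System.Collections.Generic;
        using UnityEngine;

        // Toy model of the per-frame polling; not Unity's actual implementation.
        public class ToyScheduler : MonoBehaviour
        {
            private readonly List<IEnumerator> routines = new List<IEnumerator>();

            public void StartToyCoroutine(IEnumerator routine)
            {
                if (routine.MoveNext()) routines.Add(routine); // run up to the first yield
            }

            void Update() // called once per frame
            {
                for (int i = routines.Count - 1; i >= 0; i--)
                {
                    WWW www = routines[i].Current as WWW;
                    if (www != null && !www.isDone)
                        continue; // still downloading: leave the coroutine suspended
                    if (!routines[i].MoveNext())
                        routines.RemoveAt(i); // finished: drop the routine
                }
            }
        }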

    Read the article

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I were to use multiple images in a single texture for a 3D cube, how would I go about re-using each vertex (having 8 total instead of 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help with that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy; the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version has me quite befuddled.
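    For what it's worth, the usual answer is that a vertex must be duplicated whenever it needs a different UV pair for different faces. With a texture atlas, each cube corner belongs to three faces that sample three different atlas regions, so the corner needs three distinct (position, UV) combinations; that is why the standard indexed cube uses 24 vertices (4 per face) rather than 8.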

    Read the article

  • Sun & Moon Movement

    - by Thomas Mosey
    I'm creating a 2D HTML5 canvas game and am stuck on how to animate my sun and moon. The current setup basically places the moon at -1024 on the X axis and the sun at 0, and animates them at 1 pixel a second. My canvas is 1024 pixels wide, and whenever the sun's or moon's X position crosses the width of the canvas, its X position is set back to -1024 to repeat the animation. What I am trying to do is sync this up with my day/night cycle. Each day is 10000 ticks long (a tick being added every frame), with day and night each taking 50% (5000 ticks each). What I am trying to calculate is how much to add to the X position per frame so that the sun goes from an X of 0 to 1024 after 5000 ticks/frames. Any help is appreciated.
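    For the arithmetic itself: moving from X = 0 to X = 1024 in 5000 ticks means adding 1024 / 5000 = 0.2048 pixels per tick; and since each body covers 2048 px (from -1024 around to 1024) over the full 10000-tick day, the same increment works for both the sun and the moon.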

    Read the article

  • What is the difference between Constant Vertex Attributes and Uniforms?

    - by Samaursa
    According to the OpenGL ES 2.0 Programming Guide: A constant vertex attribute is the same for all vertices of a primitive, and therefore only one value needs to be specified for all the vertices of a primitive. For uniforms, the book states: ...any parameter to a shader that is constant across either all vertices or fragments (but that is not known at compile time) should be passed in as a uniform. I've always used uniforms for data that is constant for a primitive, but now it appears that attributes can also be used in the same way. Is there more to constant vertex attributes than simply "they are the same as uniforms"?
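    For context (standard OpenGL ES 2.0 behavior, stated here from memory rather than the book): a constant vertex attribute is set with the glVertexAttrib* calls while the corresponding attribute array is disabled, whereas a uniform is set with glUniform* and is per-program state. Two practical differences: uniforms can be read by both the vertex and fragment shaders, while attributes exist only in the vertex shader, and the number of attribute slots is typically far more limited than uniform storage.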

    Read the article

  • What is a good practice for 2D scene graph partitioning for culling?

    - by DevilWithin
    I need to know an efficient way to cull scene-graph objects, so that I render only the ones in view, as fast as possible. I am thinking of doing it the following way: each object keeps a local bounding box, which holds the object's own bounds, and a global bounding box, which holds the bounds of the object and all its children. When the camera moves, the render list is updated by traversing the global bounding boxes. When only an object moves, it enlarges or shrinks its ancestors' global bounding boxes as needed, and at the end the render list is updated (or not). What do you think of this approach? Do you think it will provide fast and efficient culling? Also, because the render list is a contiguous list, it could accelerate the rendering, right? Any further tips for 2D scene graphs are highly appreciated!
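    A minimal sketch of the traversal described above, in C# with illustrative names (LocalBounds is the object's own box, GlobalBounds the box of the object plus all children):

        using System.Collections.Generic;
        using System.Drawing; // RectangleF

        class Node
        {
            public RectangleF LocalBounds;                 // this object's own bounds
            public RectangleF GlobalBounds;                // bounds of the object plus all children
            public List<Node> Children = new List<Node>();
        }

        static class Culler
        {
            // Walk the graph, pruning whole subtrees whose global box misses the view.
            public static void CollectVisible(Node node, RectangleF view, List<Node> renderList)
            {
                if (!node.GlobalBounds.IntersectsWith(view))
                    return;                       // the entire subtree is off-screen
                if (node.LocalBounds.IntersectsWith(view))
                    renderList.Add(node);         // the node itself is visible
                foreach (Node child in node.Children)
                    CollectVisible(child, view, renderList);
            }
        }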

    Read the article

  • I am going to start learning to develop games, and have a very important question

    - by levi s.
    So I am going to start learning to develop games soon, and I have already learned the basics of Java. Before I really go all out: am I making a bad choice of language? Should I stop now and move to C++ or C#? Will that hinder me? Will Java hinder me worse? I'm kind of having regrets about saying "oh hey, Minecraft was made in Java, it must be best!" What I'm mainly asking is: what should I do?

    Read the article

  • Render angles of a 3D model into 2D images?

    - by Ricket
    Is there a tool out there that you can give a 3D model file, and it will output 2D renders of it from various angles? For example, if you were making a 2D RPG but wanted your characters to look nice, you might model a character in 3D and then render it from 8 or more angles into images, which are then used by the 2D engine to give a pseudo-3D look. Does such a tool exist, or will it need to be custom-written or done manually?

    Read the article

  • Implementing a switch statement based on user input

    - by Dave Voyles
    I'm trying to delay the time it takes for the main menu screen to appear after a user has won or lost a match. As it stands, the game immediately displays a message stating "you won/lost" and waits six seconds before loading the menu screen. I would also like players to be able to press a key to advance to the menu screen immediately, but thus far my switch statement doesn't seem to do the trick. I've included the switch statement, along with my (theoretical) inputs. What could I be doing wrong here?

        if (gamestate == GameStates.End)
            switch (input.IsMenuDown(ControllingPlayer))
            {
                case true:
                    ScreenManager.AddScreen(new MainMenuScreen(), null); // Draws the MainMenuScreen
                    break;

                case false:
                    if (screenLoadDelay > 0)
                    {
                        screenLoadDelay -= gameTime.ElapsedGameTime.TotalSeconds;
                    }
                    ScreenManager.AddScreen(new MainMenuScreen(), null); // Draws the MainMenuScreen
                    break;
            }

        /// <summary>
        /// Checks for a "menu down" input action.
        /// The controllingPlayer parameter specifies which player to read
        /// input for. If this is null, it will accept input from any player.
        /// </summary>
        public bool IsMenuDown(PlayerIndex? controllingPlayer)
        {
            PlayerIndex playerIndex;

            return IsNewKeyPress(Keys.Down, controllingPlayer, out playerIndex) ||
                   IsNewButtonPress(Buttons.DPadDown, controllingPlayer, out playerIndex) ||
                   IsNewButtonPress(Buttons.LeftThumbstickDown, controllingPlayer, out playerIndex);
        }
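    If it helps, here is a hedged sketch of the flow described above, written as a plain if instead of a switch on a bool. Note that the false branch in the original adds the menu screen every frame whether or not the delay has expired, which may be why the delay appears to do nothing. The GameStates.Menu guard is hypothetical, only there to show the screen being added once:

        if (gamestate == GameStates.End)
        {
            screenLoadDelay -= gameTime.ElapsedGameTime.TotalSeconds;

            // Advance immediately on input, or once the delay has run out.
            if (input.IsMenuDown(ControllingPlayer) || screenLoadDelay <= 0)
            {
                ScreenManager.AddScreen(new MainMenuScreen(), null);
                gamestate = GameStates.Menu; // hypothetical: prevents re-adding every frame
            }
        }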

    Read the article

  • SFML title bar with weird characters when using UTF-8

    - by TheOm3ga
    (Previously asked at http://stackoverflow.com/questions/4922478/sfml-title-bar-with-weird-characters-when-using-utf-8) I've just started using SFML, and one of the first problems I've come across is weird characters in the title bar whenever I try to use accents or any other extended characters. For instance, I've got:

        sf::RenderWindow Ventana(sf::VideoMode(800, 600, 32), "Año nuevóóó");

    and the title bar renders it like "AÂ+o nuevoA³A³A³". This ONLY happens if my source code file is encoded in UTF-8; if I change the file encoding to ISO-8859-1, it shows properly. Obviously all of my files use UTF-8, as it's the system-wide encoding. I'm using GCC on Ubuntu GNU/Linux. I've tried using the different utilities in sf::Unicode to adapt the text, but none of them seems to work.

    Read the article

  • Object detection in a bitmap JavaScript canvas

    - by user1538127
    I want to detect clicks on canvas elements that are drawn using paths. So far, my plan is to store each element's path in a JavaScript data structure and then check whether the hit coordinates match an element's coordinates. I believe there are already algorithms for this kind of coordinate search, because rendering each element's path and testing for hits one by one would be inefficient when the number of elements gets large. Can anyone point me to one?
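    For what it's worth, the 2D canvas context has a built-in hit test, ctx.isPointInPath(x, y), which tests a point against the current path (newer browsers also accept a Path2D object as a first argument). Replaying each stored path and testing the click point that way is a common approach, and a coarse spatial index, such as a uniform grid of bounding boxes, can cut down how many paths need testing when there are many elements.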

    Read the article

  • Player sprite moving slower on iPhone 4

    - by nvillec
    I just finished getting movement/jump animation working for a player sprite in Xcode using Cocos2D. The basic movement algorithm is a timer that fires every 0.01 s and sets the sprite position to (sprite.position.x + xVel, sprite.position.y + yVel). Each time a movement button is tapped, the appropriate velocity (initialized to 0) is changed to whatever speed I choose, and a stop-movement button returns the velocity to 0. It's not an ideal solution, but I'm very new at this and stoked to at least have that working with little help from the internet. So I may not have explained that perfectly, but it is in fact working to my satisfaction in Xcode's iPhone Simulator. However, when I build it for my device and run it on my phone, the sprite's movement speed is noticeably slower than in Xcode. At first I thought it must have to do with the resolution of the iPhone 4 making the sprite's movement path twice as long, but I found that if I pull up the multitasking bar and then return to the app, the speed will sometimes jump back to normal. My second theory was that the code is just inefficient and bogging things down, but wouldn't I see that reflected in the frame rate? It stays at 59-60 the whole time, and the sprite-sheet animation runs at the correct speed. Has anyone experienced this? Is this a really obvious issue that I'm completely missing? Any help (or tips for optimizing my approach to movement) would be much appreciated!

    Read the article

  • Cocos2d update leaking memory

    - by Andrey Chernukha
    I have a weird issue: my app leaks memory on the device only, not in the simulator. It leaks whenever I schedule an update method, on any scene, even though the update method is empty, with nothing inside it except an NSLog. How can that be? I have even scheduled an update on the very first scene, where it seems there's nothing to leak, plus another empty one, and it is still leaking, or at least allocating something; either way, the memory consumed keeps increasing and my app soon crashes. I can detect the leak using Instruments (Memory / Activity Monitor) or with the following function:

        void report_memory(void)
        {
            struct task_basic_info info;
            mach_msg_type_number_t size = sizeof(info);
            kern_return_t kerr = task_info(mach_task_self(),
                                           TASK_BASIC_INFO,
                                           (task_info_t)&info,
                                           &size);
            if (kerr == KERN_SUCCESS) {
                NSLog(@"Memory in use (in bytes): %u", info.resident_size);
            } else {
                NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
            }
        }

    Can anyone explain what's going on?

    Read the article
