Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • Player sprite moving slower on iPhone 4

    - by nvillec
    I just finished getting movement/jump animation working for a player sprite in Xcode using Cocos2D. The basic movement algorithm is a timer that updates every 0.01 s, changing the sprite position to (sprite.position.x + xVel, sprite.position.y + yVel). Each time a movement button is tapped, the appropriate velocity (initialized to 0) is changed to whatever speed I choose, then a stop-movement button returns the velocity to 0. It's not an ideal solution, but I'm very new at this and stoked to at least have it working with little help from the internet. I may not have explained that perfectly, but it does work to my satisfaction in Xcode's iPhone Simulator. However, when I build it for my device and run it on my phone, the sprite's movement speed is noticeably slower than in Xcode. At first I thought it must have to do with the resolution of the iPhone 4 making the sprite's movement path twice as long, but I found that if I pull up the multitask bar and then return to the app, the speed will sometimes jump back to normal. My second theory was that the code is just inefficient and is bogging the processes down, but wouldn't I see that reflected in the frame rate? It stays at 59-60 the whole time, and the spritesheet animation runs at the correct speed. Has anyone experienced this? Is this a really obvious issue that I'm completely missing? Any help (or tips for optimizing my approach to movement) would be much appreciated!
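    A plausible cause (my guess, not confirmed in the post): a timer scheduled every 0.01 s cannot actually fire 100 times per second once it is tied to the device's ~60 Hz display, so fixed-step-per-tick movement runs slower on the phone than in the simulator. The usual remedy is to scale movement by elapsed time. A minimal sketch in Java with hypothetical names (the post's project is Cocos2D/Objective-C):

        // Frame-rate-independent movement: velocities mean points per SECOND,
        // so the distance covered no longer depends on how often the timer fires.
        // 'sprite', 'xVel' and 'yVel' are hypothetical stand-ins for the poster's fields.
        void update(float dt) {        // dt = seconds since the previous frame
            sprite.x += xVel * dt;
            sprite.y += yVel * dt;
        }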

    Read the article

  • Collision checking problem on a Tiled map

    - by nosferat
    I'm working on a Pac-Man styled dungeon crawler, using the free Oryx sprites. I created the map using Tiled, separating the floor, walls and treasure into three different layers. After importing the map into libGDX, it renders fine. I also added the player character; for now it just moves in one direction, the player cannot control it yet. I wanted to add collision, and I was planning to do it by checking whether the player's new position is on a wall tile. Therefore, as you can see in the following code snippet, I get the tile type of the appropriate tile, and if it is not zero (since on that layer there is nothing except the wall tiles) it is a collision and the player cannot move further:

        final Vector2 newPos = charController.move(warrior.getX(), warrior.getY());
        if (!collided(newPos)) {
            warrior.setPosition(newPos.x, newPos.y);
            warrior.flip(charController.flipX(), charController.flipY());
        }
        [..]
        private boolean collided(Vector2 newPos) {
            int row = (int) Math.floor(newPos.x / 32);
            int col = (int) Math.floor(newPos.y / 32);
            int tileType = tiledMap.layers.get(1).tiles[row][col];
            if (tileType == 0) {
                return false;
            }
            return true;
        }

    The character only moves one tile with this code; if I reduce the col value by two, it moves two more tiles. I think the problem is around the indexing, but I'm totally confused because the zero of libGDX's coordinate system is in the bottom left corner of the screen, and I don't know whether the tiles array's indexing is similar or not. The size of the map is 19x21 tiles and looks like the following (the starting position of the player is marked with blue):
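    Two things look suspicious to me here (my reading, not from the post): the row index is computed from x and the column from y, which is usually the reverse, and Tiled stores row 0 at the top of the map while libGDX's y axis grows upward. A hedged sketch of the adjusted lookup (mapHeightInTiles is a hypothetical field, 21 for this map):

        private boolean collided(Vector2 newPos) {
            int col = (int) (newPos.x / 32);                        // x picks the column
            int row = mapHeightInTiles - 1 - (int) (newPos.y / 32); // flip y: Tiled rows are top-down
            return tiledMap.layers.get(1).tiles[row][col] != 0;     // non-zero = wall tile
        }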

    Read the article

  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid. It's just vanilla math. And you can kind of ignore the geometry of the sphere's surface for a bunch of it if you want to just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters. Geo-coded ARGs and global roguelikes and stuff. I want square(ish?) locations -- reasonably representable by square tiles of the same size across the globe, anyway. This has to be a solved problem, right? What are the solutions? ETA: At the equator -- assuming that your square locations are reasonably small -- it's close enough to true that you can get away with having one square in the rows north and south for every square in the most equatorial row. And you could probably get away with that by just hand-waving the difference up to like 45 degrees or so. But eventually, you need to have fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2, then they're just like hexes and it's relatively easy to do the coding to keep track of the connections. But as you get pole-ward, it gets more and more extreme. Projecting the surface of the world onto the surface of a cube is tempting, but I figured there must be more elegant solutions already in use. If I did the cube thing (not dissecting it further through geodesy), are there any pros and cons related to placing the pole at the center of a face versus at the vertex of three sides?
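    For what it's worth, the cube idea is concrete enough to sketch: map each point of the unit sphere to one of six faces plus a (u,v) square coordinate on that face, then tile each face with a regular grid. This sketch is mine, not from the post; (x,y,z) is assumed to be a non-zero point on the unit sphere:

        // Returns the face index 0..5 (+X,-X,+Y,-Y,+Z,-Z) and writes (u,v) in [0,1].
        static int cubeFace(double x, double y, double z, double[] uv) {
            double ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
            if (ax >= ay && ax >= az) {           // the X axis dominates
                uv[0] = 0.5 * (y / ax + 1.0);
                uv[1] = 0.5 * (z / ax + 1.0);
                return x > 0 ? 0 : 1;
            } else if (ay >= az) {                // the Y axis dominates
                uv[0] = 0.5 * (x / ay + 1.0);
                uv[1] = 0.5 * (z / ay + 1.0);
                return y > 0 ? 2 : 3;
            } else {                              // the Z axis dominates
                uv[0] = 0.5 * (x / az + 1.0);
                uv[1] = 0.5 * (y / az + 1.0);
                return z > 0 ? 4 : 5;
            }
        }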

    Read the article

  • glsl demo suggestions ?

    - by brainydexter
    In a lot of places I interviewed recently, I have been asked many times whether I have worked with shaders. Even though I have read about and understand the pipeline, the answer to that question has been no. Recently, one of the places asked me if I could send them a sample of 'something' that is "visually polished". So, I decided to take the plunge and wrote some simple shaders in GLSL (with OpenGL). I now have a basic setup where I can use VBOs with GLSL shaders. I have a very short window left to send something to them, and I was wondering if someone with experience could suggest an idea that is interesting enough to grab someone's attention. Thanks

    Read the article

  • Debugging Minimum Translation Vector

    - by SyntheCypher
    I implemented the minimum translation vector from codezealot's tutorial on SAT (Separating Axis Theorem), but I'm having an issue I can't quite figure out. Here's the example I have: as you can see in the top and bottom left images, regardless of which side of the green car the red car is penetrating, the MTV for the red car remains a negative number. Here is the same example with the front of the red car facing the opposite direction: the number will always be positive. When the red car is past the halfway point of the green car, it should switch polarity. I thought I'd compensated for this in my code, but apparently not; either that, or it's a bug I can't find. Here is my function for finding and returning the MTV; any help would be much appreciated: Code
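    Since the poster's function is behind the "Code" link, here is only the generic shape of the usual fix (an assumption on my part, not the poster's code): give the MTV a consistent sign by testing it against the vector between the two shapes' centers.

        // Orient the MTV so it always pushes shape A away from shape B.
        // Plain floats to stay library-neutral; all names are hypothetical.
        static float[] orientMtv(float mtvX, float mtvY,
                                 float ax, float ay, float bx, float by) {
            float dx = bx - ax, dy = by - ay;         // direction from A's center to B's
            if (mtvX * dx + mtvY * dy > 0) {          // MTV points toward B,
                return new float[] { -mtvX, -mtvY };  // so flip it to push A out
            }
            return new float[] { mtvX, mtvY };
        }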

    Read the article

  • How can I get accurate collision resolution on the corners of rectangles?

    - by ssb
    I have a working collision system implemented, based on minimum translation vectors. This works fine in most cases, except when the minimum translation vector is not actually in the direction of the collision. For example: when a rectangle is near the far edge on any side of another rectangle, a force can be applied (downward in this example) that pushes one rectangle into the other, particularly into a static object like a wall or a floor. As in the picture, the collision is coming from above, but because it's on the very edge, it translates to the left instead of back up. I've searched for a while to find an approach, but everything I can find deals with general corner collisions, whereas my problem is only in this one limited case. Any suggestions?
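    One common workaround (my sketch, under the assumption of axis-aligned boxes with a known previous position; b is treated as static): prefer to resolve along an axis the boxes were still separated on last frame, and fall back to the minimum translation only when that is ambiguous.

        // Hypothetical AABB type: centre (x, y), half extents, and last frame's centre.
        class Aabb { float x, y, halfW, halfH, prevX, prevY; }

        // Resolve a out of b (overlap already detected this frame).
        static void resolve(Aabb a, Aabb b) {
            float pushX = a.halfW + b.halfW - Math.abs(a.x - b.x); // penetration depths
            float pushY = a.halfH + b.halfH - Math.abs(a.y - b.y);
            boolean wasSeparatedX = Math.abs(a.prevX - b.x) >= a.halfW + b.halfW;
            boolean wasSeparatedY = Math.abs(a.prevY - b.y) >= a.halfH + b.halfH;
            if (wasSeparatedY && !wasSeparatedX) {
                a.y += a.y > b.y ? pushY : -pushY;  // came in vertically: push out on Y
            } else if (wasSeparatedX && !wasSeparatedY) {
                a.x += a.x > b.x ? pushX : -pushX;  // came in horizontally: push out on X
            } else if (pushX < pushY) {             // ambiguous: fall back to the MTV
                a.x += a.x > b.x ? pushX : -pushX;
            } else {
                a.y += a.y > b.y ? pushY : -pushY;
            }
        }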

    Read the article

  • Jittery Movement, Uncontrollably Rotating + Front of Sprite?

    - by Vipar
    So I've been looking around trying to figure out how to make my sprite face my mouse. So far the sprite moves to where my mouse is, via some vector math. Now I'd like it to rotate and face the mouse as it moves. From what I've found, this calculation seems to be what keeps reappearing: sprite rotation = Atan2(direction vector's Y, direction vector's X). I express it like so:

        sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X);

    If I just go with the above, the sprite seems to jitter left and right ever so slightly, but never rotates out of that position. Seeing as Atan2 returns the rotation in radians, I found another piece of calculation to add to the above, which turns it into degrees:

        sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X) * 180 / PI;

    Now the sprite rotates. The problem is that it spins uncontrollably the closer it comes to the mouse. One of the problems with the above calculation is that it assumes +y goes up rather than down on the screen. I recorded two videos: the first shows the slightly jittery movement (a lot more visible when not recording), the second the added rotation: Jittery Movement. So my questions are: How do I fix that weird jittery movement when the sprite stands still? Some have suggested making some kind of "snap", setting the position of the sprite directly to the mouse position when it's really close, but no matter what I do the snapping is noticeable. How do I make the sprite stop spinning uncontrollably? Is it possible to simply define the front of the sprite and use that to make it "face" the right way?
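    Two hedged guesses at the causes (mine, not from the post): the code looks like XNA/C#, whose Rotation fields expect radians, so the degree conversion alone could explain the spinning; and jitter near the mouse usually comes from computing an angle from a near-zero direction vector that flips every frame. A sketch in Java:

        // Keep the rotation in radians and freeze it when the sprite is basically
        // on top of the target. dx/dy = target minus sprite position (hypothetical).
        static float faceTarget(float dx, float dy, float currentRotation) {
            if (dx * dx + dy * dy < 4f * 4f) {   // closer than 4 px: keep last angle
                return currentRotation;          // avoids flip-flopping each frame
            }
            return (float) Math.atan2(dy, dx);   // radians, as sprite APIs expect
        }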

    Read the article

  • Explaining Asteroids Movement code

    - by Moaz ELdeen
    I'm writing a clone of the Atari game Asteroids, and I want to figure out how the AI for the asteroids is done. I came across this piece of code, but I can't tell 100% what it does:

        if ((float)rand()/(float)RAND_MAX < 0.5) {
            m_Pos.x = -app::getWindowWidth() / 2;
            if ((float)rand()/(float)RAND_MAX < 0.5)
                m_Pos.x = app::getWindowWidth() / 2;
            m_Pos.y = (int) ((float)rand()/(float)RAND_MAX * app::getWindowWidth());
        } else {
            m_Pos.x = (int) ((float)rand()/(float)RAND_MAX * app::getWindowWidth());
            m_Pos.y = -app::getWindowHeight() / 2;
            if (rand() < 0.5)
                m_Pos.y = app::getWindowHeight() / 2;
        }
        m_Vel.x = (float)rand()/(float)RAND_MAX * 2;
        if ((float)rand()/(float)RAND_MAX < 0.5) {
            m_Vel.x = -m_Vel.x;
        }
        m_Vel.y = (float)rand()/(float)RAND_MAX * 2;
        if ((float)rand()/(float)RAND_MAX < 0.5)
            m_Vel.y = -m_Vel.y;
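    My reading of the snippet (not from the post): it spawns the asteroid just outside a random screen edge and gives it a random velocity of up to 2 units per tick on each axis. Rewritten in Java for readability, with a hypothetical rand01() returning a uniform float in [0, 1) and hypothetical posX/posY/velX/velY/width/height fields:

        void spawnOnRandomEdge() {
            if (rand01() < 0.5f) {
                posX = (rand01() < 0.5f) ? width / 2f : -width / 2f; // left or right edge
                posY = rand01() * width;   // random height; the original uses the
                                           // window WIDTH here, which looks like a bug
            } else {
                posX = rand01() * width;   // random x along the top/bottom edge
                posY = -height / 2f;
                // the original tests the raw int "rand() < 0.5", which is almost never
                // true, so in practice the asteroid nearly always spawns at -height/2
                if (rand01() < 0.5f) posY = height / 2f;
            }
            velX = rand01() * 2f * (rand01() < 0.5f ? -1f : 1f); // up to 2 units/tick,
            velY = rand01() * 2f * (rand01() < 0.5f ? -1f : 1f); // random sign per axis
        }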

    Read the article

  • Directional and orientation problem

    - by Ahmed Saleh
    I have drawn 5 tentacles, shown in red. I drew those tentacles on a 2D circle and positioned them on 5 vertices of that circle. By the way, the circle is never drawn; I only use it to simplify the problem. Now I want to attach that circle, with its tentacles, underneath the jellyfish. There is a problem with the current code, but I don't know what it is. You can see that the circle is parallel to the base of the jellyfish; I want it shifted so that it sits inside the jellyfish, but I don't know how. I tried multiplying the direction vector to extend it, but that didn't work.

        // One tentacle is constructed from nodes.
        // Get the direction from the first tentacle's node 0 to its node 39:
        Vec3f dir = m_tentacle[0]->geNodesPos()[0] - m_tentacle[0]->geNodesPos()[39];
        // Draw the circle with tentacles on it
        Vec3f pos = m_SpherePos;
        drawCircle(pos, dir, 30, m_tentacle.size());
        for (int i = 0; i < m_tentacle.size(); i++) {
            m_tentacle[i]->Draw();
        }
        // Draw the jellyfish and orient it on the 2D circle
        gl::pushMatrices();
        Quatf q;
        // quaternion that rotates the jellyfish onto the tentacle direction
        q.set(Vec3f(0,-1,0), Vec3f(dir.x, dir.y, dir.z));
        // translate it to the position of the whole creature every frame
        gl::translate(m_SpherePos.x, m_SpherePos.y, m_SpherePos.z);
        gl::rotate(q);
        // draw the jellyfish at center 0,0,0
        drawHemiSphere(Vec3f(0,0,0), m_iRadius, 90);
        gl::popMatrices();
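    If the goal is just to sink the circle into the body, one guess (mine, not from the post) is that the offset has to be taken along the normalized direction, so its length is in world units; multiplying the raw direction vector scales the shift by the tentacle's length instead. A sketch in Java-style code with hypothetical names:

        // Shift the circle's center along the unit direction vector by a chosen depth.
        float len = (float) Math.sqrt(dirX * dirX + dirY * dirY + dirZ * dirZ);
        float depth = radius * 0.5f;   // hypothetical: how far inside the body to sit
        posX += dirX / len * depth;
        posY += dirY / len * depth;
        posZ += dirZ / len * depth;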

    Read the article

  • Efficient way to calculate "vision cones" on 2D tile map?

    - by OverMachoGrande
    I'm trying to calculate which tiles a particular unit can "see" if facing a certain direction on a tile map (within a certain range and angle of facing). The easiest way would be to draw a certain number of tiles outward and raycast to each tile. However, I'm hoping for something slightly more efficient. A picture says a thousand words: The red dot is the unit (who's facing upwards). My goal is to calculate the yellow tiles. The green blocks are walls (walls are between tiles, and it's easy to check if you can pass between two tiles). The blue line represents something like the "raycasting" method I was talking about, but I'd rather not have to do this. EDIT: Units can only be facing north/south/east/west (0, 90, 180, or 270 degrees) and FoV is always 90 degrees. Should simplify some calculations. I'm thinking there's some sort of recursive-ish/stack-based/queue-based algorithm, but I can't quite figure it out. Thanks!
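    Here is one cheap pass at the "recursive-ish" idea the poster suspects exists (my construction, heavily hedged: it only approximates line of sight and, to stay short, assumes opaque tiles rather than the post's walls on tile edges). The unit sits at the origin facing +y; the cone is every tile with 0 < y <= range and |x| <= y; a tile is visible if a tile one row closer to the unit is visible and transparent:

        static boolean[][] coneVisibility(boolean[][] opaque, int range) {
            // both arrays indexed [x + range][y], x in [-range, range], y in [0, range]
            boolean[][] vis = new boolean[2 * range + 1][range + 1];
            vis[range][0] = true;                           // the unit's own tile
            for (int y = 1; y <= range; y++) {
                for (int x = -y; x <= y; x++) {
                    int sx = Integer.signum(x);
                    boolean straight = Math.abs(x) <= y - 1 // parent (x, y-1) in cone
                            && vis[x + range][y - 1] && !opaque[x + range][y - 1];
                    boolean diagonal = x != 0               // parent one step inward
                            && vis[x - sx + range][y - 1] && !opaque[x - sx + range][y - 1];
                    vis[x + range][y] = straight || diagonal;
                }
            }
            return vis;
        }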

    Read the article

  • Unity Problem with colliding instances of same object

    - by Kuba Sienkiewicz
    I want to check whether an object's instance is overlapping another instance (any spawned object with any other spawned object, not necessarily the same object). I'm doing this by detecting collisions between bodies, but I have a problem: spawned objects (instances) detect collisions with everything except other spawned objects. I've checked collision layers etc. All of the spawned objects have rigidbodies and mesh colliders. Also, when I attach my script to another body and touch that body with an instanced object, it detects the collision, so the problem shows up only in collisions between spawned objects. One more piece of information: the script, rigidbody and collider are attached to a child of the main object.

        using UnityEngine;
        using System.Collections;

        public class CantPlace : MonoBehaviour {
            public bool collided = false;

            // Use this for initialization
            void Start () {
            }

            // Update is called once per frame
            void Update () {
                //Debug.Log (collided);
            }

            void OnTriggerEnter(Collider collider) {
                //if (true) {
                //foreach (Transform child in this.transform) {
                //    if (child.name == "Cylinder") {
                //collided = true;
                Color c;
                c = this.renderer.material.color;
                c.g = 0f;
                c.b = 1f;
                c.r = 0f;
                this.renderer.material.color = c;
                Debug.Log (collider.name);
                //}
                //    }
                //}
                //foreach (ContactPoint contact in collision.contacts) {
                //    Debug.DrawRay(contact.point, contact.normal, Color.red, 15f);
                //}
            }
        }

    Read the article

  • lwjgl custom icon

    - by melchor629
    I have a little problem with the icon in LWJGL: it doesn't work. I googled about it, but I haven't found anything that works for me yet. This is my code for now:

        PNGDecoder imageDecoder = new PNGDecoder(new FileInputStream("res/images/Icon.png"));
        ByteBuffer imageData = BufferUtils.createByteBuffer(4 * imageDecoder.getWidth() * imageDecoder.getHeight());
        imageDecoder.decode(imageData, imageDecoder.getWidth() * 4, PNGDecoder.Format.RGBA);
        imageData.flip();
        System.err.println(Display.setIcon(new ByteBuffer[]{imageData}) == 0
                ? "No se ha creado el icono"   // "the icon was not created"
                : "Se ha creado el icono");    // "the icon was created"

    The PNG file is 128x128 px with transparency. PNGDecoder is from the matthiasmann utility (de.matthiasmann.twl.utils). I'm using Mac OS 10.8.4 with LWJGL 2.9.0. Thanks :)

    Read the article

  • How do I get my polygons to be lighted by either side?

    - by Molmasepic
    Okay, I am using Ogre3D and Gorilla (a 2D library for Ogre3D), and I am making Gorilla::ScreenRenderables in the open scene. The problem I am having is that when I make a light and have my SR (ScreenRenderable) near it, it does not light up unless the face of the SR is facing the light. I am wondering if there is a way, maybe by setting the material or through code (which would be harder), to have the SR lit whether the vertices of the polygon are facing the light or not. I feel it is possible; the main obstacle is how I would go about doing this.

    Read the article

  • [JOGL] My program is too slow; how can I profile it with Eclipse?

    - by nkint
    Hi guys, my simple OpenGL program is really too slow and not fluid. I'm rendering 30 spheres with simple illumination and simple materials. The only hard(?) computing I do is a collision detection between the mouse ray and the spheres (that works OK, and I do it only in mouseMoved). I have no threads, only an Animator to move the spheres. How can I profile my JOGL project? Or maybe (most probably...) I have some OpenGL instruction that I don't understand and that makes the rendering particularly accurate, or back-face rendering that I don't need, or whatever; I don't know exactly, I've just entered the OpenGL world.
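    Beyond Eclipse-specific tooling, a standard JVM profiler such as VisualVM attaches to any running Java process. A lower-tech first step (my suggestion; GLAutoDrawable and System.nanoTime() are real JOGL/Java APIs, render() is a hypothetical stand-in for the existing drawing code) is to time the display callback and see whether the CPU side is even the bottleneck:

        public void display(GLAutoDrawable drawable) {
            long t0 = System.nanoTime();
            render(drawable);                        // the existing drawing code
            long ms = (System.nanoTime() - t0) / 1000000L;
            if (ms > 20) {                           // longer than ~1/50 s: log it
                System.err.println("slow frame: " + ms + " ms");
            }
        }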

    Read the article

  • Calculate travel time on road map with semaphores

    - by Ivansek
    I have a road map with intersections. At intersections there are traffic lights (semaphores). For each semaphore I generate a red-light time and a green-light time, represented with the syntax [R:T1, G:T2], where the numbers above the edges are distances, for example:

        A --119-- B: [R:6, G:4] --185-- C: [R:5, G:5] --250-- D

    I want to calculate a car's travel time from A to D. Now I do it with this pseudocode:

        function get_travel_time(semaphores_configuration) {
            time = 0;
            for (i = 1; i < path.length; i++) {
                prev_node = path[i-1];
                next_node = path[i];
                cost = cost_between(prev_node, next_node);
                time += (cost / movement_speed); // movement_speed = 50px per second
                light_times = get_light_times(path[i], semaphore_configurations);
                lights_cycle = get_lights_cycle(light_times); // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
                lights_sum = light_times.green_time + light_times.red_light; // cycle length
                light = lights_cycle[cost % lights_sum];
                if (light == "R") {
                    time += light_times.red_light;
                }
            }
            return time;
        }

    So for the distance of 119 between A and B the travel time is 119/50 = 2.38 s (the exactly measured time is between 2.5 s and 2.6 s); then we add time if we arrive at a red light at B. Whether we arrived at a red light is calculated with these lines:

        lights_cycle = get_lights_cycle(light_times); // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
        lights_sum = light_times.green_time + light_times.red_light;
        light = lights_cycle[cost % lights_sum];
        if (light == "R") {
            time += light_times.red_light;
        }

    This pseudocode doesn't calculate exactly the measured times, but the calculations are very close to them. Any idea how I would calculate this exactly?
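    Two details that would explain the drift (my reading, hedged): the cycle is indexed by distance (cost) rather than by the accumulated arrival time, and a red light always adds the full red duration instead of only the remaining wait. A sketch of the adjusted wait computation, used as time += waitAtLight(time, red, green) after each segment:

        // Wait time at one light, given when the car arrives (seconds since t = 0,
        // assuming every cycle starts at red). All names are hypothetical.
        static double waitAtLight(double arrivalSec, double redSec, double greenSec) {
            double phase = arrivalSec % (redSec + greenSec); // position inside the cycle
            return phase < redSec ? redSec - phase : 0.0;    // remaining red, or none
        }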

    Read the article

  • How to perform simple collision detection?

    - by Rob
    Imagine two squares sitting side by side, both level with the ground, like so: A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if all of the following are false:

        - The right square's left side is to the right of the left square's right side.
        - The right square's right side is to the left of the left square's left side.
        - The right square's bottom side is above the left square's top side.
        - The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. But consider a case like this, where one square is at a 45 degree angle: Is there an equally simple way to determine if those squares are touching?
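    The four conditions fold into one function, shown below (my code; coordinate convention: y grows upward). For the rotated case there is no equally trivial test; the standard tool is the Separating Axis Theorem that other posts on this page mention.

        // true when the two axis-aligned squares touch or overlap.
        // l/r/t/b = left, right, top, bottom edge coordinates of each square.
        static boolean touching(float l1, float r1, float t1, float b1,
                                float l2, float r2, float t2, float b2) {
            return !(l2 > r1     // square 2 entirely to the right of square 1
                  || r2 < l1     // entirely to the left
                  || b2 > t1     // entirely above
                  || t2 < b1);   // entirely below
        }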

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques used to control the FPS count that I have seen until now use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when the rendering started, yet I have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.
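    For what it's worth (my note, not from the post): core OpenGL has no FPS setting; shaders run once per vertex and once per fragment of each draw call, and frame pacing is done by the application or by vsync via the platform's swap-interval extension. A minimal sleep-based cap, sketched in Java:

        // Sleep away whatever is left of the frame budget. Call once per frame with
        // the System.nanoTime() value taken at the start of the frame.
        static void capFrameRate(long frameStartNanos, int targetFps) {
            long budget = 1000000000L / targetFps;               // ns per frame
            long left = budget - (System.nanoTime() - frameStartNanos);
            if (left > 0) {
                try { Thread.sleep(left / 1000000L, (int) (left % 1000000L)); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }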

    Read the article

  • The right way to add images to Monogame/Windows

    - by ashes999
    I'm starting out with MonoGame. For now, I'm only targeting Windows (desktop, not Windows 8 specifically). I've used a couple of XNA products in the past (raw XNA, FlatRedBall, SilverSprite), so I may have a misunderstanding about how I should add images to my content. How do I add images to my project? Currently, I created a new MonoGame project, added a folder called "Content," and added images under there; the only caveat is that I need to set the Copy to Output Directory action to one of the Copy options. It seems strange, because my "raw" XNA project just last week had a Content project in it (an XNA Framework Content Pipeline project, according to VS2010), which compiled my images to XNB (I think). It seems like MonoGame doesn't use the same content pipeline, but I'm not sure. Edit: My question is not "how do I get the XNA content pipeline to work with MonoGame," but "why would I want to use the XNA content pipeline in MonoGame?" Because there are (at least) two solutions that I see today: add the images to the MonoGame project and set the Copy to Output Directory option to copy, or add an XNA content pipeline project, add my images to that instead, and reference it from my MonoGame project. Which solution should I use, and why? I currently have a working version with the first option.

    Read the article

  • ssao implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for the shading calculations. When filling the G-buffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation, I render the scene as a full-screen quad and save an occlusion parameter into a texture. (Vertex positions in world space: link. Normals in world space: link.) The SSAO implementation:

        subroutine (RenderPassType)
        void ssao() {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);
            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);
            float occlusion = 0.0;
            float radius = 4.0;
            for (int i = 0; i < kernelSize; ++i) {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;
                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;
                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }
            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results: camera looking towards -z in world space: link; camera looking towards +z in world space: link. I wonder whether it is possible to use world coordinates in the above code. When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas only where there is occlusion, so the ground should have white areas only near the object.

    Read the article

  • Make Interactive Story more Variable [on hold]

    - by Guest0343
    I'm creating an interactive story that allows users to make choices based on a story. However, it doesn't give users room to do much creatively on their own. They are bound by the script at the moment. I'm wondering if anyone can suggest any element I can add that might give users some personalization. I was thinking about maybe character editing, but that doesn't add too much. I also thought about a stats system where they can have certain attributes and stats they might earn, but I'm not sure how they might use those stats. Anything is helpful!

    Read the article

  • Bump mapping Problem GLSL

    - by jmfel1926
    I am having a slight problem with my bump-mapping project. Although everything works OK (at least as far as I know), there is a slight mistake somewhere, and I get incorrect shading on the brick wall when the light goes to one side or the other, as seen in the picture below. The light is on the right side, so the shading on the wall should be the other way around. I have provided the shaders to help find the issue (I do not have much experience with shaders). Shaders:

        varying vec3 viewVec;
        varying vec3 position;
        varying vec3 lightvec;
        attribute vec3 tangent;
        attribute vec3 binormal;
        uniform vec3 lightpos;
        uniform mat4 cameraMat;

        void main() {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            gl_Position = ftransform();
            position = vec3(gl_ModelViewMatrix * gl_Vertex);
            lightvec = vec3(cameraMat * vec4(lightpos, 1.0)) - position;
            vec3 eyeVec = vec3(gl_ModelViewMatrix * gl_Vertex);
            viewVec = normalize(-eyeVec);
        }

        uniform sampler2D colormap;
        uniform sampler2D normalmap;
        varying vec3 viewVec;
        varying vec3 position;
        varying vec3 lightvec;
        vec3 vv;
        uniform float diffuset;
        uniform float specularterm;
        uniform float ambientterm;

        void main() {
            vv = viewVec;
            vec3 normals = normalize(texture2D(normalmap, gl_TexCoord[0].st).rgb * 2.0 - 1.0);
            normals.y = -normals.y;
            //normals = (normals * gl_NormalMatrix).xyz;
            vec3 distance = lightvec;
            float dist_number = length(distance);
            float final_dist_number = 2.0 / pow(dist_number, diffuset);
            vec3 light_dir = normalize(lightvec);
            vec3 Halfvector = normalize(light_dir + vv);
            float angle = max(dot(Halfvector, normals), 0.0);
            angle = pow(angle, specularterm);
            vec3 specular = vec3(angle, angle, angle);
            float diffuseterm = max(dot(light_dir, normals), 0.0);
            vec3 diffuse = diffuseterm * texture2D(colormap, gl_TexCoord[0].st).rgb;
            vec3 ambient = ambientterm * texture2D(colormap, gl_TexCoord[0].st).rgb;
            vec3 diffusefinal = diffuse * final_dist_number;
            vec3 finalcolor = diffusefinal + specular + ambient;
            gl_FragColor = vec4(finalcolor, 1.0);
        }

    Read the article

  • How to attach two XNA models together?

    - by jeangil
    I'm going back to an unsolved question I asked about attaching two models together; could you give me some help with it? For example, what if I want to attach Model1 (the main model) and Model2 together? I have to get the transformation matrix of Model1, get the bone index on Model1 where I want to attach Model2, and then apply some transformation to attach Model2 to Model1. I wrote some code for this below, but it does not work at all (the 6th line of my code seems to be wrong!?):

        Model1TransfoMatrix = new Matrix[Model1.Bones.Count];
        Index = Model1.bone[x].Index;
        foreach (ModelMesh mesh in Model2.Meshes)
        {
            foreach (BasicEffect effect in mesh.effects)
            {
                matrix model2Transform = Matrix.CreateScale(0.1.0f) * Matrix.CreateFromYawPitchRoll(x, y, z);
                effect.World = model2Transform * Model1TransfoMatrix[Index];
                effect.view = camera.View;
                effect.Projection = camera.Projection;
            }
            mesh.draw();
        }

    Read the article

  • About floating point precision and why do we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains what goes on behind the scenes and offers the obvious alternative: fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is: why do we still use floating point, if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are they so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
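    A tiny, self-contained illustration of the problem being described (my example, not from the post): at world coordinates around 10,000 km expressed in metres, a 32-bit float cannot even represent a 10 cm step, because the spacing between adjacent floats there is a full metre.

        public class FloatPrecision {
            public static void main(String[] args) {
                float position = 10_000_000f;                     // 10,000 km in metres
                float step = 0.1f;                                // a 10 cm movement
                System.out.println(position + step == position);  // true: the step vanishes
                System.out.println(Math.ulp(position));           // 1.0: float spacing here
            }
        }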

    Read the article

  • How to display image in second layer in Cocos2d

    - by PeterK
    I am very new to Cocos2d and am testing displaying an image over the "Hello World" text on a second layer, and I need help getting it to work. I guess it is some basic stuff, and I'd appreciate any tips etc. I know that it works if I put the display code in myLayer1's init, or if I call goHere from myLayer1's init, but I want to call goHere directly. I have the following code:

    HelloWorld.m:

        #import "HelloWorldLayer.h"
        #import "myLayer1.h"

        // HelloWorldLayer implementation
        @implementation HelloWorldLayer

        +(CCScene *) scene
        {
            // 'scene' is an autorelease object.
            CCScene *scene = [CCScene node];
            // 'layer' is an autorelease object.
            HelloWorldLayer *layer = [HelloWorldLayer node];
            myLayer1 *layer1 = [myLayer1 node];
            // add layer as a child to scene
            [scene addChild: layer];
            [scene addChild: layer1];
            // return the scene
            return scene;
        }

        // on "init" you need to initialize your instance
        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init])) {
                // create and initialize a Label
                CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64];
                // ask director for the window size
                CGSize size = [[CCDirector sharedDirector] winSize];
                // position the label on the center of the screen
                label.position = ccp( size.width / 2, size.height / 2 );
                // add the label as a child to this Layer
                [self addChild: label];
                myLayer1 *a1 = [myLayer1 new];
                [a1 goHere];
                [myLayer1 release];
            }
            return self;
        }

    myLayer1.m:

        #import "myLayer1.h"

        @implementation myLayer1

        -(void)goHere
        {
            NSLog(@">>>>goHere<<<<");
            CGSize size = [[CCDirector sharedDirector] winSize];
            CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"];
            vv.position = ccp( size.width / 2, size.height / 2 );
            [self addChild:vv z:3];
        }

        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init])) {
            }
            return self;
        }

        @end

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of stuff like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to go about reducing the draw calls. What I can't seem to understand is how it actually saves draw-call counts, on the basis of the logic being something like this:

        Without chunks:
            foreach voxel in myvoxels
                DrawIfVisible()

        With chunks:
            foreach chunk in mychunks
                DrawIfVisible()
            which then does:
                foreach voxel in myvoxels
                    DrawIfVisible()

    So surely you saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me: the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DirectX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
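    What the tutorials usually mean (my paraphrase, with hypothetical types, not a real API): a chunk is not a loop of per-voxel draws. All visible faces of a chunk's voxels are baked into a single vertex buffer when the chunk is built or changed, so rendering costs one draw call per chunk rather than one per voxel:

        void drawWorld(Iterable<Chunk> visibleChunks) {
            for (Chunk chunk : visibleChunks) {  // chunk list already frustum-culled
                chunk.mesh.draw();               // ONE draw call for thousands of voxels
            }
        }

        // Rebuilt only when a voxel inside the chunk actually changes:
        void rebuild(Chunk chunk) {
            MeshBuilder b = new MeshBuilder();       // hypothetical helper
            for (Voxel v : chunk.voxels()) {
                if (v.solid()) b.addExposedFaces(v); // skip faces hidden by neighbours
            }
            chunk.mesh = b.build();                  // bake into one vertex buffer (VBO)
        }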

    Read the article
