Search Results

Search found 31839 results on 1274 pages for 'plugin development'.

Page 567/1274 | < Previous Page | 563 564 565 566 567 568 569 570 571 572 573 574  | Next Page >

  • Platform game collisions with Block

    - by Sri Harsha Chilakapati
    I am trying to create a platform game, and my collision detection with the blocks is wrong. Here's my code:

        // Variables
        GTimer jump = new GTimer(1000);
        boolean onground = true;

        // The update method
        public void update(long elapsedTime){
            MapView.follow(this);
            // Add the gravity
            if (!onground && !jump.active){
                setVelocityY(4);
            }
            // Jumping
            if (isPressed(VK_SPACE) && onground){
                jump.start();
                setVelocityY(-4);
                onground = false;
            }
            if (jump.action(elapsedTime)){
                // jump expired
                jump.stop();
            }
            // Horizontal movement
            setVelocityX(0);
            if (isPressed(VK_LEFT)){
                setVelocityX(-4);
            }
            if (isPressed(VK_RIGHT)){
                setVelocityX(4);
            }
        }

        // The collision method
        public void collision(GObject other){
            if (other instanceof Block){
                // Determine the horizontal distance between centers
                float h_dist = Math.abs((other.getX() + other.getWidth()/2) - (getX() + getWidth()/2));
                // Now the vertical distance
                float v_dist = Math.abs((other.getY() + other.getHeight()/2) - (getY() + getHeight()/2));
                // If h_dist > v_dist it's a horizontal collision, else a vertical one
                if (h_dist > v_dist){
                    // Are we moving right?
                    if (getX() < other.getX()){
                        setX(other.getX() - getWidth());
                    }
                    // Are we moving left?
                    else if (getX() > other.getX()){
                        setX(other.getX() + other.getWidth());
                    }
                } else {
                    // Are we moving up?
                    if (jump.active){
                        jump.stop();
                    }
                    // We are moving down
                    else {
                        setY(other.getY() - getHeight());
                        setVelocityY(0);
                        onground = true;
                    }
                }
            }
        }

    The problem is that the object jumps well but does not fall when it moves off the edge of a platform. Here's an image describing the problem. I know I'm not checking underneath the object, but I don't know how. The map is a list of objects; do I have to iterate over all of them? Thanks
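
    A common fix is to re-derive onground every frame instead of only setting it in the collision handler: probe a thin rectangle just below the feet and clear the flag when no block overlaps it. A minimal sketch, reusing the question's own getters; java.awt.Rectangle and the blocks list are stand-ins for whatever the engine provides:

        // Call at the start of update(): onground = isStandingOnBlock(blocks);
        // Assumes java.awt.Rectangle and a java.util.List<Block> of solid tiles.
        private boolean isStandingOnBlock(List<Block> blocks) {
            // A 1-pixel-tall sensor strip directly under the feet
            Rectangle feet = new Rectangle(
                (int) getX(), (int) (getY() + getHeight()),
                (int) getWidth(), 1);
            for (Block b : blocks) {
                Rectangle r = new Rectangle(
                    (int) b.getX(), (int) b.getY(),
                    (int) b.getWidth(), (int) b.getHeight());
                if (feet.intersects(r)) {
                    return true;
                }
            }
            return false;
        }

    Iterating over every object is fine for small maps; for larger ones, index the blocks by tile coordinates so the probe only tests the one or two tiles directly under the player.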

    Read the article

  • Outline Shader Effect for Orthogonal Geometry in XNA

    - by Griffin
    I just recently started learning the art of shading, but I can't give an outline width to 2D, concave geometry when restricted to a single vertex/pixel shader technique (thanks to XNA). The shape I need to outline has smooth per-vertex coloring, as well as opacity. The outline, which has smooth per-vertex coloring, variable width, and opacity, cannot interfere with the original shape's colors. A pixel-depth border detection algorithm won't work because pixel depth isn't available as a Shader Model 3.0 semantic. Expanding the geometry and redrawing won't work because it interferes with the original shape's colors. I'm wondering if I can do something with the stencil/depth buffer outside of the shader functions, since I have access to that through the graphics device, but I don't believe I'm able to manipulate the actual values. How might I do this?

    Read the article

  • Making a body (Box2D) a sprite (AndEngine) in Android

    - by Kadir
    I can't make a body (Box2D) a sprite (AndEngine) and at the same time apply a MoveModifier to the sprite that owns the body. If I make just the body, it works: the sprites can collide. If I apply just the MoveModifier to the sprites, they move where I want. But I want both at the same time: bodies that can collide, and MoveModifiers so they move where I want. How can I do that? This is my code; it just runs the MoveModifier, not the body, at the same time:

        circles[i] = new Sprite(startX, startY, textRegCircle[i]);
        body[i] = PhysicsFactory.createCircleBody(physicsWorld, circles[i], BodyType.DynamicBody, FIXTURE_DEF);
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(circles[i], body[i], true, true));
        circles[i].registerEntityModifier(
            (IEntityModifier) new SequenceEntityModifier(
                new MoveModifier(10.0f, circles[i].getX(), circles[i].getX(),
                                 circles[i].getY(), CAMERA_HEIGHT + 64.0f)));
        scene.getLastChild().attachChild(circles[i]);
        scene.registerUpdateHandler(physicsWorld);
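
    A common resolution is to drive the body instead of the sprite: the PhysicsConnector copies the body's transform onto the sprite every physics step, so an entity modifier that moves the sprite directly gets overwritten or desynchronizes the pair. Instead of a MoveModifier, steer the body toward the target each update and let the connector move the sprite. A minimal sketch, assuming the Box2D Body API that AndEngine wraps and its default 32 pixels-per-meter ratio; targetX/targetY are hypothetical fields:

        // Inside an IUpdateHandler's onUpdate(); targetX/targetY are in pixels.
        final float PIXEL_TO_METER = 32.0f;
        float dx = (targetX - circles[i].getX()) / PIXEL_TO_METER;
        float dy = (targetY - circles[i].getY()) / PIXEL_TO_METER;
        float len = (float) Math.sqrt(dx * dx + dy * dy);
        if (len > 0.05f) {
            float speed = 5.0f; // meters per second
            body[i].setLinearVelocity(dx / len * speed, dy / len * speed);
        } else {
            body[i].setLinearVelocity(0, 0); // arrived
        }

    Because the body still participates in the simulation, collisions keep working while it travels.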

    Read the article

  • 3D architecture app for Android or iPhone

    - by Manixate
    I want to make an app for 3D modeling on iPhone/Android, but I can't work out how to get started. I have various options, such as learning OpenGL ES, UDK, or Unity3D, but I want users to create models (e.g. architecture) in my app and then render them when the user is finished modeling. I do not know whether I would be able to design models and then render them, with various effects, in the same app on iPhone/Android using UDK or Unity3D. (Note: if you find this question unclear, please ask; I may have skipped some vital information.)

    Read the article

  • How do I prevent my platformer's character from clipping on wall tiles?

    - by Jonathan Hobbs
    Currently, I have a platformer with tiles for terrain (graphics borrowed from Cave Story). The game is written from scratch using XNA, so I'm not using an existing engine or physics engine. The tile collisions are handled pretty much exactly as described in this answer (with simple SAT for rectangles and circles), and everything works fine, except when the player runs into a wall whilst falling/jumping. In that case, they'll catch on a tile and begin thinking they've hit a floor or ceiling that isn't actually there. Say the player is moving right and falling downwards. After movement, collisions are checked: first, it turns out the player character is colliding with the tile third from the floor, and is pushed upwards; second, he's found to be colliding with the tile beside him, and is pushed sideways. The end result is that the player character thinks he's on the ground and isn't falling, and 'catches' on the tile for as long as he's running into it. I could solve this by defining the tiles from top to bottom instead, which makes him fall smoothly, but then the inverse case happens and he'll hit a ceiling that isn't there when jumping upwards against the wall. How should I approach resolving this, so that the player character can just fall along the wall as it should?
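
    The usual cure is to resolve the two axes separately: move on X and resolve only horizontal overlaps, then move on Y and resolve only vertical overlaps. A tile the player merely slides past then never registers as a floor or ceiling. A minimal sketch, with hypothetical names (tilesOverlapping, bounds, and the player fields stand in for whatever the game provides):

        record Rect(float left, float top, float right, float bottom) {}

        // One movement step, resolving each axis separately.
        void move(float dt) {
            x += vx * dt;                            // horizontal pass
            for (Rect t : tilesOverlapping(bounds())) {
                if (vx > 0)      x = t.left() - width;   // hit a wall on the right
                else if (vx < 0) x = t.right();          // hit a wall on the left
            }
            y += vy * dt;                            // vertical pass
            for (Rect t : tilesOverlapping(bounds())) {
                if (vy > 0) {                        // falling: land on top
                    y = t.top() - height;
                    vy = 0;
                    onGround = true;
                } else if (vy < 0) {                 // jumping: bumped the ceiling
                    y = t.bottom();
                    vy = 0;
                }
            }
        }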

    Read the article

  • How to get a point to the left/right of a vector

    - by MulletDevil
    I have a position vector for a point in space and a quaternion for its rotation. What I'm trying to calculate is a point to the left and a point to the right of it. I have the position and rotation (quaternion) of the red dot; what I want are the positions of the green dots. I have a float value for the distance I want these points to be. With only the position and rotation, is it possible to get a unit direction vector pointing left/right which I can multiply by my float value? Edit: I also know the original direction vector.
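
    Yes: rotate the world-space "right" axis by the quaternion and scale it by the distance. For a unit quaternion q = (w, x, y, z) and a vector v, the rotated vector is v' = v + 2w(u × v) + 2(u × (u × v)) with u = (x, y, z). A minimal sketch in plain Java, no math library assumed:

        // Rotate vector v by the unit quaternion (w, x, y, z).
        static float[] rotate(float w, float x, float y, float z, float[] v) {
            float[] u = {x, y, z};
            float[] t = cross(u, v);      // u × v
            float[] uxt = cross(u, t);    // u × (u × v)
            return new float[] {
                v[0] + 2 * (w * t[0] + uxt[0]),
                v[1] + 2 * (w * t[1] + uxt[1]),
                v[2] + 2 * (w * t[2] + uxt[2])
            };
        }

        static float[] cross(float[] a, float[] b) {
            return new float[] {
                a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]
            };
        }

        // rightPoint = position + rotate(q, {1, 0, 0}) * distance
        // leftPoint  = position - the same offset

    Which local axis counts as "right" depends on your convention; if your forward vector is not +Z, rotate whichever basis vector is perpendicular to it.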

    Read the article

  • How do I implement Unreal-like object serialization?

    - by MrWiggels
    Recently, I've been working on the core of my engine, and as I'm moving forward I find myself writing throwaway code to read files and simple data into the engine. This got me thinking about how I should implement a file management system. After a bit of googling I came across the Unreal package format, and boy does it look like the perfect one. I think it's good because of the way it allows you to separate different assets into different packages and lets something like a level reference those different packages. I was just wondering: is this possible with C#? The built-in serialization API in .NET does not seem to support any form of this, only reading and writing to a single file.
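
    Nothing in .NET prevents it; a package is just a file containing many named blobs, which you read and write with ordinary binary I/O rather than the object-graph serializer. A toy sketch of the idea (shown in Java for brevity; the layout here, a count followed by name/length/bytes entries, is an illustration, not Unreal's actual format):

        import java.io.*;
        import java.nio.file.*;
        import java.util.*;

        final class Packages {
            // Write: [entryCount] then, per asset, [name][length][bytes].
            static void write(Path file, Map<String, byte[]> assets) throws IOException {
                try (DataOutputStream out = new DataOutputStream(
                        new BufferedOutputStream(Files.newOutputStream(file)))) {
                    out.writeInt(assets.size());
                    for (Map.Entry<String, byte[]> e : assets.entrySet()) {
                        out.writeUTF(e.getKey());
                        out.writeInt(e.getValue().length);
                        out.write(e.getValue());
                    }
                }
            }

            static Map<String, byte[]> read(Path file) throws IOException {
                Map<String, byte[]> assets = new LinkedHashMap<>();
                try (DataInputStream in = new DataInputStream(
                        new BufferedInputStream(Files.newInputStream(file)))) {
                    int count = in.readInt();
                    for (int i = 0; i < count; i++) {
                        String name = in.readUTF();
                        byte[] blob = new byte[in.readInt()];
                        in.readFully(blob);
                        assets.put(name, blob);
                    }
                }
                return assets;
            }
        }

    A level can then reference assets as (packageName, assetName) pairs; a real implementation would add a table of contents with file offsets so single assets can be loaded without scanning the whole package.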

    Read the article

  • Examples of good JavaScript/HTML5-based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.), are there any good examples of web-based games built completely on open standards (meaning JavaScript, HTML, and CSS)? I see a lot of pure HTML5 implementations of what was once only possible in Flash (like the demos at http://www.html5rocks.com/), but not many games, a domain which still seems dominated by Flash. I'm curious what's possible and what the limitations are.

    Read the article

  • Show the path a body will take after a linear impulse is applied

    - by Farooq Arshed
    I am making a game with AndEngine and Box2D. I have a dynamic body, and I apply a linear impulse to move it around when the user touches the screen. Now I want to show the path the body will follow for a given touch. If you have played Angry Birds, Basket Ball Shoot, or any other game with projectile motion and a path shown, you will get my point: I want to show the white dots those games draw.
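
    Until damping, drag, or a collision intervenes, a body given a linear impulse follows the ballistic curve p(t) = p0 + v0·t + ½·g·t², with v0 = impulse / mass. Sampling that curve at fixed intervals gives the dot positions. A minimal sketch in plain Java; gravity and units are whatever your Box2D world uses:

        // Returns n points along the predicted trajectory.
        static float[][] trajectory(float px, float py,   // start position
                                    float ix, float iy,   // linear impulse
                                    float mass,
                                    float gx, float gy,   // world gravity
                                    int n, float dt) {
            float vx = ix / mass, vy = iy / mass;         // impulse -> initial velocity
            float[][] dots = new float[n][2];
            for (int i = 0; i < n; i++) {
                float t = dt * (i + 1);
                dots[i][0] = px + vx * t + 0.5f * gx * t * t;
                dots[i][1] = py + vy * t + 0.5f * gy * t * t;
            }
            return dots;
        }

    Draw a small sprite at each returned point. Box2D's fixed-step integrator deviates slightly from the continuous formula; if the dots must match exactly, step a throwaway copy of the body instead.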

    Read the article

  • Music for Kids Game!

    - by Dane
    I'm developing multimedia software for kindergarten kids. It introduces them to animals, the alphabet, simple math, and colors, and it contains some simple games. Music is crucial for my project, and it is very important to choose the right sort of music for the different sections. But unfortunately I know nothing about music. Is there a music consulting firm which can help me choose melodies and rhythms for my project from the free music available on the internet? My budget is limited, but as this is mandatory and I have no knowledge of or taste in music, I think I can afford to pay for this.

    Read the article

  • Does XNA/MonoGame have a text caching mechanism, or has an open source one been implemented?

    - by Casey
    I'm playing around with MonoGame, and I've noticed the SpriteFont class draws static text very inefficiently: each time the text is drawn, the glyph spacing is recalculated. This isn't a big deal on my quad-core PC, but on mobile it might be a problem. Before I go and write text which caches the arrangement of its letters in an array and then feeds that array to the SpriteBatch, I would like to make sure there isn't something available to do this already, either in MonoGame itself or in a class someone has implemented and made available for general use.
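
    If nothing turns up, the cache itself is small: lay the string out once, store each glyph's pen position, and replay the stored positions every frame instead of re-measuring. A sketch of the pattern (in Java for brevity; FontMetricsLike is a hypothetical stand-in for whatever exposes glyph advances):

        import java.util.*;

        interface FontMetricsLike {
            float advance(char c);   // width of one glyph, measured once
        }

        final class CachedText {
            record Glyph(char c, float x, float y) {}
            final List<Glyph> glyphs = new ArrayList<>();

            CachedText(String text, FontMetricsLike font) {
                float penX = 0;
                for (char c : text.toCharArray()) {
                    glyphs.add(new Glyph(c, penX, 0));
                    penX += font.advance(c);   // computed at build time, not per frame
                }
            }
            // Per frame: for (Glyph g : glyphs)
            //     batch.draw(fontTexture(g.c()), origin.x + g.x(), origin.y + g.y());
        }

    Another common option in XNA is to render the string once into a RenderTarget2D and then draw that texture each frame.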

    Read the article

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main ways: having each object render itself, or having a single renderer which renders everything. I'm currently aiming for the second, for the following reasons:

    - The list can be sorted so each shader is used only once. Otherwise each object would have to bind its shader, because it can't know whether it's already active.
    - The objects can be sorted and grouped.
    - It's easier to swap APIs: with a few macro lines it can be easy to swap between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - Rendering code is easier to manage.

    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work. First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I worry that this may end up as a lot of extra pointer weight, though I could sort the list of pointers every so often. Second idea: the entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and takes what it wants. That's a lot of passing and/or sorting, however. Other ideas? Anyone got ideas? Thank you.
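
    A common middle ground is the first idea plus sort keys: components register once, and the renderer keeps them ordered by a key encoding shader, then material, then depth, so state changes are minimized without re-sorting every frame. A minimal sketch (all types hypothetical):

        import java.util.*;

        interface RenderComponent {
            long sortKey();   // e.g. (shaderId << 48) | (materialId << 32) | depthBits
            long shaderId();
            void bindShader();
            void draw();
        }

        final class Renderer {
            private final List<RenderComponent> items = new ArrayList<>();
            private boolean dirty = false;

            void register(RenderComponent rc)   { items.add(rc); dirty = true; }
            void unregister(RenderComponent rc) { items.remove(rc); dirty = true; }

            void render() {
                if (dirty) {   // re-sort only when the set of components changed
                    items.sort(Comparator.comparingLong(RenderComponent::sortKey));
                    dirty = false;
                }
                long lastShader = -1;
                for (RenderComponent rc : items) {
                    if (rc.shaderId() != lastShader) {   // bind each shader once per run
                        rc.bindShader();
                        lastShader = rc.shaderId();
                    }
                    rc.draw();
                }
            }
        }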

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. Just a quick recap.

    Standard behavior tree:
    - Each execution tick the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure, so in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven BT:
    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first-ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status), its parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree there will be at most 1 task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic-priority implementations) will always be ordered depth-first (is this right?).

    Now, from what I understand of a possible implementation, there are two requirements I think a correct implementation must guarantee (I'm not sure though): the result of the traversal should be independent of which implementation strategy is used, and the traversal result must be deterministic. I'm struggling to guarantee both in the case of parallel nodes. Here's an example:

        Parallel_1
        --> Sequence_1
        ----> leaf_A
        ----> leaf_B
        --> leaf_C

    Considering a FIFO policy in the scheduler, before leaf_A terminates the tasks in the scheduler are:

        P1 (suspended), S1 (suspended), leaf_A (running), leaf_C (running)

    When leaf_A terminates, leaf_B is scheduled (at the end of the queue), so the queue becomes:

        P1 (suspended), S1 (suspended), leaf_C (running), leaf_B (running)

    In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: have I understood correctly how event-driven BTs work? How can I guarantee that depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?
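
    One way to keep the order deterministic is to replace the FIFO with a structure keyed by each node's depth-first index, assigned once when the tree is built: a newly woken task is then inserted at its tree position, not at the back. In the example, leaf_B (index 3) re-enters ahead of leaf_C (index 4). A minimal sketch (all types hypothetical):

        import java.util.*;

        enum Status { SUCCESS, FAILURE, RUNNING }

        interface Task {
            Status update(float dt);
            void notifyParent(Status s);   // observer hook; may call schedule()
        }

        final class Scheduler {
            // Keyed by a depth-first index assigned once at tree-build time,
            // so iteration always matches the order of a full root-first traversal.
            private final TreeMap<Integer, Task> running = new TreeMap<>();

            void schedule(int dfIndex, Task t) { running.put(dfIndex, t); }

            void tick(float dt) {
                // Snapshot so tasks enqueued by observers start on the next tick.
                for (Map.Entry<Integer, Task> e : new ArrayList<>(running.entrySet())) {
                    Status s = e.getValue().update(dt);
                    if (s != Status.RUNNING) {
                        running.remove(e.getKey());
                        e.getValue().notifyParent(s);   // wakes Sequence_1, etc.
                    }
                }
            }
        }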

    Read the article

  • Square game map rendered as sphere with OpenGL

    - by Roflha
    Okay, so I have been trying to find a good way to do this for a while now and so far I have nothing. For a hobby project of mine I have created a finite voxel world (similar to Minecraft), but as I said, mine is finite: when you reach the edge of it, you are sent to the other side. That is all working fine, along with rendering the far side of the map, but I want to be able to render this grid as a sphere. Looking down from above, the world is a square. I basically want to be able to represent a portion of that square as a sphere, as if you were looking at a planet. Right now I am experimenting with taking a circular section of the map and rendering that, but it looks too flat (no curvature around the edges). My question, then, is what would be the best way to add some curvature to the edges of a 2D circle to make it look like a hemisphere. However, I am not overly attached to this implementation, so if somebody has some other idea for representing the square as a planet, I am all ears.
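
    A cheap way to fake the curvature is the "curved world" trick: drop each vertex by its distance from the view center projected onto a hemisphere, so the middle stays put and the edges fall away toward a horizon. For a visible disc of radius R, a point at offset (x, y) from the center is lowered by R − sqrt(R² − x² − y²). A minimal sketch:

        // Height drop applied per vertex/column so the edges curve away.
        // R is the radius of the visible disc, (x, y) the offset from its center.
        static float sphericalDrop(float x, float y, float R) {
            float r2 = x * x + y * y;
            if (r2 >= R * R) return R;                 // at or beyond the horizon
            return R - (float) Math.sqrt(R * R - r2);  // 0 at center, grows to R at edge
        }
        // Subtract sphericalDrop(dx, dy, R) from each vertex's height before drawing.

    The same formula works in a vertex shader, which avoids touching the mesh data at all.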

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. Suppose the scene is managed by an arbitrary structure: an octree, BSP tree, quad-tree, kd-tree, etc. What is the best way to pass this to the render system? The obvious problem is that if it were simply given the root node of the structure, the render system would require intrinsic knowledge of the structure in order to traverse it. My solution is to clip all objects outside the frustum in the scene manager, create a list of the objects which are left, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (this would be a structure required by the render system as a means to know which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations. I have been thinking a lot about this, and have searched various terms on here and followed any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?

    Read the article

  • How are components properly instantiated and used in XNA 4.0?

    - by Christopher Horenstein
    I am creating a simple input component to hold on to actions and key states, plus a short history of the last ten or so states. The idea is that everything interested in input will ask this component for the latest input, and I am wondering where I should create it. I am also wondering how I should create components that are specific to my game objects; I envision them as member variables, but then how do their Update/Draw methods get called? What I'm trying to ask is: what are the best practices for adding components to the proper collections? Right now I've added my input component in the main Game class that XNA creates when everything is first initialized, saying something along the lines of this.Components.Add(new InputComponent(this)), which looks a little odd to me, and I'd still want to hang onto that reference so I can ask it things. An answer to my input component's dilemma is great, but I'm also guessing there is a right way to do this in general in XNA.
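
    The usual XNA pattern is to keep the reference you construct and also register it with Game.Services, so other code can look it up by type; Components.Add only controls whether Update/Draw get called. The shape of the pattern, sketched in Java (the XNA names above are real; the Java types here are stand-ins):

        import java.util.*;

        // A minimal service registry in the spirit of XNA's GameServiceContainer.
        final class Services {
            private final Map<Class<?>, Object> byType = new HashMap<>();
            <T> void add(Class<T> type, T service) { byType.put(type, service); }
            <T> T get(Class<T> type) { return type.cast(byType.get(type)); }
        }

        // At startup:
        //   InputComponent input = new InputComponent(game);
        //   game.components.add(input);                      // gets Update() calls
        //   game.services.add(InputComponent.class, input);  // findable by anyone
        // Elsewhere:
        //   InputComponent in = game.services.get(InputComponent.class);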

    Read the article

  • Backface culling error

    - by acrilige
    I'm writing a simple software renderer. My pipeline has a backface-culling stage, but it looks like it has an error (see picture). I perform culling right after the world transformation. (I can't insert the picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;
        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }
        for (Triangle &t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason?
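
    A likely culprit, hedged since only the snippet is visible: testing every triangle against one fixed view direction is only valid for an orthographic projection. Under perspective, the view vector differs per triangle; it should run from the camera to a point on the triangle. Sketched in Java terms:

        // Per-triangle backface test for a perspective camera at camPos.
        // true -> the triangle faces away and can be culled.
        static boolean isBackface(float[] v1, float[] normal, float[] camPos) {
            float[] toTriangle = {
                v1[0] - camPos[0],
                v1[1] - camPos[1],
                v1[2] - camPos[2]
            };
            float dot = toTriangle[0] * normal[0]
                      + toTriangle[1] * normal[1]
                      + toTriangle[2] * normal[2];
            return dot >= 0;   // sign depends on winding; flip if front faces vanish
        }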

    Read the article

  • Map format for 3d open world

    - by Pacha
    I am making an open-world 3D platformer in Ogre3D, and I have no idea what kind of 3D map file format I should use for it. I want to make low-polygon, blocky-style objects: probably rectangles and other geometric figures that don't have circular edges. Some of those blocks will have properties, like being climbable, or they might move. I was wondering what the best way would be to make the map (just one level, as it is open).

    Read the article

  • Architecture of an action multiplayer game from scratch

    - by lcf
    Not sure whether this is a good place to ask (do point me to a better one if it's not), but since what we're developing is a game, here it goes. This is a "real-time" action multiplayer game. I have familiarized myself with concepts like lag compensation, view interpolation, and input prediction, and pretty much everything else that I need for this. I have also prepared a set of prototypes to confirm that I understood everything correctly. My question is about the situation when the game engine must be rewound to the past to find out whether there was a "hit" (sometimes this may involve a whole recomputation of the world from that moment in the past up to the present). I already have a piece of code that does it, but it's not as neat as I need it to be. The domain logic of the app (the physics of the game) must be separated from the presentation (render) and infrastructure tools (e.g. the specifics of remote server interaction). How do I organize all this? Is there any worthy implementation with open sources I can take a look at? What I'm thinking is something like this:

        Render / User Input -> Game Engine (the so-called service layer) -> Processing User Commands & Remote Server -> Domain (Physics)

    How would you add to this scheme the concept of "ticks" or "interactions", with the possibility to rewind and recalculate the game? Remember, I cannot change the Domain/Physics, only the Game Engine. Should I store an array of "world states"? Should they be just some representations of the world, optimized for this purpose somehow (how?), or should they be actual instances of the world (i.e. including behavior and all that)? Has anybody had similar experience? (I have never worked on a game before, if that matters.)
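
    A common shape for this is a ring buffer of plain-data snapshots plus a log of commands per tick: to test a hit at tick T, restore the snapshot at T, then re-apply the logged commands up to the present. A minimal sketch (WorldState, Command, and Physics are hypothetical; "snapshot" means a copyable data record of whatever the physics needs, not live objects with behavior):

        import java.util.*;

        final class RewindBuffer {
            private final WorldState[] snapshots;      // one plain-data copy per tick
            private final List<List<Command>> inputs;  // commands applied at each tick
            private int head = 0;                      // tick of the newest snapshot

            RewindBuffer(int capacity) {
                snapshots = new WorldState[capacity];
                inputs = new ArrayList<>();
            }

            void record(int tick, WorldState s, List<Command> cmds) {
                snapshots[tick % snapshots.length] = s.copy();
                while (inputs.size() <= tick) inputs.add(new ArrayList<>());
                inputs.get(tick).addAll(cmds);
                head = tick;
            }

            // Re-simulate from pastTick to the present; the domain stays untouched,
            // since the engine only feeds it states and commands.
            WorldState replayFrom(int pastTick, Physics physics) {
                WorldState s = snapshots[pastTick % snapshots.length].copy();
                for (int t = pastTick; t < head; t++) {
                    s = physics.step(s, inputs.get(t));
                }
                return s;
            }
        }

    Keeping snapshots as plain data (rather than full live world instances) is what makes the copy and replay cheap, and it keeps the domain/presentation separation intact.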

    Read the article

  • Android: how to get OpenGL 3D coordinates in an onTouch event

    - by Sandy
    I created a cube in OpenGL and it rotates in an onTouch event. To do this I created a CustomSurfaceView as follows:

        public class CustomSurfaceView extends GLSurfaceView {
            @Override
            public boolean onTouchEvent(MotionEvent e) {
                float x = e.getX();
                float y = e.getY();
                return true;
            }
        }

    Here x and y are screen coordinates. How can I get 3D coordinates from this? I have already looked at gluProject and NeHe, but I don't know how to implement this in my project; it shows that there is no GLdouble or GLfloat class.
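
    There is no GLdouble on Android; the float-based equivalent lives in android.opengl.GLU. Unprojecting the touch twice, at window depths 0 and 1, yields a world-space ray you can intersect with the scene. A minimal sketch, assuming you kept copies of your model-view and projection matrices as float[16] (GLES 1.x offers no reliable way to read them back):

        import android.opengl.GLU;

        // Returns {nearX, nearY, nearZ, farX, farY, farZ} for a touch at (x, y).
        static float[] touchRay(float x, float y, float[] modelView,
                                float[] projection, int[] viewport) {
            float winY = viewport[3] - y;   // screen y grows down, GL y grows up
            float[] near = new float[4];
            float[] far = new float[4];
            GLU.gluUnProject(x, winY, 0f, modelView, 0, projection, 0,
                             viewport, 0, near, 0);
            GLU.gluUnProject(x, winY, 1f, modelView, 0, projection, 0,
                             viewport, 0, far, 0);
            // Android's gluUnProject leaves the w component undivided
            return new float[] {
                near[0] / near[3], near[1] / near[3], near[2] / near[3],
                far[0] / far[3],   far[1] / far[3],   far[2] / far[3]
            };
        }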

    Read the article

  • Class Design - Space Simulator

    - by Peteyslatts
    I have pretty much taught myself everything I know about programming, so while I know how to teach myself (books, the internet, and reading APIs), I'm finding that there hasn't been a whole lot in the way of guidance on good programming. So I have two questions. First, the broad one: does anyone have suggestions for sources on good programming habits and techniques? I'd prefer it if the resource wasn't a 5000-page tome; the more I can read it in installments, the better. More specifically: I am finishing up learning the basics of XNA and I want to create a space simulator to test my knowledge. This isn't a full-scale simulator, just something that covers everything I've learned. It's also going to be modular so I can build on it after I get the basics down. One of the early features I want to implement is AI, and I want to take this into account as I design my classes so I can minimize rewriting code. So my question: how should I design ship classes so that both the player and the AI can use them? The only idea I have so far is: create a ship class that contains stats, models, textures, collision data, etc. The player and AI would then have the data for position, rotation, health, etc. and would base their status off of the ship stats.
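
    That split is essentially the standard one: a Ship owns the shared data and simulation, and a controller (human or AI) only feeds it intents. Sketched in Java; all names are illustrative:

        final class ShipStats {              // shared template: model, texture, limits
            final float maxHealth, maxSpeed;
            ShipStats(float maxHealth, float maxSpeed) {
                this.maxHealth = maxHealth; this.maxSpeed = maxSpeed;
            }
        }

        interface ShipController {
            void control(Ship ship, float dt);   // reads input or runs AI, issues intents
        }

        final class Ship {
            final ShipStats stats;
            float x, y, rotation, health;        // per-instance state
            private final ShipController controller;

            Ship(ShipStats stats, ShipController controller) {
                this.stats = stats;
                this.controller = controller;
                this.health = stats.maxHealth;
            }

            void update(float dt) {
                controller.control(this, dt);    // same Ship code for player and AI
                // ...integrate movement, clamp speed to stats.maxSpeed, etc.
            }

            // Intents a controller may issue:
            void thrust(float amount) { /* ... */ }
            void turn(float amount)   { /* ... */ }
            void fire()               { /* ... */ }
        }

    Swapping a PlayerController for an AIController then requires no change to Ship at all.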

    Read the article

  • How do I go from a simple html5 tic tac toe game to an online 2 player game?

    - by phi1o
    I've been working on an online 2-player Tic-Tac-Toe solution for BlackBerries, both old and new. So far I have HTML5 code with a 3x3 layout that switches between X and O for the game mechanics. I believe I'm still missing a check-for-win function, but my question is about the server side of this game. I'm not sure how to go about learning exactly what I need. How do you take what I have now and make it into a functioning online game? I've been told WAMP is a good solution, as well as IIS, and it's all really over my head, so I'm hoping to get a little more clarity on what I should focus on to bring this game to life.
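
    At its smallest, the server is just a relay: it pairs two clients and forwards each move to the opponent, and the HTML5 page talks to it (polling on old BlackBerry browsers, WebSockets on newer ones). A toy sketch of the pairing-and-relay idea in Java (not production code: one game, trusting clients, blocking I/O):

        import java.io.*;
        import java.net.*;

        // Pairs the first two clients to connect and relays lines between them.
        public final class TicTacToeRelay {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(5000)) {
                    Socket p1 = server.accept();   // player X
                    Socket p2 = server.accept();   // player O
                    pipe(p1, p2);                  // X's moves -> O
                    pipe(p2, p1);                  // O's moves -> X
                }
            }

            // Forward each newline-terminated move (e.g. "2,1") to the other player.
            static void pipe(Socket from, Socket to) {
                new Thread(() -> {
                    try (BufferedReader in = new BufferedReader(
                             new InputStreamReader(from.getInputStream()));
                         PrintWriter out = new PrintWriter(to.getOutputStream(), true)) {
                        String move;
                        while ((move = in.readLine()) != null) out.println(move);
                    } catch (IOException ignored) { }
                }).start();
            }
        }

    A real server would also validate moves and track whose turn it is rather than trusting the clients; that logic is exactly the check-for-win code moved server-side.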

    Read the article

  • Drawing texture does not work anymore with a small number of triangles

    - by Paul
    When I draw lines, the vertices are well connected. But when I draw the texture inside the triangles, it only works with i<4 in the for loop; with i<5, for example, there is an EXC_BAD_ACCESS at @synthesize textureImage = _textureImage. I don't understand why. (The generatePolygons method seems to work fine, as I was able to draw lines with many vertices, as in the second image below. And textureImage remains the same for i<4 or i<5: it's a 512px square image.) Here are the images. What I want to achieve is to put the red points, connect them to the y-axis (the green points), and color the area (the green triangles). If I only draw lines, it works fine. Then with a texture color, it works for i<4 in the loop (the red points in my first image, plus the fifth one to connect the last y). But if I set i<5, the debug tool says EXC_BAD_ACCESS at the synthesize of _textureImage. Here is my code. I set a texture color in HelloWordLayer.mm with:

        CCSprite *textureImage = [self spriteWithColor:color3 textureSize:512];
        _terrain.textureImage = textureImage;

    Then in the Terrain class, I create the vertices and draw the texture in the draw method:

        @implementation Terrain
        @synthesize textureImage = _textureImage; // EXC_BAD_ACCESS for i<5

        - (void)generatePath2 {
            CGSize winSize = [CCDirector sharedDirector].winSize;
            float x = 40;
            float y = 0;
            for (int i = 0; i < kMaxKeyPoints+1; ++i) {
                _hillKeyPoints[i] = CGPointMake(x, y);
                x = 150 + (random() % (int) 30);
                y += 30;
            }
        }

        - (void)generatePolygons {
            _nPolyVertices = 0;
            float x1 = 0;
            float y1 = 0;
            int keyPoints = 0;
            for (int i = 0; i < 4; i++) { /* HERE: 4 = OK / 5 = crash */
                // V0: at (0,0)
                _polyVertices[_nPolyVertices] = CGPointMake(x1, y1);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(x1, y1);
                // V1: to the first "point"
                _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                keyPoints++; // from point at index 0 to 1
                // V2, same y as point n°2:
                _polyVertices[_nPolyVertices] = CGPointMake(0, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(0, _hillKeyPoints[keyPoints].y);
                // V1 again
                _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices-2];
                _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices-2];
                // V2 again
                _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices-2];
                _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices-2];
                // V3 = same x,y as point at index 1
                _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                y1 = _polyVertices[_nPolyVertices].y;
                _nPolyVertices++;
            }
        }

        - (id)init {
            if ((self = [super init])) {
                [self generatePath2];
                [self generatePolygons];
            }
            return self;
        }

        - (void)draw {
            //glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_COLOR_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glBindTexture(GL_TEXTURE_2D, _textureImage.texture.name);
            glColor4f(1, 1, 1, 1);
            glVertexPointer(2, GL_FLOAT, 0, _polyVertices);
            glTexCoordPointer(2, GL_FLOAT, 0, _polyTexCoords);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nPolyVertices);
            glColor4f(1, 1, 1, 1);
            for (int i = 1; i < 40; ++i) {
                ccDrawLine(_polyVertices[i-1], _polyVertices[i]);
            }
            // restore default GL states
            glEnable(GL_TEXTURE_2D);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        }

    Do you see anything wrong in this code? Thanks for your help.

    Read the article

  • Image 1 becomes image 2 with sliding effect from left to right?

    - by Paul
    I would like to show a second image appearing while a "door" is closing over my character. I've got my character in the middle of the screen and a door coming from the left. When the door passes my character, I would like to have this second image appear little by little. So far, I've gotten by with fading out the character and then fading in the second image of the character at the same position once the door is completely closed, but I would like to have both at the same time: the effect that image 1 becomes image 2 while the door is sliding from left to right. Would you know how to do this with Cocos2d? Here are the images: at first, the character is blue, and the door is coming from the left. Then, behind the black door, the character becomes red, but only behind the door, so it stays blue where the door is not over him and will become completely red when the door has passed the character. EDIT: with this code, the black door hides the red and blue rectangles (and if I add each of my layers at a different depth and only use GL_LESS, same thing):

        blue.position = ccp(size.width * 0.5, size.height / 2);
        red.position = ccp(size.width * 0.46, size.height / 2);
        black.position = ccp(size.width * 0.1, size.height / 2);

        glEnable(GL_DEPTH_TEST);
        [batch addChild:red z:0];
        [batch addChild:black z:2];
        glDepthFunc(GL_GREATER);
        [batch addChild:blue z:1];
        glDepthFunc(GL_LESS);

        id action1 = [CCMoveTo actionWithDuration:3 position:ccp(size.width, size.height / 2)];
        [black runAction:[CCSequence actions:action1, nil]];

    Read the article

  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column of a wall on the screen in a raycasting-type 3D engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels this way, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column.

        // Byte offset of the target column within a row of the canvas
        int c = (curcol) * canvas->format->BytesPerPixel;
        void *canvaspixels = canvas->pixels;
        Uint16 texpitch = hello->pitch;
        // End address: start of the column plus drawheight rows
        int lim = (drawheight + startdraw) * canvpitch + c + (int) canvaspixels;
        // k points at the first texel of the texture column selected by 'hit'
        Uint8 *k = (Uint8 *) hello->pixels + (hit) * hello->format->BytesPerPixel;
        // j is a pixel address encoded as an int; adding canvpitch steps down one row
        for (int j = (startdraw) * (canvpitch) + c + (int) canvaspixels; (j < lim); j += canvpitch) {
            // q = texel at row int(h) of the texture column
            Uint8 *q = (Uint8 *) ((int(h)) * (texpitch) + k);
            // Copy one whole 32-bit pixel from texture to canvas
            *(Uint32 *) j = *(Uint32 *) q;
            h += s;   // s = texture rows per screen row (scaling factor)
        }

    We have void pointers (I'm not sure how those are even represented), 8-, 16-, and 32-bit ints (h and s are floats), all being intermingled, and while it works, it is quite confusing.

    Read the article
