Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 518/1039

  • Converting Degrees to X and Y Coordinate change

    - by gopgop
    I am using a float positioning system in my game (i.e. float x, y, z). Now I want to get the location of the mouse and fire an arrow at it. X0 = the player's X location, X1 = the mouse's X location, Y0 = the player's Y location, Y1 = the mouse's Y location. I want to make a method whose parameter is an angle in degrees and which sets my Yspeed and Xspeed accordingly, so the arrow travels from the player's x/y toward the mouse's x/y. How would I accomplish this?
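
    A minimal sketch of the trigonometry the question describes, written in C++ for illustration; the function names and the speed parameter are assumptions, not code from the question:

        #include <cmath>

        const float PI = 3.14159265f;

        // Hypothetical helper: given an angle in degrees and a scalar speed,
        // produce the per-axis velocity components.
        void SetVelocityFromAngle(float degrees, float speed, float& xSpeed, float& ySpeed)
        {
            const float radians = degrees * PI / 180.0f;
            xSpeed = std::cos(radians) * speed;
            ySpeed = std::sin(radians) * speed;
        }

        // The angle toward the mouse can come straight from the two points,
        // which avoids passing degrees around at all.
        float AngleToTargetDegrees(float x0, float y0, float x1, float y1)
        {
            return std::atan2(y1 - y0, x1 - x0) * 180.0f / PI;
        }

    On a screen whose y axis points downward, the sign of the Y component may need to be flipped.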

    Read the article

  • Collision detection between a sprite and rectangle in canvas

    - by Andy
    I'm building a Javascript + canvas game which is essentially a platformer. I have the player all set up and he's running, jumping and falling, but I'm having trouble with the collision detection between the player and blocks (the blocks will essentially be the platforms that the player moves on). The blocks are stored in an array like this:

        var blockList = [[50, 400, 100, 100]];

    And drawn to the canvas using this:

        this.draw = function() {
            c.fillRect(blockList[0][0], blockList[0][1], 100, 100);
        }

    I'm checking for collisions using something along these lines in the player object:

        this.update = function() {
            // Check for collisions with blocks
            for(var i = 0; i < blockList.length; i++) {
                if((player.xpos + 34) > blockList[i][0] && player.ypos > blockList[i][1]) {
                    player.xpos = blockList[i][0] - 28;
                    return false;
                }
            }
            // Other code to move the player based on keyboard input etc
        }

    The idea is that if the player would collide with a block on the next game update (the game uses a main loop running at 60 Hz), the function returns false and exits, so the player won't move. Unfortunately, that only works when the player hits the left side of the block, and I can't work out how to make the player stop when it hits any side of the block. I have the properties player.xpos and player.ypos to help here.
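
    For reference, a sketch of the usual axis-aligned bounding-box test, which checks all four edges rather than only the left one. It is written in C-style code here rather than JavaScript, and the player size is an assumption taken from the offsets in the question:

        // Boxes are {x, y, width, height}, matching the blockList layout.
        struct Box { float x, y, w, h; };

        bool Overlaps(const Box& a, const Box& b)
        {
            return a.x < b.x + b.w &&   // a's left edge is left of b's right edge
                   a.x + a.w > b.x &&   // a's right edge is right of b's left edge
                   a.y < b.y + b.h &&   // a's top edge is above b's bottom edge
                   a.y + a.h > b.y;     // a's bottom edge is below b's top edge
        }

    Once an overlap is found, the side that was hit is usually taken to be the axis with the smaller penetration depth, and the player is pushed back out along that axis only.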

    Read the article

  • Marching squares: Finding multiple contours within one source field?

    - by TravisG
    Principally, this is a follow-up question to a problem from a few weeks ago, though this is about the algorithm in general, without application to my actual problem. The algorithm basically searches through all lines in the picture, starting from its top left, until it finds a pixel that is a border. In pseudo-C++:

        int start = 0;
        for(int i = 0; i < amount_of_pixels; ++i)
        {
            if(pixels[i] == border)
            {
                start = i;
                break;
            }
        }

    When it finds one, it starts the marching squares algorithm and finds the contour of whatever object the pixel belongs to. Let's say I have something like this, where everything except the color white is a border, and I have found the contour points of the first blob. For the general algorithm it's over: it found a contour and has done its job. How can I move on to the other two blobs to find their contours as well?
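
    One common extension, sketched below in the question's pseudo-C++ style, under the assumption that the contour tracer can record which pixels it has already claimed: keep scanning after each contour closes and skip border pixels that already belong to a traced contour.

        #include <cstddef>
        #include <vector>

        const int BORDER = 1;  // stands in for the question's 'border' value

        // Assumed to exist: the single-contour marching-squares routine,
        // extended so it marks every border pixel it walks over in 'visited'.
        void TraceContour(const std::vector<int>& pixels, int width, int height,
                          int start, std::vector<bool>& visited);

        // Driver loop: each blob is found exactly once, because pixels that
        // already belong to a traced contour are skipped.
        void FindAllContours(const std::vector<int>& pixels, int width, int height)
        {
            std::vector<bool> visited(pixels.size(), false);
            for (std::size_t i = 0; i < pixels.size(); ++i)
                if (pixels[i] == BORDER && !visited[i])
                    TraceContour(pixels, width, height, static_cast<int>(i), visited);
        }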

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world: draw all opaque objects front-to-back, and draw all alpha-blended objects back-to-front. Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to those two rules, there's little to be done with regard to batching. I see one possibility to batch while still adhering to them: opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, and state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. Non-opaque objects, however (those that require alpha blending, at least), must be drawn back-to-front to avoid rendering artifacts. Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?
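
    One common way to express that trade-off in code, sketched here with an invented key layout and names: build a sort key per draw call so opaque calls group by material (accepting imperfect depth order) while blended calls stay strictly back-to-front.

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct DrawCall {
            uint64_t key;
            // ... handles to mesh, material, transform ...
        };

        // Bit 63 marks blended geometry so it always sorts after opaques.
        // Opaques: material in the high bits, coarse depth in the low bits,
        // so identical states end up adjacent. Blended: depth only, inverted
        // so larger distances (farther away) come first.
        uint64_t MakeKey(bool blended, uint32_t materialId, uint32_t depthBits)
        {
            if (blended)
                return (uint64_t(1) << 63) | (uint64_t(0xFFFFFFFFu - depthBits) << 16);
            return (uint64_t(materialId & 0x7FFFFFFFu) << 32) | depthBits;
        }

        void SortDrawCalls(std::vector<DrawCall>& calls)
        {
            std::sort(calls.begin(), calls.end(),
                      [](const DrawCall& a, const DrawCall& b) { return a.key < b.key; });
        }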

    Read the article

  • Command Pattern refactor for input processing?

    - by Casey
    According to Game Coding Complete, 4th ed., processing input via the following is considered unmanageable and inflexible, but the book does not show an example. I've used the Command pattern to represent GUI button commands, but I could not figure out how to represent input from the keyboard and/or mouse.

        if(g_keyboard->KeyDown(KEY_ESC)) {
            quit = true;
            return;
        }

        //Processing
        if(g_keyboard->KeyDown(KEY_T)) {
            g_show_test_gateway = !g_show_test_gateway;
        }

        if(g_mouse->ButtonDown(a2de::Mouse::BUTTON2)) {
            g_selected_part = GWPart::PART_NONE;
            SetMouseImageToPartImage();
        }

        ResetButtonStates();
        g_prevButton = g_curButton;
        g_curButton = GetButtonHovered();

        if(g_curButton) {
            g_mouse->SetImageToDefault();
            if(g_mouse->ButtonDown(a2de::Mouse::BUTTON1) || g_mouse->ButtonPress(a2de::Mouse::BUTTON1)) {
                ButtonPressCommand curCommand(g_curButton);
                curCommand.Execute();
            } else if(g_mouse->ButtonUp(a2de::Mouse::BUTTON1)) {
                if(g_prevButton == g_curButton) {
                    ButtonReleaseCommand curCommand(g_curButton);
                    curCommand.Execute();
                    if(g_curButton->GetType() == "export") {
                        ExportCommand curCommand(g_curButton, *g_gateway);
                        curCommand.Execute();
                    }
                } else {
                    ResetButtonStates();
                }
            } else {
                ButtonHoverCommand curCommand(g_curButton);
                curCommand.Execute();
            }
        } else {
            g_status_message.clear();
            SetMouseImageToPartImage();
            if(g_mouse->ButtonDown(a2de::Mouse::BUTTON1)) {
                CreatePartCommand curCommand(*g_gateway, g_selected_part,
                                             a2de::Vector2D(g_mouse->GetX(), g_mouse->GetY()));
                curCommand.Execute();
            }
        }
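
    A common refactor, sketched here with hypothetical types rather than the question's engine API: let a table own the key-to-command bindings so the polling loop shrinks to a lookup.

        #include <map>
        #include <memory>

        // Hypothetical command interface matching the pattern in the question.
        struct Command {
            virtual ~Command() = default;
            virtual void Execute() = 0;
        };

        struct QuitCommand : Command { void Execute() override { /* set quit flag */ } };
        struct ToggleGatewayCommand : Command { void Execute() override { /* toggle the view */ } };

        // The binding table replaces the chain of if(KeyDown(...)) checks.
        // Keys are plain ints standing in for the engine's key codes.
        std::map<int, std::unique_ptr<Command>> g_keyBindings;

        void ProcessKeyboard(const bool* keyDown /* indexed by key code */, int keyCount)
        {
            for (auto& binding : g_keyBindings)
                if (binding.first < keyCount && keyDown[binding.first])
                    binding.second->Execute();
        }

    Rebinding a key then becomes a data change in the table instead of an edit to the input loop.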

    Read the article

  • A way to store potentially infinite 2D map data?

    - by Blam
    I have a 2D platformer that can currently handle chunks of 100 by 100 tiles, with the chunk coordinates stored as longs, so that is the only limit on map size (maxlong * maxlong). All entity positions and so on are chunk-relative, so there is no limit there. The problem I'm having is how to store and access these chunks without ending up with thousands of files. Any ideas for a preferably quick, low-disk-cost archive format that doesn't need to open everything at once?
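
    One low-file-count layout, sketched with made-up sizes (32x32 chunks per region file is an assumption, not anything from the question): group chunks into region files and keep an offset table at the front of each, so only the regions near the player are ever opened.

        #include <cstdint>

        // Hypothetical region file layout: 32x32 chunks per region.
        // Header: 1024 {offset, length} entries into the same file; length 0
        // means the chunk has never been generated. Chunk blobs are appended
        // (optionally compressed) and the table entry is updated on save.
        struct RegionEntry {
            uint64_t offset;  // byte offset of the chunk blob within the region file
            uint32_t length;  // size of the stored chunk, 0 = not present
        };

        struct RegionHeader {
            RegionEntry entries[32 * 32];
        };

        // Map a chunk coordinate to its region file and slot within the header.
        void ChunkToRegion(int64_t cx, int64_t cy,
                           int64_t& regionX, int64_t& regionY, int& slot)
        {
            // floor-divide so negative coordinates land in the correct region
            regionX = (cx >= 0) ? cx / 32 : -(((-cx) + 31) / 32);
            regionY = (cy >= 0) ? cy / 32 : -(((-cy) + 31) / 32);
            const int lx = static_cast<int>(cx - regionX * 32);
            const int ly = static_cast<int>(cy - regionY * 32);
            slot = ly * 32 + lx;
        }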

    Read the article

  • Understanding the Microsoft Permissive License

    - by cable729
    I want to use certain parts of the Game State Management example in a game I'm making, but I'm not sure how to do this legally. The license says I'm supposed to include a copy of it. So if I make a Visual Studio solution, do I just add license.txt to the solution? Also, if I use a class and change it, do I have to keep the license info at the top, add a note that I changed it, or what?

    Read the article

  • c++ most used libraries [on hold]

    - by Basaa
    I'm trying to find out whether or not I want to switch from Java to C++ for my OpenGL game programming. I have now set up a test project in VS 11 Professional with GLUT. I create my windows with GLUT, and I can render OpenGL primitives without any problems. Now my question: which library (or libraries) is most used in the indie / semi-professional industry for using OpenGL in C++? By 'using OpenGL' I mean: creating and managing an OpenGL window, actually using the OpenGL API, and handling user input (keyboard/mouse).

    Read the article

  • how to organize rendering

    - by Irbis
    I use deferred rendering. During the G-buffer stage, my rendering loop for a Sponza model (OBJ format) looks like this:

        int i = 0;
        int sum = 0;
        map<string, mtlItem *>::const_iterator itrEnd = mtl.getIteratorEnd();
        for(map<string, mtlItem *>::const_iterator itr = mtl.getIteratorBegin(); itr != itrEnd; ++itr)
        {
            glActiveTexture(GL_TEXTURE0 + 0);
            glBindTexture(GL_TEXTURE_2D, itr->second->map_KdId);
            glDrawElements(GL_TRIANGLES, indicesCount[i], GL_UNSIGNED_INT, (GLvoid*)(sum * 4));
            sum += indicesCount[i];
            ++i;
            glBindTexture(GL_TEXTURE_2D, 0);
        }

    I sorted the faces by material. I only switch the diffuse texture here, but I could bind more material properties. Is this a good approach? I also wonder how to handle different kinds of materials, for example when some materials use a normal map and others don't. Should I have different shaders for them?

    Read the article

  • Viewport.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this:

        spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null, alpha,
                         levelCannons[i].rotation, Vector2.Zero, scale, SpriteEffects.None, 0);

    Picture levelCannon as a laser beam that goes across the entire screen. I need to see if my 3D model intersects the screen space inhabited by the sprite. I managed to dig up Viewport.Unproject, but that seems to be useful only when dealing with a single point in 2D space rather than an area. What can I do in my case?

    Read the article

  • Possible / How to render to multiple back buffers, using one as a shader resource when rendering to the other, and vice versa?

    - by Raptormeat
    I'm making a game in Direct3D 10. For several of my rendering passes, I need to change the behavior of the pass depending on what has already been rendered to the back buffer. (For example, I'd like to do some custom blending: when the destination color is dark, do one thing; when it is light, do another.) It looks like I'll need to create multiple render targets and render back and forth between them. What's the best way to do this?
    1. Create my own render textures, use them, and then copy the final result into the back buffer.
    2. Create multiple back buffers, render between them, and then present the last one that was rendered to.
    3. Create one render texture and one back buffer, render between them, and just ensure that the back buffer is the final target rendered to.
    I'm not sure which of these is possible, and whether there are any performance issues that aren't obvious. Clearly my preference would be to have 2 rather than 3 default render targets, if possible.

    Read the article

  • Rotating 2D Object

    - by Vico Pelaez
    Well, I am trying to learn OpenGL and want to make a triangle move one unit (0.1) every time I press one of the keyboard arrows. However, I want the triangle to turn first, pointing in the direction it will move. So if my triangle is pointing up and I press right, it should point right first and then move one unit along the x axis. I have implemented the code to move the object one unit in any direction, but I cannot get it to turn to point in the direction it is going. The initial position of the triangle is pointing up.

        #define LENGTH 0.05
        float posX = -0.5, posY = -0.5, posZ = 0;
        float inX = 0.0, inY = 0.0, inZ = 0.0; // what values????

        void rect(){
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glPushMatrix();
            glTranslatef(posX, posY, posZ);
            glRotatef(rotate, inX, inY, inZ);
            glBegin(GL_TRIANGLES);
                glColor3f(0.0, 0.0, 1.0);
                glVertex2f(-LENGTH, -LENGTH);
                glVertex2f(LENGTH - LENGTH, LENGTH);
                glVertex2f(LENGTH, -LENGTH);
            glEnd();
            glPopMatrix();
        }

        void display(){
            //Clear Window
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            rect();
            glFlush();
        }

        void init(){
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glColor3f(1.0, 1.0, 1.0);
        }

        float move_unit = 0.01;
        bool change = false;

        void keyboardown(int key, int x, int y)
        {
            switch (key){
            case GLUT_KEY_UP:
                if(rotate = 0)
                    posY += move_unit;
                else{
                    inX = 1.0;
                    rotate = 0;
                }
                break;
            case GLUT_KEY_RIGHT:
                if(rotate = -90)
                    posX += move_unit;
                else{
                    inX = 1.0; // is this value ok??
                    rotate -= 90;
                }
                break;
            case GLUT_KEY_LEFT:
                if(rotate = 90)
                    posX -= move_unit;
                else{
                    inX = 1.0; // is this value ok???
                    rotate += 90;
                }
                break;
            case GLUT_KEY_DOWN:
                if(rotate = 180)
                    posY -= move_unit;
                else{
                    inX = 1.0;
                    rotate += 180;
                }
                break;
            case 27: // Escape button
                exit(0);
                break;
            default:
                break;
            }
            glutPostRedisplay();
        }

        int main(int argc, char** argv){
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(0, 0);
            glutCreateWindow("Triangle turn");
            glutSpecialFunc(keyboardown);
            glutDisplayFunc(display);
            init();
            glutMainLoop();
        }
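
    Not a fix for the code above, just a sketch of the 2D case for reference: rotation about the z axis in the modelview matrix, with an assumed mapping of angles to directions.

        #include <GL/glut.h>

        // In 2D the rotation axis is z (0, 0, 1), not x or y, and one
        // persistent 'facing' angle is enough.
        float facing = 0.0f;   // assumed: 0 = up, 90 = left, -90 = right, 180 = down

        void drawTriangle(float posX, float posY, float length)
        {
            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glTranslatef(posX, posY, 0.0f);        // place the triangle...
            glRotatef(facing, 0.0f, 0.0f, 1.0f);   // ...then spin it about z around its own origin
            glBegin(GL_TRIANGLES);
                glVertex2f(-length, -length);
                glVertex2f(0.0f, length);
                glVertex2f(length, -length);
            glEnd();
            glPopMatrix();
        }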

    Read the article

  • How to handle the different frame rate on different devices?

    - by Fenwick
    I am not quite sure how frames per second work on a web page. I have a canvas game that involves moving an image from point A to B and measuring the time elapsed. The code can be as simple as:

        var timeStamp = Date.now();
        function update(){
            obj.y += obj.speed;
            text = "Time: " + (Date.now() - timeStamp) + "ms";
        }

    The function update() is called every frame. The problem is that the time elapsed differs from device to device. It is pretty short on my PC, but gets longer on my iPad, and is much longer on my cell phone. I thought it was because the FPS is lower on mobile devices, so instead of calling update() every frame, I called it every 1 ms using setInterval. But this does not solve the problem. In my understanding, the function passed to setInterval is invoked based on elapsed system time, not frame rate, so it should fix the problem. Am I missing anything here? If the setInterval function is called based on FPS, is there any way to get around the FPS difference across devices? On a side note, I have sort of a "water simulator" on the same canvas. It involves redrawing about 60 objects, which can be 600x600 pixels, every frame, so it could be a frame-rate killer. I am using Phaser.js but not really using much of its functionality, if that helps.
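
    The usual device-independent approach is to scale movement by the elapsed real time instead of assuming a fixed tick; a sketch in C++ for illustration, since the idea is the same with Date.now() deltas in a browser. The names are placeholders.

        #include <chrono>

        // Frame-rate-independent movement: scale by the wall-clock time since
        // the last update, so a 30 FPS device and a 60 FPS device cover the
        // same distance per second.
        struct Object { double y = 0.0; double speedPerSecond = 120.0; };

        using Clock = std::chrono::steady_clock;

        void Update(Object& obj, Clock::time_point& last)
        {
            const Clock::time_point now = Clock::now();
            const double dt = std::chrono::duration<double>(now - last).count();
            last = now;
            obj.y += obj.speedPerSecond * dt;   // distance depends on time, not frame count
        }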

    Read the article

  • What is going on in this SAT/vector projection code?

    - by ssb
    I'm looking at the example XNA SAT collision code presented here: http://www.xnadevelopment.com/tutorials/rotatedrectanglecollisions/rotatedrectanglecollisions.shtml See the following code:

        private int GenerateScalar(Vector2 theRectangleCorner, Vector2 theAxis)
        {
            //Using the formula for Vector projection. Take the corner being passed in
            //and project it onto the given Axis
            float aNumerator = (theRectangleCorner.X * theAxis.X) + (theRectangleCorner.Y * theAxis.Y);
            float aDenominator = (theAxis.X * theAxis.X) + (theAxis.Y * theAxis.Y);
            float aDivisionResult = aNumerator / aDenominator;
            Vector2 aCornerProjected = new Vector2(aDivisionResult * theAxis.X, aDivisionResult * theAxis.Y);

            //Now that we have our projected Vector, calculate a scalar of that projection
            //that can be used to more easily do comparisons
            float aScalar = (theAxis.X * aCornerProjected.X) + (theAxis.Y * aCornerProjected.Y);
            return (int)aScalar;
        }

    I think the problems I'm having with this come mostly from translating physics concepts into data structures. For example, earlier in the code there is a calculation of the axes to be used, and these are stored as Vector2; they are found by subtracting one point from another, but these points are also stored as Vector2s. So are the axes being stored as slopes in a single Vector2? Next, what exactly does the Vector2 produced by the vector projection code represent? That is, I know it represents the projected vector, but as a Vector2, what does it represent? A point on a line? Finally, what does the scalar at the end actually represent? It's fine to tell me that you're getting a scalar value of the projected vector, but none of the information I can find online seems to tell me about a scalar of a vector as it's used in this context. I don't see angles or magnitudes with these vectors, so I'm a little disoriented when it comes to thinking in terms of physics. If this final scalar calculation is just a dot product, how is that directly applicable to SAT from here on? Is this what I use to calculate maximum/minimum values for overlap? I guess I'm just having trouble figuring out exactly what the dot product represents in this particular context. Clearly I'm not quite up to date on my elementary physics, but any explanations would be greatly appreciated.
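
    Written out as math, with c the corner and a the axis, the function computes an orthogonal projection and then dots it with the axis again, so the returned scalar collapses to a plain dot product:

        \mathrm{proj}_{\mathbf{a}}(\mathbf{c}) = \frac{\mathbf{c}\cdot\mathbf{a}}{\mathbf{a}\cdot\mathbf{a}}\,\mathbf{a},
        \qquad
        s = \mathrm{proj}_{\mathbf{a}}(\mathbf{c})\cdot\mathbf{a}
          = \frac{\mathbf{c}\cdot\mathbf{a}}{\mathbf{a}\cdot\mathbf{a}}\,(\mathbf{a}\cdot\mathbf{a})
          = \mathbf{c}\cdot\mathbf{a}.

    In other words, s is the corner's signed extent along the axis (scaled by the axis length). SAT takes the minimum and maximum of these values over each rectangle's corners to get one interval per axis; if the two intervals fail to overlap on any axis, the rectangles do not collide.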

    Read the article

  • Tiled perlin/value noise texture with (2^n)+1 size

    - by tobi
    Actually what I have in mind is value noise, I think, but what I am going to ask applies to both. It is known that if you want to produce a tiled texture using Perlin/value noise, the size of the texture should be a power of 2 (2^n). Without any modifications to the algorithm, when you use a size of (2^n)+1 the texture can no longer be tiled, so I am wondering whether it is possible (by modifying the algorithm somehow) to generate such a tiling texture with a size of (2^n)+1. The article (from which I have my implementation) is here: http://devmag.org.za/2009/04/25/perlin-noise/ I am aware that I could produce a texture of size 2^n and just duplicate the last column/row to make it (2^n)+1, but I don't want to, because such repetitions are too visible.
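
    A sketch of the usual modification, with made-up helper names: make the lattice itself periodic by wrapping lattice coordinates modulo the desired period before hashing, so the period does not have to be a power of two. The smoothing/fade step is omitted for brevity.

        #include <cmath>

        // Assumed to exist: a deterministic pseudo-random lattice value in [0,1),
        // standing in for whatever the current implementation already uses.
        float Hash2(int x, int y);

        int Wrap(int v, int period) { return ((v % period) + period) % period; }

        float TileableValueNoise(float x, float y, int period)
        {
            const int x0 = static_cast<int>(std::floor(x));
            const int y0 = static_cast<int>(std::floor(y));
            const float tx = x - x0, ty = y - y0;

            // Wrapping the lattice corners makes the pattern repeat every
            // 'period' lattice cells, whatever that period is.
            const float v00 = Hash2(Wrap(x0,     period), Wrap(y0,     period));
            const float v10 = Hash2(Wrap(x0 + 1, period), Wrap(y0,     period));
            const float v01 = Hash2(Wrap(x0,     period), Wrap(y0 + 1, period));
            const float v11 = Hash2(Wrap(x0 + 1, period), Wrap(y0 + 1, period));

            const float a = v00 + (v10 - v00) * tx;   // bilinear interpolation
            const float b = v01 + (v11 - v01) * tx;
            return a + (b - a) * ty;
        }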

    Read the article

  • View space lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-buffers for storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use the sphere-map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value.
    1. First, I want to avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-buffer without the need for a matrix multiplication?
    2. Store the normal in view space. All lighting in my engine is currently in world space, and I want to do the lighting in view space to speed up my lighting pass.
    I want an optimized lighting pass for my deferred engine.
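
    One widely used alternative, stated here only as a sketch: store linear view-space depth in the G-buffer and scale a per-pixel view ray, which replaces the full matrix multiply with a few multiplies. For a symmetric perspective projection with vertical field of view \theta, aspect ratio r, normalized device coordinates (x_{ndc}, y_{ndc}) and stored linear depth z_v:

        \mathbf{p}_{view} = z_v \begin{pmatrix} x_{ndc}\, r \tan(\theta/2) \\ y_{ndc} \tan(\theta/2) \\ 1 \end{pmatrix}

    (the sign of the z component depends on the handedness convention). Since the reconstructed position is already in view space, this also fits the stated goal of lighting in view space, provided the normals are stored or transformed into view space as well.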

    Read the article

  • What are the pros and cons of non-unique display names?

    - by Davy8
    I know of at least one big-title game (StarCraft II) that doesn't require unique display names, so it would seem this can work in at least some circumstances. Under what situations does allowing non-unique display names work well? When does it not work well? Does it come down to whether or not impersonation of someone else is a problem? The reason I believe it works for StarCraft II is that there isn't any kind of in-game trading of virtual goods, and other than "for kicks" there isn't much incentive to impersonate someone else in the game. There are also ladder rankings, so even trying to impersonate a pro is easily detectable unless you're at a similar skill level. What are some other cases where it makes sense to specifically allow or disallow duplicate display names?

    Read the article

  • How can I run and jump at the same time?

    - by Jan
    I'm having some trouble with the game I started: http://testing.fyrastudio.com/lab/tweetOlympics/v0.002/ The thing is that I have an athlete running, and he must jump at the same time; it's a race with obstacles. I have him running (by pressing the letter Q repeatedly). I also have him jumping (with the letter P). But when he runs and jumps at the same time, he seems to be jumping in place instead of moving forward with the jump... any ideas how I can fix this? This is the code I'm using for running and jumping, on a continuous loop:

        //if accelerating and the last time he accelerated was less than X seconds ago, he's running and accelerating
        if (athlete.accelerating && timeCurrent - athlete.last_acceleration > athlete.delay_acceleration) {
            athlete.accelerating = false;
            athlete.last_acceleration = timeCurrent;
            athlete.running = true;
        }

        if (!athlete.accelerating && timeCurrent - athlete.last_acceleration > athlete.delay_acceleration) {
            athlete.decelerating = true;
        }

        if(athlete.decelerating && timeCurrent - athlete.last_deceleration > athlete.delay_deceleration){
            if(athlete.speed >= 1){
                //athlete starts to decelerate
                athlete.last_deceleration = timeCurrent;
                athlete.decelerate();
            }else {
                athlete.running = false;
            }
        }

        if (athlete.running) {
            athlete.position += athlete.speed;
        }

        if (athlete.jumping) {
            if (athlete.jump_height < 1) {
                athlete.jump_height = 1;
            }else {
                if (athlete.jump_height >= athlete.jump_max_height) {
                    athlete.jump_height = athlete.jump_max_height;
                    athlete.jumping = false;
                }else {
                    athlete.jump_height = athlete.jump_height * athlete.jump_speed;
                }
            }
        }

        if (!athlete.jumping) {
            if(athlete.jump_height > 1){
                athlete.jump_height = athlete.jump_height * 0.9;
            }else {
                athlete.jump_height = 1;
            }
        }

        athlete.scaleX = athlete.scaleY = athlete.jump_height;
        athlete.x = athlete.position;

    Thanks!
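
    A sketch of one way to keep the two motions independent, with invented field names (not the question's code): treat the jump as a vertical offset with its own velocity and gravity, and keep applying the horizontal run every frame, so a jump carries the current forward speed instead of replacing it.

        // Horizontal run and vertical jump as two quantities applied every frame.
        struct Athlete {
            float x = 0.0f, y = 0.0f;      // y is the ground-relative jump offset
            float speed = 0.0f;            // horizontal units per frame
            float vy = 0.0f;               // vertical velocity
            bool  onGround = true;
        };

        const float GRAVITY = -0.6f;       // made-up tuning values
        const float JUMP_IMPULSE = 9.0f;

        void Update(Athlete& a, bool jumpPressed)
        {
            if (jumpPressed && a.onGround) { a.vy = JUMP_IMPULSE; a.onGround = false; }

            a.x += a.speed;        // running keeps moving forward even mid-air
            if (!a.onGround) {
                a.vy += GRAVITY;   // integrate gravity
                a.y  += a.vy;
                if (a.y <= 0.0f) { a.y = 0.0f; a.vy = 0.0f; a.onGround = true; }
            }
        }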

    Read the article

  • Collision 2D Quads

    - by Vico Pelaez
    I want to detect collisions between two 2D squares: one square is static and the other moves according to the keyboard arrows. I have implemented some code, but nothing happens when they overlap each other, and what I tried to achieve was to detect an overlap between them. I think I am either not understanding the concept well, or it is not working because one of the squares is moving. I would really appreciate your help. Thank you!

        float x1 = 0.05, Y1 = 0.05;
        float x2 = 0.05, Y2 = 0.05;
        float posX1 = 0.5, posY1 = 0.5;
        float movX2 = 0.0, movY2 = 0.0;

        struct box{
            int width = 0.1;
            int heigth = 0.1;
        };

        void init(){
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glColor3f(1.0, 1.0, 1.0);
        }

        void quad1(){
            glTranslatef(posX1, posY1, 0.0);
            glBegin(GL_POLYGON);
                glColor3f(0.5, 1.0, 0.5);
                glVertex2f(-x1, -Y1);
                glVertex2f(-x1, Y1);
                glVertex2f(x1, Y1);
                glVertex2f(x1, -Y1);
            glEnd();
        }

        void quad2(){
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glPushMatrix();
            glTranslatef(movX2, movY2, 0.0);
            glBegin(GL_POLYGON);
                glColor3f(1.5, 1.0, 0.5);
                glVertex2f(-x2, -Y2);
                glVertex2f(-x2, Y2);
                glVertex2f(x2, Y2);
                glVertex2f(x2, -Y2);
            glEnd();
            glPopMatrix();
        }

        void reset(){
            //Reset position of square???
            movX2 = 0.0;
            movY2 = 0.0;
            collisionB = false;
        }

        bool collision(box A, box B){
            int leftA, leftB;
            int rightA, rightB;
            int topA, topB;
            int bottomA, bottomB;

            //Calculate the sides of box A
            leftA = x1;
            rightA = x1 + A.width;
            topA = Y1;
            bottomA = Y1 + A.heigth;

            //Calculate the sides of box B
            leftB = x2;
            rightB = x2 + B.width;
            topB = Y1;
            bottomB = Y1 + B.heigth;

            if( bottomA <= topB ) return false;
            if( topA >= bottomB ) return false;
            if( rightA <= leftB ) return false;
            if( leftA >= rightB ) return false;
            return true;
        }

        float move_unit = 0.1;

        void keyboardown(int key, int x, int y)
        {
            switch (key){
            case GLUT_KEY_UP:
                movY2 += move_unit;
                break;
            case GLUT_KEY_RIGHT:
                movX2 += move_unit;
                break;
            case GLUT_KEY_LEFT:
                movX2 -= move_unit;
                break;
            case GLUT_KEY_DOWN:
                movY2 -= move_unit;
                break;
            default:
                break;
            }
            glutPostRedisplay();
        }

        void display(){
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            cuad1();
            if (!collision) {
                cuad2();
            }
            else{
                reset();
            }
            glFlush();
        }

        int main(int argc, char** argv){
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(0, 0);
            glutCreateWindow("Collision Practice");
            glutSpecialFunc(keyboardown);
            glutDisplayFunc(display);
            init();
            glutMainLoop();
        }

    Read the article

  • How does a segment-based rendering engine (as in Descent) work?

    - by Calmarius
    As far as I know, Descent was one of the first games that featured a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (these cubes may be deformed as long as they remain convex and the sides remain roughly flat). The cubes are connected by their sides. Connected sides are traversable (doors or grids may be placed on them), while unconnected sides are non-traversable walls. So the game is played inside this complex. Descent was software rendered, and it had to be very fast to be playable on the 10-100 MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, yet they still render reasonably fast. So I think they somehow minimized the number of cubes rendered. How did they choose which cubes to render for a given location? As far as I know they used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
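
    A sketch of the portal-style traversal the question guesses at, simplified (a real engine has to cope with revisiting segments through different portals) and using assumed helper routines: start in the cube containing the camera, draw it, and recurse through each connected side whose shared face still survives clipping against the window you entered through.

        #include <vector>

        // Hypothetical segment graph: each cube knows its neighbour through each
        // of its six sides (-1 = solid wall). 'Rect' is a screen-space window.
        struct Rect {
            int x0, y0, x1, y1;
            bool Empty() const { return x0 >= x1 || y0 >= y1; }
        };

        struct Segment {
            int neighbour[6];
        };

        Rect Intersect(const Rect& a, const Rect& b);        // assumed helpers
        Rect ProjectSideToScreen(int segment, int side);     // bounding rect of the shared face
        void DrawSegment(int segment);

        void RenderVisible(const std::vector<Segment>& segs, int current,
                           const Rect& window, std::vector<bool>& visited)
        {
            if (visited[current]) return;   // cheap guard against cycles
            visited[current] = true;
            DrawSegment(current);
            for (int side = 0; side < 6; ++side) {
                const int next = segs[current].neighbour[side];
                if (next < 0) continue;
                // Clip the portal (the shared face) against the window we came
                // through; if nothing remains, the cube behind it cannot be seen.
                const Rect clipped = Intersect(window, ProjectSideToScreen(current, side));
                if (!clipped.Empty())
                    RenderVisible(segs, next, clipped, visited);
            }
        }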

    Read the article

  • How to ignore collision between two objects

    - by eren_trigger
    I have a player that shoots in the direction it is facing. However, the shot that is created when I click also destroys the player. How would I make the shot ignore collisions with the player? Or better yet, how do I make a shot destroy anything it touches, and then destroy itself, without affecting the player? This is the code that controls collisions:

        function OnTriggerEnter (col : Collider) {
            Destroy(col.gameObject);
        }

    The shot is a trigger, but the player isn't. Not sure if this changes anything in this case. Thanks in advance. EDIT: http://gfycat.com/TediousAridFeline

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries; by virtue of using the same resources as VLC Player it is known to be very capable, and it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:
    - Renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily.
    - Super simple: all that is needed is a way to load, seek and render a frame programmatically. Ideally it would use the system's codecs and not require an assortment of plugins.
    - Permissive license (LGPL or freer).
    - .NET bindings at least; all the better if it is natively managed.
    Can anyone suggest a lightweight .NET library that can take a video file and spit out some frames into a byte[]?

    Read the article

  • Which code module should map physical keys to abstract keys?

    - by Paul Manta
    How do you bridge the gap between the library's low-level event system and your engine's high-level event system? (I'm not necessarily talking only about key events, but also about quit events.) At the top level of my event system, I send out KeyPressedEvents, KeyReleasedEvents and others of this kind. These high-level events only contain the abstract values of the keys (they don't say that Space was pressed, but that the JumpKey was pressed, for example). Whose responsibility should it be to map the "JumpKey" to an actual key on the keyboard?
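
    One common split, sketched with invented types: the input module owns a bindings table and is the only code that ever sees physical key codes; everything above it receives only the abstract action (JumpKey, and so on).

        #include <functional>
        #include <unordered_map>

        // Physical codes come from the windowing library; actions are game-level.
        enum class Action { Jump, Fire, Quit };

        class InputMapper {
        public:
            void Bind(int physicalKey, Action action) { bindings_[physicalKey] = action; }

            // Called from the low-level key event; forwards an abstract event upward.
            void OnKeyPressed(int physicalKey,
                              const std::function<void(Action)>& dispatch) const
            {
                auto it = bindings_.find(physicalKey);
                if (it != bindings_.end())
                    dispatch(it->second);   // e.g. queue a KeyPressedEvent(Action::Jump)
            }

        private:
            std::unordered_map<int, Action> bindings_;
        };

    With this split, rebinding keys is a change to the table, and nothing above the input module ever has to know which physical key triggered the action.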

    Read the article

  • LibGDX - SpriteBatch's .draw() method requiring float[]

    - by just_a_programmer
    Please excuse my lack of knowledge with LibGDX, as I have just started learning it. I am going through some simple tutorials, and in one of them I draw a string onto the screen like so:

        // the following code is in the main file in the core project folder:

        // this is in the create() method:
        private SpriteBatch batch;
        batch = new SpriteBatch();

        // this is in the render() method:
        batch.draw(batch, "Hello world", 200, 200);

    I am getting an error saying:

        The method draw(texture, float[], int, int) in the type SpriteBatch is not applicable for the arguments (SpriteBatch, int, int)

    So, LibGDX wants a float array to draw instead of a string? Thanks in advance.

    Read the article

  • 2D mouse coordinates from 3d object projection

    - by user17753
    Not entirely certain of the nomenclature here. Basically, after placing a model in world coordinates and setting up a 3D camera to look at it, the model has been projected onto the screen in a 2D fashion. What I'd like to do is determine whether the mouse is inside the projected view of the model. Is there a way to "unproject" in the XNA framework? Or what is this process called, so that I can better search for it?

    Read the article
