Search Results

Search found 28230 results on 1130 pages for 'embedded development'.


  • Square game map rendered as sphere with OpenGL

    - by Roflha
    Okay, so I have been trying to find a good way to do this for a while now and so far I have nothing. For a hobby project of mine I have created a finite voxel world (similar to Minecraft), but as I said, mine is finite. When you reach the edge of it, you are sent to the other side. That is all working fine, along with rendering the far side of the map, but I want to be able to render this grid as a sphere. Looking down from above, the world is a square. I basically want to be able to represent a portion of that square as a sphere, as if you were looking at a planet. Right now I am experimenting with taking a circular section of the map and rendering that, but it looks too flat (no curvature around the edges). My question, then, is what would be the best way to add some curvature to the edges of a 2D circle to make it look like a hemisphere. However, I am not overly attached to this implementation, so if somebody has some other idea for representing the square as a planet, I am all ears.
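
    One way to picture the curvature part of the question: treat the circular map section as the top of a hemisphere and drop each point by the difference between the sphere radius and the sphere height at that point. A minimal sketch, with illustrative names and assuming the visible section has radius R centred on the viewer:

        #include <cmath>

        // Lift a flat map point onto a hemisphere of radius R centred on the viewer.
        // (dx, dz) is the point's offset from the view centre on the flat map plane;
        // the returned value is the height drop to apply so the edges curve away.
        float hemisphereDrop(float dx, float dz, float R)
        {
            float d2 = dx * dx + dz * dz;          // squared distance from the centre
            if (d2 >= R * R) return R;             // outside the section: fully dropped
            return R - std::sqrt(R * R - d2);      // 0 at the centre, R at the rim
        }

    Applying this drop to each rendered column (or vertex) makes the rim fall away smoothly, which reads as a planet horizon even though the underlying map stays flat.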

    Read the article

  • Double Buffering in Panda3D (C++)

    - by jsvcycling
    How would I go about using Double Buffering (to create a loading screen) in Panda3D using C++? I've searched Google and found some forums that talk about the concept of swapping buffers, but I haven't seen any that show any type of source code (specifically Panda3D/C++). I'd like to try and stay away from using pure OpenGL code and work it through Panda3D, but if I have no other choice, then I'll have to go with OpenGL coding.

    Read the article

  • C++ most used libraries [on hold]

    - by Basaa
    I'm trying to find out whether or not I want to switch from Java to C++ for my OpenGL game programming. I have now set up a test project in VS 11 Professional with GLUT. I created my window with GLUT, and I can render OpenGL primitives without any problems. Now my question: which library (or libraries) is used most in the indie/semi-professional industry for using OpenGL in C++? With 'using OpenGL' I mean: creating and managing an OpenGL window, actually using the OpenGL API, and handling user input (keyboard/mouse).
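
    A common pairing for exactly those three needs is GLFW (window/context creation plus keyboard and mouse callbacks) with the OpenGL API used directly; SDL and SFML are frequent alternatives. A minimal, hedged GLFW sketch of the window-plus-loop part (no error reporting, illustrative sizes):

        #include <GLFW/glfw3.h>

        int main()
        {
            if (!glfwInit()) return -1;

            GLFWwindow* window = glfwCreateWindow(800, 600, "Test", nullptr, nullptr);
            if (!window) { glfwTerminate(); return -1; }
            glfwMakeContextCurrent(window);

            while (!glfwWindowShouldClose(window))
            {
                glClear(GL_COLOR_BUFFER_BIT);   // regular OpenGL calls go here
                glfwSwapBuffers(window);
                glfwPollEvents();               // keyboard/mouse input is delivered via callbacks
            }
            glfwTerminate();
        }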

    Read the article

  • Camera placement sphere for an always fully visible object

    - by BengtR
    Given an object with the bounds [x, y, z, width, height, depth] and an orthographic projection [left, right, bottom, top, near, far], I want to determine the radius of a sphere on which I can randomly place my camera so that:
    1. The object is fully visible from all positions on this sphere.
    2. The sphere radius is the smallest possible value while still satisfying 1.
    Assume the object is centered around the origin. How can I find this radius? I'm currently using sqrt(width^2 + height^2 + depth^2), but I'm not sure that's the correct value, as it doesn't take the camera into account. Thanks for any advice. I'm sorry for confusing a few things here; my comments below should clarify what I'm actually trying to do.
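
    A sketch of one way to reason about it, under the stated assumptions (object centred at the origin, camera always aimed at the origin, orthographic projection): the object itself fits in a bounding sphere of radius r = sqrt(width^2 + height^2 + depth^2) / 2, i.e. half the diagonal rather than the full diagonal. With an orthographic camera only the near/far planes and the view-volume width/height matter, so the smallest camera distance that keeps the whole object inside the volume from every direction is near + r, provided far - near and the left/right, bottom/top extents are large enough to cover 2r. Illustrative C++:

        #include <cmath>

        // Hedged sketch: smallest camera-sphere radius for an orthographic camera that
        // always looks at the origin. Assumes the object is centred at the origin and
        // that the ortho volume is wide/tall enough to contain the bounding sphere.
        float cameraSphereRadius(float width, float height, float depth,
                                 float nearPlane, float farPlane)
        {
            // Radius of the object's own bounding sphere: half the box diagonal.
            float r = 0.5f * std::sqrt(width * width + height * height + depth * depth);

            // Seen from any direction, the object occupies view-space depths [R - r, R + r],
            // so the near plane is the binding constraint.
            float R = nearPlane + r;

            // Sanity check left to the caller: R + r must not exceed farPlane, and the
            // lateral extents (right - left, top - bottom) must each cover 2r.
            return R;
        }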

    Read the article

  • point to rectangle distance

    - by john smith
    I have a 2D rectangle with an x, y position and its height and width, and a randomly positioned point nearby. Is there a way to check if this point would collide with the rectangle if it were closer than a certain distance? Imagine an invisible radius around that point colliding with said rectangle. I'm having problems with this simply because it is not a square; it would be so much easier that way! Any help? Thanks in advance.
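
    A standard approach (a sketch, not tied to any particular engine) is to clamp the point onto the rectangle and compare the remaining distance with the radius; it works for any axis-aligned rectangle, square or not:

        #include <algorithm>

        // Circle-vs-axis-aligned-rectangle test: clamp the point to the rectangle,
        // then compare the remaining squared distance against the squared radius.
        bool circleIntersectsRect(float px, float py, float radius,
                                  float rx, float ry, float rw, float rh)
        {
            float closestX = std::max(rx, std::min(px, rx + rw));
            float closestY = std::max(ry, std::min(py, ry + rh));
            float dx = px - closestX;
            float dy = py - closestY;
            return dx * dx + dy * dy <= radius * radius;  // comparing squares avoids the sqrt
        }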

    Read the article

  • Failing Screen Resize Method

    - by StrongJoshua
    So I want my game to draw to a specific "optimal" size and then be stretched to fit screens that are a different size. I'm using LibGDX and figured that I could just draw everything to a FrameBuffer and then resize that buffer to the appropriate size when drawing it to the actual display. However, my method does not work; it just results in a black screen with the top-right quarter of the screen white. Intermediary is the FBO, interMatrix is a Matrix4 object, and camera is an OrthographicCamera.

        @Override
        public void render() {
            // update actors
            currentStage.act();

            // render to intermediary buffer
            batch.setProjectionMatrix(interMatrix);
            intermediary.begin();
            batch.begin();
            currentStage.draw();
            batch.flush();
            intermediary.end();

            // resize to actual width and height
            Sprite s = new Sprite(intermediary.getColorBufferTexture());
            s.flip(true, false);
            batch.setProjectionMatrix(camera.combined);
            batch.draw(s.getTexture(), 0, 0, width, height);
            batch.end();
        }

    These are the constructors for the above-mentioned objects (GAME_WIDTH and GAME_HEIGHT are the "optimal" settings; width and height are the actual sizes, which are the same when running on desktop):

        intermediary = new FrameBuffer(Format.RGBA8888, GAME_WIDTH, GAME_HEIGHT, false);
        interMatrix = new Matrix4();
        camera = new OrthographicCamera(width, height);
        interMatrix.setToOrtho2D(0, 0, GAME_WIDTH, GAME_HEIGHT);

    Is there a better way of doing this, or is this a viable option, and if so how do I fix what I have?

    Read the article

  • What is the recommended library for using Lua from C++?

    - by DevilWithin
    I am currently planning how to integrate Lua scripting into my 2D game engine, and I would like to go straight to the most adequate solution for having C++ classes and objects exposed. I've read this (if it helps you help): http://lua-users.org/wiki/BindingCodeToLua If you have a better scripting language to recommend, go for it ;D All help is welcome; I need to pick the best solution before I start implementing. Thanks
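
    For context, the higher-level binders that page discusses (luabind, tolua++, LuaBridge and friends) all sit on top of the plain Lua C API, which can also be used directly. A minimal sketch of registering a C++ function with a Lua state (error handling omitted; the function name is illustrative):

        #include <lua.hpp>

        // A C function callable from Lua: takes (x, y), returns their sum.
        static int l_add(lua_State* L)
        {
            double x = luaL_checknumber(L, 1);
            double y = luaL_checknumber(L, 2);
            lua_pushnumber(L, x + y);
            return 1;  // number of return values left on the stack
        }

        int main()
        {
            lua_State* L = luaL_newstate();
            luaL_openlibs(L);

            lua_register(L, "add", l_add);         // expose the function to scripts
            luaL_dostring(L, "print(add(2, 3))");  // run a one-line test script

            lua_close(L);
        }

    Exposing whole classes is the part the binding libraries automate (metatables, userdata, object lifetime), which is usually the deciding factor when choosing one.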

    Read the article

  • Edge flicker when moving Camera (2D)

    - by Matthias Reisner
    I have an orthographic camera. I have a fixed landscape texture and a texture for a movable object. If the object moves to the right, the camera also moves with the object. I also draw a score text that should have a fixed position on the screen; that score text's position is updated too whenever the camera's position is updated, so that it looks as if it is fixed on the screen. But if I do that, I get some edge flickering on the text object. I'm using SpriteBatch! Is there another approach for implementing an object with a fixed position on the screen?

    Read the article

  • how to organize rendering

    - by Irbis
    I use deferred rendering. During the g-buffer stage, my rendering loop for a Sponza model (obj format) looks like this:

        int i = 0;
        int sum = 0;
        map<string, mtlItem *>::const_iterator itrEnd = mtl.getIteratorEnd();
        for (map<string, mtlItem *>::const_iterator itr = mtl.getIteratorBegin(); itr != itrEnd; ++itr)
        {
            glActiveTexture(GL_TEXTURE0 + 0);
            glBindTexture(GL_TEXTURE_2D, itr->second->map_KdId);
            glDrawElements(GL_TRIANGLES, indicesCount[i], GL_UNSIGNED_INT, (GLvoid*)(sum * 4));
            sum += indicesCount[i];
            ++i;
            glBindTexture(GL_TEXTURE_2D, 0);
        }

    I sorted the faces by material. I only switch the diffuse texture here, but I could bind more material properties in the same place. Is this a good approach? I also wonder how to handle different kinds of materials, for example when one material uses a normal map and another doesn't. Should I have different shaders for them?

    Read the article

  • LWJGL GL_QUADS texture artifact

    - by Dajgoro Labinac
    I managed to get LWJGL working in Java, and I loaded a test image (a TV test card), but I keep getting weird artifacts outside the image. Image link: http://tinypic.com/r/vhv9g/6 Code:

        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2i(10, 10);
            glTexCoord2f(1, 0); glVertex2i(500, 10);
            glTexCoord2f(1, 1); glVertex2i(500, 500);
            glTexCoord2f(0, 1); glVertex2i(10, 500);
        glEnd();

    What could be the cause?

    Read the article

  • Binding BoundingSpheres to a world matrix in XNA

    - by NDraskovic
    I made a program that loads the locations of items in the scene from a file, like this:

        using (StreamReader sr = new StreamReader(OpenFileDialog1.FileName))
        {
            String line;
            while ((line = sr.ReadLine()) != null)
            {
                row = line.Split(',');
                model = row[0];
                x = row[1];
                y = row[2];
                z = row[3];
                elements.Add(Convert.ToInt32(model));
                data.Add(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)));
                spheres.Add(new BoundingSphere(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)), 1f));
            }
        }

    I also have a list of BoundingSpheres (called spheres) to which a new bounding sphere is added for each line from the file. In this program I have one item (a simple box) that moves (it has its own world matrix, called matrixBox), while the other items are static the entire time (there is a world matrix that holds those elements, called simply world). The problem is that when I move the box, the bounding spheres move with it. So how can I bind all the BoundingSpheres (except the one corresponding to the box) to the static world matrix so that they stay in place when the box moves?

    Read the article

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap generation techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like this, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated. Thanks, Henry.
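
    One commonly suggested route for cube-to-sphere planets is to skip 2D tiling entirely and sample 3D noise at each vertex's position on the unit sphere, so the cube-face seams never appear. A sketch under the assumption that a 3D noise source (such as a Perlin module from libnoise) is available; noise3 below is only a stand-in so the example is self-contained:

        #include <cmath>

        // Placeholder for a real 3D noise function (e.g. Perlin/simplex from libnoise).
        // This cheap sine-based stand-in exists only to keep the sketch compilable.
        float noise3(float x, float y, float z)
        {
            return std::sin(x * 12.9898f + y * 78.233f + z * 37.719f);
        }

        // Sample 3D noise at the vertex's direction on the unit sphere and displace
        // along that direction; the noise field is continuous in 3D, so there are no seams.
        float planetHeight(float vx, float vy, float vz, float baseRadius, float amplitude)
        {
            float len = std::sqrt(vx * vx + vy * vy + vz * vz);
            float nx = vx / len, ny = vy / len, nz = vz / len;    // point on the unit sphere
            return baseRadius + amplitude * noise3(nx, ny, nz);   // displacement along the normal
        }

    Several octaves of the same 3D noise summed at different frequencies give the usual fractal detail, and the result can also be baked into per-face textures if needed.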

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game. It will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity-related methods.

    EntityEngine.cs:

        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in Entities)
            {
                e.Value.Update();
            }
            gravity.Update();
        }

    (This is the only relevant piece of code from EntityEngine, and it is self-explanatory. When an instance of Gravity is made in entityEngine, it passes itself (this) into it, so that gravity has access to entityEngine.Entities, a dictionary of all planet objects.)

    Gravity.cs:

        namespace ExplorationEngine
        {
            public class Gravity
            {
                private EntityEngine entityEngine;
                private Vector2 Force;
                private Vector2 VecForce;
                private float distance;
                private float mult;

                public Gravity(EntityEngine e)
                {
                    entityEngine = e;
                }

                public void Update()
                {
                    // First loop
                    foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                    {
                        // Reset the force vector
                        Force = new Vector2();

                        // Second loop
                        foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                        {
                            // Make sure the second value is not the current value from the first loop
                            if (e2.Value != e.Value)
                            {
                                // Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2),
                                // using Vector2.Distance() and then squaring it is pointless and inefficient,
                                // because Distance uses a sqrt and squaring the result simply cancels that sqrt.
                                distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                                // This makes sure that two planets do not attract each other if they are touching.
                                // It will be unnecessary once I add collision; for now it just keeps the planets
                                // from being glitchy. Performance is not significantly improved by removing this IF.
                                if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                                {
                                    // Calculate the magnitude of Fg (I'm using my own gravitational constant (G) for
                                    // the sake of time; I know it's 1 at the moment, but I've been changing it).
                                    mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                    // Calculate the direction of the force: simply subtracting the positions and
                                    // normalizing works. This keeps diagonal vectors from having a larger value and
                                    // basically makes VecForce a direction.
                                    VecForce = e2.Value.Position - e.Value.Position;
                                    VecForce.Normalize();

                                    // Add the vector for each planet in the second loop to a force variable.
                                    // (I have tried Force += VecForce * mult and have not noticed much of an increase in speed.)
                                    Force = Vector2.Add(Force, VecForce * mult);
                                }
                            }
                        }

                        // Add that force to the first loop's planet's position
                        // (later on I'll add it to acceleration instead, to account for inertia).
                        e.Value.Position += Force;
                    }
                }
            }
        }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power, though. Here is a LINK (Google Drive; go to File, Download, keep the .exe with the Content folder; you will need the XNA Framework 4.0 Redistributable if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. The mouse moves the camera, and the scroll wheel zooms in and out. Watch the FPS and planet count to see what I mean about performance issues past 70 planets. (All 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still having 300 FPS. The issue arises when 70+ are moving around.) After 70 planets are made, performance tanks exponentially. With fewer than 70 planets I get 330 FPS (I have it capped at 300). At 90 planets the FPS is about 2; more than that and it sits at 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was. I have no systems in place to make this happen; it just does. I considered multithreading, but that previous question taught me a thing or two, and I see now that it's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think that should be necessary, and I do not know how to do it; it is not a simple concept and I want to avoid it unless someone knows a really noob-friendly, simple way to do it that will work for an n-body gravity calculation. (I have an NVidia GTX 660.) Lastly, I've considered using a quadtree-type system (a Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, but the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out. So my question is this: how can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets at a constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
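
    For what it's worth, one constant-factor improvement that doesn't require Barnes-Hut: because the force on A from B is equal and opposite to the force on B from A, each pair only needs to be computed once, halving the inner-loop work. A hedged C++ sketch of the idea (the Body struct is illustrative, not the game's Entity class):

        #include <cmath>
        #include <vector>

        struct Body { float x, y, mass, fx, fy; };   // illustrative; not the game's Entity class

        // Accumulate gravity once per pair: the force on body i from body j is the
        // negative of the force on j from i, so the inner loop starts at i + 1.
        void accumulateGravity(std::vector<Body>& bodies, float G)
        {
            for (std::size_t i = 0; i < bodies.size(); ++i)
            {
                for (std::size_t j = i + 1; j < bodies.size(); ++j)
                {
                    float dx = bodies[j].x - bodies[i].x;
                    float dy = bodies[j].y - bodies[i].y;
                    float distSq = dx * dx + dy * dy + 1e-6f;     // small epsilon avoids division by zero
                    float invDist = 1.0f / std::sqrt(distSq);
                    float f = G * bodies[i].mass * bodies[j].mass / distSq;

                    float fx = f * dx * invDist;                  // normalised direction * magnitude
                    float fy = f * dy * invDist;
                    bodies[i].fx += fx;  bodies[i].fy += fy;
                    bodies[j].fx -= fx;  bodies[j].fy -= fy;      // equal and opposite
                }
            }
        }

    This doesn't change the O(N^2) growth; a Barnes-Hut tree (grouping distant bodies into single pseudo-bodies) is what brings the cost down to roughly O(N log N).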

    Read the article

  • starting rails in test environment

    - by Brian D.
    I'm trying to load up Rails in the test environment using a Ruby script. I've tried googling a bit and found this recommendation:

        require "../../config/environment"
        ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'test'

    This seems to load up my environment alright, but my development database is still being used. Am I doing something wrong? Here is my database.yml file, though I don't think it is the issue:

        development:
          adapter: mysql
          encoding: utf8
          reconnect: false
          database: BrianSite_development
          pool: 5
          username: root
          password: dev
          host: localhost

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: mysql
          encoding: utf8
          reconnect: false
          database: BrianSite_test
          pool: 5
          username: root
          password: dev
          host: localhost

        production:
          adapter: mysql
          encoding: utf8
          reconnect: false
          database: BrianSite_production
          pool: 5
          username: root
          password: dev
          host: localhost

    I can't use ruby script/server -e test because I'm trying to run Ruby code after I load Rails. More specifically, what I'm trying to do is: run a .sql database script, load up Rails, and then run automated tests. Everything seems to be working fine, but for whatever reason Rails seems to be loading in the development environment instead of the test environment. Here is a shortened version of the code I am trying to run:

        system "execute mysql script here"
        require "../../config/environment"
        ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'test'

        describe Blog do
          it "should be initialized successfully" do
            blog = Blog.new
          end
        end

    I don't need to start a server; I just need to load my Rails code base (models, controllers, etc.) so I can run tests against my code. Thanks for any help.

    Read the article

  • Searching for fast skeletal animation algorithm

    - by igf
    Is it theoretically possible to dynamically animate a scene with 100-150 character meshes of around 400 polys each on high-end GL ES 2.0 mobile devices, or should I definitely use prerendered keyframe animation? The scene has only one light source and precalculated shadow maps. The view is from the top, like in Warcraft 3. There are no other meshes or objects. 2D collision detection between objects is calculated via spatial hashing. It can be any algorithm besides ragdoll, but it must provide fast and simple skeletal animation for a frame containing 100+ low-poly meshes. Any ideas?
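
    As a rough reference for the per-vertex cost involved, here is a hedged sketch of linear blend skinning with a four-bone palette (the same weighted sum GL ES 2.0 implementations typically evaluate in the vertex shader from a uniform matrix array; the types and the column-major layout are illustrative assumptions):

        struct Vec3 { float x, y, z; };
        struct Mat4 { float m[16]; };   // column-major 4x4 matrix; illustrative placeholder

        // Transform a point by a 4x4 matrix (w assumed to be 1).
        static Vec3 transformPoint(const Mat4& M, const Vec3& v)
        {
            return { M.m[0] * v.x + M.m[4] * v.y + M.m[8]  * v.z + M.m[12],
                     M.m[1] * v.x + M.m[5] * v.y + M.m[9]  * v.z + M.m[13],
                     M.m[2] * v.x + M.m[6] * v.y + M.m[10] * v.z + M.m[14] };
        }

        // Linear blend skinning for one vertex: weighted sum of the bind-pose position
        // transformed by each influencing bone's palette matrix (world * inverseBind).
        Vec3 skinVertex(const Vec3& bindPos, const int bones[4], const float weights[4],
                        const Mat4 palette[])
        {
            Vec3 result{0, 0, 0};
            for (int i = 0; i < 4; ++i)
            {
                Vec3 p = transformPoint(palette[bones[i]], bindPos);
                result.x += weights[i] * p.x;
                result.y += weights[i] * p.y;
                result.z += weights[i] * p.z;
            }
            return result;
        }

    At roughly a few hundred vertices per 400-poly character and 100-150 characters, that is on the order of tens of thousands of skinned vertices per frame, which is the number to budget against.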

    Read the article

  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a cross-platform game using Irrlicht. The game has many (around 200-500) dynamic objects flying around during play. Most of these objects are static meshes, built from 20-50 unique meshes. We created separate scene nodes for each object, each referring to its mesh instance. But the output was very much unexpected.

    Menu screen (150 tris; just to show the full-speed rendering performance of the two test computers):
    a) NVidia Quadro FX 3800 with 1 GB: 1600 FPS in DirectX and 2600 FPS in OpenGL
    b) Mac Mini with GeForce 9400M 256 MB: 260 FPS in OpenGL

    Inside the game, in a test level (160 dynamic objects, around 10K tris in total):
    a) NVidia Quadro FX 3800 with 1 GB: 45 FPS in DirectX and 50 FPS in OpenGL
    b) Mac Mini with GeForce 9400M 256 MB: 45 FPS in OpenGL

    Obviously we don't have the option of batching meshes for rendering, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information, we use one 2048 PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we concluded it is the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?

    Read the article

  • Why are only some of my objects being rendered?

    - by BleedObsidian
    Every time I create a new asteroid, the previous one is no longer rendered. I did some debugging and printed out the size of the ArrayList 'small', and when a new asteroid is created the size doesn't go down, so the object is still there; it's just not being rendered. Why?

    StatePlay:

        package me.bleedobsidian.astroidjump;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;

        public class StatePlay extends BasicGameState {
            int stateID = 10;
            Player player;
            Asteroids asteroids;

            StatePlay(int stateID) {
                this.stateID = stateID;
            }

            @Override
            public int getID() {
                return stateID;
            }

            @Override
            public void init(GameContainer gc, StateBasedGame sbg) throws SlickException {
                ResManager.loadImages();
                player = new Player();
                asteroids = new Asteroids();
            }

            @Override
            public void render(GameContainer gc, StateBasedGame sbg, Graphics g) throws SlickException {
                g.setAntiAlias(true);
                player.render(g);
                asteroids.render(g);
                g.drawString("Asteroids: " + Asteroids.small.size(), 10, 25);
            }

            @Override
            public void update(GameContainer gc, StateBasedGame sbg, int delta) throws SlickException {
                player.update(gc, delta);
                asteroids.update(delta);
            }
        }

    Asteroids:

        package me.bleedobsidian.astroidjump;

        import java.util.ArrayList;
        import java.util.Timer;

        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import org.newdawn.slick.SpriteSheet;

        public class Asteroids {
            public static ArrayList<Asteroid_Small> small = new ArrayList<Asteroid_Small>();

            static SpriteSheet small_sprites = new SpriteSheet(ResManager.asteroids_small_ss, 32, 32);
            static Image small_1 = small_sprites.getSubImage(0, 0);
            static Image small_2 = small_sprites.getSubImage(1, 0);
            static Image small_3 = small_sprites.getSubImage(2, 0);
            static Image small_4 = small_sprites.getSubImage(3, 0);

            static boolean asteroids = true;
            static int diff = 0;

            Asteroids() {
                Task_Asteroids TaskA = new Task_Asteroids();
                Timer timer = new Timer("Asteroids");
                if (diff == 0) {
                    timer.schedule(TaskA, 0, 4000);
                } else if (diff == 1) {
                    timer.schedule(TaskA, 0, 3000);
                }
            }

            public static Image chooseSmallImage(int i) {
                if (i == 0) {
                    return small_1;
                } else if (i == 1) {
                    return small_2;
                } else if (i == 2) {
                    return small_3;
                } else if (i == 3) {
                    return small_4;
                } else {
                    return small_1;
                }
            }

            public static void level_manager(float x) {
                if (x < 1000) {
                    diff = 0;
                } else if (x < 2000) {
                    diff = 1;
                } else if (x < 3000) {
                    diff = 2;
                } else if (x < 5000) {
                    diff = 3;
                } else if (x < 10000) {
                    diff = 4;
                } else {
                    diff = 5;
                }
            }

            public void update(int delta) {
                for (int s = 0; s < small.size(); s++) {
                    Asteroid_Small as = small.get(s);
                    as.update(delta);
                }
            }

            public void render(Graphics g) {
                for (int s = 0; s < small.size(); s++) {
                    Asteroid_Small as = small.get(s);
                    as.render(g);
                }
            }

            public static void setAsteroids(boolean tf) {
                asteroids = tf;
            }
        }

    Asteroid_Small:

        package me.bleedobsidian.astroidjump;

        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;

        public class Asteroid_Small {
            private static Image me;
            private static float x = 0;
            private static float y = 0;
            private static float speed = 0;
            private static float rotation = 0;
            private static float rotation_speed = 0;

            Asteroid_Small(Image i, float x, float y, float rs, float sp) {
                me = i;
                Asteroid_Small.x = x;
                Asteroid_Small.y = y;
                Asteroid_Small.rotation_speed = rs;
                Asteroid_Small.speed = sp;
            }

            public void update(int delta) {
                x -= speed * delta;
                rotation += rotation_speed * delta;
                me.setRotation(rotation);
            }

            public void render(Graphics g) {
                g.drawImage(me, x, y);
            }
        }

    Task_Asteroid:

        package me.bleedobsidian.astroidjump;

        import java.util.TimerTask;

        public class Task_Asteroids extends TimerTask {
            public void run() {
                if (Asteroids.diff == 0) {
                    int randImage = (int) (Math.random() * 4);
                    int randHeight = (int) (Math.random() * 480);
                    Asteroids.small.add(new Asteroid_Small(Asteroids.chooseSmallImage(randImage),
                            Player.x + 960, randHeight, 0.05f, 0.04f));
                }
            }
        }

    Read the article

  • OpenGL depth texture wrong

    - by CoffeeandCode
    I have been writing a game engine for a while now and have decided to reconstruct my positions from depth... but how I read the depth seems to be wrong :/ What is wrong in my rendering?

    How I init my depth texture in the FBO:

        gl::BindTexture(gl::TEXTURE_2D, this->textures[0]); // Depth
        gl::TexImage2D(
            gl::TEXTURE_2D, 0, gl::DEPTH32F_STENCIL8,
            width, height, 0,
            gl::DEPTH_STENCIL, gl::FLOAT_32_UNSIGNED_INT_24_8_REV, nullptr
        );
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_S, gl::CLAMP_TO_EDGE);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_T, gl::CLAMP_TO_EDGE);
        gl::FramebufferTexture2D(
            gl::FRAMEBUFFER, gl::DEPTH_STENCIL_ATTACHMENT,
            gl::TEXTURE_2D, this->textures[0], 0
        );

    Linear depth reading in my shaders.

    Vertex:

        #version 150

        layout(location = 0) in vec3 position;
        layout(location = 1) in vec2 uv;

        out vec2 uv_f;

        void main() {
            uv_f = uv;
            gl_Position = vec4(position, 1.0);
        }

    Fragment (where the issue probably is):

        #version 150

        uniform sampler2D depth_texture;

        in vec2 uv_f;
        out vec4 Screen;

        void main() {
            float n = 0.00001;
            float f = 100.0;
            float z = texture(depth_texture, uv_f).x;
            float linear_depth = (n * z) / (f - z * (f - n));
            Screen = vec4(linear_depth); // It ISN'T because I don't separate alpha
        }

    When rendered, the result is wrong. So, gamedev.stackexchange, what's wrong with my rendering/GLSL?
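
    For comparison, this is the usual linearisation of a [0,1] depth-buffer sample for a standard perspective projection, written as a hedged C++ helper; it says nothing about where the bug is, but note as an aside that a near plane of 0.00001 pushes almost all depth precision into a very thin slice in front of the camera:

        // Convert a [0,1] depth-buffer sample back to eye-space distance, assuming a
        // standard perspective projection with the given near/far planes.
        float linearizeDepth(float d, float nearPlane, float farPlane)
        {
            float zNdc = 2.0f * d - 1.0f;                                  // back to NDC [-1,1]
            return (2.0f * nearPlane * farPlane) /
                   (farPlane + nearPlane - zNdc * (farPlane - nearPlane)); // eye-space units
        }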

    Read the article

  • Whole continent simulation [on hold]

    - by user2309021
    Let's suppose I am planning to create a simulation of an entire continent at some point in the past (let's say around 0 A.D.). Is it feasible to spawn a hundred million actors that interact with each other and their environment, having them reproduce, extract resources, etc.? The fact is that I actually want to create a simulation that allows me to zoom in from a view of the entire continent down to a single village, and interact with it. (Think of being able to keep zooming in on the campaign map of any Total War game, with the transition to the battle map being seamless rather than a change of "game mode".) By the way, I have never made a game in my entire life (I have programmed normal desktop applications, though), so I am really having trouble wrapping my head around how to implement such a thing. Even while thinking about how to implement a simple population simulator without a graphical interface, I think that the O(n) cost of traversing an array and telling all people to get one year older each time the program ticks is kind of stupid. Any kind of help would be greatly appreciated :) EDIT: After being put on hold, I shall specify a question: how would you implement a simulation of all basic human dynamics (reproduction, resource consumption) for an entire continent (with millions of people)?

    Read the article

  • How can I get textures on edge of walls like in Super Metroid and Aquaria?

    - by meds
    Games like Super Metroid and Aquaria present the terrain with the outer-facing parts showing rocks and similar detail, while deeper behind them (i.e. underground) there is different detail or just black. I would like to do something similar using polygons. Terrain in my current level is created as a set of overlapping square boxes. I'm not sure if this rendering method will work with such a system for creating terrain, but if anyone has ideas I'd love to hear them. Otherwise I'd like to know how I should rewrite the terrain rendering system so that it actually works for drawing terrain in this manner...

    Read the article

  • What's the best way to add some particle or laser effects to an already animated character?

    - by Scott
    I just purchased some rigged and animated robot characters from 3drt for a game I'm making in Unity. I would like to be able to add some weapon effects to the characters. For example, I would like the robots to be able to shoot lasers out of their hands at enemies. I have no idea where to even start with this task, as I'm more of a programmer than a graphics guy. Can some experienced developers/designers please point me in a good direction? Thanks. Note: as of right now I have Maya and Blender installed on my computer.

    Read the article

  • How does a single programmer make a game?

    - by Mike
    I have always been a software developer, but lately I've been wanting to get into games. The only thing stopping me is the fact that I'm a programmer, not an artist. I've made some simple stuff (Tetris, 2D chess, things like that), but I can't do much art, and that's really what holds me back. Now the problem is, I've yet to go to college, so most commercial projects wouldn't accept me even to work for free and learn a bit, especially with my lack of experience in games. And any indie projects I've looked into really have an issue with responding to interested people, or with actually completing (or starting, really; most don't get past the ideas on paper) the project they want to do. I've looked around locally for artists - anyone who can do modeling, textures or animation, or even anyone able to make some more advanced 2D assets for something like a side-scrolling RPG - but I haven't been able to find anyone. So how do you do it? Do I really just have to wait until I can go to college to see if I like working with games, or is there some way I can get art (for free - anything I do is just going to be for fun, so I don't want to have to sink money into it) and just start messing around on my own? Or am I just having bad luck and not looking in the right places for other people interested in having me help? I'm not looking for anything in particular, just something to fill some time with and see if I like making games. If not, well, I'll go back to my software projects. I just have one more year of high school, and I'd like to try a few different areas before I go to college.

    Read the article

  • Material usage, one per model or per object?

    - by WSkid
    Is it better (for memory, developer time, and space) to use a single model that is unwrapped and uses a single material, or to break a model down into appropriate pieces, each with its own smaller texture/material? Or does what is acceptable depend on the target platform, i.e. PC vs tablet? An example: say you have a typical house with a tiled roof. Option one: model it, make sure everything is attached, and unwrap the walls/roof so that in your UV template the walls and roof are in one texture file, side by side in, say, a 512x512 file. Option two: model the roof and walls as separate objects and unwrap them individually, giving two UV templates; you could then have a 256x256 file for each one.

    Read the article

  • How to set sprite source coordinates?

    - by ChaosDev
    I am creating my own sprite drawer with DX11 in C++. It works fine, but I don't know how to apply a source rectangle to the texture coordinates of the rendering surface (for animation sprite sheets).

        // source = (0, 0, 32, 64); // RECT
        D3DXVECTOR2 t0 = D3DXVECTOR2(1.0f, 0.0f);
        D3DXVECTOR2 t1 = D3DXVECTOR2(1.0f, 1.0f);
        D3DXVECTOR2 t2 = D3DXVECTOR2(0.0f, 1.0f);
        D3DXVECTOR2 t3 = D3DXVECTOR2(0.0f, 1.0f);
        D3DXVECTOR2 t4 = D3DXVECTOR2(0.0f, 0.0f);
        D3DXVECTOR2 t5 = D3DXVECTOR2(1.0f, 0.0f);

        VertexPositionColorTexture vertices[] =
        {
            { D3DXVECTOR3(dest.left + dest.right, dest.top, z), D3DXVECTOR4(1, 1, 1, 1), t0 },
            { D3DXVECTOR3(dest.left + dest.right, dest.top + dest.bottom, z), D3DXVECTOR4(1, 1, 1, 1), t1 },
            { D3DXVECTOR3(dest.left, dest.top + dest.bottom, z), D3DXVECTOR4(1, 1, 1, 1), t2 },
            { D3DXVECTOR3(dest.left, dest.top + dest.bottom, z), D3DXVECTOR4(1, 1, 1, 1), t3 },
            { D3DXVECTOR3(dest.left, dest.top, z), D3DXVECTOR4(1, 1, 1, 1), t4 },
            { D3DXVECTOR3(dest.left + dest.right, dest.top, z), D3DXVECTOR4(1, 1, 1, 1), t5 },
        };
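
    A sketch of the usual mapping (assuming the source RECT stores absolute pixel coordinates and the texture dimensions are known): divide the source rectangle's edges by the texture size and use those values in place of the hard-coded 0/1 texture coordinates above.

        // Convert a pixel-space source rectangle (absolute left/top/right/bottom, as in
        // a RECT) into normalized texture coordinates for the given texture size.
        struct UvRect { float u0, v0, u1, v1; };

        UvRect sourceToUv(float left, float top, float right, float bottom,
                          float textureWidth, float textureHeight)
        {
            UvRect r;
            r.u0 = left   / textureWidth;   // e.g. 0  / texture width
            r.v0 = top    / textureHeight;  // e.g. 0  / texture height
            r.u1 = right  / textureWidth;   // e.g. 32 / texture width
            r.v1 = bottom / textureHeight;  // e.g. 64 / texture height
            return r;   // use (u0, v0)..(u1, v1) for t0..t5 instead of 0 and 1
        }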

    Read the article

  • Level Representation in a 2D Game

    - by meszar.imola
    I would like to create a 2D game where a character moves around a stage/level. My stage would be static, constructed from little cubes, similar to the well-known Mario games: some of the elements represent ground the character can step on, and where an element is missing, the character should fall. My problem is how to represent this programmatically. My first thought was to represent the stage with a vector containing boolean elements, depending on the state of each element of the stage (whether it's missing or not). But this means that whenever my character's x or y position changes, I have to check whether there is a stage element underneath (and if not, simulate the character falling). I don't think this is the best practice; it's not a beautiful solution. Can you give me some advice on how to represent the stage?
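
    For reference, a minimal sketch of the boolean-grid idea described above, with a bounds check and a fall test added (the names and the tile-size parameter are illustrative):

        #include <vector>

        // Hedged sketch of a tile-based stage: a 2D grid of booleans, true = solid block.
        class Stage
        {
        public:
            Stage(int width, int height) : w(width), h(height), solid(width * height, false) {}

            void setSolid(int tx, int ty, bool value) { solid[ty * w + tx] = value; }

            bool isSolid(int tx, int ty) const
            {
                if (tx < 0 || ty < 0 || tx >= w || ty >= h) return false;  // outside = empty
                return solid[ty * w + tx];
            }

            // The character falls when the tile directly under its feet is empty.
            bool shouldFall(float charX, float charFeetY, float tileSize) const
            {
                int tx = static_cast<int>(charX / tileSize);
                int ty = static_cast<int>(charFeetY / tileSize) + 1;       // tile below the feet
                return !isSolid(tx, ty);
            }

        private:
            int w, h;
            std::vector<bool> solid;
        };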

    Read the article
