Search Results



  • Open source clone for Starcraft

    - by sinekonata
    Two questions about a SC: Brood War clone: is there one yet, and how likely is legal pursuit?

    Since almost all games I usually play now have a FOSS alternative, ranging from alpha-stage to well polished, I was wondering why I can't find one for SC, one of the biggest titans of the gaming community. So my first question is: is there a game that was made with the intention of emulating SC? Is it that I didn't look hard enough? Could it really be that no one has tackled what seems like a small effort compared to the creation of a game engine like Spring, or games like Rigs of Rods or Minetest? And since SC is not being maintained at all, shouldn't the incentive to see a bug-free, moddable, balanced version be huge? What am I not getting here?

    In the event that there is none, is it a legal problem? Could it be that people expect Blizzard to release the sources themselves? Or that developers don't see the point in having SC mechanics without the copyrighted lore and aesthetics? And the trickier question: if I were to make SC an open-source game, a total clone of it for the purposes of maintenance, moddability, etc., would Blizzard really sue a team of developer fans who are just doing them a favour, knowing they don't lose any money from Korean broadcasts? Or would they sue anyway, to avoid setting a precedent?

    Thanks for reading all that; I hope I'm not the only one who thinks it's weird that no one talks about it. See you.


  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate for each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code:

        // vertices is a Vertex[]
        // indices is a ushort[]
        // VertexDefs stores the vertex size (sizeof(float) * 5)

        // vertex data
        numVertices = vertices.Length;
        DataStream data = new DataStream(VertexDefs.size * numVertices, true, true);
        data.WriteRange<Vertex>(vertices);
        data.Position = 0;

        // vertex buffer parameters
        BufferDescription vbDesc = new BufferDescription()
        {
            BindFlags = BindFlags.VertexBuffer,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None,
            SizeInBytes = VertexDefs.size * numVertices,
            StructureByteStride = VertexDefs.size,
            Usage = ResourceUsage.Default
        };

        // create vertex buffer
        vertexBuffer = new Buffer(Graphics.device, data, vbDesc);
        vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0);
        data.Dispose();

        // index data
        numIndices = indices.Length;
        data = new DataStream(sizeof(ushort) * numIndices, true, true);
        data.WriteRange<ushort>(indices);
        data.Position = 0;

        // index buffer parameters
        BufferDescription ibDesc = new BufferDescription()
        {
            BindFlags = BindFlags.IndexBuffer,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None,
            SizeInBytes = sizeof(ushort) * numIndices,
            StructureByteStride = sizeof(ushort),
            Usage = ResourceUsage.Default
        };

        // create index buffer
        indexBuffer = new Buffer(Graphics.device, data, ibDesc);
        data.Dispose();

        Engine.Log(MessageType.Success,
            string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices));

    And my drawing code:

        // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data
        // e is of type ShaderEffect

        // get the technique
        ShaderTechnique t;
        if(!e.techniques.TryGetValue(techniqueName, out t))
            return;

        // effect variables
        e.SetMatrix("worldView", worldView);
        e.SetMatrix("projection", projection);
        e.SetResource("diffuseMap", texture);
        e.SetSampler("textureSampler", sampler);

        // set per-mesh/technique settings
        Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding);
        Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0);
        Graphics.context.PixelShader.SetSampler(sampler, 0);

        // render for each pass
        foreach(ShaderPass p in t.passes)
        {
            Graphics.context.InputAssembler.InputLayout = p.layout;
            p.pass.Apply(Graphics.context);
            Graphics.context.DrawIndexed(numIndices, 0, 0);
        }

    How can I do this?
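    (Editor's aside, not from the post: Direct3D 10/11 index buffers address whole vertices, not individual attributes, so positions and UVs cannot be indexed separately. The common workaround is to duplicate vertices so each triangle corner carries its own UV. A minimal sketch of that expansion, with hypothetical Face/Vertex types and an assumed Vertex(position, uv) constructor:)

        // Hypothetical pre-processing pass: expand indexed data so every
        // triangle gets three discrete texture coordinates. Positions are
        // shared via the face's position indices; UVs come per corner.
        struct Face { public ushort A, B, C; public Vector2 UvA, UvB, UvC; }

        static void Expand(Vector3[] positions, Face[] faces,
                           out Vertex[] vertices, out ushort[] indices)
        {
            vertices = new Vertex[faces.Length * 3];
            indices = new ushort[faces.Length * 3];
            for (int i = 0; i < faces.Length; i++)
            {
                vertices[i * 3 + 0] = new Vertex(positions[faces[i].A], faces[i].UvA);
                vertices[i * 3 + 1] = new Vertex(positions[faces[i].B], faces[i].UvB);
                vertices[i * 3 + 2] = new Vertex(positions[faces[i].C], faces[i].UvC);
                indices[i * 3 + 0] = (ushort)(i * 3 + 0);   // indices become trivial
                indices[i * 3 + 1] = (ushort)(i * 3 + 1);   // after duplication
                indices[i * 3 + 2] = (ushort)(i * 3 + 2);
            }
        }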


  • PIX for Visual Studio Express 2012 (Desktop)

    - by JohnB
    (Originally asked on Stack Overflow.) Using Visual C++ 2010 Express for Direct3D, you have to download the DirectX SDK, which includes a tool called PIX for debugging shaders, looking at 3D resources, etc. With Visual Studio 2012 Express, the DirectX SDK is included in the Windows SDK that comes with it, but this does not seem to include the winpix.exe tool. Is this very useful tool still available? I guess I can still use the one from the previous SDK, but it seems wrong to install the entire SDK just for that tool. Is there a version for VS2012 Express that I'm missing?


  • Simple heart container script for 2D game (Unity)?

    - by N1ghtshade3
    I'm attempting to create a simple mobile game (C#) that involves a simple three-heart life system. After searching for hours online, many of the solutions use OnGUI (which is apparently bad for performance) and the rest are too complicated for me to understand and add to my code. The other solutions involve using a single texture and just hiding part of it when damage is taken. In my game, however, the player should be able to go over three hearts (for example, an extra heart every 100 points). Sebastian Lague's Zelda-Style Health is what I'm looking for, but even though it's a tutorial there is way too much going on that I don't need or can't customize to fit mine.

    What I have so far is a script called HealthScript.cs which contains a variable lives. I have another script, PlayerPhysics.cs, which calls HealthScript and subtracts a life when an enemy is hit. The part I don't get is actually drawing the hearts. I think I understand what needs to happen; I'm just not experienced enough with Unity to know how:

    - The Start function should draw three (or whatever lives is set to) hearts in the top right corner. Since the game should be resolution-independent to accommodate the various sizes of Android devices, I'd rather use scaling than PixelInset.
    - When the player hits an enemy as detected by PlayerPhysics.cs, it should subtract from lives. I think I have this working using this.GetComponent<HealthScript>().lives -= 1, but I'm not sure if it actually works. This should trigger a redraw of the hearts so that there are now two hearts.
    - The same principle would apply for adding hearts when a score is reached, except when lives > maxHeartsPerRow, the new hearts should be drawn below the old ones.

    I realise I don't have much code to show, but believe me, I've tried for quite some time to figure this out and have little to show for it. Any help at all would be welcome; it seems like it shouldn't take much code to put an image on the screen for each life, but I haven't found anything yet. Thanks!
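    (An illustrative sketch, not a confirmed answer: the names below are hypothetical and the heart texture is assumed to be assigned in the Inspector. Despite OnGUI's reputation, a handful of GUI.DrawTexture calls per frame is usually negligible, and sizing the rects from Screen.height keeps the layout resolution-independent and redraws automatically whenever lives changes.)

        using UnityEngine;

        // Hypothetical heart drawer: `lives` hearts top-right, wrapping to a
        // new row after maxHeartsPerRow.
        public class HealthScript : MonoBehaviour
        {
            public Texture2D heartTexture;   // assigned in the Inspector
            public int lives = 3;
            public int maxHeartsPerRow = 10;

            void OnGUI()
            {
                float size = Screen.height * 0.06f;   // scales with resolution
                for (int i = 0; i < lives; i++)
                {
                    int row = i / maxHeartsPerRow;
                    int col = i % maxHeartsPerRow;
                    float x = Screen.width - (col + 1) * (size + 2f);
                    float y = 2f + row * (size + 2f);
                    GUI.DrawTexture(new Rect(x, y, size, size), heartTexture);
                }
            }
        }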


  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is currently being developed for Android. I use OGG files: 96 kbps VBR, 44.1 kHz, 2 channels (that means stereo, right?). I read the other Stack Exchange topics about "acceptable sound quality", but they're too general and address too many things. My experience is that even at 80 kbps my effects sound OK, but I've only tested on a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD). My questions:

    - For mobile phones and tablets, generally, what parameters are recommended?
    - Won't my 80 kbps sounds be bad on a newer device (such as a modern tablet)?
    - I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?); is there any noticeable difference at all for mobile phones / tablets in terms of the player experience? Is it worth it at all? I assume that stereo sounds take up much more memory (when they're decoded to PCM), despite the fact that the compressed OGG size is practically the same.

    Reacting to Roy T.'s great comment: I couldn't actually measure the PCM size (Android decodes OGG internally), but I thought that stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting, the new WAV file size is half what it was before, while the OGG file size is practically the same as before.

    The sound effects and game music were recorded by my friend, who is an experienced hobby musician/composer, but he knows little about computers and software, so he just gave me some high-quality WAV files generated via his hardware. These were stereo, but if I check them in Audacity, both channels appear to be exactly the same. Can I consider them identical (= safe to move to mono), or might there be differences unnoticeable to the human ear?
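    (Editor's note on the decoded footprint the poster could not measure, assuming 16-bit samples, which Android may or may not use internally: uncompressed PCM size is sample rate x bytes per sample x channels, so at 44.1 kHz

        stereo: 44,100 samples/s x 2 bytes x 2 channels = 176,400 bytes/s
        mono:   44,100 samples/s x 2 bytes x 1 channel  =  88,200 bytes/s

    i.e., moving to mono halves the in-memory cost even though the compressed OGG sizes stay nearly identical.)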


  • Smooth animation in Cocos2d for iOS

    - by MrDatabase
    I move a simple CCSprite around the screen of an iOS device using this code:

        [self schedule:@selector(update:) interval:0.0167];

        - (void) update:(ccTime) delta
        {
            CGPoint currPos = self.position;
            currPos.x += xVelocity;
            currPos.y += yVelocity;
            self.position = currPos;
        }

    This works; however, the animation is not smooth. How can I improve the smoothness of my animation? My scene is exceedingly simple (just one full-screen CCSprite with a background image and a relatively small CCSprite that moves slowly). I've logged the ccTime delta and it's not consistent (it's almost always greater than my specified interval of 0.0167... sometimes up to a factor of 4x). I've considered tailoring the motion in the update method to the delta time (larger delta = larger movement, etc.). However, given the simplicity of my scene it seems there's a better way (and something basic that I'm probably missing).
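    (The delta-scaling the poster mentions is in fact the standard remedy: express velocity in points per second and multiply by the measured frame time, so a late frame moves proportionally further and perceived speed stays constant. A minimal language-neutral sketch, written here in C# with hypothetical names:)

        // Hypothetical frame-rate-independent mover, the C# analogue of the
        // Objective-C update above: velocities are in points per second.
        struct Mover
        {
            public float X, Y;         // position
            public float VelX, VelY;   // points per second

            public void Update(float delta)
            {
                X += VelX * delta;     // a frame twice as long moves twice as
                Y += VelY * delta;     // far, so on-screen speed stays steady
            }
        }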


  • cocos2d-x - object creation and management in game design

    - by Jason
    How do others keep track of everything going on in their games? I am working on a new game and I am quickly realizing just how much I need to keep track of. For example: maybe a layerManager that keeps track of all the layers and what is happening in a particular scene, and maybe a sceneManager for sharing objects among scenes. But then, getting to gameplay itself: what if you have 100 objects on the screen, each with its own state and happenings? There needs to be a way to keep track of all of that. Drawing everything out is really helping me.

    Can anyone share with me how they go about object tracking/management? I am seeing a few different managers, and then maybe even a parent object that manages the managers... is my thinking way off? Any design patterns that may be useful for me to read about?

    Update: doing some reading, and maybe a Factory pattern might apply.
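    (One concrete shape this often takes, offered as a sketch rather than the answer: a single registry of live objects keyed by ID, updated in one pass, with factory-style creation handing out the IDs. All names below are hypothetical:)

        using System.Collections.Generic;

        // Hypothetical object registry: one flat map of live entities.
        // Despawns during UpdateAll should be deferred to avoid mutating
        // the dictionary mid-iteration.
        interface IGameObject { void Update(float dt); }

        class ObjectManager
        {
            private readonly Dictionary<int, IGameObject> objects =
                new Dictionary<int, IGameObject>();
            private int nextId;

            public int Spawn(IGameObject obj)      // factory-style creation
            {
                objects[nextId] = obj;
                return nextId++;
            }

            public void Despawn(int id) => objects.Remove(id);

            public void UpdateAll(float dt)
            {
                foreach (var obj in objects.Values)
                    obj.Update(dt);                // each object owns its state
            }
        }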


  • Scripts won't affect clones - Unity3d

    - by user3666251
    I made a script which swaps two game objects on click, but the script won't work because the objects are actually clones of the original prefab. This is the script (UnityScript):

        #pragma strict

        var object1 : GameObject;
        var object2 : GameObject;

        function OnMouseDown ()
        {
            Instantiate(object2, object1.transform.position, object1.transform.rotation);
            Destroy(object1);
        }

    I use this script to create the other game objects (clones) [C#]:

        using UnityEngine;
        using System.Collections;

        public class Spawner : MonoBehaviour
        {
            public GameObject[] obj;
            public float spawnMin = 1f;
            public float spawnMax = 2f;

            // Use this for initialization
            void Start ()
            {
                Spawn ();
            }

            void Spawn()
            {
                Instantiate(obj[Random.Range(0, obj.GetLength(0))], transform.position, Quaternion.identity);
                Invoke ("Spawn", Random.Range (spawnMin, spawnMax));
            }
        }

    The objects get renamed to NAME (Clone). What I want to do is make the script affect clones too, so they will swap when I click on them.
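    (One clone-safe way to write the swap, as a sketch rather than the poster's code: have the click handler act on the instance it is attached to, via gameObject and transform, instead of on a serialized reference to one specific scene object. A C# version with hypothetical names:)

        using UnityEngine;

        // Hypothetical swapper: attach to the prefab, so every clone carries
        // it; the handler operates on whichever clone was clicked.
        public class Swapper : MonoBehaviour
        {
            public GameObject replacementPrefab;   // the "other" prefab

            void OnMouseDown()
            {
                Instantiate(replacementPrefab, transform.position, transform.rotation);
                Destroy(gameObject);   // destroys this clone, not a fixed reference
            }
        }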


  • Physics/Graphics Components

    - by Brett Powell
    I have spent the last 48 hours reading up on object component systems, and feel I am ready enough to start implementing one. I got the base Object and Component classes created, but now that I need to start creating the actual components I am a bit confused. When I think of them in terms of HealthComponent, or something else that would basically just be a property, it makes perfect sense. When it is something more general, like a Physics/Graphics component, I get a bit confused. My Object class looks like this so far (if you notice any changes I should make, please let me know; I'm still new to this):

        typedef unsigned int ID;

        class GameObject
        {
        public:
            GameObject(ID id, Ogre::String name = "");
            ~GameObject();

            ID &getID();
            Ogre::String &getName();
            virtual void update() = 0;

            // Component Functions
            void addComponent(Component *component);
            void removeComponent(Ogre::String familyName);

            template<typename T>
            T* getComponent(Ogre::String familyName)
            {
                return dynamic_cast<T*>(m_components[familyName]);
            }

        protected:
            // Properties
            ID m_ID;
            Ogre::String m_Name;
            float m_flVelocity;
            Ogre::Vector3 m_vecPosition;

            // Components
            std::map<std::string, Component*> m_components;
            std::map<std::string, Component*>::iterator m_componentItr;
        };

    Now the problem I am running into is: what would the general population put into components such as Physics/Graphics? For Ogre (my rendering engine) the visible objects will consist of Ogre::SceneNode (possibly multiple) to attach them to the scene, Ogre::Entity (possibly multiple) to show the visible meshes, and so on. Would it be best to just add multiple GraphicComponents to the Object and let each GraphicComponent handle one SceneNode/Entity, or is the idea to have one of each component needed? For Physics I am even more confused. I suppose creating a RigidBody and keeping track of mass/inertia/etc. would make sense, but I am having trouble thinking of how to actually put specifics into a component.

    Once I get a couple of these "required" components done, I think it will make a lot more sense. As of right now, though, I am still a bit stumped.


  • Can one draw a cube using a different method/drawing mode?

    - by den-javamaniac
    Hi. I've just started learning gamedev (in particular Android EGL-based) and have run across code from Pro Android Games 2 that looks as follows:

        /*
         * Copyright (C) 2007 Google Inc.
         *
         * Licensed under the Apache License, Version 2.0 (the "License");
         * you may not use this file except in compliance with the License.
         * You may obtain a copy of the License at
         *
         *      http://www.apache.org/licenses/LICENSE-2.0
         *
         * Unless required by applicable law or agreed to in writing, software
         * distributed under the License is distributed on an "AS IS" BASIS,
         * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         * See the License for the specific language governing permissions and
         * limitations under the License.
         */

        package opengl.scenes.cubes;

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.IntBuffer;

        import javax.microedition.khronos.opengles.GL10;

        public class Cube {

            public Cube() {
                int one = 0x10000;
                int vertices[] = {
                    -one, -one, -one,
                     one, -one, -one,
                     one,  one, -one,
                    -one,  one, -one,
                    -one, -one,  one,
                     one, -one,  one,
                     one,  one,  one,
                    -one,  one,  one,
                };

                int colors[] = {
                      0,   0,   0, one,
                    one,   0,   0, one,
                    one, one,   0, one,
                      0, one,   0, one,
                      0,   0, one, one,
                    one,   0, one, one,
                    one, one, one, one,
                      0, one, one, one,
                };

                byte indices[] = {
                    0, 4, 5,    0, 5, 1,
                    1, 5, 6,    1, 6, 2,
                    2, 6, 7,    2, 7, 3,
                    3, 7, 4,    3, 4, 0,
                    4, 7, 6,    4, 6, 5,
                    3, 0, 1,    3, 1, 2
                };

                // Buffers to be passed to gl*Pointer() functions must be
                // direct, i.e., they must be placed on the native heap
                // where the garbage collector cannot move them.
                //
                // Buffers with multi-byte datatypes (e.g., short, int, float)
                // must have their byte order set to native order
                ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
                vbb.order(ByteOrder.nativeOrder());
                mVertexBuffer = vbb.asIntBuffer();
                mVertexBuffer.put(vertices);
                mVertexBuffer.position(0);

                ByteBuffer cbb = ByteBuffer.allocateDirect(colors.length * 4);
                cbb.order(ByteOrder.nativeOrder());
                mColorBuffer = cbb.asIntBuffer();
                mColorBuffer.put(colors);
                mColorBuffer.position(0);

                mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
                mIndexBuffer.put(indices);
                mIndexBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glFrontFace(GL10.GL_CW);
                gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer);
                gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer);
                gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
            }

            private IntBuffer mVertexBuffer;
            private IntBuffer mColorBuffer;
            private ByteBuffer mIndexBuffer;
        }

    So it suggests drawing a cube using triangles. My question is: can I draw the same cube using GL_POLYGON? If so, isn't that an easier/more understandable way to do things?


  • XNA - Render texture to a rendertarget 2d via SpriteBatch error

    - by Jared B
    I've got simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

        private void drawScene(GameTime g)
        {
            GraphicsDevice.Clear(skyColor);
            GraphicsDevice.SetRenderTarget(targetScene);
            drawSunAndMoon();
            effect.Fog = true;
            GraphicsDevice.SetVertexBuffer(line);
            effect.MainEffect.CurrentTechnique.Passes[0].Apply();
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
            GraphicsDevice.SetRenderTarget(null);
            SceneTexture = targetScene;
        }

        private void drawPostProcessing(GameTime g)
        {
            effect.SceneTexture = SceneTexture;
            GraphicsDevice.SetRenderTarget(targetBloom);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
            {
                if (Bloom)
                    effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(targetScene,
                    new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
                    Color.White);
            }
            spriteBatch.End();
            BloomTexture = targetBloom;
            GraphicsDevice.SetRenderTarget(null);
        }

    Both methods are called from Draw(GameTime gameTime): first drawScene, then drawPostProcessing. The thing is, I can't run the code, because "the render target must not be set on the device when it is used as a texture" at the line:

        spriteBatch.Draw(targetScene,
            new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
            Color.White);

    I already found the solution, which is to copy the actual render target (targetScene) to a texture so the draw doesn't hold a reference to the bound render target. However, to my knowledge, the only way of doing this is to write:

        GraphicsDevice.SetRenderTarget(OutputTarget);
        SpriteBatch.Draw(InputTarget, ...);
        GraphicsDevice.SetRenderTarget(null);

    which runs into the same exact problem I'm having right now. So the question I'm asking is: how would I render InputTarget to OutputTarget without reference issues?
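    (A sketch of the ordering that avoids the exception, under the assumption that both targets are created up front and nothing rebinds them mid-frame. The invariant in XNA 4.0 is that a RenderTarget2D may be sampled only while it is not the currently bound target, so each Draw reads the target that was bound in the previous step; fullScreenRect and DrawWorld are hypothetical names:)

        // Hypothetical frame layout: scene -> bloom -> backbuffer. A target
        // is always unbound (by binding the next one) before being sampled.
        GraphicsDevice.SetRenderTarget(targetScene);
        DrawWorld();                                   // scene geometry only

        GraphicsDevice.SetRenderTarget(targetBloom);   // targetScene now unbound
        spriteBatch.Begin();
        spriteBatch.Draw(targetScene, fullScreenRect, Color.White); // safe to sample
        spriteBatch.End();

        GraphicsDevice.SetRenderTarget(null);          // back to the backbuffer
        spriteBatch.Begin();
        spriteBatch.Draw(targetBloom, fullScreenRect, Color.White);
        spriteBatch.End();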


  • 2D XNA Tile Based Lighting. Ideas and Methods

    - by Twitchy
    I am currently working on a 2D tile-based game similar to Terraria. We have the base tile and chunk engine working and are now looking to implement lighting. Instead of the tile-based lighting that Terraria uses, I want to implement point lights for torches, etc. I have seen Catalin Zima's shader-based shadows, and this would be perfect for the torches (point lights). My problem is that the tiles on the surface of the world need to be illuminated: doing this with one big point light is firstly extremely expensive, and it also doesn't look right. What I need help with, overall, is:

    - a surface that is illuminated regardless of torches, etc., and
    - point lights, or smooth tile lighting similar to Catalin Zima's shader-based shadows.

    Looking forward to your replies. Any ideas are appreciated.


  • Python velocity control of the player: why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop:

        if abs(playerx) < MAXSPEED:
            if moveLeft:
                playerx -= 1
            if moveRight:
                playerx += 1
        if abs(playery) < MAXSPEED:
            if moveDown:
                playery += 1
            if moveUp:
                playery -= 1

        if moveLeft == False and abs(playerx) > 0:
            playerx += 1
        if moveRight == False and abs(playerx) > 0:
            playerx -= 1
        if moveUp == False and abs(playery) > 0:
            playery += 1
        if moveDown == False and abs(playery) > 0:
            playery -= 1

        player.x += playerx
        player.y += playery

        if player.left < 0 or player.right > 1000:
            player.x -= playerx
        if player.top < 0 or player.bottom > 600:
            player.y -= playery

    The intended result is that while an arrow key is pressed, playerx or playery increments by one on every loop until it reaches MAXSPEED, and then stays at MAXSPEED; and that when the player stops pressing that arrow key, his speed decreases until it reaches 0. To me, this code explicitly says that... but what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED, and the player continues moving even after the arrow key is released. I keep rereading, but I'm completely baffled by this weird behavior. Any insights? Thanks.


  • How do I properly use String literals for loading content?

    - by Dave Voyles
    I've been using verbatim string literals for some time now, and never quite thought about using a regular string literal until I started to come across them in Microsoft-provided XNA samples. With that said, I'm trying to implement a new AudioManager class from the Net Rumble sample. I have two issues here.

    Question 1: In the code for my GameplayScreen screen I have a folder location written as the following, and it works fine:

        menuButton = content.Load<SoundEffect>(@"sfx/menuButton");
        menuClose = content.Load<SoundEffect>(@"sfx/menuClose");

    If you notice, you'll see that I'm using a verbatim string with a forward slash "/". In the AudioManager class, they use a regular string literal, but with two backslashes "\\". I understand that one of them is used as an escape, but why are they BACK slashes instead of FORWARD slashes? (See below.)

        soundList[soundName] = game.Content.Load<SoundEffect>("audio\\wav\\" + soundName);

    Question 2: I seem to be doing everything correctly in my AudioManager class, but I'm not sure what this line means:

        audioFileList = audioFolder.GetFiles("*.xnb");

    I suppose that the *.xnb means look for everything BUT files that end in .xnb? I can't figure out what I'm doing wrong with my file locations, as the sound effects are not playing. My code is not much different from what I've linked to above.

        private AudioManager(Game game, DirectoryInfo audioDirectory)
            : base(game)
        {
            try
            {
                audioFolder = audioDirectory;
                audioFileList = audioFolder.GetFiles("*.mp3");
                soundList = new Dictionary<string, SoundEffect>();

                for (int i = 0; i < audioFileList.Length; i++)
                {
                    string soundName = Path.GetFileNameWithoutExtension(audioFileList[i].Name);
                    soundList[soundName] = game.Content.Load<SoundEffect>(@"sfx\" + soundName);
                    soundList[soundName].Name = soundName;
                }

                // Plays this track while the GameplayScreen is active
                soundtrack = game.Content.Load<Song>("boomer");
            }
            catch (NoAudioHardwareException)
            {
                // silently fall back to silence
            }
        }
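    (An illustrative aside on the literal syntax, not from the sample. Also, for what it's worth, GetFiles("*.xnb") is a standard wildcard match for names ending in .xnb, an inclusion rather than an exclusion:)

        // Hypothetical demo: escaping only changes the literal syntax, not
        // the resulting string.
        string a = "audio\\wav\\shot";    // regular literal, backslash escaped
        string b = @"audio\wav\shot";     // verbatim literal, no escaping
        System.Console.WriteLine(a == b); // True: identical strings
        // A forward-slash path is a *different* string, but the poster's own
        // working @"sfx/menuButton" shows the content pipeline accepts both
        // separators on Windows:
        string c = "audio/wav/shot";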


  • How to get quality sprite sheet generation with rotations

    - by BenMaddox
    I'm working on a game that uses sprite sheets with rotation for animations. While the effect is pretty good, the quality of the rotations is somewhat lacking. I exported a Flash animation to a PNG sequence and then used a C# app to do matrix-based rotations (System.Drawing.Drawing2D.Matrix). Unfortunately, there are several places where the image gets clipped. What would you suggest as a way to get high-quality rotations from either Flash or the exported PNGs? A circle should fit within the same image boundaries. I don't mind writing a new program or downloading/buying an existing one.
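    (On the clipping specifically: a rotated W x H image needs a canvas of |W cos t| + |H sin t| by |W sin t| + |H cos t|, so rendering into an enlarged destination keeps the corners. A minimal System.Drawing sketch under that assumption; the helper is hypothetical, not the poster's app:)

        using System;
        using System.Drawing;
        using System.Drawing.Drawing2D;

        static class SpriteRot
        {
            // Hypothetical helper: rotate into a canvas sized to the rotated
            // bounding box so no corner is clipped.
            public static Bitmap RotateUnclipped(Bitmap src, float degrees)
            {
                double rad = degrees * Math.PI / 180.0;
                double cos = Math.Abs(Math.Cos(rad)), sin = Math.Abs(Math.Sin(rad));
                int w = (int)Math.Ceiling(src.Width * cos + src.Height * sin);
                int h = (int)Math.Ceiling(src.Width * sin + src.Height * cos);

                var dst = new Bitmap(w, h);
                using (var g = Graphics.FromImage(dst))
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.TranslateTransform(w / 2f, h / 2f);   // rotate about centre
                    g.RotateTransform(degrees);
                    g.DrawImage(src, -src.Width / 2f, -src.Height / 2f);
                }
                return dst;
            }
        }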


  • WoW lua: Getting quest attributes before the QUEST_DETAIL event

    - by Matt DiTrolio
    I'd like to determine the attributes of a quest (i.e., information provided by functions such as QuestIsDaily and IsQuestCompletable) before the player clicks on the quest detail. I'm trying to write an add-on that handles accepting and completing daily quests with a single click on the NPC, but I'm running into a problem whereby I can't find out anything about a given quest unless its quest text is currently being displayed, defeating the purpose of the add-on. Other add-ons of this nature seem to get around this limitation by hard-coding information about quests, an approach I don't much like as it requires constant maintenance. It seems to me that this information must be available somehow, as the game itself can properly figure out which icon to display over the head of the NPC without player interaction. The only question is: are add-on authors allowed access to this information? If so, how?

    EDIT: What I originally left out is that the situations I'm trying to address are when:

    - an NPC has multiple quests, or
    - the quest detail is not the first thing that shows up upon right-click.

    Otherwise, the situation is much simpler, as I immediately have the information I need.


  • Suitability of ground fog using layered alpha quads?

    - by Nick Wiggill
    A layered approach would use a series of massive alpha-textured quads arranged parallel to the ground, intersecting all intervening terrain geometry, to provide the illusion of ground fog: quite effective from high up looking down, and somewhat less effective when inside the fog looking toward the horizon (see image below). Alternatively, a shader-heavy approach would instead calculate density as a function of view distance into the ground-fog substrate, and output the fragment value based on that.

    Without having to performance-test each approach myself, I would first like to hear others' experiences (not speculation!) of the sort of performance impact the layered alpha-texture approach is likely to have. I ask specifically because of the oft-cited impact of overdraw (I'm not sure how fill-rate bound the average desktop system is). A list of games using this approach, particularly older games, would be immensely useful: if this was viable on pre-DX9/OpenGL2 hardware, it is likely to work fine for me.

    One big question regards this sort of effect: (image credit goes to Lume of lume.com) notice how the vertical fog gradation is continuous/smooth. On the other hand, using textured quad layers, I can only assume the layers would be mighty obvious when walking through them; the sparser they were, the more obvious this would be. This is in contrast to fog planes aligned to face the player every frame, where this coarseness would be much less obvious.


  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code:

        // Turn off unnecessary operations
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_CULL_FACE);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_DITHER);

        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);

        // activate pointer to vertex & texture array
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    My drawing code is being called by an NSTimer every 1/60 s. Here is the drawing code of my world object:

        - (void) draw:(NSRect)rect withTimedDelta:(double)d
        {
            GLint *t;

            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

            for (int x=0; x<[_map getWidth]; x++) {
                for (int y=0; y<[_map getHeight]; y++) {
                    GLint v[] = {
                        16*x   , 16*y,
                        16*x+16, 16*y,
                        16*x+16, 16*y+16,
                        16*x   , 16*y+16
                    };
                    t = [_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]];
                    glVertexPointer(2, GL_INT, 0, v);
                    glTexCoordPointer(2, GL_INT, 0, t);
                    glDrawArrays(GL_QUADS, 0, 4);
                }
            }
        }

    (_textureManager is a singleton, only loading a texture once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls:

        - (void) drawWithTimedDelta:(double)d
        {
            GLint *t;
            GLint v[] = {
                16*xpos   , 16*ypos,
                16*xpos+16, 16*ypos,
                16*xpos+16, 16*ypos+16,
                16*xpos   , 16*ypos+16
            };

            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]);
            t = [_textureManager getBlockWithNumber:12];
            glVertexPointer(2, GL_INT, 0, v);
            glTexCoordPointer(2, GL_INT, 0, t);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is being drawn), but the following call to all objects ONLY draws the objects; the rest of the scene becomes black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks.

    PS: Here is the github link to the project. It may not be in sync with my post here, but for some more in-depth analysis it may help.


  • Getting a sphere to roll down a .FBX object Unity3D/C#

    - by Timothy Williams
    I'm working on a little ramp-and-ball game in Unity. I modeled the ramp outside Unity, exported it to a .FBX file, and then imported the ramp into Unity. I set up the ball and ramp; both have Rigidbodies, and the ramp is set to isKinematic = true. Yet when I play the game, the ball falls right through the ramp and hits the floor below it just fine, so it's something wrong with the ramp. Am I doing something wrong? Are .FBX files unable to receive physics? Thanks, Tim.
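    (A hedged guess at the usual culprit, not confirmed by the post: meshes collide only if they carry a Collider component; a Rigidbody alone falls through everything. A minimal sketch that ensures the imported ramp has one, with a hypothetical class name:)

        using UnityEngine;

        // Hypothetical sanity check: attach to the imported .FBX ramp.
        public class RampSetup : MonoBehaviour
        {
            void Start()
            {
                // Without some Collider, the ball's physics ignores the mesh.
                if (GetComponent<Collider>() == null)
                    gameObject.AddComponent<MeshCollider>(); // fits the FBX mesh
            }
        }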


  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use frame buffer objects. For this purpose, I chose to render a triangle to a texture and then map that to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from the FBO, it renders everything blue but doesn't show the triangle. I can't seem to figure out why this is happening. Can someone please help me out with this? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I set up the FBO; the setup code is pasted at the end. Here is my rendering code:

        void FrameBufferObject::Render(unsigned int elapsedGameTime)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
            glClearColor(0.0, 0.6, 0.5, 1);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // adjust viewport and projection matrices to texture dimensions
            glPushAttrib(GL_VIEWPORT_BIT);
            glViewport(0, 0, m_FBOWidth, m_FBOHeight);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            DrawTriangle();

            glPopAttrib();

            // setting FrameBuffer back to window-specified Framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind

            // back to normal viewport and projection matrix
            //glViewport(0, 0, 1280, 768);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, 1.33, 1.0, 1000.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            render(elapsedGameTime);
        }

        void FrameBufferObject::DrawTriangle()
        {
            glPushMatrix();
            glBegin(GL_TRIANGLES);
                glColor3f(1, 0, 0);
                glVertex2d(0, 0);
                glVertex2d(m_FBOWidth, 0);
                glVertex2d(m_FBOWidth, m_FBOHeight);
            glEnd();
            glPopMatrix();
        }

        void FrameBufferObject::render(unsigned int elapsedTime)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);
            glPushMatrix();
            glTranslated(0, 0, -20);
            glBegin(GL_QUADS);
                glColor4f(1, 1, 1, 1);
                glTexCoord2f(1, 1); glVertex3f( 1,  1, 1);
                glTexCoord2f(0, 1); glVertex3f(-1,  1, 1);
                glTexCoord2f(0, 0); glVertex3f(-1, -1, 1);
                glTexCoord2f(1, 0); glVertex3f( 1, -1, 1);
            glEnd();
            glPopMatrix();
            glBindTexture(GL_TEXTURE_2D, 0);
            glDisable(GL_TEXTURE_2D);
        }

        void FrameBufferObject::Initialize()
        {
            // Generate FBO
            glGenFramebuffers(1, &m_FBO);
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);

            // Add depth buffer as a renderbuffer to the FBO:
            // create depth buffer id
            glGenRenderbuffers(1, &m_DepthBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer);

            // allocate space in the render buffer for the depth buffer
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight);

            // attach depth buffer to the FBO at GL_DEPTH_ATTACHMENT
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer);

            // Adding a texture to the FBO:
            // create a texture
            glGenTextures(1, &m_TextureID);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, 0); // only allocating space
            glBindTexture(GL_TEXTURE_2D, 0);

            // attach texture to the FBO
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0);

            // Check FBO status
            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
                std::cout << "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n";

            // switch back to window-system framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    Thanks!


  • Powder games: how do they work?

    - by Marc Müller
    Hey guys, I recently found these two gems:

    http://powdertoy.co.uk/
    http://dan-ball.jp/en/javagame/dust/

    My question is: how are the physics handled efficiently with so many elements? Am I just severely underestimating modern computing power, or is it possible to 'just' have a two-dimensional array, each cell of which describes what is placed at the corresponding position, and simulate each cell in every step? Or are more complex things being done, like summarising large areas of the same kind into a single data set and separating that set as needed? Are there any open-source games like this I could look at?
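    (For scale, the naive per-cell loop is genuinely feasible: a few hundred thousand byte-sized cells touched once per tick is well within a modern CPU's budget. A minimal falling-sand step under assumed conventions, 0 = empty, 1 = sand, y growing downward; this is a hypothetical sketch, not how Powder Toy is actually implemented:)

        // Hypothetical falling-sand tick over a byte grid. Scanning from the
        // bottom row upward means each grain falls at most one cell per tick.
        static void Step(byte[,] cells)
        {
            int w = cells.GetLength(0), h = cells.GetLength(1);
            for (int y = h - 2; y >= 0; y--)          // skip the bottom row
                for (int x = 0; x < w; x++)
                {
                    if (cells[x, y] != 1) continue;
                    if (cells[x, y + 1] == 0)                        // fall down
                    { cells[x, y + 1] = 1; cells[x, y] = 0; }
                    else if (x > 0 && cells[x - 1, y + 1] == 0)      // slide left
                    { cells[x - 1, y + 1] = 1; cells[x, y] = 0; }
                    else if (x < w - 1 && cells[x + 1, y + 1] == 0)  // slide right
                    { cells[x + 1, y + 1] = 1; cells[x, y] = 0; }
                }
        }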


  • Does md2 support skeletal meshes?

    - by jsvcycling
    I'm creating an FPS game and writing my own game engine. So far all the backend stuff is going great. I'd like to support MD2 as the native file format for 3D objects, but I also want to use skeletal meshes. Does anyone know if the MD2 file format supports skeletal meshes? In case you need to know, I'm going to use Blender as my mesh creation tool and C++ as my programming language. Forgot to mention, the engine is based on OpenGL.

    Alright, for anyone who is reading this, I just found the Doom 3 MD5 specifications (http://tfc.duke.free.fr/coding/md5-specs-en.html). It gives you some help on writing a parser (see the bottom of the link), but the first example doesn't support lighting and texture mapping (the second set of example code allows for animation). Thanks @Neverender for answering my question.


  • What data structure would you use for collision detection in a tilemap?

    - by Solom
    Currently I save those blocks in my map that could be colliding with the player in a HashMap (Vector2, Block), where the Vector2 represents the coordinates of the block. Whenever the player moves, I iterate over all of these blocks (those within a specific range around the player) and check whether a collision has happened. This was my first rough idea of how to implement the collision detection. Currently, as the player moves, I put more and more blocks into the HashMap until a specific "upper bound", then I clear it and start over. I was fully aware that this was not the brightest solution for the problem, but as I said, it was a rough first implementation (I'm still learning a lot about game design and data structures).

    What data structure would you use to save the blocks? I thought about a Queue or even a Stack, but I'm not sure, hence I ask.
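    (One common alternative for tile maps, offered as a sketch rather than a definitive answer: since tile coordinates are dense integers, a plain 2D array makes "blocks near the player" a constant-time index computation, with nothing to rebuild or clear as the player moves. All names below are hypothetical:)

        using System.Collections.Generic;

        // Hypothetical grid lookup: tiles[x, y] holds the block at that cell,
        // or null. Collision candidates come from direct indexing.
        class TileGrid
        {
            const int TileSize = 16;
            Block[,] tiles;                          // one slot per map cell

            public IEnumerable<Block> NearPlayer(float px, float py, int radius)
            {
                int cx = (int)(px / TileSize), cy = (int)(py / TileSize);
                for (int x = cx - radius; x <= cx + radius; x++)
                    for (int y = cy - radius; y <= cy + radius; y++)
                        if (x >= 0 && y >= 0 &&
                            x < tiles.GetLength(0) && y < tiles.GetLength(1) &&
                            tiles[x, y] != null)
                            yield return tiles[x, y];   // candidate to test
            }
        }

        class Block { /* position, bounds, ... */ }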


  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do that using JarSplice. So if I have a game which uses JInput, I will pack my game jar and jinput.jar using JarSplice and enter the natives in the process.

    The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I never have to copy code like the wrapper I wrote for JInput's Controller, and I always have a definitive version inside a library jar. Basically what I want to do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then when I want to export a jar of my game, either export it automatically through Eclipse with the library jar, or, if that doesn't work, use JarSplice.

    I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's either a duplicate .project or .classpath. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), and when I then try to attach the natives with JarSplice, it doesn't give me any errors, but the jar doesn't run.

    I'm not expecting anyone to solve this, but if anyone has an idea, something that will allow me to never look at the Gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice five times and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(


  • Drawing multiple triangles at once isn't working

    - by Deukalion
    I'm trying to draw multiple triangles at once to make up a "shape". I have a class that has an array of VertexPositionColor and an array of indices (generated by this triangulation class: http://www.xnawiki.com/index.php/Polygon_Triangulation). So my "shape" has multiple VertexPositionColor points, but I can't render each triangle in the shape to "fill" the shape; it only draws the first triangle.

        struct ShapeColor
        {
            // Properties (not all properties)
            VertexPositionColor[] Points;
            int[] Indexes;
        }

    The first method I tried, which should work since I iterate through the index array in steps of three, so each group always contains one triangle:

        // render is a ShapeColor
        for (int i = 0; i < render.Indexes.Length; i += 3)
        {
            device.DrawUserIndexedPrimitives<VertexPositionColor>
            (
                PrimitiveType.TriangleList,
                new VertexPositionColor[]
                {
                    render.Points[render.Indexes[i]],
                    render.Points[render.Indexes[i + 1]],
                    render.Points[render.Indexes[i + 2]]
                },
                0,
                3,
                new int[] { 0, 1, 2 },
                0,
                1
            );
        }

    Or the method that should work:

        device.DrawUserIndexedPrimitives<VertexPositionColor>
        (
            PrimitiveType.TriangleList,
            render.Points,
            0,
            render.Points.Length,
            render.Indexes,
            0,
            render.Indexes.Length / 3,
            VertexPositionColor.VertexDeclaration
        );

    No matter which method I use, this is the "typical" result from my editor (in Windows Forms with XNA). It should show a filled shape, because the indices are right (I've checked a dozen times). I simply click the screen (which gets the world coordinates and adds a point with a color); when there are 3 points or more it should start filling out the shape, but it only draws the lines (a different method) and only one triangle. The grid isn't rendered with this shape. Any ideas?

