Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.


  • What's a good way to organize samplers for HLSL?

    - by Rei Miyasaka
    According to MSDN, I can have 4096 samplers per context. That's a lot, considering there's only a handful of common sampler states. That tempts me to initialize an array containing a whole bunch of common sampler states, assign them to every device context I use, and then refer to them by index in the pixel shaders using register(s[n]), where n is the index in the array. If I want more samplers for whatever reason, I can just add them on after the last slot. Does this work? If not, when should I set the samplers? Should it be done by the mesh renderer? The texture renderer? Or alongside PSSetShader? Edit: the trick I wrote above doesn't work (at least not yet), as the compiler gives me this error message when I try to use the same register twice: error X4500: overlapping register semantics not yet implemented 's0'. So how do people usually organize samplers, then?
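
    For what it's worth, a widely used convention is a small fixed palette of shared sampler states (linear/point/anisotropic crossed with wrap/clamp) created once at startup and bound to the same slots for the lifetime of the app, so every shader can assume, say, s0 = linear-wrap. A sketch of that idea, here in C# with SharpDX; the slot assignments are an assumed convention, not something D3D11 mandates:

        using SharpDX.Direct3D11;

        static class CommonSamplers
        {
            // Create once at startup; sampler state objects are immutable and shared.
            public static SamplerState[] Create(Device device)
            {
                return new[]
                {
                    // slot 0: linear filtering, wrapped addressing
                    new SamplerState(device, new SamplerStateDescription
                    {
                        Filter = Filter.MinMagMipLinear,
                        AddressU = TextureAddressMode.Wrap,
                        AddressV = TextureAddressMode.Wrap,
                        AddressW = TextureAddressMode.Wrap,
                        MaximumLod = float.MaxValue,
                    }),
                    // slot 1: point filtering, clamped addressing
                    new SamplerState(device, new SamplerStateDescription
                    {
                        Filter = Filter.MinMagMipPoint,
                        AddressU = TextureAddressMode.Clamp,
                        AddressV = TextureAddressMode.Clamp,
                        AddressW = TextureAddressMode.Clamp,
                        MaximumLod = float.MaxValue,
                    }),
                };
            }

            // Bind the whole palette once per frame (or once at startup);
            // shaders then refer to register(s0), register(s1), ...
            public static void Bind(DeviceContext context, SamplerState[] samplers)
            {
                for (int slot = 0; slot < samplers.Length; slot++)
                    context.PixelShader.SetSampler(slot, samplers[slot]);
            }
        }

    With that in place shaders never bind samplers per-mesh; they just declare SamplerState linearWrap : register(s0); and rely on the convention.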

    Read the article

  • Is there an expected set of button mappings games commonly use?

    - by Scott Chamberlain
    I am making a game that will support an Xbox 360 controller, but I would like to keep the default button mappings in line with what users expect from the other games they've played. Is there a set of guidelines from Microsoft on what should map to what (do you use A or the left trigger for fire?), or has the gaming community settled on a common set of controls that isn't written down anywhere but that everyone just "knows" (like WASD for movement)? The hardest thing for me is that I have walking movement, vehicle movement, and airplane movement. I plan on allowing custom configuration of each, but I don't know what to set as the defaults.

    Read the article

  • What is the most efficient way to convert to binary and back in C#?

    - by Saad Imran.
    I'm trying to write a general-purpose socket server for a game I'm working on. I know I could very well use already-built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes. I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary and arrange them in a way that lets them be put back together into object form on the client end. At the core, the conversion methods use the .NET BitConverter class to convert the basic data types to binary. Anyway, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1000 concurrent users. Yes, I know it's unlikely that a game will have 1000 users on at the same time, but as I said earlier this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yeah, if anyone's aware of other conversion techniques or sees where I'm losing performance, I would appreciate the help.

    GSBitConverter.cs is the main conversion class. It adds extension methods to the main data types to convert them to the binary format, using the BitConverter class for the base types. I've shown only the code that converts shorts and short arrays; the rest of the methods are pretty much replicas of those two, they just overload the type.

        public static class GSBitConverter
        {
            public static byte[] ToGSBinary(this short value)
            {
                return BitConverter.GetBytes(value);
            }

            public static byte[] ToGSBinary(this IEnumerable<short> value)
            {
                List<byte> bytes = new List<byte>();
                short length = (short)value.Count();
                bytes.AddRange(length.ToGSBinary());
                for (int i = 0; i < length; i++)
                    bytes.AddRange(value.ElementAt(i).ToGSBinary());
                return bytes.ToArray();
            }

            // Remaining overloads follow the same two patterns (bodies omitted in the question):
            public static byte[] ToGSBinary(this bool value);
            public static byte[] ToGSBinary(this IEnumerable<bool> value);
            public static byte[] ToGSBinary(this IEnumerable<byte> value);
            public static byte[] ToGSBinary(this int value);
            public static byte[] ToGSBinary(this IEnumerable<int> value);
            public static byte[] ToGSBinary(this long value);
            public static byte[] ToGSBinary(this IEnumerable<long> value);
            public static byte[] ToGSBinary(this float value);
            public static byte[] ToGSBinary(this IEnumerable<float> value);
            public static byte[] ToGSBinary(this double value);
            public static byte[] ToGSBinary(this IEnumerable<double> value);
            public static byte[] ToGSBinary(this string value);
            public static byte[] ToGSBinary(this IEnumerable<string> value);
            public static string GetHexDump(this IEnumerable<byte> value);
        }

    Program.cs contains the object that I'm converting to binary in a loop:
        using System;
        using System.Diagnostics;

        class Program
        {
            static void Main(string[] args)
            {
                GSObject obj = new GSObject();
                obj.AttachShort("smallInt", 15);
                obj.AttachInt("medInt", 120700);
                obj.AttachLong("bigInt", 10900800700);
                obj.AttachDouble("doubleVal", Math.PI);
                obj.AttachStringArray("muppetNames",
                    new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" });

                GSObject apple = new GSObject();
                apple.AttachString("name", "Apple");
                apple.AttachString("color", "red");
                apple.AttachBool("inStock", true);
                apple.AttachFloat("price", (float)1.5);

                GSObject lemon = new GSObject();
                lemon.AttachString("name", "Lemon");
                lemon.AttachString("color", "yellow");
                lemon.AttachBool("inStock", false);
                lemon.AttachFloat("price", (float)0.8);

                GSObject apricoat = new GSObject();
                apricoat.AttachString("name", "Apricoat");
                apricoat.AttachString("color", "orange");
                apricoat.AttachBool("inStock", true);
                apricoat.AttachFloat("price", (float)1.9);

                GSObject kiwi = new GSObject();
                kiwi.AttachString("name", "Kiwi");
                kiwi.AttachString("color", "green");
                kiwi.AttachBool("inStock", true);
                kiwi.AttachFloat("price", (float)2.3);

                GSArray fruits = new GSArray();
                fruits.AddGSObject(apple);
                fruits.AddGSObject(lemon);
                fruits.AddGSObject(apricoat);
                fruits.AddGSObject(kiwi);
                obj.AttachGSArray("fruits", fruits);

                Stopwatch w1 = Stopwatch.StartNew();
                for (int i = 0; i < 50000; i++)
                {
                    byte[] b = obj.ToGSBinary();
                }
                w1.Stop();

                Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
                Console.WriteLine(w1.ElapsedMilliseconds + "ms");
            }
        }

    Here's the code for some of the other classes used above; most of it is repetitive: GSObject, GSArray, GSWrappedObject.
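
    A likely hot spot in the array overloads above: every element conversion allocates a fresh byte[] via BitConverter.GetBytes, and ElementAt(i) re-enumerates the sequence on each call unless it is backed by an IList. A minimal sketch of an allocation-light alternative using the BCL's MemoryStream and BinaryWriter (the FastBinary name is illustrative, not part of the GS* code):

        using System;
        using System.IO;

        static class FastBinary
        {
            // Writes a length-prefixed short array in one pass,
            // with no per-element byte[] allocations.
            public static byte[] ToBinary(short[] values)
            {
                using (var ms = new MemoryStream())
                using (var writer = new BinaryWriter(ms))
                {
                    writer.Write((short)values.Length); // length prefix, same layout as above
                    foreach (short v in values)
                        writer.Write(v);                // appends directly to the stream
                    return ms.ToArray();
                }
            }
        }

    Reusing a single MemoryStream across the 50,000 iterations (calling SetLength(0) between runs) cuts the remaining allocations further.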

    Read the article

  • Implementing algorithms via compute shaders vs. pipeline shaders

    - by TravisG
    With the availability of compute shaders for both DirectX and OpenGL, it's now possible to implement many algorithms without going through the rasterization pipeline, instead using general-purpose computing on the GPU to solve the problem. For some algorithms this seems to be the intuitively canonical solution, because they're inherently not rasterization-based and rasterization-based shaders were a workaround to harness GPU power (simple example: creating a noise texture; no quad needs to be rasterized there). Given an algorithm that can be implemented both ways, are there general (potential) performance benefits to using compute shaders over the normal route? Are there drawbacks to watch out for (for example, is there some kind of unusual overhead to switching to/from compute shaders at runtime)? Are there other benefits or drawbacks to consider when choosing between the two?

    Read the article

  • Ray picking - get direction from pitch and yaw

    - by Isaac Waller
    I am attempting to cast a ray from the center of the screen and check for collisions with objects. When rendering, I use these calls to set up the camera:

        GL11.glRotated(mPitch, 1, 0, 0);
        GL11.glRotated(mYaw, 0, 1, 0);
        GL11.glTranslated(mPositionX, mPositionY, mPositionZ);

    I am having trouble creating the ray, however. This is the code I have so far:

        ray.origin = new Vector(mPositionX, mPositionY, mPositionZ);
        ray.direction = new Vector(?, ?, ?);

    My question is: what should I put in the question-mark spots? I.e., how can I create the ray direction from the pitch and yaw? Any help would be much appreciated!
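
    For a camera built exactly as above (pitch about X, then yaw about Y, then translate), the view direction comes from applying the inverse rotations to OpenGL's default forward vector (0, 0, -1). A sketch of the math in C# rather than LWJGL; the signs depend on the rotation convention, so treat this as a starting point to verify against your setup:

        using System;

        static class RayMath
        {
            // pitchDeg/yawDeg are the same angles passed to glRotated above.
            // Derived for glRotated(pitch, 1,0,0) followed by glRotated(yaw, 0,1,0);
            // flip signs if your angles grow the other way.
            public static (double x, double y, double z) DirectionFromAngles(double pitchDeg, double yawDeg)
            {
                double pitch = pitchDeg * Math.PI / 180.0;
                double yaw = yawDeg * Math.PI / 180.0;
                // Inverse of Rx(pitch) * Ry(yaw), applied to (0, 0, -1):
                double x = Math.Sin(yaw) * Math.Cos(pitch);
                double y = -Math.Sin(pitch);
                double z = -Math.Cos(yaw) * Math.Cos(pitch);
                return (x, y, z);
            }
        }

    Sanity check: with pitch = yaw = 0 this returns (0, 0, -1), straight down the default view axis.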

    Read the article

  • How to move a sprite automatically using a physicsHandler in Andengine?

    - by shailenTJ
    I use a DigitalOnScreenControl (a knob with a four-directional arrow control) to move an entity that is bound to a PhysicsHandler:

        physicsHandler.setEntity(sprite);
        sprite.registerUpdateHandler(physicsHandler);

    From the DigitalOnScreenControl I know which direction I want my sprite to move. Inside its overridden onControlChange function I call a function animateSprite that checks which direction was chosen; based on the direction, I animate my sprite differently. PROBLEM: I want to automatically move the sprite to a specific location on the scene, say at coordinates (207, 305). My sprite is at (100, 305), which means it has to move 107 pixels along the x-axis. How do I tell the PhysicsHandler to move the sprite those 107 pixels? My animateSprite method will take care of animating the sprite's motion. Thank you for your input!
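
    Since I don't have AndEngine's PhysicsHandler API at hand, here is an engine-agnostic C# sketch of the usual pattern for a scripted move: each update, feed the handler a velocity toward the target, and zero it once the remaining distance is smaller than one frame's step (the class and method names are hypothetical stand-ins, not AndEngine calls):

        using System;

        sealed class MoveTo
        {
            readonly float targetX, targetY, speed; // speed in pixels per second

            public MoveTo(float tx, float ty, float s) { targetX = tx; targetY = ty; speed = s; }

            // Returns the velocity to apply this frame; (0, 0) once we've arrived.
            public (float vx, float vy) Step(float x, float y, float dt)
            {
                float dx = targetX - x, dy = targetY - y;
                float dist = (float)Math.Sqrt(dx * dx + dy * dy);
                if (dist <= speed * dt) return (0f, 0f); // close enough: stop (and snap) here
                return (dx / dist * speed, dy / dist * speed);
            }
        }

    The arrival check matters: with a constant velocity and no check, the sprite overshoots the target and oscillates around it.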

    Read the article

  • How do I repeat a texture with GLKit?

    - by Synopfab
    I am using GLKit in order to show textures in my project. The code is like this:

        -(void)setTextureImage:(UIImage *)image {
            NSError *error;
            texture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&error];
            if (error) {
                NSLog(@"Error loading texture from image: %@", error);
            }
        }

        effect.texture2d0.envMode = GLKTextureEnvModeReplace;
        effect.texture2d0.target = GLKTextureTarget2D;
        effect.texture2d0.name = texture.name;

        glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
        glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, self.textureCoordinates);

    Now I want to repeat this texture on a rectangle. Is there any way to use GLKit for this behavior? I've tried to use OpenGL functions in addition to the GLKit ones, but it raises errors:

        glEnable(GL_TEXTURE_2D);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glBindTexture(GL_TEXTURE_2D, texture.name);

        2011-11-09 20:10:28.614 **[16309:207] GL ERROR: 0x0500
        2011-11-09 20:10:30.840 **[16309:207] Error loading texture from image: Error Domain=GLKTextureLoaderErrorDomain Code=8 "The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)" UserInfo=0x68545c0 {GLKTextureLoaderGLErrorKey=1280, GLKTextureLoaderErrorKey=OpenGL error}

    Read the article

  • Rotate to a set degree then reverse and repeat in Unity

    - by Ryan
    Hi, and thank you for your time! I'm making my first project in Unity, a simple game where touching objects adds points to the player's score. I'd like the objects to have a pleasant back-and-forth swaying animation on the Z axis: nodding to the right 30 degrees, then to the left 30 degrees, on and on. Here's what I've got...

        public class Rotator : MonoBehaviour
        {
            void Update()
            {
                transform.Rotate(new Vector3(0, 0, 12) * Time.deltaTime);
            }
        }

    This gives me a nice slow rotation, but I am clueless how to tell Unity to stop at +30 degrees, reverse to -30 degrees, rotate again to +30, stop and repeat, etc. I'd really appreciate any help. Maybe there is a thread like this that I was not able to find? I assume it will involve some kind of "if then" logic? Thank you, Ryan
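
    One common Unity idiom for exactly this kind of sway is Mathf.PingPong, which avoids both accumulated rotation and the if/else bookkeeping. A minimal sketch (the 60f factor sets the sway speed, so tune to taste):

        using UnityEngine;

        public class Swayer : MonoBehaviour
        {
            void Update()
            {
                // PingPong runs 0..60 and back; shifting by -30 gives -30..+30 degrees.
                float angle = Mathf.PingPong(Time.time * 60f, 60f) - 30f;
                transform.rotation = Quaternion.Euler(0f, 0f, angle);
            }
        }

    If the linear back-and-forth looks too mechanical, Mathf.Sin(Time.time * speed) * 30f gives the same range with a gentle ease at each end.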

    Read the article

  • Should I refer to browser-based games as HTML5 games or Javascript games?

    - by Bane
    First of all, I know that there are alternatives to both HTML5 and JavaScript, but I worded the question so generally ("browser-based") because saying "HTML5" or "JavaScript" games would already imply an answer to the question. When writing wiki posts or discussing them, I usually call these games "HTML5/JavaScript" games. They are written in JavaScript, using the new HTML5 technology. What is the proper way to refer to them: HTML5 or JavaScript games? I see that most people opt for HTML5; why?

    Read the article

  • In general, are programmers or artists paid better?

    - by jokoon
    I'm in a private game-programming school that also offers 3D art classes; sadly, there seem to be a lot more students in the latter, something like 50% or 100% more. So I was wondering: in the real video game industry, who is more in demand at a company, the artist/modeler or the programmer, and who tends to be paid more? I'm sure some artists are paid better than some programmers, and I know there are other sorts of jobs in the game industry (sound, management, testing), but I wanted to know if there is a general tendency one way or the other. And sometimes I even wonder whether an artist might end up writing scripts...

    Read the article

  • Slick2D - Cannot instantiate the type Image

    - by speakon
    I am getting this strange error and I cannot for the life of me figure out why: Cannot instantiate the type Image. The code:

        import java.awt.Image;
        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;

        public class MainMenuState extends BasicGameState {
            int stateID = -1;
            Image background = null;
            Image startGameOption = null;
            Image exitOption = null;
            float startGameScale = 1;
            float exitScale = 1;

            MainMenuState(int stateID) {
                this.stateID = stateID;
            }

            public int getID() {
                return stateID;
            }

            public void init(GameContainer gc, StateBasedGame sbg) throws SlickException {
                try {
                    background = new Image("data/menu.jpg");
                    Image menuOptions = new Image("data/menuoptions.png");
                    startGameOption = menuOptions.getSubImage(0, 0, 377, 71);
                    exitOption = menuOptions.getSubImage(0, 71, 377, 71);
                } catch (SlickException e) {
                    System.err.print(e);
                }
            }

            public void render(GameContainer gc, StateBasedGame sbg, Graphics g) throws SlickException {
            }

            public void update(GameContainer gc, StateBasedGame sbg, int delta) throws SlickException {
            }
        }

    Why do I get this error? I've googled endlessly and nobody else has it; this worked fine in my other game. Any ideas?

    Read the article

  • 2d movement solution

    - by Phil
    Hi! I'm making a simple top-down tank game on the iPad where the user controls the movement of the tank with the left "joystick" and the rotation of the turret with the right one. I've spent several hours just trying to get it to work decently, but now I turn to the pros :) I have two referential objects, one for the movement and one for the rotation. The referential objects always stay at most two units away from the tank, and I use them to tell the tank in what direction to move. I chose this approach to decouple movement and rotational behaviour from the raw input of the joysticks; I believe this will make it simpler to implement whatever behaviour I want for the tank. My first problem is that the turret rotates the long way around to the target. By this I mean that the target can be -5 degrees away in rotation and still it rotates 355 degrees instead of -5 degrees, and I can't figure out why. The other problem is with the movement: it just doesn't feel right to have the tank turn while moving. I'd like a solution that would work as well for the AI as for the player, a black-box function for the movement where the caller only specifies in what direction the tank should move, and it moves there under the constraints imposed on it. I am using the standard joystick class found in the Unity iPhone package. This is the code I'm using for the movement:

        public class TankFollow : MonoBehaviour
        {
            // Check angle difference and turn accordingly
            public GameObject followPoint;
            public float speed;
            public float turningSpeed;

            void Update()
            {
                transform.position = Vector3.Slerp(transform.position, followPoint.transform.position, speed * Time.deltaTime);

                // Calculate angle
                var forwardA = transform.forward;
                var forwardB = (followPoint.transform.position - transform.position);
                var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
                var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
                var angleDiff = Mathf.DeltaAngle(angleA, angleB);

                if (angleDiff > 5)
                {
                    transform.Rotate(new Vector3(0, -turningSpeed * Time.deltaTime, 0));
                }
                else if (angleDiff < 5)
                {
                    transform.Rotate(new Vector3(0, turningSpeed * Time.deltaTime, 0));
                }

                transform.position = new Vector3(transform.position.x, 0, transform.position.z);
            }
        }

    And this is the code I'm using to rotate the turret:

        void LookAt()
        {
            var forwardA = -transform.right;
            var forwardB = (toLookAt.transform.position - transform.position);
            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);

            if (angleDiff - 180 > 1)
            {
                transform.Rotate(new Vector3(0, turretSpeed * Time.deltaTime, 0));
            }
            else if (angleDiff - 180 < -1)
            {
                transform.Rotate(new Vector3(0, -turretSpeed * Time.deltaTime, 0));
                print((angleDiff - 180).ToString());
            }
        }

    Since I want the turret reference point to turn in relation to the tank (when you rotate the body, the turret should follow and not stay locked on, since that makes it impossible to control with only two thumbs to work with), I've made the TurretFollowPoint a child of the Turret object, which in turn is a child of the body. I'm thinking that I'm making it too difficult for myself with the reference points, but I imagine it's a good idea. Please be honest about this point. I'll be grateful for any help I can get! I'm using Unity3D iPhone. Thanks!
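
    On the first problem: Mathf.DeltaAngle already returns the signed shortest difference in the range -180..180, so the angleDiff - 180 comparisons in LookAt above are what force the long way around. A minimal sketch of the usual fix, letting Mathf.MoveTowardsAngle pick the short arc (on the Y axis to match the code above; TurretAim is an illustrative name):

        using UnityEngine;

        public class TurretAim : MonoBehaviour
        {
            public Transform target;
            public float turretSpeed = 90f; // degrees per second

            void Update()
            {
                Vector3 toTarget = target.position - transform.position;
                float desired = Mathf.Atan2(toTarget.x, toTarget.z) * Mathf.Rad2Deg;
                float current = transform.eulerAngles.y;
                // MoveTowardsAngle clamps the step and always takes the shorter arc.
                float next = Mathf.MoveTowardsAngle(current, desired, turretSpeed * Time.deltaTime);
                transform.rotation = Quaternion.Euler(0f, next, 0f);
            }
        }
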

    Read the article

  • Time-based movement vs. frame-rate-based movement?

    - by sil3nt
    Hello there, I'm new to game programming and SDL, and I have been following Lazyfoo's SDL tutorials. My question is about time-based motion versus frame-rate-based motion: which is better or more appropriate depending on the situation? Could you give me an example where each of these methods is used? Another question: in Lazyfoo's two motion tutorials (FPS-based and time-based), the time-based method showed a much smoother animation, while the frame-rate-based one was a little hiccupy, meaning you could clearly see the gap between the previous location of the dot and its current position when you compare the two programs. As a beginner, which method should I stick to? (All I want is smooth animations.)
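
    For reference, the whole difference is whether the per-frame step is a constant or is scaled by elapsed time. A minimal C# sketch of the time-based version (Stopwatch stands in for whatever clock your framework provides):

        using System.Diagnostics;

        class TimeBasedMover
        {
            readonly Stopwatch clock = Stopwatch.StartNew();
            double last;

            public double X { get; private set; }
            public double Speed { get; set; } = 200.0; // pixels per second

            // Call once per frame: position advances by speed * elapsed seconds,
            // so movement covers the same distance at 30 FPS and at 300 FPS.
            public void Update()
            {
                double now = clock.Elapsed.TotalSeconds;
                double dt = now - last;
                last = now;
                X += Speed * dt;
            }
        }

    A third option worth knowing about is the fixed timestep: advance the simulation in constant slices and render in between, which combines determinism with smoothness and is why physics engines favor it.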

    Read the article

  • OpenGL ES 2.0: Vertex and Fragment Shader for 2D with Transparency

    - by Bunkai.Satori
    Could I kindly ask for correct examples of an OpenGL ES 2.0 vertex and fragment shader for displaying 2D textured sprites with transparency? I have fairly simple shaders that display textured polygon pairs, but transparency is not applied despite the following: the texture map contains transparency information, and blending is enabled:

        glEnable(GL_BLEND);
        glEnable(GL_DEPTH_TEST);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    My vertex shader:

        uniform mat4 uOrthoProjection;
        uniform vec3 Translation;
        attribute vec4 Position;
        attribute vec2 TextureCoord;
        varying vec2 TextureCoordOut;

        void main()
        {
            gl_Position = uOrthoProjection * (Position + vec4(Translation, 0));
            TextureCoordOut = TextureCoord;
        }

    My fragment shader:

        varying mediump vec2 TextureCoordOut;
        uniform sampler2D Sampler;

        void main()
        {
            gl_FragColor = texture2D(Sampler, TextureCoordOut);
        }

    Read the article

  • Best way to start in C# and Raknet?

    - by cad
    I am trying to learn RakNet from C#, and I find it extremely confusing. The RakNet tutorial seems to work easily and nicely in C++, and I have already made some chat-server code from it. But when I look at doing something similar in C#, I find a mess. It seems that I need to compile RakNet using SWIG to get an interface, and the one project I have found, raknetdotnet, seems abandoned (http://code.google.com/p/raknetdotnet/). So my main question is: what is the best way to code in C# using RakNet? As secondary questions: can anyone recommend a good tutorial on RakNet and C#? Is there any sample C# code that I can download? I have read a lot of pages but didn't get anything clear, so I hope someone who has been through this before can help me. Thanks. PS: Maybe RakNet is obsolete (I find a lot of code and posts from 2007) and there is a better tool to achieve what I want. (I am interested in making a game with a dedicated server.)

    Read the article

  • Collision detection with curves

    - by paldepind
    I'm working on a 2D game in which I would like to do collision detection between a moving circle and some kind of static curves (maybe Bezier curves). Currently my game features only straight lines as static geometry, and I do the collision detection by calculating the distance from the circle to each line and projecting the circle out of the line if the distance is less than the circle's radius. How can I do this kind of collision detection in a relatively straightforward way? I know, for instance, that Box2D features collision detection with Bezier curves. I don't need a full-featured collision detection mechanism, just something that can do what I've described.
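
    One approach that stays close to what the question already does for lines: approximate the closest point on the curve by sampling, then reuse the same distance test and push-out. A rough C# sketch for a quadratic Bezier (sampling is an approximation; the sample count here is an arbitrary default and should scale with curve length):

        using System.Numerics;

        static class CurveCollision
        {
            // Quadratic Bezier point: (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2
            static Vector2 Bezier(Vector2 p0, Vector2 p1, Vector2 p2, float t)
            {
                float u = 1f - t;
                return u * u * p0 + 2f * u * t * p1 + t * t * p2;
            }

            // Returns the closest sampled curve point to the circle's center.
            public static Vector2 ClosestPoint(Vector2 p0, Vector2 p1, Vector2 p2,
                                               Vector2 center, int samples = 32)
            {
                Vector2 best = p0;
                float bestDist = float.MaxValue;
                for (int i = 0; i <= samples; i++)
                {
                    Vector2 p = Bezier(p0, p1, p2, i / (float)samples);
                    float d = Vector2.DistanceSquared(p, center);
                    if (d < bestDist) { bestDist = d; best = p; }
                }
                // If |center - best| < radius, project the circle out along (center - best),
                // exactly as with the straight-line case.
                return best;
            }
        }
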

    Read the article

  • How to programmatically retarget animations from one skeleton to another?

    - by Fraser
    I'm trying to write code to transfer animations that were designed for one skeleton so they look correct on another skeleton. The source animations consist only of rotations, except for translations on the root (they're the mocap animations from the CMU motion capture database). Many 3D applications (e.g. Maya) have this facility built in, but I'm trying to write a (very simple) version of it for my game. I've done some work on bone mapping, and because the skeletons are hierarchically similar (bipeds), I can do 1:1 bone mapping for everything but the spine (I can work on that later). The problem, however, is that the base skeletons/bind poses are different and the bones are different scales (shorter/longer), so if I just copy the rotations straight over it looks very strange. I've tried multiplying by the original bone's absolute rotation, then by the inverse of the target's, and vice versa... kind of a shot in the dark, and indeed it didn't work (tried relative transformations too). I'm not sure where to go from here, so if anyone has any resources on stuff like this (papers, source code, etc.), that would be really helpful. Thanks!
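
    A common first-pass scheme for hierarchically similar skeletons, sketched here with Unity-style quaternion math (this is one standard approach, not necessarily what Maya does): treat each source bone's animated local rotation as a delta from the source bind pose, and re-apply that delta on top of the target's bind pose.

        using UnityEngine;

        static class Retarget
        {
            // sourceBind/targetBind: local-space bind rotations of the mapped bones.
            // sourceAnim: the source's animated local rotation for this frame.
            public static Quaternion RetargetLocal(Quaternion sourceBind, Quaternion targetBind,
                                                   Quaternion sourceAnim)
            {
                // Delta the animation applies on top of the source bind pose...
                Quaternion delta = Quaternion.Inverse(sourceBind) * sourceAnim;
                // ...re-applied on top of the target's own bind pose.
                return targetBind * delta;
            }
        }

    Root translation then needs separate handling, typically scaled by the ratio of leg lengths or hip heights between the two skeletons, or the feet will slide.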

    Read the article

  • Best solution for multiplayer realtime Android game

    - by piotrek
    I plan to make a multiplayer realtime game for Android (2-8 players), and I'm considering which approach to organizing the multiplayer is best:

    1. Make a server on a PC and a client on mobile; all communication goes through the server (ClientA - PC server - all clients).
    2. Use Bluetooth. I haven't used it yet, and I don't know whether it is hard to build multiplayer on Bluetooth.
    3. Make one of the devices the server and have the other devices connect (over the network, but I don't know how hard it is to resolve the problem of devices behind NAT).
    4. Some other solution?

    Read the article

  • I made a 2D ENGINE for Android, looking for cooperation.

    - by Roger Travis
    My name is Robert, and I am an Android programmer. I wanted to show off my latest project, a 2D game engine. You can see it in action here: https://play.google.com/store/apps/details?id=engineDemo.com My engine's main advantage is its ease of use; to have your level up and running, you'll need only three lines of code:

        ABoxView aboxView = new ABoxView(this);
        setContentView(aboxView);
        aboxView.loadLevel("level/level02");

    Levels are created in a special level constructor, and objects' physical properties are stored in a corresponding XML file. I am looking to cooperate with anyone who might be interested in using my engine in their games. You can email me at [email protected] or post here. Thanks, Robert

    Read the article

  • Matrix rotation wrong orientation LibGDX

    - by glz
    I'm having a problem with matrix rotation in LibGDX. I rotate a matrix using the method matrix.rotate(Vector3 axis, float angle), but the rotation happens in the model's orientation and I need it to happen in the world's orientation. For example, in the create() method:

        matrix.rotate(new Vector3(0, 0, 1), 45);

    That is OK, but afterwards, in the render() method:

        matrix.rotate(new Vector3(0, 1, 0), 1);

    I need this one to rotate about the world axis.
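
    For what it's worth, the algebra behind this (independent of LibGDX, and modulo row- versus column-major conventions, which flip the order): multiplying the rotation in on the right applies it about the matrix's own local axes, while multiplying it in on the left applies it about the fixed world axes.

        M' = M * R    (rotation about a local axis)
        M' = R * M    (rotation about a world axis)

    Here R is the rotation matrix built from the axis and angle, so the render() call needs a premultiply. If memory serves, LibGDX's Matrix4 has a mulLeft method for exactly this, but verify against the version you're using.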

    Read the article

  • Coordinates on the top left corner or center of the tile

    - by soimon
    I'm setting up a tile system where every tile has x and y coordinates. Right now I assume that the top-left corner of the tile is positioned at its coordinate on the screen: x = tileX * tileWidth and y = tileY * tileHeight. However, it seems strange that the tile with coordinate (0, 0) is drawn completely on the 'positive' side of the coordinate system, as opposed to centered on the origin. Is it common practice to assume that a coordinate lies at the center of a tile or at the top-left corner of a tile? So basically, x = tileX * tileWidth, or x = tileX * tileWidth - (tileWidth / 2)?
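
    Both conventions show up in practice; top-left corners are the more common choice for tile maps because the screen-space math stays integer-friendly. A quick C# sketch of the two, assuming rectangular tiles:

        static class TileMath
        {
            // Corner convention: tile (0, 0) covers [0, w) x [0, h) on screen.
            public static (int x, int y) CornerOf(int tileX, int tileY, int w, int h)
                => (tileX * w, tileY * h);

            // Center convention: the same tile is centered on the origin, so the
            // draw position (its top-left corner) shifts back by half a tile.
            public static (int x, int y) DrawPosCentered(int tileX, int tileY, int w, int h)
                => (tileX * w - w / 2, tileY * h - h / 2);
        }
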

    Read the article

  • AS3: limit objects to stage width?

    - by Gabriel Meono
    I want to limit the creation of objects according to the stage width. My method is the following:

        for (var i:int = 0; i < 7; i++) {

    If I put something like this, it won't work:

        for (var i:int = 0; i < stage.width; i++) {

    What am I doing wrong? Full code:

        [SWF(width = 350, height = 600, frameRate = 60)]
        import com.actionsnippet.qbox.*;

        var sim:QuickBox2D = new QuickBox2D(this);
        sim.createStageWalls();

        // make a heavy circle
        sim.addCircle({x:3, y:3, radius:0.4, density:1});

        // create a few platforms
        // make 26 dominoes
        for (var i:int = 0; i < 7; i++) {
            // End
            sim.addCircle({x:1 + i * 1.5, y:18, radius:0.1, density:0});
            sim.addCircle({x:2 + i * 1.5, y:17, radius:0.1, density:0});
            sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
            sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});
            // Mid end
            sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0});
        }

        sim.start();
        sim.mouseDrag();

    Read the article

  • Detecting wins in peer to peer RTS games like Starcraft

    - by user782220
    A typical RTS game is implemented with the standard networking model: peer-to-peer lockstep. Consider StarCraft 2: given that Battle.net presumably knows nothing about the state of the game (in a peer-to-peer model there is only communication between the two players), how does Battle.net know who the winner was in the end? Relying on the two peers not to cheat and to report accurate results would be naive.

    Read the article

  • Missing features from WebGL and OpenGL ES

    - by Chris Smith
    I've started using WebGL and am pleased with how easy it is to leverage my OpenGL (and, by extension, OpenGL ES) experience. However, my understanding is as follows: OpenGL ES is a subset of OpenGL, and WebGL is a subset of OpenGL ES. Is this correct in both cases? If so, are there resources detailing which features are missing? For example, one notable missing pair is glPushMatrix and glPopMatrix. I don't see those in WebGL, but in my searches I cannot find them referenced in OpenGL ES material either.

    Read the article

  • Passing multiple Vertex Attributes in GLSL 130

    - by Roy T.
    (Note: this question is closely related to this one; however, I didn't fully understand the accepted answer.) To support video cards in laptops I have to rewrite my GLSL 330 shaders for GLSL 130. I'm trying to do this, but somehow I can't get vertex attributes to work properly. My 330 shader looks like this:

        #version 330
        layout(location = 0) in vec4 position;
        layout(location = 3) in vec4 color;
        smooth out vec4 theColor;

        void main()
        {
            gl_Position = position;
            theColor = color;
        }

    Now this explicit layout is not allowed in GLSL 130, so I referenced this page to see what the default layouts for some values would be. As you can see, position should be the 0th vertex attribute and color should be the 3rd vertex attribute. Because this is a test case, I had already configured my explicit layouts in the same way, which worked, so I now simply rewrote my shader to this and expected it to work:

        #version 130
        smooth out vec4 theColor;

        void main()
        {
            gl_Position = gl_Vertex;
            theColor = gl_Color;
        }

    However, this doesn't work; the value of gl_Color is always (1,1,1,1). So how should I pass multiple vertex attributes to my GLSL 130 shaders? For reference, this is how I set up my vertex buffer object and attributes (I've just adapted this tutorial to Java + JOGL):

        gl.glBindBuffer(GL3.GL_ARRAY_BUFFER, vertex_buffer_id);
        gl.glEnableVertexAttribArray(0);
        gl.glEnableVertexAttribArray(3);
        gl.glVertexAttribPointer(0, 4, GL3.GL_FLOAT, false, 0, 0);
        gl.glVertexAttribPointer(3, 4, GL3.GL_FLOAT, false, 0, 4*4*4);
        gl.glDrawArrays(GL3.GL_TRIANGLE_STRIP, 0, 4);
        gl.glDisableVertexAttribArray(0);
        gl.glDisableVertexAttribArray(3);

    EDIT: I solved the problem by querying for the layout locations of position and color using glGetAttribLocation. However, I still don't understand why 'hardcoded' values like gl_Color didn't work. Can't I upload data into them as normal? Shouldn't they be usable?

    Read the article
