Search Results

Search found 38203 results on 1529 pages for 'library development'.

Page 552 of 1529

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I woke up with a clear vision, but sadly my laptop card doesn't do displacement mapping or glVertexAttribDivisor, so I can't test it out; I'm left sharing it here. With geomipmapping, the grid at any factor is transposable - if you pass in an offset, say as a uniform, you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom. So I have some questions: Does it work, in theory and in practice? Does anyone do it? Does this technique have a name? Papers, demos, anything I can look at? Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains? Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?) Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang? Is mipmapping the displacement map going to work, on current and future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?
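
    A minimal C++/OpenGL sketch of the tile-reuse idea described above: one shared grid mesh, re-issued per tile with only the uniforms changing. The uniform names (uTileOffset, uHeightmapOrigin) and the Tile struct are illustrative, not from the original post, and the matching vertex shader is assumed to add the offset and sample the heightmap itself.

        #include <vector>
        #include <GL/glew.h>

        struct Tile {
            float worldX, worldZ;    // where this tile sits in the world
            float heightU, heightV;  // where its window starts in the heightmap
        };

        void drawTerrain(GLuint program, GLuint gridVao, GLsizei indexCount,
                         const std::vector<Tile>& tiles)
        {
            glUseProgram(program);
            glBindVertexArray(gridVao);   // the same grid geometry is reused for every tile

            const GLint offsetLoc = glGetUniformLocation(program, "uTileOffset");
            const GLint originLoc = glGetUniformLocation(program, "uHeightmapOrigin");

            for (const Tile& t : tiles) {
                glUniform2f(offsetLoc, t.worldX, t.worldZ);    // translate the grid in the vertex shader
                glUniform2f(originLoc, t.heightU, t.heightV);  // pick the heightmap window to displace with
                glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
            }
        }

    With instanced rendering, the per-tile data could instead be supplied as per-instance attributes (this is where glVertexAttribDivisor comes in) and the loop collapsed into a single glDrawElementsInstanced call, which is the gain the glMultiDrawElementsEXT question is after.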

    Read the article

  • ImageMagick on Mac OS X Snow Leopard. Is there any way to get it to compile and run?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than on any other platform, including Windows with Cygwin. For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, MacPorts, fails: tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick ---> Computing dependencies for p5-locale-gettext ---> Configuring p5-locale-gettext Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2 Command output: checking for gettext... no checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18. no Error: Unable to upgrade port: 1 Error: Unable to execute port: upgrade xorg-libXt failed Before reporting a bug, first run the command again with the -d flag to get complete output. tppllc-Mac-Pro:ImageMagick-sl swirsky$ Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl This script downloads and installs ImageMagick independently of MacPorts issues: tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap It downloads everything and compiles fine, but fails when I try to run it, with the message above. So now I'm two steps away from ImageMagick, trying to get a newer libiconv on my machine. I downloaded the latest libiconv, compiled and built it. I put the resulting library in /opt/local/lib, and I still get the same error message: tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib tppllc-Mac-Pro:.libs swirsky$ convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap Now here's something interesting. The error message shows it's looking in /opt/local/lib/libiconv.2.dylib. otool -L shows that this does implement 8.0.0: tppllc-Mac-Pro:.libs swirsky$ otool -L /opt/local/lib/libiconv.2.dylib /opt/local/lib/libiconv.2.dylib: /usr/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.0) tppllc-Mac-Pro:.libs swirsky$ And, for good measure, I set the DYLD_LIBRARY_PATH to make sure this directory is the one used for dynamic libraries. So even though I do have a library that provides 8.0.0, it's being seen as 7.0.0! Any ideas why this would happen? So here's my question: is it possible to get ImageMagick to run on OS X Snow Leopard? Are there any binary distributions that have static libraries baked in, so I don't have to worry about these issues?

    Read the article

  • I need to move an entity to the mouse location after I right-click

    - by I.Hristov
    Well, I've read the related questions and answers but still can't find a way to move my champion to the mouse position after a right-button mouse click. I use this code at the top: float speed = (float)1/3; And this is in my void Update: //check if right mouse button is clicked if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed) { // gets the position of the mouse in mousePosition mousePosition = new Vector2(mouse.X, mouse.Y); //gets the current position of champion (the drawRectangle) currentChampionPosition = new Vector2(drawRectangle.X, drawRectangle.Y); // move champion to mouse position: //handles the case when the mouse position is really close to current position if (Math.Abs(currentChampionPosition.X - mousePosition.X) <= speed && Math.Abs(currentChampionPosition.Y - mousePosition.Y) <= speed) { drawRectangle.X = (int)mousePosition.X; drawRectangle.Y = (int)mousePosition.Y; } else if (currentChampionPosition != mousePosition) { drawRectangle.X += (int)((mousePosition.X - currentChampionPosition.X) * speed); drawRectangle.Y += (int)((mousePosition.Y - currentChampionPosition.Y) * speed); } } previousButtonState = mouse.RightButton; What that code does at the moment is, on a click, bring the sprite 1/3 of the distance to the mouse, but only once. How do I make it move consistently all the time? It seems I am not updating the sprite at all. EDIT I added the Vector2 as Nick said, and with speed changed to 50 it should be OK. I tried it with if ButtonState.Pressed and it works while pressing the button. Thanks. However, I wanted it to start moving on a single mouse click and keep moving until it reaches the mousePosition. The edit of Nick's post says to create another Vector2, but I already have the one called mousePosition. Not sure how to use another one. //gets a Vector2 direction to move *by Nick Wilson Vector2 direction = mousePosition - currentChampionPosition; //make the direction vector a unit vector direction.Normalize(); //multiply with speed (number of pixels) direction *= speed; // move champion to mouse position if (currentChampionPosition != mousePosition) { drawRectangle.X += (int)(direction.X); drawRectangle.Y += (int)(direction.Y); } } previousButtonState = mouse.RightButton;
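
    The missing piece is keeping the movement going every frame rather than only in the click handler: remember the destination when the button is released, then step toward it in every Update until it is reached. A small sketch of that structure, written in C++ purely for illustration (the Champion/speed names are made up); it maps one-to-one onto an XNA Update method taking the elapsed time.

        #include <cmath>

        struct Vec2 { float x, y; };

        struct Champion {
            Vec2  position{};
            Vec2  target{};          // destination remembered from the last right-click
            bool  moving = false;
            float speed  = 200.0f;   // pixels per second

            void onRightClick(Vec2 mouse) { target = mouse; moving = true; }

            void update(float dt) {  // called every frame, not just on the click
                if (!moving) return;
                float dx = target.x - position.x, dy = target.y - position.y;
                float dist = std::sqrt(dx * dx + dy * dy);
                float step = speed * dt;
                if (dist <= step) {          // close enough: snap to the target and stop
                    position = target;
                    moving = false;
                } else {                     // move a fixed distance along the unit direction
                    position.x += dx / dist * step;
                    position.y += dy / dist * step;
                }
            }
        };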

    Read the article

  • Detecting extremely fast joystick button presses?

    - by DBRalir
    Is it usually possible for the player to press and release a button within a single frame, so that the game engine doesn't have time to detect it? How do programmers usually handle this situation? Is it even necessary to handle it? Specifically, I am asking about GLFW's joystick input capabilities. I am currently using GLFW to make a game, and I've noticed that keyboard and mouse have callback functions, while joysticks do not. Also, it does not appear to be possible to enable "sticky keys" for a joystick. (I have only recently started using GLFW, so please correct me if I am wrong, as having either of those would solve the problem.)
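
    For reference, GLFW 3 exposes joystick buttons only through polling (there are no button callbacks), so the usual pattern is per-frame edge detection against the previous state, as in this sketch (joystick id and button index are illustrative). It still cannot see a press-and-release that happens entirely between two polls, which is the case the question worries about, although at typical frame rates a human button press usually spans several frames.

        #include <GLFW/glfw3.h>
        #include <vector>

        // Reports the released -> pressed edge for one joystick button, polled once per frame.
        class JoystickEdgeDetector {
            std::vector<unsigned char> previous;
        public:
            bool pressedThisFrame(int jid, int button) {
                int count = 0;
                const unsigned char* state = glfwGetJoystickButtons(jid, &count);
                if (state == nullptr || button >= count) return false;  // joystick absent or button out of range
                if ((int)previous.size() < count) previous.resize(count, GLFW_RELEASE);
                bool edge = (state[button] == GLFW_PRESS && previous[button] == GLFW_RELEASE);
                previous[button] = state[button];
                return edge;
            }
        };

        // Usage, once per frame in the game loop:
        //   static JoystickEdgeDetector fire;
        //   if (fire.pressedThisFrame(GLFW_JOYSTICK_1, 0)) { /* handle the press */ }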

    Read the article

  • Seek Steering Behavior with Target Direction for Group of Fighters

    - by SebastianStehle
    I am implementing steering algorithms with group management for spaceships (fighters). I select a leader and assign the target positions for the other spaceships based on the target position of the leader and an offset. This works well, but when my spaceships arrive they all have different directions. I want them to keep looking in the same direction (target - start). I also want to combine this behavior with a minimum turning radius that is based on the speed. The only idea I have is to calculate a path for each spaceship with a point before the target position, so the ships have some time left to turn into the right orientation, but I don't know if this is a good idea; I guess there will be a lot of rare cases where it could cause problems. So the question is whether anybody knows how to solve this problem and has some simple code or pseudocode for me, or at least a good explanation.
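
    One common way to get both behaviours is to separate position and heading: steer each fighter toward its slot (leader target + offset), but blend its heading toward the shared formation direction as it gets close, clamping the turn per frame so the minimum turning radius is respected. A hedged sketch of that idea; the vector and angle handling is generic, not any particular engine's API.

        #include <algorithm>
        #include <cmath>

        struct Vec2 { float x, y; };

        struct Fighter {
            Vec2  position;
            float heading;      // radians
            float speed;        // units per second
            float maxTurnRate;  // radians per second; turning radius is roughly speed / maxTurnRate

            void update(Vec2 slotPosition, float formationHeading, float dt) {
                // While far from the slot, head toward it; once close, settle on the shared
                // formation heading so every ship ends up facing the same way on arrival.
                float dx = slotPosition.x - position.x;
                float dy = slotPosition.y - position.y;
                float distance = std::sqrt(dx * dx + dy * dy);
                float desired  = (distance > 1.0f) ? std::atan2(dy, dx) : formationHeading;

                // Turn toward the desired heading, clamped to the maximum turn rate.
                float diff    = std::remainder(desired - heading, 6.2831853f); // shortest signed angle
                float maxStep = maxTurnRate * dt;
                heading += std::clamp(diff, -maxStep, maxStep);

                position.x += std::cos(heading) * speed * dt;
                position.y += std::sin(heading) * speed * dt;
            }
        };

    Because the turn is rate-limited, a ship approaching from the "wrong" side naturally sweeps a curve into its slot, which is essentially the extra-waypoint idea without computing an explicit path.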

    Read the article

  • In an Entity-Component-System Engine, How do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on Entity-Component-System (ES System) for my game engine. I've been reading articles (mainly T=Machine) and reviewing some source code, and I think I have enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other? Let me use an example. Assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this: Ship body: Movement, Rendering. Cannon: Position (locked relative to the ship body), Tracking/Fire at hero, Taking damage until disabled. Core: Position (locked relative to the ship body), Tracking/Fire at hero, Taking damage until disabled, Disabling (er... destroying) all other entities in the ship group. My goal is something that can be identified (and manipulated) as a distinct game element without having to rewrite subsystems from the ground up every time I want to build a new aggregate element. How do I implement this kind of design in an ES system? Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers, and makes it feel more like OOP. Do I implement them as separate entities, with some kind of connecting component (BossComponent) and related system (BossSubSystem)? I can't help but think that this will be hard to implement, since how components communicate seems to be a big bear trap. Do I implement them as one entity, with a collection of components (ShipComponent, CannonComponents, CoreComponent)? This one seems to veer away from the ES system's intent (the components here seem too much like heavyweight entities), but I'm new to this, so I figured I would put it out there. Do I implement them as something else I haven't mentioned? I know that this can be implemented very easily in OOP, but my choice of ES over OOP is one that I will stick with. If I need to break with pure ES theory to implement this design I will (it's not as if I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with a bad design. For extra credit, consider the same design but where each of the "boss entities" is actually connected to a larger "BigBoss entity" made of a main body, a main core and three "boss entities". This would let me see a solution for at least three levels (grandparent-parent-child), which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.
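
    A sketch of the second option (a connecting component plus a system), which is the direction most ECS write-ups lean toward: each child part carries an Attachment component naming its parent entity and a local offset, and a small system resolves world positions every frame. All names here are illustrative, not from any particular engine.

        #include <cstdint>
        #include <unordered_map>

        using Entity = std::uint32_t;

        struct Position   { float x = 0, y = 0; };                   // world-space position
        struct Attachment { Entity parent; float localX, localY; };  // "locked relative to" a parent entity

        // Copies parent position + local offset into each attached child.
        // Run after the movement system; for grandparent chains (BigBoss -> Boss -> Cannon),
        // process in dependency order or iterate until positions stop changing.
        void attachmentSystem(std::unordered_map<Entity, Position>& positions,
                              const std::unordered_map<Entity, Attachment>& attachments)
        {
            for (const auto& entry : attachments) {
                Entity child = entry.first;
                const Attachment& att = entry.second;
                auto parentIt = positions.find(att.parent);
                if (parentIt == positions.end()) continue;  // parent destroyed: a cleanup system can remove the child
                positions[child] = Position{ parentIt->second.x + att.localX,
                                             parentIt->second.y + att.localY };
            }
        }

    Destroying the whole group then becomes "destroy every entity whose Attachment chain leads back to the core", so entities stay plain IDs and no OOP-style parent class is needed.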

    Read the article

  • Procedural terrains in 3D: what has been done? Are there common algorithms and/or theories about it?

    - by jokoon
    Besides programming, modeling an environment takes a great deal of time. I don't know about the work time involved in, for example, a WoW dungeon level, or other beautiful city-like, futuristic, jungle or fantasy environments, but this kind of work is made from scratch by artists. What are the techniques involved in the TorchLight level randomizer, and do other titles have similarities with it? Is there a family name for such techniques?

    Read the article

  • Using MVC with a retained mode renderer

    - by David Gouveia
    I am using a retained-mode renderer similar to the display lists in Flash. In other words, I have a scene graph data structure called the Stage to which I add the graphical primitives I would like to see rendered, such as images, animations, and text. For simplicity I'll refer to them as Sprites. Now I'm implementing an architecture which is becoming very similar to MVC, but I feel that instead of having to create View classes, the sprites already behave pretty much like Views (except for not being explicitly connected to the Model). And since the Model is only changed through the Controller, I could simply update the view together with the Model in the controller, as in the example below: Example 1 class Controller { Model model; Sprite view; void TeleportTo(Vector2 position) { model.Position = view.Position = position; } } The alternative, I think, would be to create View classes that wrap the sprites, make the model observable, and make the view react to changes on the model. This seems like a lot of extra work and boilerplate code, and I'm not seeing the benefits if I'm just going to have one view per controller. Example 2 class Controller { Model model; View view; void TeleportTo(Vector2 position) { model.Position = position; } } class View { Model model; Sprite sprite; View() { model.PropertyChanged += UpdateView; } void UpdateView() { sprite.Position = model.Position; } } So, how is MVC, or more specifically the View, usually implemented when using a retained-mode renderer? And is there any reason why I shouldn't stick with example 1?

    Read the article

  • Vertex Array Object (OpenGL)

    - by Shin
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates) and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL Wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it and I'm not really sure what VAOs really do and how I could employ them.
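
    In practice, a VAO records the vertex-fetch state that is set while it is bound: which buffer feeds which attribute, each attribute's layout, which attributes are enabled, and the element array buffer binding. At draw time a single bind restores all of it. A minimal sketch (the two-attribute interleaved layout is illustrative):

        #include <GL/glew.h>

        GLuint makeMeshVao(GLuint vbo, GLuint ibo)
        {
            GLuint vao = 0;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);                      // start recording vertex-fetch state

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableVertexAttribArray(0);                // attribute 0: position, 3 floats
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
            glEnableVertexAttribArray(1);                // attribute 1: texcoord, 2 floats
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                                  (void*)(3 * sizeof(float)));

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);  // the index-buffer binding is part of the VAO

            glBindVertexArray(0);                        // stop recording
            return vao;
        }

        // At draw time the whole setup above comes back with one call:
        //   glBindVertexArray(vao);
        //   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);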

    Read the article

  • How to draw texture to screen in Unity?

    - by user1306322
    I'm looking for a way to draw textures to the screen in Unity in a similar fashion to XNA's SpriteBatch.Draw method. Ideally, I'd like to write a few helper methods to make all my XNA code work in Unity. This is the first issue I've faced on this seemingly long journey. I guess I could just use quads, but I'm not so sure that's the least expensive way performance-wise. I could have done that in XNA anyway, but they didn't make SpriteBatch without a reason, I believe.

    Read the article

  • How do game engines implement certain features?

    - by Milo
    I have always wondered how modern game engines do things such as realistic water, ambient occlusion, eye adaptation, global illumination, etc. I'm not so much interested in the implementation details, but more in which parts of a graphics API such as D3D or OpenGL allow adding such functionality. The only thing I can think of is shaders, but I do not think shaders alone can do all of that. So really what I'm asking is: what functions or capabilities of graphics APIs enable developers to implement these types of features in their engines? Thanks

    Read the article

  • What's the most efficient way to find barycentric coordinates?

    - by bobobobo
    In my profiler, finding barycentric coordinates is apparently somewhat of a bottleneck. I am looking to make it more efficient. It follows the method in Shirley, where you compute the areas of the triangles formed by embedding the point P inside the triangle. Code: Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const { Vector bary ; // The area of a triangle is real areaABC = DOT( normal, CROSS( (b - a), (c - a) ) ) ; real areaPBC = DOT( normal, CROSS( (b - P), (c - P) ) ) ; real areaPCA = DOT( normal, CROSS( (c - P), (a - P) ) ) ; bary.x = areaPBC / areaABC ; // alpha bary.y = areaPCA / areaABC ; // beta bary.z = 1.0f - bary.x - bary.y ; // gamma return bary ; } This method works, but I'm looking for a more efficient one!
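
    One standard speed-up is the dot-product form given in Ericson's Real-Time Collision Detection: it replaces the per-query cross products with a handful of dot products, and everything that depends only on the triangle (v0, v1, d00, d01, d11 and the inverse denominator) can be precomputed once and cached per triangle. A sketch reusing the Vector/real/DOT conventions of the snippet above (assumed, since their definitions are not shown):

        Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const
        {
            // These five scalars and v0/v1 depend only on a, b, c and can be cached.
            Vector v0 = b - a, v1 = c - a, v2 = P - a;
            real d00 = DOT( v0, v0 );
            real d01 = DOT( v0, v1 );
            real d11 = DOT( v1, v1 );
            real d20 = DOT( v2, v0 );
            real d21 = DOT( v2, v1 );
            real invDenom = 1.0f / ( d00 * d11 - d01 * d01 );

            Vector bary ;
            bary.y = ( d11 * d20 - d01 * d21 ) * invDenom ; // beta  (weight of b)
            bary.z = ( d00 * d21 - d01 * d20 ) * invDenom ; // gamma (weight of c)
            bary.x = 1.0f - bary.y - bary.z ;               // alpha (weight of a)
            return bary ;
        }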

    Read the article

  • Storing game objects with generic object information

    - by Mick
    In a simple game object class, you might have something like this: public abstract class GameObject { protected String name; // other properties protected double x, y; public GameObject(String name, double x, double y) { // etc } // setters, getters } I was thinking, since a lot of game objects (e.g. generic monsters) will share the same name, movement speed, attack power, etc., it would be better to have all that information shared between all monsters of the same type. So I decided to have an abstract class "ObjectData" to hold all this shared information, and whenever I create a generic monster, I would use the same pre-created "ObjectData" for it. Now the above class becomes more like this: public abstract class GameObject { protected ObjectData data; protected double x, y; public GameObject(ObjectData data, double x, double y) { // etc } // setters, getters public String getName() { return data.getName(); } } To tailor this specifically for a Monster (it could be done in a very similar way for Npcs, etc.), I would add 2 classes: Monster, which extends GameObject, and MonsterData, which extends ObjectData. Now I'll have something like this: public class Monster extends GameObject { public Monster(MonsterData data, double x, double y) { super(data, x, y); } } This is where my design question comes in. Since MonsterData would hold data specific to a generic monster (and would vary from what, say, NpcData holds), what would be the best way to access this extra information in a system like this? At the moment, since the data variable is of type ObjectData, I'll have to cast data to MonsterData whenever I use it inside the Monster class. One solution I thought of is this, but it might be bad practice: public class Monster extends GameObject { private MonsterData data; // <- this part here public Monster(MonsterData data, double x, double y) { super(data, x, y); this.data = data; // <- this part here } } I've read that, for one, I should generally avoid shadowing the base class's variables. What do you guys think of this solution? Is it bad practice? Do you have any better solutions? Is the design in general bad? How should I redesign this if it is? Thanks in advance for any replies, and sorry about the long question. Hopefully it all makes sense!
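
    The question is in Java, where the usual way out of the cast is to make the base class generic over its shared-data type (class GameObject<D extends ObjectData>), so Monster extends GameObject<MonsterData> and sees the right type directly, with no duplicated field. The sketch below shows the same idea with a C++ template, purely for illustration:

        #include <memory>
        #include <string>

        struct ObjectData {                    // data shared by every object of one type
            std::string name;
            double moveSpeed = 0;
        };

        struct MonsterData : ObjectData {      // extra shared data for monsters of one type
            double attackPower = 0;
        };

        template <typename DataT>
        class GameObject {                     // DataT is the concrete shared-data type
        public:
            GameObject(std::shared_ptr<const DataT> data, double x, double y)
                : data(std::move(data)), x(x), y(y) {}
            const std::string& getName() const { return data->name; }
        protected:
            std::shared_ptr<const DataT> data; // one instance shared by many game objects
            double x, y;
        };

        class Monster : public GameObject<MonsterData> {
        public:
            using GameObject<MonsterData>::GameObject;
            double getAttackPower() const { return data->attackPower; } // no cast, no second field
        };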

    Read the article

  • How do I implement smooth movement in a Box2D platform game?

    - by Romeo
    I have implemented a character in JBox2D which moves with the help of a wheel rotating at the bottom of it. The movement is the best result I've had till now, but it's a little glitchy when the character stands on an edge. So I am wondering whether I should use five smaller wheels instead of one big wheel. The wheel/wheels will not be visible in the finished product; for now they are drawn for debugging. Here is a video. Is there a better way to do this using JBox2D?
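
    For reference, the usual single-wheel setup in C++ Box2D (JBox2D mirrors the API almost name for name) hinges the wheel to the torso with a motorised revolute joint and drives the motor for left/right movement; the torque and speed values below are illustrative.

        #include <Box2D/Box2D.h>

        // body:  the character's torso body (already created, e.g. a box fixture)
        // wheel: a circle-fixture body positioned at the character's feet
        b2Joint* attachDriveWheel(b2World& world, b2Body* body, b2Body* wheel)
        {
            b2RevoluteJointDef jd;
            jd.Initialize(body, wheel, wheel->GetWorldCenter()); // hinge at the wheel centre
            jd.enableMotor    = true;
            jd.maxMotorTorque = 20.0f;  // how hard the wheel can push
            jd.motorSpeed     = 0.0f;   // set per frame from input
            return world.CreateJoint(&jd);
        }

        // Per frame, from input (cast the stored joint to b2RevoluteJoint*):
        //   wheelJoint->SetMotorSpeed(-10.0f);  // roll right
        //   wheelJoint->SetMotorSpeed(  0.0f);  // stop

    Whether one large wheel or several small ones handles ledges better mostly comes down to how the circle overhangs the edge, so it is worth prototyping both with the same joint setup.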

    Read the article

  • Interacting with scene from controller/app delegate cocos2d

    - by cjroebuck
    I'm attempting to make my first cocos2d (for iPhone) multiplayer game and am having difficulty understanding how to interact with a scene once it is running. The game is a simple turn-based one, so I have a GameController class which coordinates the rounds. I also have a GameScene class which is the actual scene that is displayed during a round of the game. The basic interaction I need is for the GameController to be able to pass messages to the GameScene class, such as StartRound/StopRound etc. The thing that complicates this is that I am loading the GameScene with a LoadingScene class which simply initialises the scene and replaces the current scene with this one, so there is no reference from GameController to GameScene, which makes passing messages quite tricky. Does anyone have any way to get around this? Ideally I would still like to use a Loading class, as it smooths out the memory hit when replacing scenes.

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    Im trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up with trying to implement the approach NVIDIA used in their Cascade Demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at Slide 82. At the moment I am completly stuck with following problem: When I am far away the displacement seems to work. But as more I move closer to my surface, the texture gets bent in x-axis and somehow it looks like there is a little bent in general in one direction. EDIT: I added a video: click I added some screen to illustrate the problem: Well I tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just bad named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix, from incoming normal, tangent and binormal in world space. Calculates the light and eye direction in worldspace. And finally transforms the light and eye direction into tangent space. 
FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in TraceRay method, while always using mipmap level 0. Most of the code is from NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords for getting the displaced normal from the normal map and the color from the color map. I looking forward for some ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something wrong in here?

    Read the article

  • Embed Unity3D and load multiple games from a single app

    - by Rafael Steil
    Is it possible to export an entire Unity3D project/game as an AssetBundle and load it on iOS/Android/Windows in an app that doesn't know anything about that game beforehand? What I have in mind is something like what the web plugin does - it loads a series of .unity3d files over HTTP and renders them inline in the browser window. Is it even possible to do something similar for iOS/Android? I have read a lot of docs so far, but still can't be sure: http://floored.com/blog/2013/integrating-unity3d-within-ios-native-application.html http://docs.unity3d.com/Manual/LoadingResourcesatRuntime.html http://docs.unity3d.com/Manual/AssetBundlesIntro.html The code from the post at http://forum.unity3d.com/threads/112703-Override-Unity-Data-folder-path?p=749108&viewfull=1#post749108 works for Android, but what about iOS and other platforms?

    Read the article

  • Z-order with Alpha blending in a 3D world

    - by user41765
    I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve), using OpenGL ES2 with C++. Currently, I'm ordering elements back to front before drawing them, without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got at the moment: Order all elements of my scene back to front. Send the ordered list of elements to the renderer. The renderer checks its batch manager for an existing batch for the given element with its material. If the batch doesn't exist: create a new one. If a batch exists for this material: add the sprite to the batch. Compute a big mesh with all sprites for each batch (1 material type = 1 batch). When all batches are ready, the batch manager computes draw commands for the renderer. The renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements). Image with my problem here: Explication here. But I've got a problem, because objects can be behind other objects that are inside another batch. How can I handle something like that? Thanks!
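
    The usual compromise is to keep the global back-to-front order as the primary sort and only merge consecutive sprites that share a material, starting a new batch whenever the material changes. Blending stays correct, and draw calls drop whenever neighbours in depth happen to share a material. A sketch of that batching pass (the Sprite/Batch fields are illustrative):

        #include <algorithm>
        #include <vector>

        struct Sprite { float depth; int materialId; /* mesh data ... */ };  // depth: larger = farther from the camera
        struct Batch  { int materialId; std::vector<const Sprite*> sprites; };

        std::vector<Batch> buildBatches(std::vector<Sprite>& sprites)
        {
            // Primary key: depth, back to front, so alpha blending stays correct.
            std::sort(sprites.begin(), sprites.end(),
                      [](const Sprite& a, const Sprite& b) { return a.depth > b.depth; });

            std::vector<Batch> batches;
            for (const Sprite& s : sprites) {
                // Break the batch whenever the material changes between depth-adjacent sprites.
                if (batches.empty() || batches.back().materialId != s.materialId)
                    batches.push_back(Batch{ s.materialId, {} });
                batches.back().sprites.push_back(&s);
            }
            return batches;  // one draw call per entry, issued in this order
        }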

    Read the article

  • Adding comments and ratings to SharePoint document libraries or picture libraries

    - by Emad
    Is there a ready-made solution to allow users to add comments to an image posted in a SharePoint picture library, or basically any item in a document library? What I need is to allow users who are viewing an image from a picture library, for example, to add a comment, view comments others have left, and provide some sort of voting, similar to what you get with Facebook or YouTube. What is the easiest way to do that? Preferably something free.

    Read the article

  • creating a pre-menu level select screen

    - by Ephiras
    Hi, I am working on creating a tower defence Java applet game and have come to a roadblock implementing a title screen from which I can select the level and difficulty for the rest of the game. My title screen class is called Menu. From this Menu class I need to pass many different variables into my Main class. I have used different classes before and know how to run them and such, but if both classes extend Applet and each has its individual graphics method, how can I run things from Main even though it was created in Menu? What I essentially want to do is run the Menu class with its action listeners and graphics until a difficulty button has been selected, then run the Main class (which works 100% without the Menu class) and pretty much terminate Menu so that I cannot go back to it, and do not see its buttons or graphics menus. Can I run one applet and, when I choose a button, close that one and launch the other one? If you would like to download the full project you can find it here; I had to comment out all the code that wasn't working. My Menu class: import java.awt.*; import java.awt.event.*; import java.applet.*; public class Menu extends Applet implements ActionListener{ Button bEasy,bMed,bHard; Main m; public void init(){ bEasy= new Button("Easy"); bEasy.setBounds(140,200,100,50); add(bEasy); bMed = new Button("Medium");bMed.setBounds(280,200,100,50); add(bMed); bHard= new Button("Hard");bHard.setBounds(420,200,100,50); add(bHard); setLayout(null); } public void actionPerformed(ActionEvent e){ Main m = new Main(20,10,3000,mapMed);//break; switch (e.getSource()){ case bEasy: Main m = new Main(6000,20,"levels/levelEasy.png");break;//enimies tower money world case bMed: Main m = new Main(4000,15,"levels/levelMed.png");break; case bHard: Main m = new Main(2000,10,"levels/levelEasy.png");break; default: break; } } public void paint(){ //m.draw(g) } } and here is my main class initialising code.
import java.awt.*; import java.awt.event.*; import java.applet.*; import java.io.IOException; public class Main extends Applet implements Runnable, MouseListener, MouseMotionListener, ActionListener{ Button startButton, UpgRange, UpgDamage; //set up the buttons Color roadCol,startCol,finCol,selGrass,selRoad; //set up the colors Enemy e[][]; Tower t[]; Image towerpic,backpic,roadpic,levelPic; private Image i; private Graphics doubleG; //here is the world 0=grass 1=road 2=start 3=end int world[][],eStartX,eStartY; boolean drawMouse,gameEnd; static boolean start=false; static int gridLength=15; static int round=0; int Mx,My,timer=1500; static int sqrSize=31; int towers=0,towerSelected=-10; static int castleHealth=2000; String levelPath; //choose the level Easy Med or Hard int maxEnemy[] = {5,7,12,20,30,15,50,30,40,60};//number of enimies per round int maxTowers=15;//maximum number of towers allowed static int money =2000,damPrice=600,ranPrice=350,towerPrice=700; //money = the intial ammount of money you start of with //damPrice is the price to increase the damage of a tower //ranPrice is the price to increase the range of a tower public void main(int cH,int mT,int mo,int dP,int rP,int tP,String path,int[] mE)//constructor 1 castleHealth=cH; maxTowers=mT; money=mo; damPrice=dP; ranPrice=rP; towerPrice=tP; String levelPath=path; maxEnemy = mE; buildLevel(); } public void main(int cH,int mT,String path)//basic constructor castleHealth=cH; maxTowers=mT; String levelPath=path; maxEnemy = mE; buildLevel(); } public void init(){ setSize(sqrSize*15+200,sqrSize*15);//set the size of the screen roadCol = new Color(255,216,0);//set the colors for the different objects startCol = new Color(0,38,255); finCol = new Color(255,0,0); selRoad = new Color(242,204,155);//selColor is the color of something when your mouse hovers over it selGrass = new Color(0,190,0); roadpic = getImage(getDocumentBase(),"images/road.jpg"); towerpic = getImage(getDocumentBase(),"images/tower.png"); backpic = getImage(getDocumentBase(),"images/grass.jpg"); levelPic = getImage(getDocumentBase(),"images/level.jpg"); e= new Enemy[maxEnemy.length][];//activates all of the enimies for (int r=0;r<e.length;r++) e[r] = new Enemy[maxEnemy[r]]; t= new Tower[maxTowers]; for (int i=0;i<t.length;i++) t[i]= new Tower();//activates all the towers for (int i=0;i<e.length; i++)//sets all of the enimies starting co ordinates for (int j=0;j<e[i].length;j++) e[i][j] = new Enemy(eStartX,eStartY,world); initButtons();//initialise all the buttons addMouseMotionListener(this); addMouseListener(this); }

    Read the article

  • glColor3f Setting colour

    - by Aaron
    This draws a white vertical line from 640 to 768 at x = 512: glDisable(GL_TEXTURE_2D); glBegin(GL_LINES); glColor3f((double)R/255,(double)G/255,(double)B/255); glVertex3f(SX, -SPosY, 0); // origin of the line glVertex3f(SX, -EPosY, 0); // ending point of the line glEnd(); glEnable(GL_TEXTURE_2D); This works, but after having a problem where it wouldn't draw the line white (or in any colour passed), I discovered that disabling GL_TEXTURE_2D before drawing the line, and re-enabling it afterwards for other things, fixed it. I want to know: is this a normal step a programmer might take, or is it highly inefficient? I don't want to be causing any slowdowns due to a mistake =) Thanks
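
    Toggling GL_TEXTURE_2D around untextured draws is a normal fixed-function idiom, and the state change is cheap at this scale; the line came out non-white because the still-bound texture kept modulating the glColor value. An alternative that avoids the toggle is to keep texturing enabled and bind a 1x1 white texture for flat-coloured primitives, so the colour is modulated against white. A sketch, assuming fixed-function OpenGL as in the snippet above:

        #include <GL/gl.h>

        // Create once at startup: a 1x1 opaque white texture.
        GLuint makeWhiteTexture()
        {
            const unsigned char white[4] = { 255, 255, 255, 255 };
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, white);
            return tex;
        }

        // When drawing the line, bind the white texture instead of disabling texturing:
        //   glBindTexture(GL_TEXTURE_2D, whiteTex);
        //   glBegin(GL_LINES); glColor3f(r, g, b); /* vertices */ glEnd();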

    Read the article

  • how to do partial updates in OpenGL?

    - by Will
    It is general wisdom that you redraw the entire viewport on each frame. I would like to use partial updates; what are the various ways I can do that, and what are their pros, cons, and relative performance? (Using textures, FBOs, the accumulation buffer, any kind of scissoring that can affect swapbuffers, etc.?) A scenario: a scene with a fair few thousand visible trees; although the textures are mipmapped and the trees are drawn via VBOs roughly front to back and so on, it's still a lot of polys. Would streaming a single screen-sized texture be better than throwing them at the screen every frame? You'd have to redraw and recapture them only on camera movement or as often as your wind model updates, which need not be every frame.
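
    The most common modern answer is render-to-texture: draw the expensive content into an FBO-attached texture once, then on frames where nothing changed just draw that texture as a full-screen quad, re-rendering into the FBO only on camera movement or other invalidation. A minimal setup sketch (colour attachment only; the depth renderbuffer is omitted and the drawTrees/drawFullScreenQuad helpers are hypothetical placeholders):

        #include <GL/glew.h>

        struct CachedView { GLuint fbo = 0, color = 0; };

        CachedView makeCachedView(int width, int height)
        {
            CachedView v;
            glGenTextures(1, &v.color);
            glBindTexture(GL_TEXTURE_2D, v.color);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            glGenFramebuffers(1, &v.fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, v.fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, v.color, 0);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return v;
        }

        // Per frame:
        //   if (sceneDirty) {
        //       glBindFramebuffer(GL_FRAMEBUFFER, view.fbo);
        //       drawTrees();                          // expensive pass, only when invalidated
        //       glBindFramebuffer(GL_FRAMEBUFFER, 0);
        //   }
        //   drawFullScreenQuad(view.color);           // cheap on unchanged frames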

    Read the article

  • Newton Game Dynamics: Making an object not affect another object

    - by Boreal
    I'm going to be using Newton in my networked action game with Mogre. There will be two "types" of physics object: global and local. Global objects will be kept in sync for everybody; these include the players, projectiles, and other gameplay-related objects. Local objects are purely for effect, like ragdolls, debris, and particles. Is there a way to make the global objects affect the local objects without actually getting affected themselves? I'd like debris to bounce off of a tank, but I don't want the tank to respond in any way.

    Read the article

  • Problem with Ogre::Camera lookAt function when target is directly below.

    - by PigBen
    I am trying to make a class which controls a camera. It's pretty basic right now; it looks like this: class HoveringCameraController { public: void init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height); void update(Ogre::Real time_delta); private: Ogre::Camera * camera_; AnimatedBody * target_; Ogre::Real height_; }; HoveringCameraController.cpp void HoveringCameraController::init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height) { camera_ = &camera; target_ = &target; height_ = height; update(0.0); } void HoveringCameraController::update(Ogre::Real time_delta) { auto position = target_->getPosition(); position.y += height_; camera_->setPosition(position); camera_->lookAt(target_->getPosition()); } AnimatedBody is just a class that encapsulates an entity, its animations and a scene node. The getPosition function is simply forwarded to its scene node. What I want (for now) is for the camera to simply follow the AnimatedBody overhead at the given distance (the height parameter), and look down at it. It follows the object around, but it doesn't look straight down; it's tilted quite a bit in the positive Z direction. Does anybody have any idea why it would do that? If I change this line: position.y += height_; to this: position.x += height_; or this: position.z += height_; it does exactly what I would expect. It follows the object from the side or front, and looks directly at it.
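
    Looking straight down is the degenerate case for lookAt when the camera keeps a fixed yaw axis (the default +Y up vector becomes parallel to the view direction, so the remaining rotation is under-determined and the camera ends up tilted). A hedged sketch of two common workarounds; API details vary between Ogre versions, so treat this as an assumption to check against your docs rather than a drop-in fix.

        void HoveringCameraController::update(Ogre::Real time_delta)
        {
            Ogre::Vector3 position = target_->getPosition();
            position.y += height_;
            camera_->setPosition(position);

            // Option 1: stop Ogre from trying to keep +Y "up" while looking along -Y
            // (call once, e.g. in init()):
            //   camera_->setFixedYawAxis(false);
            //   camera_->lookAt(target_->getPosition());

            // Option 2: build the straight-down orientation explicitly.
            // Camera space is +X right, +Y up-on-screen, looking along -Z.
            camera_->setOrientation(Ogre::Quaternion(Ogre::Vector3::UNIT_X,           // right
                                                     Ogre::Vector3::NEGATIVE_UNIT_Z,  // up on screen
                                                     Ogre::Vector3::UNIT_Y));         // local +Z, so the view is -Y (down)
        }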

    Read the article

  • How to properly add texture to multi-fixture/shape b2Body

    - by Blazej Wdowikowski
    Hello to everyone this is my first poste here I hope that will be not fail start. At start I must say I make part 1 in Ray's Tutorial "How To Make A Game Like Fruit Ninja With Box2D and Cocos2D". But I wonder what when I want make more complex body with texture? Simple just add n b2FixtureDef to the same body. OK but what about texture? If I will take code from that tutorial it only fill last fixture. Probably it does not takes every b2Vec2 point. I was right, it did not. So quick refactor and from that -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original { // gather all the vertices from our Box2D shape b2Fixture *originalFixture = body->GetFixtureList(); b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape(); int vertexCount = shape->GetVertexCount(); NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount]; for(int i = 0; i < vertexCount; i++) { CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO, shape->GetVertex(i).y * PTM_RATIO); [points addObject:[NSValue valueWithCGPoint:p]]; } if ((self = [super initWithPoints:points andTexture:texture])) { _body = body; _body->SetUserData(self); _original = original; // gets the center of the polygon _centroid = self.body->GetLocalCenter(); // assign an anchor point based on the center self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width, _centroid.y * PTM_RATIO / texture.contentSize.height); } return self; } I came up with that -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original { int vertexCount = 0; //gather total number of b2Vect2 points b2Fixture *currentFixture = body->GetFixtureList(); while (currentFixture) { //new b2PolygonShape *shape = (b2PolygonShape*)currentFixture->GetShape(); vertexCount += shape->GetVertexCount(); currentFixture = currentFixture->GetNext(); } NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount]; // gather all the vertices from our Box2D shape b2Fixture *originalFixture = body->GetFixtureList(); while (originalFixture) { //new NSLog((NSString*)@"-"); b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape(); int currentVertexCount = shape->GetVertexCount(); for(int i = 0; i < currentVertexCount; i++) { CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO, shape->GetVertex(i).y * PTM_RATIO); [points addObject:[NSValue valueWithCGPoint:p]]; } originalFixture = originalFixture->GetNext(); } if ((self = [super initWithPoints:points andTexture:texture])) { _body = body; _body->SetUserData(self); _original = original; // gets the center of the polygon _centroid = self.body->GetLocalCenter(); // assign an anchor point based on the center self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,_centroid.y * PTM_RATIO / texture.contentSize.height); } return self; } I was working for simple two fixtures body like b2BodyDef bodyDef; bodyDef.type = b2_dynamicBody; bodyDef.position = position; bodyDef.angle = rotation; b2Body *body = world->CreateBody(&bodyDef); b2FixtureDef fixtureDef; fixtureDef.density = 1.0; fixtureDef.friction = 0.5; fixtureDef.restitution = 0.2; fixtureDef.filter.categoryBits = 0x0001; fixtureDef.filter.maskBits = 0x0001; b2Vec2 vertices[] = { b2Vec2(0.0/PTM_RATIO,50.0/PTM_RATIO), b2Vec2(0.0/PTM_RATIO,0.0/PTM_RATIO), b2Vec2(50.0/PTM_RATIO,30.1/PTM_RATIO), b2Vec2(60.0/PTM_RATIO,60.0/PTM_RATIO) }; b2PolygonShape shape; shape.Set(vertices, 4); fixtureDef.shape = &shape; body->CreateFixture(&fixtureDef); b2Vec2 vertices2[] = { 
b2Vec2(20.0/PTM_RATIO,50.0/PTM_RATIO), b2Vec2(20.0/PTM_RATIO,0.0/PTM_RATIO), b2Vec2(70.0/PTM_RATIO,30.1/PTM_RATIO), b2Vec2(80.0/PTM_RATIO,60.0/PTM_RATIO) }; shape.Set(vertices2, 4); fixtureDef.shape = &shape; body->CreateFixture(&fixtureDef); But if I try put secondary shape upper than first it starting wierd, texture goes crazy. For example not mention about more complex shapes. What's more if shapes have one common point texture will not render for them at all [For that I use Physics Edytor like in tutorial part1] BTW. I use PolygonSprite and in method createWithWorld... another shapes. Uff.. Question So my question is, why texture coords are in such a mess up? It's my modify method or just wrong approach? Maybe I should remove duplicated from points array?

    Read the article
