Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.

Page 557/1295 | < Previous Page | 553 554 555 556 557 558 559 560 561 562 563 564  | Next Page >

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation:

        a = [2,6], b = [1,4], c = [0,8]
        a . b . c = (2*6) + (1*4) + (0*8) = 12 + 4 + 0 = 16

    What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we multiply a unit vector by to get a vector that has a scaled-up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows:

        sqrt( ax*ax + ay*ay ) + sqrt( bx*bx + by*by ) + sqrt( cx*cx + cy*cy )
        sqrt( 2*2 + 6*6 ) + sqrt( 1*1 + 4*4 ) + sqrt( 0*0 + 8*8 )
        sqrt( 40 ) + sqrt( 17 ) + sqrt( 64 )
        6.3 + 4.1 + 8 = 18.4

    So I don't really get this diagram: [diagram not shown] Attempting with sensible numbers:

        a = [1,0], b = [4,3]
        a . b = (1*0) + (4*3) = 0 + 12 = 12

    So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector is [4,0], and sqrt( 4*4 + 0*0 ) = sqrt( 16 ) = 4. So what is 12 describing?
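
    For reference, a minimal sketch of the standard dot product, which pairs components across the two vectors rather than within one vector: for a = [1,0] and b = [4,3] it gives 1*4 + 0*3 = 4, and geometrically a . b = |a||b|cos(theta), i.e. it measures how far one vector extends along the other (so the 12 and 16 above come from a mispairing, not from the dot product itself). In C++:

        #include <cmath>
        #include <cstdio>

        // Minimal sketch of the 2D dot product: components are paired
        // across the two vectors (ax*bx + ay*by), never within one vector.
        float dot2(const float a[2], const float b[2]) {
            return a[0] * b[0] + a[1] * b[1];
        }

        int main() {
            const float a[2] = {1.0f, 0.0f};
            const float b[2] = {4.0f, 3.0f};
            float d    = dot2(a, b);                // 1*4 + 0*3 = 4
            float magA = std::sqrt(dot2(a, a));     // 1
            float magB = std::sqrt(dot2(b, b));     // 5
            // d equals |a||b|cos(theta), so cos(theta) = 4 / (1*5) = 0.8
            std::printf("a.b = %.1f, cos(theta) = %.2f\n", d, d / (magA * magB));
            return 0;
        }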

    Read the article

  • Dynamic model interactions

    - by Richard
    I am just curious how many games (namely games like Arkham Asylum/City, Manhunt, Hitman) make it so that your character can "grab" a character in front of you and do stuff to them. I know this may sound very confusing, but for an example go to YouTube and search "hitman executions"; the first video is an example of what I'm asking about. Basically I'm wondering how they make your model dynamically interact with whatever other model you come across. In Hitman, when you come up behind someone with the fibre wire you strangle the other character, or if you have the anesthetic you come up behind someone, put your hand over their mouth while they struggle, and slowly lower them to the floor. I am confused as to whether it is animated using two models with specific bone/skeletal identifiers, whether it is just two completely separate animations played at the correct time to make it look like they are actually interacting, or something else altogether. I am not an animator, so I assume most of what I just said is not right, but I hope someone can understand what I mean and provide an answer. PS) I am a programmer, and I am in the process of building a Hitman-esque game, just because I love that style of game and I want to increase my skills on something fun. So if you do know what I'm talking about, examples involving both models and programming (I use C++ and mainly Ogre3D at the moment, but I am getting into Unity and XNA) would be greatly appreciated. Thanks.

    Read the article

  • Managing text-maps in a 2D array to be painted on HTML5 Canvas

    - by weka
    So, I'm making an HTML5 RPG just for fun. The map is a <canvas> (512px width, 352px height; 16 tiles across, 11 tiles top to bottom). I want to know if there's a more efficient way to paint the <canvas>. Here's how I have it right now.

    How tiles are loaded and painted on the map

    The map is painted in 32x32 tiles using Image() objects. The image files are loaded through a simple for loop and put into an array called tiles[] to be painted using drawImage(). First, we load the tiles, and here's how it's being done:

        // SET UP AND DRAW THE MAP TILES
        tiles = [];
        var loadedImagesCount = 0;
        for (x = 0; x <= NUM_OF_TILES; x++) {
            var imageObj = new Image(); // new instance for each image
            imageObj.src = "js/tiles/t" + x + ".png";
            imageObj.onload = function () {
                console.log("Added tile ... " + loadedImagesCount);
                loadedImagesCount++;
                if (loadedImagesCount == NUM_OF_TILES) {
                    // Once all tiles are loaded ...
                    // ... we paint the map
                    for (y = 0; y <= 15; y++) {
                        for (x = 0; x <= 10; x++) {
                            theX = x * 32;
                            theY = y * 32;
                            context.drawImage(tiles[5], theY, theX, 32, 32);
                        }
                    }
                }
            };
            tiles.push(imageObj);
        }

    Naturally, when a player starts a game it loads the map they last left off. But for here, it's an all-grass map. Right now, the maps use 2D arrays. Here's an example map.

        [[4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 1, 1, 1, 1],
         [1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 13, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 13, 11, 11, 11, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 1, 1, 1, 1, 1, 1, 1, 13, 13, 13, 13, 13, 1],
         [1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1, 1, 1]];

    I get different maps using a simple if structure. Once the 2D array above is returned, the corresponding number in each array is painted according to the Image() stored inside tiles[]. Then drawImage() occurs and paints according to the x and y, multiplied by 32 to land on the correct x-y coordinate.

    How switching between multiple maps occurs

    With my game, maps have five things to keep track of: currentID, leftID, rightID, upID, and bottomID.

    currentID: The ID of the map you are on.
    leftID: Which map ID to load when you exit on the left of the current map.
    rightID: Which map ID to load when you exit on the right of the current map.
    bottomID: Which map ID to load when you exit on the bottom of the current map.
    upID: Which map ID to load when you exit on the top of the current map.

    Something to note: if leftID, rightID, upID, or bottomID is not specified, it is 0, meaning the player cannot leave that side of the map; it is merely an invisible blockade. So, once a person exits a side of the map, depending on where they exited, for example the bottom, bottomID will be the number of the map to load and paint. Here's a representational .GIF to help you visualize: [animation not shown]

    As you can see, sooner or later, with many maps I will be dealing with many IDs, and that can possibly get a little confusing and hectic. The obvious pro is that it loads 176 tiles at a time, refreshes a small 512x352 canvas, and handles one map at a time. The con is that the map IDs, when dealing with many maps, may get confusing at times.

    My question

    Is this an efficient way to store maps (given the usage of tiles), or is there a better way to handle maps? I was thinking along the lines of a giant map: the map size is big and it's all one 2D array. The viewport, however, is still 512x352 pixels. Here's another .gif I made (for this question) to help visualize: [animation not shown]

    Sorry if you cannot understand my English. Please ask about anything you have trouble understanding. Hopefully, I made it clear. Thanks.
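
    For what it's worth, the "giant map" idea usually keeps the per-frame work identical to the current approach, because only the tiles under the viewport are drawn each frame. A minimal sketch of the indexing (in C++ here; the arithmetic maps one-to-one onto the canvas code, and drawTile is a hypothetical stand-in for the drawImage wrapper):

        // Sketch: one large tile-id array, but each frame only the tiles
        // the 512x352 viewport overlaps are drawn.
        void drawTile(int tileId, int screenX, int screenY); // assumed renderer hook

        const int TILE = 32, VIEW_COLS = 16, VIEW_ROWS = 11;

        void drawVisibleTiles(const int* map, int mapW, int mapH,
                              int camX, int camY) // camera offset in pixels
        {
            int firstCol = camX / TILE, firstRow = camY / TILE;
            // the <= bound draws one extra row/column to cover partial tiles
            for (int row = firstRow; row <= firstRow + VIEW_ROWS && row < mapH; ++row)
                for (int col = firstCol; col <= firstCol + VIEW_COLS && col < mapW; ++col)
                    drawTile(map[row * mapW + col],  // tile id from the big array
                             col * TILE - camX,      // screen x
                             row * TILE - camY);     // screen y
        }

    Memory-wise, one integer per tile means even a 1000x1000 map is only a few megabytes, and the single array removes the edge-ID bookkeeping entirely.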

    Read the article

  • Car brands and models licensing

    - by Ju-v
    We are a small team working on a car racing game, but we don't know about the licensing process for branded cars like Nissan, Lamborghini, Chevrolet, etc. Do we need to buy a licence to use real car brand names, models, and logos, or can we use them for free? A second option we are thinking about: using a fictional brand name with real models. Is that possible? If someone has experience with this, feel free to share it. Any information about this is welcome.

    Read the article

  • Finding vectors with two points

    - by Christian Careaga
    We're trying to get the direction of a projectile, but we can't find out how. For example: [1,1] will go SE, [1,-1] will go NE, [-1,-1] will go NW, and [-1,1] will go SW. We need an equation of some sort that will take the player position and the mouse position and find which direction the projectile needs to go. Here is where we are plugging in the vectors:

        def update(self):
            self.rect.x += self.vector[0]
            self.rect.y += self.vector[1]

    Then we are blitting the projectile at the rect's coords.
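
    The usual approach, sketched below, is to subtract the player position from the mouse position and normalize the difference: that yields a direction covering every angle, not just the four diagonals, and a speed multiplier keeps the projectile's rate constant. The same two steps translate directly into the update() method above:

        #include <cmath>

        // Sketch: direction from player to mouse is the difference vector,
        // normalized so the projectile moves at a constant speed no matter
        // how far away the mouse is.
        void projectileVector(float playerX, float playerY,
                              float mouseX, float mouseY,
                              float speed, float out[2])
        {
            float dx  = mouseX - playerX;
            float dy  = mouseY - playerY;
            float len = std::sqrt(dx * dx + dy * dy);
            if (len == 0.0f) { out[0] = out[1] = 0.0f; return; } // mouse on player
            out[0] = speed * dx / len;  // per-frame x step
            out[1] = speed * dy / len;  // per-frame y step
        }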

    Read the article

  • LWJGL custom icon

    - by melchor629
    I have a little problem with the window icon in LWJGL: it doesn't work. I've googled about it, but I haven't found anything that works for me yet. This is my code for now:

        PNGDecoder imageDecoder = new PNGDecoder(new FileInputStream("res/images/Icon.png"));
        ByteBuffer imageData = BufferUtils.createByteBuffer(4 * imageDecoder.getWidth() * imageDecoder.getHeight());
        imageDecoder.decode(imageData, imageDecoder.getWidth() * 4, PNGDecoder.Format.RGBA);
        imageData.flip();
        System.err.println(Display.setIcon(new ByteBuffer[]{imageData}) == 0
            ? "The icon was not created" : "The icon was created");

    The PNG file is 128x128px with transparency. PNGDecoder is from the matthiasmann utilities (de.matthiasmann.twl.utils). I'm using Mac OS X 10.8.4 with LWJGL 2.9.0. Thanks :)

    Read the article

  • Can't use SFML sprite drawing and OpenGL rendering at the same time

    - by Ken
    I'm using some SFML built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; the sprite or text drawing causes the OpenGL output to disappear. The following code shows what I'm trying to do:

        sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, viewport.width, 0, viewport.height, 0, 1);

        while (window.pollEvent(Event)) {
            // event handling...

            // begin drawing
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glBegin(GL_TRIANGLES);
            glColor3f(col.x, col.y, col.z);
            for (int i = 0; i < 3; i++)
                glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
            glEnd();

            window.draw(text); // text is an sf::Text; adding this line causes all the previous OpenGL triangles not to appear
            window.display();
        }
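
    For reference, SFML 2.x documents that its own draw calls change OpenGL state behind the caller's back; the usual remedy is to bracket SFML drawing with pushGLStates()/popGLStates() (or call resetGLStates() and restore your own state manually). A sketch of how the loop body above would change:

        // Sketch (SFML 2.x): save the raw OpenGL state before SFML draws
        // its overlay, and restore it afterwards, so the custom triangles
        // keep appearing each frame.
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        // ... custom OpenGL triangle drawing as before ...
        glEnd();

        window.pushGLStates();  // save GL state SFML is about to change
        window.draw(text);      // SFML sprite/text overlay
        window.popGLStates();   // restore state for the next GL frame

        window.display();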

    Read the article

  • My first Flash game bot, in Java

    - by Dylan
    Okay, so I love coming up with new programming challenges, and I've discovered a new one. I would love to create a bot for a game that requires the user to click on a character and drag the mouse like a slingshot. Upon releasing the mouse, the character flies across the game and hopefully lands in a scored spot (in my bot, the highest score). An image of the game is here: http://i.stack.imgur.com/fThnG.jpg How would I go about calculating the location of the character, and then the physics to know exactly where to drag the mouse to?
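
    On the physics half, if the game uses plain ballistic motion (an assumption; many Flash slingshot games do), the landing distance follows from the launch velocity in closed form, so the required drag can be solved directly. A sketch:

        #include <cmath>

        // Sketch under the assumption of simple ballistic physics: a launch
        // at speed v and angle theta (radians) on flat ground travels
        // R = v*v*sin(2*theta)/g. Inverting gives the launch speed needed to
        // land at distance d; the drag vector's length then follows if drag
        // distance maps linearly to launch speed (another assumption to
        // verify against the game).
        float speedToHitDistance(float d, float theta, float g = 9.8f) {
            return std::sqrt(d * g / std::sin(2.0f * theta));
        }

    Locating the character on screen is a separate problem, typically handled with screen capture plus template matching against a reference image.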

    Read the article

  • My GLSL shader isn't compiling even though it should. What should I investigate?

    - by reapz
    I'm porting an iOS game to Android. One of the shaders I'm using wouldn't compile until I reduced the number of uniform variables. Here are the uniform definitions:

        uniform highp   mat4 ViewProjMatrix;
        uniform mediump vec3 LightDirWorld;
        uniform mediump int  BoneCount;
        uniform highp   mat4 BoneMatrixArray[8];
        uniform highp   mat3 BoneMatrixArrayIT[8];
        uniform mediump int  LightCount;
        uniform mediump vec3 LightPos[4];               // this used to be 12, but is now 4; next lines also
        uniform lowp    vec3 LightColour[4];
        uniform mediump vec3 LightInnerOuterFalloff[4];

    My issue is that the GLSL shader wouldn't compile until I reduced the count of the above arrays from 12 to 4. My understanding is that with arrays of 12 on those 3 lines I would be using 56 vertex uniform vectors. I query the system at startup (GL_MAX_VERTEX_UNIFORM_VECTORS) and it says that 128 are available. Why wouldn't it compile with 56? I'm having these issues on the Kindle Fire.
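
    For reference, a rough vec4-equivalent count (a hedged estimate: drivers commonly charge one uniform vector per matrix row and may pad scalars, so exact accounting varies): mat4 ViewProjMatrix = 4, vec3 LightDirWorld = 1, int BoneCount = 1, mat4 BoneMatrixArray[8] = 32, mat3 BoneMatrixArrayIT[8] = 24, int LightCount = 1, plus the three vec3 arrays at 12 entries each = 36, for a total of

        4 + 1 + 1 + 32 + 24 + 1 + 36 = 99 vectors

    which is more than 56 but still under 128. Since the reported maximum is an upper bound rather than a guarantee (drivers may reserve vectors for built-ins or pad aggressively), the compile log from glGetShaderInfoLog after the failed glCompileShader is the most direct way to see which limit was actually hit.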

    Read the article

  • HLSL: What do you get when you subtract world position from InvertViewProjection.Transform?

    - by cubrman
    In one of NVIDIA's vertex shaders (the metal one) I found the following code:

        // transform object normals, tangents, & binormals to world-space:
        float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >;
        // provide transform from "view" or "eye" coords back to world-space:
        float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >;
        ...
        float4 Po = float4(IN.Position.xyz, 1);  // homogeneous location coordinates
        float4 Pw = mul(Po, WorldXf);            // convert to "world" space
        OUT.WorldView = normalize(ViewIXf[3].xyz - Pw.xyz);

    The term OUT.WorldView is subsequently used in a pixel shader to compute lighting:

        float3 Ln = normalize(IN.LightVec.xyz);
        float3 Nn = normalize(IN.WorldNormal);
        float3 Vn = normalize(IN.WorldView);
        float3 Hn = normalize(Vn + Ln);
        float4 litV = lit(dot(Ln,Nn), dot(Hn,Nn), SpecExpon);
        DiffuseContrib = litV.y * Kd * LightColor + AmbiColor;
        SpecularContrib = litV.z * LightColor;

    Can anyone tell me what exactly WorldView is here? And why do they add it to the light vector?
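
    For context, a sketch of what the subtraction computes: with row-vector math (mul(v, M)), row 3 of the inverse view matrix holds the camera's world-space position, so WorldView is the unit vector pointing from the shaded point toward the eye. The pixel shader then combines it with the light direction to build the Blinn-Phong half vector Hn = normalize(Vn + Ln). The same computation in C++:

        #include <cmath>

        // Sketch: WorldView = normalize(eyeWorld - pointWorld), the unit
        // direction from the shaded surface point toward the camera.
        struct Vec3 { float x, y, z; };

        Vec3 worldView(const Vec3& eyeWorld, const Vec3& pointWorld) {
            Vec3 v{ eyeWorld.x - pointWorld.x,
                    eyeWorld.y - pointWorld.y,
                    eyeWorld.z - pointWorld.z };
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }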

    Read the article

  • Using "screenshots" in a game, is it allowed?

    - by DevilWithin
    Let's say I have a game that is some kind of quiz, and its questions are themed around gaming. For it to be interesting, I would need to make references to well-known games and game-related stuff. In a copyright-infringement sense, could I have problems with this? Imagine a question such as "What was the currency used in game X?" or "Which company made game Y?". The same applies to screenshots of known games with a question near them, such as "What game is this image from?". Thoughts? Thanks

    Read the article

  • How to handle character/item state operations with binary flags in C++?

    - by Piperoman
    I have the following problem. An item can have a number of states:

        NORMAL   = 0000000
        DRY      = 0000001
        HOT      = 0000010
        BURNING  = 0000100
        WET      = 0001000
        COLD     = 0010000
        FROZEN   = 0100000
        POISONED = 1000000

    An item can have several states at the same time, but not all combinations are valid:

    It is impossible to be DRY and WET at the same time.
    If you apply COLD to a WET item, it becomes FROZEN.
    If you apply HOT to a WET item, it becomes NORMAL.
    An item can be BURNING and POISONED at once.
    Etc.

    I have tried assigning binary flags to the states and using bitwise operations to combine them, checking first whether a combination is possible or whether it should change to another state. Does a concrete pattern exist to solve this problem efficiently, without an interminable switch that checks every state against every new state? It is relatively easy to check two different states, but once a third state exists it is no longer trivial.
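
    One possibility (a sketch, not the only pattern): keep the flags, but move the interactions into a small data-driven rule table, so adding a new interaction is a table entry rather than another switch case. Hypothetical C++:

        #include <cstdint>

        // Bit-flag encoding mirroring the question's list.
        enum State : uint32_t {
            NORMAL   = 0,
            DRY      = 1 << 0,
            HOT      = 1 << 1,
            BURNING  = 1 << 2,
            WET      = 1 << 3,
            COLD     = 1 << 4,
            FROZEN   = 1 << 5,
            POISONED = 1 << 6,
        };

        struct Rule {
            uint32_t required;  // states that must already be set
            uint32_t incoming;  // the state being applied
            uint32_t cleared;   // states removed by the interaction
            uint32_t added;     // states produced by the interaction
        };

        // Example rules from the question: COLD on WET -> FROZEN,
        // HOT on WET -> NORMAL.
        static const Rule kRules[] = {
            { WET, COLD, WET, FROZEN },
            { WET, HOT,  WET, NORMAL },
        };

        uint32_t apply(uint32_t current, uint32_t incoming) {
            for (const Rule& r : kRules) {
                if ((current & r.required) == r.required && incoming == r.incoming)
                    return (current & ~r.cleared) | r.added;
            }
            return current | incoming;  // no special interaction: just combine
        }

    With this shape, each new interaction (or exclusion, by adding a rule that drops the incoming flag) is data, and the lookup stays O(number of rules) instead of a combinatorial switch.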

    Read the article

  • How to set density for each shape in PhysX 3.1

    - by hywei
    I'm using PhysX 3.1 as my game's physics engine. One requirement is that I need to set a different density for each shape (there are several shapes on my single rigid actor). I know that a shape's density could be set with NxShapeDesc::density in PhysX 2.8, but I can't find such an interface in PhysX 3.1. I know that the mass properties can be set in PhysX 3.1, as in the snowman example in the SDK, but I don't know whether there is a direct interface to set the density of each shape.
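
    One candidate worth checking (an assumption on my part: the density-array overload exists in later PhysX 3.x extensions, but the 3.1 headers should be verified): PxRigidBodyExt::updateMassAndInertia can take one density per attached shape and derive the combined mass properties from them. A sketch:

        #include <PxPhysicsAPI.h>
        using namespace physx;

        // Hypothetical sketch: one density per shape, in the order the
        // shapes were attached to the actor; the helper computes the
        // actor's total mass, inertia tensor, and center of mass. Verify
        // this overload exists in the PhysX 3.1 extensions library before
        // relying on it.
        void setPerShapeDensities(PxRigidDynamic& actor)
        {
            const PxReal densities[] = { 1000.0f, 250.0f, 500.0f };
            PxRigidBodyExt::updateMassAndInertia(actor, densities, 3);
        }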

    Read the article

  • Collision detection with non-rectangular images

    - by Adam Smith
    I'm creating a game and I need to detect collisions between a character and some parts of the environment. Since my character's frames are taken from a sprite sheet with a transparent background, I'm wondering how I should go about detecting collisions between a wall and my character, only when the colliding parts are non-transparent in both images. I thought about checking whether the rectangle the character is in touches the rectangle a tile is in and then comparing the alpha channels, but then I have another choice to make. Either I test every single pixel against every single pixel in the other image, and if one pair matches, I detect a collision; that would be terribly inefficient. The other option would be to keep an x,y position of the leftmost, rightmost, etc. non-transparent pixel of each image and compare those instead. The problem with that might be that, for instance, the character's hand could be above a tile (so it would be in a transparent zone of the tile), but a pixel that is not the rightmost could touch part of the tile without being detected. Another problem is that in different frames the rightmost, leftmost, etc. pixels might not be at the same position. Should I not bother with all that and just check collisions on the rectangles? It would be simpler, but I'm afraid people will feel that there are collisions sometimes that shouldn't happen.
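
    For what it's worth, the usual middle ground between the two options is to intersect the two bounding rectangles first and test alpha only inside the overlap, so the per-pixel work is bounded by the overlap area rather than the image sizes. A sketch with hypothetical alpha-buffer accessors (one byte per pixel, row stride given):

        #include <algorithm>

        struct Rect { int x, y, w, h; };

        // Sketch: coarse AABB rejection, then per-pixel alpha tests only
        // inside the overlapping region of the two sprites.
        bool pixelsCollide(const Rect& a, const Rect& b,
                           const unsigned char* alphaA, int strideA,
                           const unsigned char* alphaB, int strideB)
        {
            int left   = std::max(a.x, b.x);
            int top    = std::max(a.y, b.y);
            int right  = std::min(a.x + a.w, b.x + b.w);
            int bottom = std::min(a.y + a.h, b.y + b.h);
            if (left >= right || top >= bottom) return false; // AABBs don't overlap

            for (int y = top; y < bottom; ++y)
                for (int x = left; x < right; ++x)
                    if (alphaA[(y - a.y) * strideA + (x - a.x)] > 0 &&
                        alphaB[(y - b.y) * strideB + (x - b.x)] > 0)
                        return true; // two opaque pixels share a screen position
            return false;
        }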

    Read the article

  • Is there a size limit when using UICollectionView as a tiled map for an iOS game?

    - by Alexander Winn
    I'm working on a turn-based strategy game for iOS, (picture Civilization 2 as an example), and I'm considering using a UICollectionView as my game map. Each cell would be a tile, and I could use the "didSelectCell" method to handle player interaction with each tile. Here's my question: I know that UICollectionViewCells are dequeued and reused by the OS, so does that mean that the map could support an effectively infinitely-large map, so long as only a few cells are onscreen at a time? However many cells were onscreen would be held in memory, and obviously the data source would take up some memory, but would my offscreen map be limited to a certain size or could it be enormous so long as the number of cells visible at any one time wasn't too much for the device to handle? Basically, is there any memory weight to offscreen cells, or do only visible cells have any impact?

    Read the article

  • "Super meatboy"-ish replay

    - by Ron
    I'm making a platformer built from mini-levels, and I want to create a sort of replay of all the tries the player made for a level. My question is: what is the best way to record the player's actions in-game so that I can replay them later, when the player finishes the level? I thought about recording only the player's input and replaying it later, each stream on a clone of the player. The problem I have with this is dynamic obstacles (ones that can be moved around): if one clone moves them, it throws the simulation off for the rest of the clones. So then I thought about recording the X/Y of the player every frame and replaying that, but that seems like it could use a lot of memory and be very inefficient. So, does anyone have any ideas? :)
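
    One sketch of the input-recording route (hypothetical types; it assumes a fixed timestep so replays are deterministic): record every attempt's input stream, then for the replay re-simulate the whole level from its initial state with all streams fed in together, so each clone pushes the dynamic obstacles exactly as it did live.

        #include <cstdint>
        #include <vector>

        // One input sample per change, keyed to a fixed-timestep frame
        // counter; storing only changes keeps recordings tiny compared to
        // per-frame X/Y positions.
        struct InputSample {
            uint32_t frame;    // fixed-timestep frame index
            uint8_t  buttons;  // bitmask of pressed buttons from this frame on
        };

        struct Attempt {
            std::vector<InputSample> inputs;  // one stream per player try
        };

        // During play: push a sample only when the button state changes.
        void record(Attempt& a, uint32_t frame, uint8_t buttons) {
            if (a.inputs.empty() || a.inputs.back().buttons != buttons)
                a.inputs.push_back({frame, buttons});
        }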

    Read the article

  • How can Highscores be more meaningful and engaging?

    - by Anselm Eickhoff
    I'm developing a casual Android game in which the player's success can very easily be represented by a number (I'm not being more specific because I'm interested in the topic in general). Although I myself am not a highscore person at all, I was thinking of implementing a highscore for the game, but I see at least two problems in the classical leaderboard approach:

    1. Very soon the highscore will be dominated by hardcore players, leaving no chance for beginners, who are then frustrated. This is especially severe in casual games.
    2. There is no direct reward for being a loyal player who plays the game over and over again.

    My current idea is to "reset" the highscore every 24 hours (for example) and each day nominate the "player of the day", who then gets a "star". Then there would be some kind of meta-highscore of players with the most stars. That way even beginners might have a chance to be "player of the day" once, and continued or repeated play is rewarded much more. The idea is still very rough and there are many problems in the details and the technical implementation, but I have a feeling it is a step in the right direction. Do you have creative and new ideas on how to implement highscores? Which games are doing this well / what types of highscores do you find most engaging?

    Read the article

  • How does this snippet of code create a ray direction vector?

    - by Isaac Waller
    In the Minecraft source code, this code is used to create a direction vector for a ray from pitch and yaw:

        float f1 = MathHelper.cos(-rotationYaw * 0.01745329F - 3.141593F);
        float f3 = MathHelper.sin(-rotationYaw * 0.01745329F - 3.141593F);
        float f5 = -MathHelper.cos(-rotationPitch * 0.01745329F);
        float f7 = MathHelper.sin(-rotationPitch * 0.01745329F);
        return Vec3D.createVector(f3 * f5, f7, f1 * f5);

    I was wondering how it works, and what is the constant 0.01745329F?
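
    For reference, 0.01745329 is pi/180, the degrees-to-radians conversion factor, and 3.141593 is pi itself (an offset that flips which way yaw 0 faces); the snippet as a whole is a spherical-to-Cartesian conversion. The same math with the constants named, as a C++ sketch:

        #include <cmath>

        // The constant 0.01745329f is pi/180: it converts degrees to
        // radians. f5 below is the pitch cosine that scales the horizontal
        // (x/z) components, while the y component is the pitch sine.
        const float DEG_TO_RAD = 3.14159265f / 180.0f;  // ~0.01745329

        void rayDirection(float yawDeg, float pitchDeg, float out[3]) {
            float yaw   = -yawDeg * DEG_TO_RAD - 3.14159265f; // pi offset flips "forward"
            float pitch = -pitchDeg * DEG_TO_RAD;
            float cosP  = -std::cos(pitch);   // matches f5 in the snippet
            out[0] = std::sin(yaw) * cosP;    // x  (f3 * f5)
            out[1] = std::sin(pitch);         // y  (f7, the vertical part)
            out[2] = std::cos(yaw) * cosP;    // z  (f1 * f5)
        }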

    Read the article

  • GPU-based procedural terrain borders?

    - by OnePie
    I'm working on a game that should preferably feature a combination of designed and procedurally generated terrain, where the designer specifies in somewhat detailed terms what type of terrain a given area will have (grassland, forest, etc...) and then a procedural algorithm takes care of the rest. I'm not talking about Minecraft-style biomes, but rather the game map for a strategy game. Each 'area' will not take up that much of the screen, and is thus more akin to a tile whose texture is procedurally generated. While procedurally generating terrain textures on the GPU is not that difficult, the hard part is making the borders between them look good. Currently, the 'tiles' are large enough to be visible (due mainly to memory constraints; we are talking planetary-sized textures for a game taking place in space and on a continental ground view, with seamless transitions between them), and creating good borders between them with an algorithm fast enough to be useful has proven difficult. Sampling the n surrounding pixels and using the combined result did not yield very good borders, and it was fairly slow on the GPU to boot (about 12 ms for me, and that is without any lighting or shading and with very simple terrain texture shaders). So, are there any practical known methods to solve this problem?

    Read the article

  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question

    A component-based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (the components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found that CBS is often hard to accomplish without ugly hacks; implementations of this system are often far from clean. But I don't want to discuss that any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (Possibly with examples; there is a lot of abstract talk about the "perfect" design already.)

    Context

    Here's an example I was going for before realizing I was just reinventing inheritance again:

        class Human {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other human specific components
        };

        class Zombie {
            Position position;
            Movement movement;
            Sprite sprite;
            // other zombie specific components
        };

    After writing that, I realized I needed an interface; otherwise I would have needed N containers for N different types of objects (or boost::variant to gather them all together). So I thought of polymorphism (moving what systems do in a CBS design into class-specific functions):

        class Entity {
        public:
            virtual void on_event(Event) {} // not pure virtual on purpose
            virtual void on_update(World) {}
            virtual void on_draw(Window) {}
        };

        class Human : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

        class Zombie : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

    Which was nice, except for the fact that now the outside world cannot even know where a Human is positioned (it does not have access to its position member). That would be useful for tracking the player position for collision detection, or if, on_update, a Zombie wanted to track down its nearest human to move towards. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that this functionality was shared, so it should go to the common base class: Entity. Do you notice anything? Yes, with that methodology I would end up with a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).

    Meaning of "hacks" in the implementations I'm referring to

    I'm talking about the implementations that define entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-style:

        int last_id;
        Position* positions[MAX_ENTITIES];
        Movement* movements[MAX_ENTITIES];

    where positions[i], movements[i], component[i], ... make up the entity, to more C++-style:

        int last_id;
        std::map<int, Position> positions;
        std::map<int, Movement> movements;

    from which systems can detect whether an entity/id has attached components.

    Read the article

  • Periodic updates of an object in Unity

    - by Blue
    I'm trying to make a collider appear every second, but I can't get the code right. I tried enabling the collider in the Update function and putting a yield in to make it update every second or so, but it's not working; it gives me an error: "Update() cannot be a coroutine." How would I fix this? Would I need a timer system to toggle the collider?

        var waitTime : float = 1;
        var trigger : boolean = false;

        function Update () {
            if (!trigger) {
                collider.enabled = false;
                yield WaitForSeconds(waitTime);
            }
            if (trigger) {
                collider.enabled = true;
                yield WaitForSeconds(waitTime);
            }
        }

    Read the article

  • Rotation of ViewPlatform in Java3D

    - by user29163
    I have just started with Java3D programming. I thought I had built up some basic intuition about how the scene graph works, but something that should work does not. I made a simple program for rotating a pyramid around the y-axis, done just by adding a RotationInterpolator to the TransformGroup above the pyramid. Then I thought: can I now remove the RotationInterpolator from that TransformGroup and add it to the TransformGroup above my ViewPlatform leaf? This should work if I have understood how things work: adding the RotationInterpolator to this TransformGroup should make the children of this TransformGroup rotate, and the ViewPlatform is a child of the TransformGroup. Any ideas on where my reasoning is flawed? Here is the code for setting up the universe and the view branch group:

        import java.awt.*;
        import java.awt.event.*;
        import javax.media.j3d.*;
        import javax.vecmath.*;

        public class UniverseBuilder {
            // User-specified canvas
            Canvas3D canvas;

            // Scene graph elements to which the user may want access
            VirtualUniverse universe;
            Locale locale;
            TransformGroup vpTrans;
            View view;

            public UniverseBuilder(Canvas3D c) {
                this.canvas = c;

                // Establish a virtual universe that has a single hi-res Locale
                universe = new VirtualUniverse();
                locale = new Locale(universe);

                // Create a PhysicalBody and PhysicalEnvironment object
                PhysicalBody body = new PhysicalBody();
                PhysicalEnvironment environment = new PhysicalEnvironment();

                // Create a View and attach the Canvas3D and the physical
                // body and environment to the view.
                view = new View();
                view.addCanvas3D(c);
                view.setPhysicalBody(body);
                view.setPhysicalEnvironment(environment);

                // Create a BranchGroup node for the view platform
                BranchGroup vpRoot = new BranchGroup();

                // Create a ViewPlatform object, and its associated
                // TransformGroup object, and attach it to the root of the
                // subgraph. Attach the view to the view platform.
                Transform3D t = new Transform3D();
                Transform3D s = new Transform3D();
                t.set(new Vector3f(0.0f, 0.0f, 10.0f));
                t.rotX(-Math.PI/4);
                s.set(new Vector3f(0.0f, 0.0f, 10.0f)); // change values here to change the viewing position
                t.mul(s);
                ViewPlatform vp = new ViewPlatform();
                vpTrans = new TransformGroup(t);
                vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

                // Rotator stuff
                Transform3D yAxis = new Transform3D();
                //yAxis.rotY(Math.PI/2);
                Alpha rotationAlpha = new Alpha(-1, Alpha.INCREASING_ENABLE,
                    0, 0, 4000, 0, 0, 0, 0, 0);
                RotationInterpolator rotator = new RotationInterpolator(
                    rotationAlpha, vpTrans, yAxis, 0.0f, (float) Math.PI * 2.0f);
                RotationInterpolator rotator2 = new RotationInterpolator(
                    rotationAlpha, vpTrans);
                BoundingSphere bounds = new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 1000.0);
                rotator.setSchedulingBounds(bounds);
                vpTrans.addChild(rotator);
                vpTrans.addChild(vp);
                vpRoot.addChild(vpTrans);
                view.attachViewPlatform(vp);

                // Attach the branch graph to the universe, via the Locale.
                // The scene graph is now live!
                locale.addBranchGraph(vpRoot);
            }

            public void addBranchGraph(BranchGroup bg) {
                locale.addBranchGraph(bg);
            }
        }

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs to. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least up to the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • iOS Game that Runs Continuously in Background

    - by user2913669
    I'm trying to understand the most logical way of creating an iOS game that runs continuously in the background. For example, you have a tower and enemy waves. The game has endless enemy waves even when the game exits, and when you open the game again, it retrieves the data for what occurred while the app was closed. I assume a database on a server would be the best solution: the values continuously increment on the server, and the game connects to the server and retrieves the specific user's updated game data.
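
    If a server turns out to be overkill, a common client-only alternative (a sketch with hypothetical names; note it trusts the device clock, which a server-side database does not have to) is that nothing actually runs in the background: persist a timestamp when the app quits, and fast-forward the simulation by the elapsed time on the next launch.

        #include <chrono>
        #include <cstdint>

        struct SavedGame {
            int64_t lastSeenUnixSeconds;  // written when the app is closed
            int32_t wavesSurvived;
        };

        // On launch: advance the simulation by however long the app was away.
        void fastForward(SavedGame& save, int64_t secondsPerWave) {
            using namespace std::chrono;
            int64_t now = duration_cast<seconds>(
                system_clock::now().time_since_epoch()).count();
            int64_t elapsed = now - save.lastSeenUnixSeconds;
            save.wavesSurvived += static_cast<int32_t>(elapsed / secondsPerWave);
            save.lastSeenUnixSeconds = now;
        }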

    Read the article

  • OpenGL: drawing a textured model (OBJ) gives a black texture

    - by andrepcg
    I'm using OpenGL, GLEW, GLFW, and GLUT to create a simple game. I've been following some tutorials, and I now have a good model importer with textures (from ogldev.atspace.co.uk), but I'm having an issue with the model textures. I have a skybox with a beautiful texture, as you can see in the picture. That weird texture behind the helicopter (the model) is the helicopter's texture, which I've applied on purpose to that wall to demonstrate that this specific texture works, just not on the helicopter. I'll include the files I'm working on so you can check them out:

        Mesh.cpp - http://pastebin.com/pxDuKyQa
        Texture.cpp - http://pastebin.com/AByWjwL6
        Render function + skybox - http://pastebin.com/Vivc9qnT

    I'm just calling mesh->Render(); before the drawSkyBox function, in the render loop. Why is the heli black when I can perfectly apply its texture to another quad? I've debugged the code, and the mesh->Render() call is correctly fetching the texture number and passing it to the texture->Bind() function.

    Read the article
