Search Results

Search found 35343 results on 1414 pages for 'development tools'.

Page 529/1414

  • Google App Engine 1.3.1 <admin-console> Issue

    - by Taylor L
    I attempted to add an <admin-console> section to my appengine-web.xml and I got the exception below. The <admin-console> element is a valid element according to the appengine-web.xsd. It's also documented in the App Engine docs. Any ideas as to what is wrong?

        <admin-console>
            <page name="My Admin" url="/app/admin" />
        </admin-console>

        Feb 14, 2010 12:40:09 AM com.google.apphosting.utils.config.AppEngineWebXmlReader readAppEngineWebXml
        SEVERE: Received exception processing C:/development/taylor/myapp/target/myapp-web-0.0.1-SNAPSHOT\WEB-INF/appengine-web.xml
        com.google.apphosting.utils.config.AppEngineConfigException: Unrecognized element <admin-console>
            at com.google.apphosting.utils.config.AppEngineWebXmlProcessor.processSecondLevelNode(AppEngineWebXmlProcessor.java:99)
            at com.google.apphosting.utils.config.AppEngineWebXmlProcessor.processXml(AppEngineWebXmlProcessor.java:46)
            at com.google.apphosting.utils.config.AppEngineWebXmlReader.processXml(AppEngineWebXmlReader.java:94)
            at com.google.apphosting.utils.config.AppEngineWebXmlReader.readAppEngineWebXml(AppEngineWebXmlReader.java:61)
            at com.google.appengine.tools.admin.Application.<init>(Application.java:88)
            at com.google.appengine.tools.admin.Application.readApplication(Application.java:120)
            at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:107)
            at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:58)
            at com.google.appengine.tools.admin.AppCfg.main(AppCfg.java:54)
            at net.kindleit.gae.EngineGoalBase.runAppCfg(EngineGoalBase.java:140)
            at net.kindleit.gae.DeployGoal.execute(DeployGoal.java:38)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:579)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:498)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegmentForProject(DefaultLifecycleExecutor.java:265)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:191)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:149)
            at org.apache.maven.DefaultMaven.execute_aroundBody0(DefaultMaven.java:223)
            at org.apache.maven.DefaultMaven.execute_aroundBody1$advice(DefaultMaven.java:304)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:1)
            at org.apache.maven.embedder.MavenEmbedder.execute_aroundBody2(MavenEmbedder.java:904)
            at org.apache.maven.embedder.MavenEmbedder.execute_aroundBody3$advice(MavenEmbedder.java:304)
            at org.apache.maven.embedder.MavenEmbedder.execute(MavenEmbedder.java:1)
            at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:176)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:63)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
            at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
            at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:408)
            at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:351)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:31)

    Read the article

  • How to update entity states and animations in a component-based game

    - by mivic
    I'm trying to design a component-based entity system for learning purposes (and for later use in some games), and I'm having some trouble when it comes to updating entity states. I don't want to have an update() method inside the Component, to prevent dependencies between Components. What I currently have in mind is that components hold data and systems update components.

    So, if I have a simple 2D game with some entities (e.g. player, enemy1, enemy2) that have Transform, Movement, State, Animation and Rendering components, I think I should have:

    - A MovementSystem that moves all the Movement components and updates the State components.
    - A RenderSystem that updates the Animation components (the Animation component should have one animation, i.e. a set of frames/textures, for each state; updating it means selecting the animation corresponding to the current state, e.g. jumping, moving_left, etc., and updating the frame index). Then the RenderSystem updates the Render components with the texture corresponding to the current frame of each entity's Animation and renders everything on screen.

    I've seen some implementations like the Artemis framework, but I don't know how to solve this situation. Let's say that my game has the following entities, each with a set of states and one animation per state:

    - player: "idle", "moving_right", "jumping"
    - enemy1: "moving_up", "moving_down"
    - enemy2: "moving_left", "moving_right"

    What are the most accepted approaches to updating the current state of each entity? The only thing I can think of is having separate systems for each group of entities and separate State and Animation components, so I would have PlayerState, PlayerAnimation, Enemy1State, Enemy1Animation... PlayerMovementSystem, PlayerRenderingSystem... but I think this is a bad solution and breaks the purpose of having a component-based system. As you can see, I'm quite lost here, so I'd very much appreciate any help.
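    For what it's worth, one common arrangement is a single shared State component plus an Animation component that maps whichever states a given entity has to its clips, so one AnimationSystem can serve the player and both enemies. A minimal sketch in Java, with hypothetical names and string clip ids for brevity:

        import java.util.EnumMap;
        import java.util.Map;

        // Shared state/animation components: every entity uses the same two types,
        // and its Animation component only maps the states that entity actually has.
        enum EntityState { IDLE, MOVING_LEFT, MOVING_RIGHT, MOVING_UP, MOVING_DOWN, JUMPING }

        class StateComponent {
            EntityState current = EntityState.IDLE;
        }

        class AnimationComponent {
            final Map<EntityState, String> clips = new EnumMap<>(EntityState.class);
            String currentClip;
            int frameIndex;
        }

        class AnimationSystem {
            // Called once per frame for every entity that has both components.
            void update(StateComponent state, AnimationComponent anim) {
                String clip = anim.clips.get(state.current);
                if (clip != null && !clip.equals(anim.currentClip)) {
                    anim.currentClip = clip;   // the state changed: switch clips...
                    anim.frameIndex = 0;       // ...and restart the frame counter
                }
                anim.frameIndex++;             // frame timing left simplified here
            }
        }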

    Read the article

  • How can I make an infinite cave using stage3d?

    - by ifree
    I want to make an infinite cave in my 3D game using Flash Stage3D, but I have no idea how to build that cave. Can anyone give me a solution or hint?

    Update: I've tried an AGAL fragment shader like the square tunnel on Shadertoy. Code:

        var fragmentProgramCode:String = AGALUtils.build()
            .mov("ft0","v0")
            .div("ft1","ft0.xy","fc3.xy")
            .mul("ft2","fc6.x","ft1")
            .sub("ft3","ft2","fc5.x")                 // vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
            .mul("ft1","ft3.x","ft3.x")
            .mul("ft2","ft3.y","ft3.y")
            .pow("ft4","ft1","fc6.z")                 // float r = pow( pow(p.x*p.x,16.0) + pow(p.y*p.y,16.0), 1.0/32.0 );
            .pow("ft5","ft2","fc6.z")
            .add("ft1","ft4","ft5")
            .pow("ft4","ft1","fc6.w")
            .mov("ft5","fc5")                         // uv
            .sub("ft1","fc7.x","ft4")
            .add("ft5.x","fc7.x","ft1")               // uv.x = .5*time + 0.5/r;
            .mov("ft6","fc0")                         // for atan
            .atan2("ft5.y","ft3.y","ft3.x",new <String>["fc7.y","fc5.x","fc7.z","fc7.w","fc8.x","fc8.y","fc8.z","fc8.w","fc9.x","fc9.y"],"ft6")
            .tex("ft0","ft5","fs0","repeat","linear","nomip") // tex
            .mul("ft1","ft4","ft4")
            .mul("ft2","ft1","ft4")                   // r*r*r
            .mul("ft1","ft0.xyz","ft2")
            .mov("ft0.w","fc5.x")
            .mov("oc","ft1").toString()

    It can only apply one material, but my project requires different types of material (like floor and ceiling), so I created a 3D model. Is there any way to make that 3D model render like an "infinite cave"? Use AGAL to make each side of the cave's texture move? Thanks for your help.

    Read the article

  • Automatically zoom out the camera to show all players (XNA)

    - by user36159
    I am building a game in XNA that takes place in a rectangular arena. The game is multiplayer and each player may go where they like within the arena. The camera is a perspective camera that looks directly downwards, and it should be automatically repositioned based on the game state. Currently, the xy position is a weighted sum of the xy positions of important entities. I would like the camera's z position to be calculated from the xy coordinates so that it zooms out to the point where all important entities are visible. My current approach is:

    - hw = the greatest x distance from the camera to an important entity
    - hh = the greatest y distance from the camera to an important entity
    - z = max(hw / tan(FoVx), hh / tan(FoVy))

    My code seems to almost work as it should, but the resulting z values are always too low by a factor of about 4. Any ideas?
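    For reference, a minimal sketch (in Java, hypothetical names) of the fit-to-view geometry this calculation is based on: at distance d, a perspective camera with full field-of-view angle fov sees a half-extent of d * tan(fov / 2), so the distance needed to fit a half-extent h is h / tan(fov / 2), with the half angle inside the tangent:

        // Fit-to-view distance for a symmetric perspective frustum.
        class CameraFit {
            // Distance at which a half-extent h just fits a full view angle fovRadians:
            // visible half-extent at distance d is d * tan(fov / 2), so d = h / tan(fov / 2).
            static double fitDistance(double halfExtent, double fovRadians) {
                return halfExtent / Math.tan(fovRadians / 2.0);
            }

            // Take the larger of the two axes so everything fits in both directions.
            static double cameraZ(double hw, double hh, double fovX, double fovY) {
                return Math.max(fitDistance(hw, fovX), fitDistance(hh, fovY));
            }
        }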

    Read the article

  • Texturize a shape of multiple triangles in 2D

    - by Deukalion
    This is an example of a shape consisting of multiple points, triangles and, eventually, a shape: the red dots are Vector3 (X, Y, Z) or Vector2 (X, Y) values.

    If I have a texture of a certain size, how do I texture this area in the best way so that the texture inside the shape matches the shape and does not overlap anywhere? Ideally also with a way to scale the texture in case it's too small or too big for the shape, while still rendering correctly.

    Do I treat the shape as a rectangle and figure out its 4 corners? Or do I calculate the distance between Center - (Texture Width / 2) and each point, to see how "many" times the texture can fit on that axis and estimate at what coordinates the texture should be at that certain point? I've looked at texture mapping but haven't found any concrete examples that explain it well, and the 0.0-1.0 values for texture coordinates are also confusing.
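    One approach the question already hints at (treating the shape like a rectangle) is planar mapping over the shape's bounding box: each vertex gets a UV equal to its position normalized into that box, so the texture stretches once across the shape without overlapping. A minimal sketch, assuming simple float-array points rather than any particular framework's vector type, and a shape with nonzero width and height:

        // Planar ("bounding-box") mapping: project each vertex into the 0..1 texture
        // range using the shape's axis-aligned bounds. points[i] = {x, y}.
        class PlanarMapping {
            static float[][] planarUVs(float[][] points) {
                float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
                float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
                for (float[] p : points) {
                    minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
                    minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
                }
                float[][] uvs = new float[points.length][2];
                for (int i = 0; i < points.length; i++) {
                    uvs[i][0] = (points[i][0] - minX) / (maxX - minX);  // u in 0..1
                    uvs[i][1] = (points[i][1] - minY) / (maxY - minY);  // v in 0..1
                }
                return uvs;
            }
        }

    Scaling or tiling the texture then amounts to multiplying the resulting UVs by a repeat factor instead of leaving them in 0..1.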

    Read the article

  • OpenGL ES 2 on Android: native window

    - by ThreaderSlash
    According to the OGLES specification, we have the following definition:

        EGLSurface eglCreateWindowSurface(EGLDisplay display, EGLConfig config, NativeWindowType native_window, EGLint const * attrib_list)

    More details here: http://www.khronos.org/opengles/documentation/opengles1_0/html/eglCreateWindowSurface.html

    And also by definition:

        int32_t ANativeWindow_setBuffersGeometry(ANativeWindow* window, int32_t width, int32_t height, int32_t format);

    More details here: http://mobilepearls.com/labs/native-android-api

    I am running an Android native app on OGLES 2 and debugging it on a Samsung Nexus device. For setting up the 3D scene graph environment, the following variables are defined:

        struct android_app {
            ...
            ANativeWindow* window;
        };

        android_app* mApplication;
        ...
        mApplication = &pApplication;

    And to initialize the app, we run these commands:

        ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat);
        mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL);

    The funny thing is that ANativeWindow_setBuffersGeometry behaves as expected and works fine according to its definition, accepting all the parameters sent to it. But eglCreateWindowSurface does not accept the parameter mApplication->window, as it should according to its definition. Instead, it looks for the following input:

        EGLNativeWindowType hWnd;
        mSurface = eglCreateWindowSurface(mDisplay, lConfig, hWnd, NULL);

    As an alternative, I considered using:

        NativeWindowType hWnd = android_createDisplaySurface();

    But the debugger says:

        Function 'android_createDisplaySurface' could not be resolved

    Is 'android_createDisplaySurface' compatible only with OGLES 1 and not OGLES 2? Can someone tell me if there is a way to convert mApplication->window, in a way that the data from the android_app gets accepted as the window surface?

    Read the article

  • How to effectively gather info about how players play my HTML5 game?

    - by Bane
    I'm finishing another HTML5 game, and this time I'd like to do some spying business on the players... mostly just basic stuff: when they are playing, for how long, what upgrades they are buying the most, and so on. My first idea was just to collect this information during gameplay and then have a JavaScript function fire when they close the tab/browser, which would send it to my server via Socket.io. This, of course, wouldn't work, because anyone who takes a look at the code would realize it and could start sending a tonne of false info, which would mess up my statistics.

    Questions:

    - Is there a way to effectively do this?
    - If yes, what kind of info should I be looking for, aside from the stuff I already mentioned?

    Read the article

  • Using copyrighted sprites

    - by Zertalx
    I was thinking about making a Pac-Man clone. I know there is a similar question here, Using Copyrighted Images, and I know I can't use the original art from the game because it belongs to Namco. But if I design a character shaped like a circle with a slice cut out, it will look exactly like Pac-Man; maybe I should use green instead of yellow? Also, if the game plays like the original Pac-Man, is that wrong? I just want to make the game as a personal project and publish it on my site without getting in trouble.

    Read the article

  • How can I resolve component types in a way that supports adding new types relatively easily?

    - by John
    I am trying to build an Entity Component System for an interactive application developed using C++ and OpenGL. My question is quite simple. In my GameObject class I have a collection of Components, and I can add and retrieve components:

        class GameObject : public Object
        {
        public:
            GameObject(std::string objectName);
            ~GameObject(void);

            Component * AddComponent(std::string name);
            Component * AddComponent(Component componentType);
            Component * GetComponent(std::string TypeName);
            Component * GetComponent(<Component Type Here>);

        private:
            std::map<std::string, Component*> m_components;
        };

    I will have a collection of components that inherit from the base Component class. So if I have a MeshRenderer component, I would like to do the following:

        GameObject * warship = new GameObject("myLovelyWarship");
        MeshRenderer * meshRenderer = warship->AddComponent(MeshRenderer);

    or possibly:

        MeshRenderer * meshRenderer = warship->AddComponent("MeshRenderer");

    I could make a Component Factory like this:

        class ComponentFactory
        {
        public:
            static Component * CreateComponent(const std::string &compTyp)
            {
                if (compTyp == "MeshRenderer") return new MeshRenderer;
                if (compTyp == "Collider") return new Collider;
                return NULL;
            }
        };

    However, I feel like I should not have to keep updating the Component Factory every time I want to create a new custom Component, although it is an option. Is there a more proper way to add and retrieve these components? Are standard templates another solution?
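    The usual alternative to a string-keyed factory is a type-keyed registry, so adding a new component type needs no change to any central switch. A minimal sketch of that idea, written in Java for brevity (in C++ the same thing is typically done with templates and a type key); the MeshRenderer in the usage comment is hypothetical:

        import java.util.HashMap;
        import java.util.Map;

        // A type-keyed component registry: components are stored and looked up by
        // their class, so no central factory has to know every component type.
        class GameObject {
            private final Map<Class<?>, Object> components = new HashMap<>();

            <T> T addComponent(T component) {
                components.put(component.getClass(), component);  // keyed by concrete class
                return component;
            }

            <T> T getComponent(Class<T> type) {
                return type.cast(components.get(type));            // null if absent
            }
        }

        // Usage (hypothetical component type):
        //   MeshRenderer r = warship.addComponent(new MeshRenderer());
        //   MeshRenderer again = warship.getComponent(MeshRenderer.class);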

    Read the article

  • Arbitrary projection matrix from 6 arbitrary frustum planes

    - by Doub
    A projection matrix represents a transformation from the camera view space to the rendering system's clip space. In other words, it defines the transformation from a 6-sided frustum to the clip cube. glOrtho and glFrustum use only 6 parameters to define such a projection, but they impose several constraints on the frustum that gets projected to the clip cube: the near and far planes are parallel, the left and right planes intersect on a vertical line, and the top and bottom planes intersect on a horizontal line, both lines being parallel to the near and far planes.

    I'd like to lift these restrictions. So, from the definition of the 6 frustum side planes (in whatever representation you see fit), how can I compute a general projection matrix?

    Read the article

  • How should I generate and store the boundaries of a cave?

    - by Bob Roberts
    I am making a small cave copter game (seriously, where did this type of game come from anyway?) and I am trying to figure out how to make and store the procedurally generated walls.

    I am thinking about creating the walls by randomly picking two points away from the center of the screen at set intervals of distance. They would be no closer than the height of the helicopter and no further than the edge of the screen, weighted to prefer going in the same direction as the prior point, so I end up with stalactites and stalagmites and not just noise. To store them, perhaps parallel arrays/lists: one for the distance from the center to the top of the screen and one for the distance from the center to the bottom. Am I way off base with my thinking? I just want the cave to be varied and challenging; I have just never worked with generating data like this.

    Edit: Whoa, I just realized that my idea would let a player stay in the middle of the screen and win. That isn't right at all, so the very basis of how I was going to generate the walls is wrong.

    Edit 2: I also realized I left out a very crucial point. Part of the mechanics of the game will let the player go backwards, therefore the data structure should be continuous.
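    As a point of comparison, here is a minimal sketch (Java, all names and tuning constants hypothetical) that stores absolute ceiling and floor heights per column in growable lists: because the two walls wander independently in screen space, the centre of the screen is not automatically safe, and keeping every column around supports flying backwards:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        // Bounded random walk for the cave boundaries: one ceiling value and one
        // floor value per column, keeping a minimum gap so the helicopter fits.
        class CaveGenerator {
            static final float SCREEN_HEIGHT = 480f;
            static final float MIN_GAP = 120f;   // at least the helicopter's height
            static final float MAX_STEP = 20f;   // how far a wall may drift per column

            final List<Float> ceiling = new ArrayList<>();
            final List<Float> floor = new ArrayList<>();
            private final Random rand = new Random();

            void extend(int columns) {
                float c = ceiling.isEmpty() ? 100f : ceiling.get(ceiling.size() - 1);
                float f = floor.isEmpty() ? SCREEN_HEIGHT - 100f : floor.get(floor.size() - 1);
                for (int i = 0; i < columns; i++) {
                    c += (rand.nextFloat() * 2f - 1f) * MAX_STEP;       // drift the ceiling
                    f += (rand.nextFloat() * 2f - 1f) * MAX_STEP;       // drift the floor
                    c = Math.max(0f, Math.min(c, SCREEN_HEIGHT - MIN_GAP));
                    f = Math.max(c + MIN_GAP, Math.min(f, SCREEN_HEIGHT)); // keep the gap open
                    ceiling.add(c);
                    floor.add(f);
                }
            }
        }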

    Read the article

  • How do I get the child of a unique parent in ActionScript?

    - by Koen
    My question is about targeting a child with a unique parent. For example, let's say I have a box people can move called box_mc and 3 platforms it can jump on, called Platform_1, Platform_2 and Platform_3. All of these platforms have a child element called hit:

        Platform_1
            Hit
        Platform_2
            Hit
        Platform_3
            Hit

    I use an array and a for each statement to detect if box_mc hits one of the platforms' children:

        var obj_arr:Array = [Platform_1, Platform_2, Platform_3];
        for each (obj in obj_arr) {
            if (box_mc.hitTestObject(obj.hit)) {
                trace(obj + " " + obj.hit);
                box_mc.y = obj.hit.y - box_mc.height;
            }
        }

    obj seems to output the unique parent it is hitting, but obj.hit outputs hit, so my theory is that the change of y is being applied to all the children called hit on the stage. Would it be possible to only detect the child of that specific parent?

    Read the article

  • Get coordinates of arraylist

    - by opiop65
    Here's my map class:

        public class map {
            public static final int CLEAR = 0;
            public static final ArrayList<Integer> STONE = new ArrayList<Integer>();
            public static final int GRASS = 2;
            public static final int DIRT = 3;

            public static final int WIDTH = 32;
            public static final int HEIGHT = 24;
            public static final int TILE_SIZE = 25;

            // static int[][] map = new int[WIDTH][HEIGHT];
            ArrayList<ArrayList<Integer>> map = new ArrayList<ArrayList<Integer>>(WIDTH * HEIGHT);

            enum tiles {
                air, grass, stone, dirt
            }

            Image air, grass, stone, dirt;
            Random rand = new Random();

            public Map() {
                /* default map */
                /*for(int y = 0; y < WIDTH; y++){
                    map[y][y] = (rand.nextInt(2));
                    System.out.println(map[y][y]);
                }*/
                /*for (int y = 18; y < HEIGHT; y++) {
                    for (int x = 0; x < WIDTH; x++) {
                        map[x][y] = STONE;
                    }
                }
                for (int y = 18; y < 19; y++) {
                    for (int x = 0; x < WIDTH; x++) {
                        map[x][y] = GRASS;
                    }
                }
                for (int y = 19; y < 20; y++) {
                    for (int x = 0; x < WIDTH; x++) {
                        map[x][y] = DIRT;
                    }
                }*/
                for (int y = 0; y < HEIGHT; y++) {
                    for (int x = 0; x < WIDTH; x++) {
                        map.set(x * WIDTH + y, STONE);
                    }
                }
                try {
                    init(null, null);
                } catch (SlickException e) {
                    e.printStackTrace();
                }
                render(null, null, null);
            }

            public void init(GameContainer gc, StateBasedGame sbg) throws SlickException {
                air = new Image("res/air.png");
                grass = new Image("res/grass.png");
                stone = new Image("res/stone.png");
                dirt = new Image("res/dirt.png");
            }

            public void render(GameContainer gc, StateBasedGame sbg, Graphics g) {
                for (int x = 0; x < WIDTH; x++) {
                    for (int y = 0; y < HEIGHT; y++) {
                        switch (map.get(x * WIDTH + y)) {
                            case CLEAR:
                                air.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE);
                                break;
                            case STONE:
                                stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE);
                                break;
                            case GRASS:
                                grass.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE);
                                break;
                            case DIRT:
                                dirt.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE);
                                break;
                        }
                    }
                }
            }

            public static boolean blocked(float x, float y) {
                return map[(int) x][(int) y] == STONE;
            }

            public static Rectangle blockBounds(int x, int y) {
                return (new Rectangle(x, y, TILE_SIZE, TILE_SIZE));
            }
        }

    Specifically I am looking at this:

        for (int x = 0; x < WIDTH; x++) {
            for (int y = 0; y < HEIGHT; y++) {
                switch (map.get(x * WIDTH + y).intValue()) {
                    case CLEAR:
                        air.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE);
                        break;
                    case STONE:
                        stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE);
                        break;
                    case GRASS:
                        grass.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE);
                        break;
                    case DIRT:
                        dirt.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE);
                        break;
                }
            }
        }

    How can I access the coordinates of my ArrayList map and then draw the tiles to the screen? Thanks!
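    Not the class above, just a minimal sketch of the indexing idea (hypothetical names): treat a 1-D list as a row-major grid with index = y * WIDTH + x, and fill it with add before calling get or set, since the ArrayList constructor argument is only a capacity hint and the list starts empty:

        import java.util.ArrayList;
        import java.util.List;

        // A 1-D list used as a row-major 2-D grid: index = y * WIDTH + x, which
        // stays below WIDTH * HEIGHT for every x in [0, WIDTH) and y in [0, HEIGHT).
        class TileGrid {
            static final int WIDTH = 32, HEIGHT = 24;
            final List<Integer> tiles = new ArrayList<>(WIDTH * HEIGHT);

            TileGrid(int fillValue) {
                for (int i = 0; i < WIDTH * HEIGHT; i++) {
                    tiles.add(fillValue);   // must add first; capacity is not size
                }
            }

            int get(int x, int y)            { return tiles.get(y * WIDTH + x); }
            void set(int x, int y, int tile) { tiles.set(y * WIDTH + x, tile); }
        }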

    Read the article

  • Event Driven Communication in Game Engine - Yes or No?

    - by Bunkai.Satori
    As I am reading the book Game Coding Complete (http://www.amazon.com/Game-Coding-Complete-Third-McShaffry/dp/1584506806/ref=sr_1_1?ie=UTF8&qid=1295978774&sr=8-1), the author recommends event-driven communication among all game objects and modules. Basically, all the living game actors and objects should communicate with the key modules (Physics, AI, Game Logic, Game View, etc.) via an internal event messaging system. This would mean designing an efficient event manager as well.

    My question is whether this is a proven and recommended approach. If it is not properly designed, it might mean consuming a lot of CPU cycles which could be used elsewhere. This is especially true if the game is targeted at a mobile platform. What is your opinion and recommendation, please?
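    For concreteness, a minimal sketch of the kind of event manager being described, in Java with hypothetical event and listener names; a production version would usually queue events and drain them at a fixed point in the frame so the per-frame CPU cost stays predictable:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.function.Consumer;

        // A type-keyed event manager: modules subscribe to an event class, senders
        // fire instances of it, and no module holds a direct reference to another.
        class EventManager {
            private final Map<Class<?>, List<Consumer<Object>>> listeners = new HashMap<>();

            <T> void subscribe(Class<T> type, Consumer<T> listener) {
                listeners.computeIfAbsent(type, k -> new ArrayList<>())
                         .add(e -> listener.accept(type.cast(e)));
            }

            void fire(Object event) {
                List<Consumer<Object>> l = listeners.get(event.getClass());
                if (l != null) {
                    for (Consumer<Object> c : l) c.accept(event);
                }
            }
        }

        // Usage (hypothetical types):
        //   events.subscribe(CollisionEvent.class, physics::onCollision);
        //   events.fire(new CollisionEvent(a, b));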

    Read the article

  • Are there any good html 5 mmo design tutorials? [on hold]

    - by Dwight Spencer
    Hey all. I got rather inspired after playing Gaia Online's zOMG and wanted to revive an old project idea I've had lying around for a few years now. I'm looking to work with HTML5 (i.e. canvas, SVG-based sprites, and WebGL) to build a graphical web-based MUD/MMO. Obviously, this is a new take on an old idea, and after searching Google I haven't really turned up many good resources. Does anyone have any tutorials or other resources to point me in the right direction?

    Read the article

  • Deferred Rendering With Diffuse,Specular, and Normal maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model (which can be found here) as my sandbox. Note I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp and I extract all geometry information, including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file:

    - Ambient/Diffuse/Specular/Emissive reflectivity coefficients (Ka, Kd, Ks, Ke)
    - Shininess
    - Diffuse map
    - Specular map
    - Normal map

    I understand that I must render vertex attributes such as position, normals and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but don't you require the diffuse, specular and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents and normals into g-buffers, then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.

    Read the article

  • Collision detection code style

    - by Marian Ivanov
    Not only are there two useful broad-phase algorithms and a lot of useful narrow-phase algorithms, there are also multiple code styles.

    Arrays vs. calling

    A1. Make an array of broadphase checks, then filter them with narrowphase checks, then resolve them:

        function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
            possibleCollisions = getPossibleCollisions(b, a->get(index));
            for(i = 0; i < possibleCollisionsNumber; i++){
                if(narrowphase(possibleCollisions[i], a[index])) {
                    collisions->push(possibleCollisions[i]);
                };
            };
            for(i = 0; i < collisionsNumber; i++){
                //CODE FOR RESOLUTION
            };
        };

    A2. Make the broadphase call the narrowphase, and the narrowphase call the resolution:

        function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
            broadphase(b, a->get(index));
        };

        function broadphase(thingy * with, thingy * what){
            while(blah){
                //blahcode
                narrowphase(what, collidingThing);
            };
        };

    Events vs. in-the-loop

    B1. Fire an event. This abstracts the check away, but it's trickier to make an equal interaction:

        a[index] -> collisionEvent(eventdata);
        //much later
        int collisionEvent(eventdata){
            //resolution gets here
        }

    B2. Resolve the collision inside the loop. This glues narrowphase and resolution into one layer:

        if(narrowphase(possibleCollisions[i], a[index])) {
            //CODE GOES HERE
        };

    The questions are: which of the first two is better, and how am I supposed to make a zero-sum Newtonian interaction under B1?

    Read the article

  • What happened to .fx files in D3D11?

    - by bobobobo
    It seems they completely ruined .fx file loading / parsing in D3D11. In D3D9, loading an entire effect file was D3DXCreateEffectFromFile( .. ), and you got an ID3DXEffect9, which had great methods like SetTechnique and BeginPass, making it easy to load and execute a shader with multiple techniques. Is this completely manual now in D3D11? The highest-level functionality I can find is loading a SINGLE shader from an FX file using D3DX11CompileFromFile. Does anyone know if there's an easier way to load FX files and choose a technique? With the level of functionality provided in D3D11 now, it seems like you're better off just writing .hlsl files and forgetting about the whole idea of techniques.

    Read the article

  • 2D shader to draw representation of rotating sphere.

    - by TheBigO
    I want to display a 3D textured sphere and then rotate it in one direction. The direction will never change, and the camera will never move. One way is to actually create a spherical mesh, map a texture to it, rotate the sphere, and render in 3D. My question is: is there a way to display a 2D circle that looks like a rotating sphere, with just a 2D shader? In other words, can someone think of a trick, like mapping a texture to the circle in a particular way, to give the appearance of an in-place rotating sphere that is always viewed from the side? I don't need exact shader code, I'm just looking for the right idea.

    Read the article

  • Game engine design: Multiplayer and listen servers

    - by jarx
    My game engine right now consists of a working singleplayer part. I'm now starting to think about how to do the multiplayer part. I have found out that many games actually don't have a real singleplayer mode; when playing alone you are actually hosting a local server as well, and almost everything runs as if you were in multiplayer (except that the data packets can be passed over an alternate route for better performance). My engine would need major refactoring to adapt to this model. There would be three possible modes: Dedicated client, Dedicated server and Client-Server (listen mode).

    - How often is the listen-server model used in the gaming industry?
    - What are the (dis)advantages of it?
    - What other options do I have?

    Read the article

  • OpenGL Learning Material (that's up to date)

    - by Sauron
    So I'm sure there are topics on this, but a lot of them list older material. And the last book, http://www.amazon.com/OpenGL-SuperBible-Comprehensive-Tutorial-Reference/dp/0321712617/ref=sr_1_1?ie=UTF8&qid=1346116133&sr=8-1&keywords=opengl, REALLY REALLY disappointed me. I DO NOT want to use someone else's library to learn this stuff; that bothers me SOOO much. So I was hoping there was a newer book that goes into detail and doesn't use some sort of library "hiding" everything from you. Or should I just look at older material? If so, anything that's not "too" out of date. Terrain tutorials are a plus (that's kind of my "goal"). Thanks

    Read the article

  • Changing State in PlayerController from PlayerInput

    - by Jeremy Talus
    In my player input I want to change the "State" of my player controller, but I'm having some trouble doing it. My player input is declared like this:

        class ResistancePlayerInput extends PlayerInput within ResistancePlayerController
            config(ResistancePlayerInput);

    and in my player controller I have this:

        class ResistancePlayerController extends GamePlayerController;

        var name PreviousState;

        DefaultProperties
        {
            CameraClass = class 'ResistanceCamera' //Telling the player controller to use your custom camera script
            InputClass = class'ResistanceGame.ResistancePlayerInput'
            DefaultFOV = 90.f //Telling the player controller what the default field of view (FOV) should be
        }

        simulated event PostBeginPlay()
        {
            Super.PostBeginPlay();
        }

        auto state Walking
        {
            event BeginState(name PreviousStateName)
            {
                Pawn.GroundSpeed = 200;
                `log("Player Walking");
            }
        }

        state Running extends Walking
        {
            event BeginState(name PreviousStateName)
            {
                Pawn.GroundSpeed = 350;
                `log("Player Running");
            }
        }

        state Sprinting extends Walking
        {
            event BeginState(name PreviousStateName)
            {
                Pawn.GroundSpeed = 800;
                `log("Player Sprinting");
            }
        }

    I have tried to use PCOwner.GotoState(); and ResistancePlayerController(PCOwner).GotoState();, but they won't work. I have also tried a simple GotoState, and nothing happens. How can I call GotoState for the PC class from my player input?

    Read the article

  • Direct2D Transform

    - by James
    I have a beginner question about Direct2D transforms. I have a 20 x 10 bitmap that I would like to draw in different orientations. To start, I would like to draw it vertically, with a destination rectangle of, say, (left, top, right, bottom) = (300, 300, 310, 320). The bitmap is wider than it is tall (20 x 10), but when I draw it vertically, it will appear taller than it is wide (10 x 20).

    I know that I can use a rotation matrix like so:

        m_pRenderTarget->SetTransform(
            D2D1::Matrix3x2F::Rotation(
                90.0f,
                D2D1::Point2F(<center of shape>))
        );

    But when I use this method to rotate my shape, the destination rectangle is still wider than it is tall. Maybe it would look something like this: (left, top, right, bottom) = (280, 290, 300, 300). The destination rectangle is 20 x 10 but the bitmap appears on the screen as 10 x 20, and I can't look at the destination rectangle in the debugger and compare it to (left, top, right, bottom) = (300, 300, 310, 320).

    Is there any simple way to say "I want to rotate it so that the image is rendered to exactly this destination rectangle after the rotation"? In this case, I would like to say "Please rotate the bitmap so that it appears on the screen at this location: (left, top, right, bottom) = (300, 300, 310, 320)". If I can't do that, is there any way to find out the 10 x 20 destination rectangle where the bitmap is actually being rendered to the screen?

    Read the article

  • Matrix.CreateBillboard centre rotation problem

    - by Chris88
    I'm having an issue with Matrix.CreateBillboard and a textured quad, where the center axis seems to be positioned incorrectly relative to the quad object, which is rotating around a center point.

    I'm using a BasicEffect quadEffect; and drawing the quad shape like this:

        Left = Vector3.Cross(Normal, Up);
        Vector3 uppercenter = (Up * height / 2) + origin;
        LowerLeft = uppercenter + (Left * width / 2);
        LowerRight = uppercenter - (Left * width / 2);
        UpperLeft = LowerLeft - (Up * height);
        UpperRight = LowerRight - (Up * height);

    where height and width are float values passed in (it draws a square).

    Draw method:

        quadEffect.View = camera.view;
        quadEffect.Projection = camera.projection;
        quadEffect.World = Matrix.CreateBillboard(Origin, camera.cameraPosition, Vector3.Up, camera.cameraDirection);

        GraphicsDevice.BlendState = BlendState.Additive;
        foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
                PrimitiveType.TriangleList, Vertices, 0, 4, Indexes, 0, 2);
        }
        GraphicsDevice.BlendState = BlendState.Opaque;

    In the screenshots below I draw the image at Vector3(32f, 0f, 32f). They show the position of the quad in relation to the red cross, which marks where it should be drawn:

        http://i.imgur.com/YwRYj.jpg
        http://i.imgur.com/ZtoHL.jpg

    It rotates around the red cross position.

    Read the article

  • How do display a "mucus spreading" effect in a 2D environment?

    - by nathan
    Here is an example of such mucus spreading: the substance is spread around the source (in this example, the source would be the main alien building). The game is StarCraft, and the purple substance is called creep. How would this kind of substance spreading be achieved in a top-down 2D environment? By recalculating the substance's progression and regenerating the effect on the fly each frame, by using a large collection of tiles, or something else?
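    For what it's worth, one common tile-based take on this, sketched minimally in Java with hypothetical grid sizes: keep the creep in a boolean grid that grows outward from the source and let the renderer draw a creep tile (or an edge variant) per covered cell, so nothing is recomputed each frame; to animate the spread, expand by one ring per game tick instead of all at once:

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Tile-based spreading: creep expands from the source into orthogonal
        // neighbours up to a maximum radius, marking covered cells in a grid.
        class CreepGrid {
            static final int W = 64, H = 64;
            final boolean[][] creep = new boolean[W][H];

            void spreadFrom(int sourceX, int sourceY, int maxRadius) {
                Queue<int[]> frontier = new ArrayDeque<>();   // {x, y, distance}
                frontier.add(new int[] { sourceX, sourceY, 0 });
                creep[sourceX][sourceY] = true;
                int[][] dirs = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
                while (!frontier.isEmpty()) {
                    int[] t = frontier.poll();
                    if (t[2] >= maxRadius) continue;          // stop at the spread limit
                    for (int[] d : dirs) {
                        int nx = t[0] + d[0], ny = t[1] + d[1];
                        if (nx >= 0 && ny >= 0 && nx < W && ny < H && !creep[nx][ny]) {
                            creep[nx][ny] = true;
                            frontier.add(new int[] { nx, ny, t[2] + 1 });
                        }
                    }
                }
            }
        }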

    Read the article
