Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.

Page 606/1295

  • Changing the rendering resolution while maintaining the design layout

    - by Coyote
    I would like to increase the FPS of my project. Currently I would like to try reducing the resolution at which the scenes are rendered: let's say I never want to draw more than 1280*720, whatever the real resolution is. How should I proceed? I tried pEGLView->setFrameSize(1280, 720); but that only reduces the displayed size of the frame on screen (boxing). In my activity I tried setting the size of the "surface", but this seems to completely break the layout (as defined by setDesignResolutionSize):
        @Override
        public Cocos2dxGLSurfaceView onCreateView() {
            Cocos2dxGLSurfaceView surfaceView = new Cocos2dxGLSurfaceView(this);
            surfaceView.getHolder().setFixedSize(1280, 720);
            return surfaceView;
        }
    Is there a way to simply change the rendered
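
    One general way to render at a fixed internal resolution, independent of cocos2d: draw the scene into an offscreen framebuffer of the desired size, then blit it, scaled, to the window. The sketch below is desktop OpenGL 3.0-style code, not cocos2d- or Android-specific; drawScene() and the window dimensions are hypothetical placeholders.

        // Sketch: render at a fixed 1280x720, then scale up to the real window size.
        #include <GL/glew.h>

        void drawScene();                 // hypothetical: your existing scene rendering

        GLuint fbo = 0, colorTex = 0;

        void createOffscreenTarget(int w, int h) {
            glGenTextures(1, &colorTex);
            glBindTexture(GL_TEXTURE_2D, colorTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

        void renderFrame(int windowWidth, int windowHeight) {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glViewport(0, 0, 1280, 720);          // reduced internal resolution
            drawScene();

            glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
            glBlitFramebuffer(0, 0, 1280, 720,
                              0, 0, windowWidth, windowHeight,
                              GL_COLOR_BUFFER_BIT, GL_LINEAR);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    On OpenGL ES 2.0 devices (which lack glBlitFramebuffer) the equivalent is to draw the offscreen texture onto a fullscreen quad.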

    Read the article

  • box2d and constant movement

    - by Arnas
    I'm developing a game with a top-down view; the player's body is a circle. To move the character you need to tap on the screen and it moves to that spot. To achieve this I'm saving the coordinate of the touch and calling a method every frame which applies a linear velocity to the body, with a vector pointing in the direction the body should go:
        _body->SetLinearVelocity(b2Vec2((a.x - currPos.x)/SPEED_RATIO,(size.height - a.y - currPos.y)/SPEED_RATIO));
        // click position - current position, screen height - click position (since the y axis is flipped, (0,0) is in the bottom left) - current position = vector of the direction we want to go
    Now the problem with this is that the body slows down until it finally stops when getting closer to the point we want it to go to, since the closer we are to that point, the smaller the length of the vector gets. Besides that, I've read that it's bad practice to set linear velocity in Box2D and that I should apply forces instead, but that way the forces would add up and overshoot the target where it's supposed to stop. So what I'm asking is how to move a Box2D body to a coordinate at constant speed.
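
    A common fix, sketched here against the standard C++ Box2D API: normalize the direction vector so its length no longer depends on the distance to the target, scale it by the desired speed, and zero the velocity inside a small arrival radius. The target point, speed and radius values are assumptions.

        // Move a body toward a target point (world coordinates) at constant speed.
        #include <Box2D/Box2D.h>

        void seekTarget(b2Body* body, const b2Vec2& target, float speed) {
            b2Vec2 toTarget = target - body->GetPosition();
            float distance = toTarget.Length();

            const float arriveRadius = 0.1f;           // close enough: stop
            if (distance < arriveRadius) {
                body->SetLinearVelocity(b2Vec2(0.0f, 0.0f));
                return;
            }

            toTarget *= 1.0f / distance;               // normalize
            body->SetLinearVelocity(speed * toTarget); // constant speed toward the target
        }

    If you prefer forces so that collisions still look right, the usual variant is to apply a force (or impulse) proportional to the difference between this desired velocity and the body's current velocity, which also avoids the overshooting mentioned above.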

    Read the article

  • Are VM-based languages becoming viable for Graphics since the move to GPU computing?

    - by skiwi
    Perhaps the title is not the clearest, so let me elaborate: I am talking about VM-based languages, by which I mean languages that run on the JVM (Java) or, for example, C#. Also, I am talking about 3D graphics, just to be clear. Lately the trend has been that most computing is done on the GPU rather than the CPU, and for a long time the issue with programming games in a VM-based language has been that garbage collection may happen at unpredictable moments. So let's take a look at which part is responsible for what:
    Showing the graphics: GPU.
    Uploading graphics to the GPU: CPU? Does this need to be done every frame?
    Calculating physics constraints: GPU.
    Doing the real game logic (determining when to move objects, independent of physics calculations; processing AI): CPU.
    Is my list actually correct? And if it is, is, for example, Java becoming more viable? Or is uploading the graphics (vertices) still the most expensive operation? I would like to get more insight into this.

    Read the article

  • Most efficient way to generate 2D portraits

    - by user1221
    Hey, I am not sure if this is a fitting question for gamedev, or if it is too art-related. I am currently trying to create 2D character portraits for my game. At first I tried to draw them, and even though it helped polish my drawing skills, the end result either required way too much time or simply looked like it was created by a grade-school kid. So I am currently looking into tools from which people like me, who are not from the art world, might benefit, especially tools which can create a 3D head plus hair so that I can render them. I have tried several 3D generation tools such as makehead and makehuman to create the basic head shape, but I have to admit I am not well versed in what other options are available, what has the best quality, etc.

    Read the article

  • Continuous Integration, what are the strategies to manage binary content?

    - by sebas
    Currently we are testing various configurations between feature branching and CI with feature toggling. I can see there are several viable options out there for the code, but I also know that CI relies entirely on the ability to merge. So I wonder: how do you manage CI with binary data, like art assets? I can also see another problem: all the code can be tested before committing, and I can even validate the data before committing, but how can I test the art?! Should I use another methodology for art content?

    Read the article

  • Enabling and Disabling Colliders Unity

    - by Blue
    I'm trying to make the collider appear every 1 second, but I can't get the code right. I tried enabling the collider under a boolean and putting in a yield to make it happen every second or so, but it's not working (it gives me an error: "Update() can not be a coroutine."). How would I fix this? Would I need a timer system that sets the collider to be enabled every 'x' seconds and disabled every 'y' seconds?
        var waitTime : float = 1;
        var trigger : boolean = false;

        function Update () {
            if(!trigger){
                collider.enabled = false;
                yield WaitForSeconds(waitTime);
            }
            if(trigger){
                collider.enabled = true;
                yield WaitForSeconds(waitTime);
            }
        }
        }

    Read the article

  • Vertex data split into separate buffers or one structure?

    - by kiba2
    Is it better to have all vertex data in one structure, like this:
        class MyVertex {
            int x, y, z;
            int u, v;
            int normalx, normaly, normalz;
        }
    or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because the components are the same for each instance of a shared vertex, and that seems to be true for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One case where this doesn't seem to work is other kinds of meshes, say a cube, where the texture coordinates of each face may be the same but end up differing where vertices are shared. Does everybody normally keep them separate? Won't this make them less space-efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) and non-indexed buffers in the same VBO?
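
    For reference, the single-struct approach corresponds to an interleaved buffer on the GPU side: one VBO holding the array of structs, with each attribute described by a stride and an offset. A minimal sketch using the generic vertex-attribute API; the attribute locations 0-2 are assumptions that must match the shader.

        // Interleaved vertex layout: position, texcoord and normal share one buffer.
        #include <GL/glew.h>
        #include <cstddef>   // offsetof

        struct Vertex {
            float x, y, z;      // position
            float u, v;         // texture coordinates
            float nx, ny, nz;   // normal
        };

        void setupInterleavedAttributes(GLuint vbo) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            const GLsizei stride = sizeof(Vertex);

            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, x));   // position
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, u));   // texcoord
            glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, nx));  // normal

            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glEnableVertexAttribArray(2);
        }

    As for the last question: an index buffer indexes whole vertices, so OpenGL cannot index positions and texture coordinates independently; a cube that needs different texture coordinates on a shared corner simply duplicates that vertex.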

    Read the article

  • Alternatives to voxel-based terrain

    - by Neomex
    Are there any alternatives to voxel-based terrain? Such terrain should be fully destructible, allow for arches and overhangs, preserve sharp features where needed, and keep consistent topology.
    "Maybe you can explain the problem that makes you ask this question? Voxel-based terrain is basically just using a 3D grid of data to store data. There are lots of ways to render that data, but it doesn't get much simpler for storing it." – Byte56
    Current isosurface extraction methods aren't the most effective or bug-free. Cubical Marching Squares seems to solve most of the issues; however, it is a relatively new method and there aren't many resources about it (I've found a single university paper). Even if we stick to CMS, when we want to add multi-material support we can either divide the surface into multiple meshes, or pass a texture array or texture atlas to the shaders; then we are limited to a set number of textures and additionally increase memory usage a lot.

    Read the article

  • How to use a mask texture with Kobold2D

    - by alex
    I am an iOS developer, but I'm new to cocos2d. I'm working on a new game; I use Kobold2D and have cocos2d installed too, and I want to create this effect: I know how it's done with Flash, but I can't make it work with Kobold2D. There are two images of the same size: one is a low-res image for the background, and the second is a hi-res image over the first one. When the "reticle" mask moves, it reveals the second image inside the circle, and the background is visible outside only. I googled with no success and saw some Ray Wenderlich projects, but they weren't helpful.

    Read the article

  • How to prioritize related game entity components?

    - by Paul Manta
    I want to make a game where you have to run over a bunch of zombies with your car. When moving around, the zombies have a few things to take into consideration: When there's no player around they might just roam about randomly, and even when some other component dictates a specific direction, they should wobble to the left and right randomly (like drunk people), which implies a small random deviation in their movement. They should avoid static obstacles: when they see they are headed towards a wall, they should reorient themselves. They should avoid the car: they should try to predict where the car will be based on its velocity and try to move out of the way. When they can, they should try to get near the player. All these types of decisions seem like they should be implemented in different components, but how should I manage them? How can I give different components different weights that reflect the importance of each decision in a given situation? I would need some other component that acts as a manager, but do you have any tips on how I should implement it? Or maybe there's a better solution?...
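
    One common option (not the only one): give each behavior a tiny interface that returns a desired steering vector, and let a manager combine them as a weighted sum, where the weights are adjusted per situation. A minimal C++ sketch; the Vec2 type and the concrete behaviors are placeholders.

        // Weighted combination of steering behaviors (wander, avoid walls, avoid car, seek player).
        #include <memory>
        #include <vector>

        struct Vec2 { float x = 0.0f, y = 0.0f; };
        inline Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
        inline Vec2 operator*(float s, Vec2 v) { return {s * v.x, s * v.y}; }

        struct SteeringBehavior {
            virtual ~SteeringBehavior() = default;
            virtual Vec2 steer() const = 0;        // desired direction, roughly unit length
        };

        class SteeringManager {
        public:
            void add(std::unique_ptr<SteeringBehavior> behavior, float weight) {
                entries_.push_back({std::move(behavior), weight});
            }

            // Weights can be changed at runtime, e.g. raise the avoid-car weight when the car is near.
            Vec2 combined() const {
                Vec2 total;
                for (const auto& e : entries_)
                    total = total + e.weight * e.behavior->steer();
                return total;                      // normalize/clamp before applying to the zombie
            }

        private:
            struct Entry { std::unique_ptr<SteeringBehavior> behavior; float weight; };
            std::vector<Entry> entries_;
        };

    Hard priorities (e.g. "always dodge the car first") can be layered on top by letting a high-priority behavior short-circuit the sum when it returns a non-zero vector.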

    Read the article

  • How do I implement powerups for my Breakout clone?

    - by Eva
    I'm making a simple Breakout clone in Python that will have very many powerups/powerdowns (so far I've come up with 26). Some will affect the paddle (paddle missile, two paddles, short paddle, etc.), some will affect the ball (slow ball, destructo-ball, invisible ball, etc.), some will affect the bricks (brick scramble, move up, bricks indestructible, etc.), and some will affect other game aspects (extra life, more points, fewer points, etc.). I'm pretty sure I have the code for drawing the falling powerups and testing for collisions with the paddle down, but I'm confused about how to code the effects of the powerups. Since there are so many powerups, it seemed inefficient to add specific methods to each component as done in this tutorial, but I can't think of another way to implement them. I found a page that hints at some way to design powerup behavior using classes, but I'm at a loss for how to do that. (A short example would help.) Please give me a short code example of another way to implement the effects of the powerups.
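
    One frequently used pattern (sketched here in C++, though it translates one-to-one to Python classes): make every powerup an object with apply/expire hooks that act on a shared game-state object, and keep a list of currently active effects that is updated each frame. The GameState fields and the example effect are assumptions, not part of the question's code.

        #include <memory>
        #include <vector>

        struct GameState {               // hypothetical container for paddle/ball/bricks
            float paddleWidth = 100.0f;
            float ballSpeed   = 250.0f;
            int   lives       = 3;
        };

        class Powerup {
        public:
            explicit Powerup(float duration) : remaining_(duration) {}
            virtual ~Powerup() = default;
            virtual void apply(GameState& s)  = 0;   // called once when collected
            virtual void expire(GameState& s) = 0;   // called once when the effect ends

            bool update(GameState& s, float dt) {    // returns false when finished
                remaining_ -= dt;
                if (remaining_ <= 0.0f) { expire(s); return false; }
                return true;
            }
        private:
            float remaining_;
        };

        class ShortPaddle : public Powerup {
        public:
            ShortPaddle() : Powerup(10.0f) {}
            void apply(GameState& s) override  { s.paddleWidth *= 0.5f; }
            void expire(GameState& s) override { s.paddleWidth *= 2.0f; }
        };

        // Each frame: update active effects and drop the ones that have expired.
        void updatePowerups(std::vector<std::unique_ptr<Powerup>>& active,
                            GameState& state, float dt) {
            for (auto it = active.begin(); it != active.end(); ) {
                if ((*it)->update(state, dt)) ++it;
                else it = active.erase(it);
            }
        }

    Instant effects (extra life, points) can simply do their work in apply() and report themselves finished immediately.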

    Read the article

  • Game State / Screen Management

    - by Ashylnn Mac
    What's the best way to handle game states/screens? My problem is this: PlayGameScreen adds a new InventoryGameScreen to the game during its update. This immediately adds InventoryGameScreen to the array of GameScreens, which throws an exception while iterating over the array, because the contents of the array have changed. Should I have two more arrays, like screensToBeAdded and screensToBeRemoved, and do all the processing for them at the end of the game loop, after drawing all the other screens?
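
    The two-extra-arrays idea is a standard fix: queue additions and removals while iterating, and apply them once the update pass is finished. A minimal sketch, assuming a simple GameScreen base class; the names are placeholders.

        #include <algorithm>
        #include <memory>
        #include <vector>

        struct GameScreen {                       // hypothetical base class
            virtual ~GameScreen() = default;
            virtual void update(float dt) = 0;
            virtual void draw() = 0;
        };

        class ScreenManager {
        public:
            // Safe to call from inside a screen's update().
            void add(std::unique_ptr<GameScreen> screen) { toAdd_.push_back(std::move(screen)); }
            void remove(GameScreen* screen)              { toRemove_.push_back(screen); }

            void update(float dt) {
                for (auto& s : screens_) s->update(dt);   // the vector is not mutated here
                applyPending();                           // mutations happen after iteration
            }

            void draw() { for (auto& s : screens_) s->draw(); }

        private:
            void applyPending() {
                for (auto& s : toAdd_) screens_.push_back(std::move(s));
                toAdd_.clear();
                for (GameScreen* dead : toRemove_) {
                    screens_.erase(std::remove_if(screens_.begin(), screens_.end(),
                                   [dead](const std::unique_ptr<GameScreen>& p) { return p.get() == dead; }),
                                   screens_.end());
                }
                toRemove_.clear();
            }

            std::vector<std::unique_ptr<GameScreen>> screens_;
            std::vector<std::unique_ptr<GameScreen>> toAdd_;
            std::vector<GameScreen*> toRemove_;
        };

    Whether the pending lists are applied right after update or at the very end of the frame (after drawing) mostly depends on whether a newly added screen should be drawn in the same frame it was added.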

    Read the article

  • XNA 4 game for both profiles

    - by Vodácek
    I am writing a game in XNA 4, and this version has two profiles, HiDef and Reach. My problem is that I need my game code to work for each of these profiles, and it is very uncomfortable to have two projects and make every change in both of them. My idea was to use preprocessor directives (I am not sure about the name of this: http://msdn.microsoft.com/en-us/library/ed8yd1ha%28v=vs.71%29.aspx) and use an IF statement at the places that have problems with the profile. The only problem is that the program needs to be compiled twice (once for each profile), and the directive and project settings have to be changed to the other profile manually. My questions are: Is that a good way? Is there a better and cleaner way to do this?

    Read the article

  • Question about creating a sprite based 2-D Side Scroller with scaling/zooming

    - by Arthur
    I'm just wondering if anyone can offer advice on how best to go about creating a 2D game with zooming/scaling features akin to the early Samurai Showdown games. In this case it would be a side-scroller a la Metal Slug; the zooming would come in as more enemy sprites enter the screen, or when facing a large boss. It's a feature that would be both cosmetic and functional to the game. I've done some reading and noticed a few suggestions that included drawing different-sized sprites, a standard size and a zoomed-out size. Any thoughts? Thanks for your time.
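
    One way to drive that kind of dynamic zoom, offered purely as a sketch: each frame, take the bounding box of everything that must stay visible (players, enemies, boss) and derive the camera scale from the ratio between the screen size and that box. The Vec2 type, the margin and the clamp limits are assumptions.

        #include <algorithm>
        #include <vector>

        struct Vec2 { float x, y; };

        // Zoom factor so that all focus points fit on screen; 1.0 = normal view,
        // smaller values = zoomed out. Assumes focusPoints is non-empty.
        float computeZoom(const std::vector<Vec2>& focusPoints,
                          float screenW, float screenH,
                          float margin, float maxZoomOut) {
            float minX = focusPoints[0].x, maxX = focusPoints[0].x;
            float minY = focusPoints[0].y, maxY = focusPoints[0].y;
            for (const Vec2& p : focusPoints) {
                minX = std::min(minX, p.x);  maxX = std::max(maxX, p.x);
                minY = std::min(minY, p.y);  maxY = std::max(maxY, p.y);
            }
            float width  = (maxX - minX) + 2.0f * margin;
            float height = (maxY - minY) + 2.0f * margin;

            float zoom = std::min(screenW / width, screenH / height);
            return std::max(1.0f / maxZoomOut, std::min(zoom, 1.0f));
        }

    With this approach the sprites are authored once at full resolution and scaled down by the camera transform; separate pre-shrunk sprite sets are mainly worth the effort if the scaled-down versions need redrawn detail.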

    Read the article

  • How do I use Content.Load() with raw XML files?

    - by xnanewb
    I'm using the Content.Load() mechanism to load core game definitions from XML files. It works fine, though some definitions should be editable/moddable by the players, and since the content pipeline compiles everything into .xnb files, that doesn't work for now. I've seen that the built-in XNA Song content processor creates two files: one .xnb file which contains metadata for the song, and one .wma file which contains the actual data. I've tried to rebuild that mechanism (so that the second file is the actual XML file), but for some reason I can't use the namespace which contains the IntermediateSerializer class to load the XML (apparently the namespace is only available in a content project?). How can I deploy raw, editable XML files and load them with Content.Load()?

    Read the article

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL. Right now I've finished the first chapter of the "OpenGL SuperBible". There were two examples: the first had the complete code and showed how to draw a simple triangle; the second example is supposed to show how to move a rectangle using SpecialKeys. The only code provided for this example was the SpecialKeys method. I still tried to implement it, but I had two problems. In the previous example I declared and instantiated vVerts in the SetupRC() method. Now, as it is also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the code. Is this proper C++ practice? I copied the part where vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. So now, every time I press a key for the first time, the rectangle's upper-left vertex is moved to (-0.5, -0.5). This is because of
        GLfloat blockX = vVerts[0]; //Upper left X
        GLfloat blockY = vVerts[7]; //Upper left Y
    but I also think that this is the reason why my rectangle is shifted in the beginning. After the first time a key was pressed, everything works just fine. Here is my complete code; I hope you can help me on those two points.
        GLBatch squareBatch;
        GLShaderManager shaderManager;

        //Load up a triangle
        GLfloat vVerts[] = {-0.5f,0.5f,0.0f,
                            0.5f,0.5f,0.0f,
                            0.5f,-0.5f,0.0f,
                            -0.5f,-0.5f,0.0f};

        //Window has changed size, or has just been created.
        //We need to use the window dimensions to set the viewport and the projection matrix.
        void ChangeSize(int w, int h)
        {
            glViewport(0,0,w,h);
        }

        //Called to draw the scene.
        void RenderScene(void)
        {
            //Clear the window with the current clearing color
            glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);

            GLfloat vRed[] = {1.0f,0.0f,0.0f,1.0f};
            shaderManager.UseStockShader(GLT_SHADER_IDENTITY,vRed);
            squareBatch.Draw();

            //perform the buffer swap to display the back buffer
            glutSwapBuffers();
        }

        //This function does any needed initialization on the rendering context.
        //This is the first opportunity to do any OpenGL related Tasks.
        void SetupRC()
        {
            //Blue Background
            glClearColor(0.0f,0.0f,1.0f,1.0f);

            shaderManager.InitializeStockShaders();
            squareBatch.Begin(GL_QUADS,4);
            squareBatch.CopyVertexData3f(vVerts);
            squareBatch.End();
        }

        //Respond to arrow keys by moving the camera frame of reference
        void SpecialKeys(int key,int x,int y)
        {
            GLfloat stepSize = 0.025f;
            GLfloat blockSize = 0.5f;
            GLfloat blockX = vVerts[0]; //Upper left X
            GLfloat blockY = vVerts[7]; //Upper left Y

            if(key == GLUT_KEY_UP)   { blockY += stepSize; }
            if(key == GLUT_KEY_DOWN) { blockY -= stepSize; }
            if(key == GLUT_KEY_LEFT) { blockX -= stepSize; }
            if(key == GLUT_KEY_RIGHT){ blockX += stepSize; }

            //Recalculate vertex positions
            vVerts[0] = blockX;
            vVerts[1] = blockY - blockSize*2;
            vVerts[3] = blockX + blockSize*2;
            vVerts[4] = blockY - blockSize*2;
            vVerts[6] = blockX + blockSize*2;
            vVerts[7] = blockY;
            vVerts[9] = blockX;
            vVerts[10] = blockY;

            squareBatch.CopyVertexData3f(vVerts);
            glutPostRedisplay();
        }

        //Main entry point for GLUT based programs
        int main(int argc, char** argv)
        {
            //Sets the working directory. Not really needed
            gltSetWorkingDirectory(argv[0]);

            //Passes along the command-line parameters and initializes the GLUT library.
            glutInit(&argc,argv);

            //Tells the GLUT library what type of display mode to use when creating the window:
            //double buffered window, RGBA color mode, depth buffer as part of our display, stencil buffer also available
            glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA|GLUT_DEPTH|GLUT_STENCIL);

            //Window size
            glutInitWindowSize(800,600);
            glutCreateWindow("MoveRect");
            glutReshapeFunc(ChangeSize);
            glutDisplayFunc(RenderScene);
            glutSpecialFunc(SpecialKeys);

            //initialize GLEW library
            GLenum err = glewInit();
            //Check that nothing goes wrong with the driver initialization before we try and do any rendering.
            if(GLEW_OK != err)
            {
                fprintf(stderr,"Glew Error: %s\n",glewGetErrorString);
                return 1;
            }

            SetupRC();
            glutMainLoop();
            return 0;
        }

    Read the article

  • How to design brain games [on hold]

    - by samesky
    I wonder whether anybody has some information about designing games for brain improvement. Recently Lumosity has been getting a lot of attention; I guess they do a lot of research or have experts. But is there any other research paper, publicly available, about designing brain games? Any equation or data that can help? Or what characteristics should a brain game have? I am getting interested in this and have searched the internet a lot, but unfortunately I could not find the core structure of it. It would be really helpful if somebody could give some information. Thank you.

    Read the article

  • MouseEvent.CLICK not working? (AS3)

    - by Jake
    OK, so here's my code in AS3. I'd like to know why, when I actually click on the picture, nothing happens. And if any of you have a great tutorial on what to learn after classes/functions in AS3, let me know =D :
        package {
            import flash.display.Bitmap;
            import flash.display.Sprite;
            import flash.display.Shape;
            import flash.events.MouseEvent;

            public class Main extends Sprite {
                [Embed(source="../Pics/Picture.png")]
                private var HeroClass:Class;
                private var hero:Bitmap = new HeroClass();

                public function Main():void {
                    addChild(hero);
                    hero.addEventListener(MouseEvent.CLICK, onClick);

                    function onClick(e:MouseEvent):void {
                        trace("hey");
                        hero.visible = false;
                    }
                }
            }
        }

    Read the article

  • How can I make smoother upwards/downwards controls in pygame?

    - by Zolani13
    This is a loop I use to interpret key events in a Python game:
        # Event Loop
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()
            if event.type == pygame.KEYDOWN:
                if event.key == pygame.K_a:
                    my_speed = -10
                if event.key == pygame.K_d:
                    my_speed = 10
            if event.type == pygame.KEYUP:
                if event.key == pygame.K_a:
                    my_speed = 0
                if event.key == pygame.K_d:
                    my_speed = 0
    The 'A' key represents up, while the 'D' key represents down. I use this loop within a larger drawing loop that moves the sprite using this: Paddle1.rect.y += my_speed; I'm just making a simple pong game (as my first real code/non-GameMaker game), but there's a problem when changing between moving upwards and downwards. Essentially, if I hold a button for upwards (or downwards) and then press downwards (or upwards), now holding both buttons, the direction will change, which is a good thing. But if I then release the first button, the sprite will stop; it won't continue in the direction of my second input. This kind of key pressing is actually common with WASD users when changing directions quickly: few people remember to let go of the first button before pressing the second, but my program doesn't accommodate the habit. I think I understand the reason: when I let go of my first key, the KEYUP event still triggers, setting the speed to 0. I need to make sure that when a key is released, it only sets the speed to 0 if the other key isn't being pressed. But the interpreter only goes through one event at a time, I think, so I can't check whether a key has been pressed while it's only interpreting the commands for a released key. This is my dilemma. I want to set up the key controls so that a player doesn't have to press one button at a time to move upwards or downwards, making it smoother. How can I do that?
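
    A standard pattern for this, independent of the event system: keep a per-key "is held" state, updated on key-down/key-up, and derive the speed from that state once per frame, so releasing one key naturally hands control back to whichever key is still held. Sketched below in C++; in pygame the same idea is available directly through pygame.key.get_pressed(), polled each frame instead of setting the speed inside the event handlers.

        // Derive movement from the current key state instead of from individual events.
        struct KeyState {
            bool upHeld = false;     // the 'A' key in the question
            bool downHeld = false;   // the 'D' key in the question
            int  lastPressed = 0;    // -1 if up was pressed most recently, +1 if down
        };

        // Event handling (names are placeholders for whatever the event loop provides):
        //   key-down A: upHeld = true;   lastPressed = -1;
        //   key-down D: downHeld = true; lastPressed = +1;
        //   key-up   A: upHeld = false;
        //   key-up   D: downHeld = false;

        // Called once per frame, after all events have been processed.
        int currentSpeed(const KeyState& k) {
            if (k.upHeld && k.downHeld) return k.lastPressed * 10;  // most recent key wins
            if (k.upHeld)   return -10;
            if (k.downHeld) return  10;
            return 0;
        }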

    Read the article

  • What is this type of sound effect called?

    - by Fibericon
    There is a sound typically associated with a bright flash of light, which starts with a lower whirring noise and then breaks into a higher-pitched sound. What is that type of sound called? I'm not sure how to begin searching for it, so a typical name for it would be very helpful. It's similar to what occurs at 0:41 in this YouTube video (here's a link to a few seconds beforehand), where Naruto 6 tails transforms into Kyuubei in Naruto Generations.

    Read the article

  • How to draw unlimited FPS on Mac OS X with OpenGL?

    - by V1ru8
    I'd like to draw as many frames as possible with OpenGL on Mac OS X to measure the performance of different scenes. What I've tried so far:
    1. Using a CVDisplayLink, with NSOpenGLCPSwapInterval set to 0 so it does not sync with the display. But with that it's still stuck at a maximum of 60 FPS.
    2. Using a normal -drawRect: with a timer that fires every 1/1000 s and calls -setNeedsDisplay:. Still not more than 60 FPS.
    3. Same as 2, but calling -display in the timer callback. With that I get above 60 FPS, but it still stops at 100-110 FPS, although the frame rate should easily be ten times higher.
    Any idea how I can really draw as many frames as possible?

    Read the article

  • 3D open source physics engine suitable for mobile platforms (Android and iOS)

    - by lukeluke
    I have done some research and found that Bullet, ODE, Newton and some others are open-source physics engines that should be portable enough (but I have never tried to compile or use any of them on phones). I am writing my games for mobile platforms in C++, so the engine should be C or C++. I need a fast engine, since mobile platforms have limited resources, and I need a free engine. A good design would be nice to have too. What engine is best suited to my task? What I would really like to hear from you is your direct experience. Documentation and support (for example, a forum or an IRC channel) are also really important aspects to take into consideration.

    Read the article

  • In a multiplayer game, should I store the list of character names on the Player class?

    - by Gökhan Nas
    I am writing a multiplayer game that has an account system and a character creation system like standard MMORPGs. I have a question about the name creation issue. I think I could create a static variable on the Player class that keeps the created player names, but it confuses me: it would tell me whether a name is valid or invalid depending on whether another player already has this name. Questions: Does this implementation make sense? If I have 1000 players, does that mean it consumes 1000 times the memory of this list, or does it consume only as much as if there were one? What is your suggestion for the place to keep the player name list? A new class?
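
    On the memory question: a static member exists once per class, not once per instance, so 1000 Player objects still share a single list. Whether it lives on Player or in a dedicated registry is mostly a design choice; a small C++ sketch of the registry variant (the class name and the locking are assumptions):

        // One shared registry of taken character names, independent of Player instances.
        #include <mutex>
        #include <string>
        #include <unordered_set>

        class NameRegistry {
        public:
            // Returns true if the name was free and has now been claimed.
            bool tryClaim(const std::string& name) {
                std::lock_guard<std::mutex> lock(mutex_);
                return taken_.insert(name).second;
            }

            void release(const std::string& name) {
                std::lock_guard<std::mutex> lock(mutex_);
                taken_.erase(name);
            }

        private:
            std::mutex mutex_;                        // server code is often multithreaded
            std::unordered_set<std::string> taken_;   // stored once, shared by everyone
        };

    In a persistent game the authoritative check usually belongs in the database (e.g. a unique constraint on the character name), with an in-memory set like this serving only as a fast pre-check.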

    Read the article

  • What libgdx project files can I ignore from version control?

    - by Zhen
    In an automatically created libgdx project, what files can I safely tell Git (or other revision control systems) to ignore? I'm considering these:
        *-android/.settings/
        *-android/bin/
        *-desktop/.settings/
        *-desktop/bin/
        *-html/.settings/
        *-html/gwt-unitCache/
        *-html/war/WEB-INF/classes/
        *-html/war/WEB-INF/deploy/
        *-html/war/assets/
        *-html/war/
        */.settings/
        */bin/
    Am I missing some? Is there a complete list somewhere?

    Read the article

  • Drawing a line using OpenGL does not work

    - by vikasm
    I am a beginner in OpenGL and tried to write my first program to draw some points and a line. I can see that the window opens with a white background, but no line is drawn. I was expecting to see red (because of glColor3f(1.0, 0.0, 0.0);) dots (pixels) and a line, but nothing is shown. Here is my code:
        void init2D(float r, float g, float b)
        {
            glClearColor(r,g,b,0.0);
            glMatrixMode(GL_PROJECTION);
            gluOrtho2D(0.0, 200.0, 0.0, 150.0);
        }

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1.0, 0.0, 0.0);

            glBegin(GL_POINTS);
            for(int i = 0; i < 10; i++)
            {
                glVertex2i(10+5*i, 110);
            }
            glEnd();

            //draw a line
            glBegin(GL_LINES);
            glVertex2i(10,10);
            glVertex2i(100,100);
            glEnd();

            glFlush();
        }

        int main(int argc, char** argv)
        {
            //Initialize Glut
            glutInit(&argc, argv);

            //setup some memory buffers for our display
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            //set the window size
            glutInitWindowSize(500, 500);
            //create the window with the title 'points and lines'
            glutCreateWindow("Points and Lines");

            init2D(0.0, 0.0, 0.0);
            glutDisplayFunc(display);
            glutMainLoop();
        }
    I wanted to verify that the GL context was opening properly and used this code:
        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);

            //setup some memory buffers for our display
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            //set the window size
            glutInitWindowSize(500, 500);
            //create the window with the title 'points and lines'
            glutCreateWindow("Points and Lines");

            char *GL_version=(char *)glGetString(GL_VERSION);
            puts(GL_version);
            char *GL_vendor=(char *)glGetString(GL_VENDOR);
            puts(GL_vendor);
            char *GL_renderer=(char *)glGetString(GL_RENDERER);
            puts(GL_renderer);

            getchar();
            return 0;
        }
    And the output I got was:
        3.1.0 - Build 8.15.10.2345
        Intel
        Intel(R) HD Graphics Family
    Can someone point out what I am doing wrong? Thanks.
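
    For what it's worth, one likely culprit in code like this: the display mode requests a double-buffered window (GLUT_DOUBLE), but display() ends with glFlush() and never presents the back buffer, so the drawing is never shown. A minimal sketch of a display callback consistent with double buffering; alternatively, keep glFlush() and request GLUT_SINGLE instead.

        #include <GL/glut.h>

        // Draw into the back buffer, then swap it to the screen.
        void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1.0f, 0.0f, 0.0f);

            glBegin(GL_LINES);
            glVertex2i(10, 10);
            glVertex2i(100, 100);
            glEnd();

            glutSwapBuffers();   // with GLUT_DOUBLE, glFlush() alone does not display anything
        }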

    Read the article
