Search Results

Search found 5072 results on 203 pages for 'graph drawing'.


  • Gave a talk at SoCal Code Camp at USC today titled “Linq to Objects A-Z”

    - by dotneteer
    I gave a talk at SoCal Code Camp on Linq to Objects. With careful categorization of Linq functions, I was able to cover the entire set of Linq functions in only 35 minutes and spend the rest of the time on demos. In my first demo, I showed that I was able to write a top-20-URL type of query using 4 lines of library code and 9 lines of Linq code, without tools like Log Parser. I also demonstrated that I only needed to change 2 lines of code to go from querying a single log file to a whole directory of log files. It would be just as simple to run the query against multiple servers in parallel. In my second demo, I discussed how to turn graph depth-first search (DFS) and breadth-first search (BFS) into a Linq-queryable problem. The class LingToGraph contains the only DFS and BFS code I ever have to write; the rest can be done in the lambdas passed to the DFS or BFS calls. In future blogs, I will provide a more detailed explanation of the code. Links: Link to Powerpoint slides. Link to demos.
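    For illustration, a minimal sketch of the kind of top-20-URL query described above, assuming a plain-text log where the URL is one whitespace-separated field; the directory, file pattern and field index are assumptions, not the talk's actual code:

    ```csharp
    using System;
    using System.IO;
    using System.Linq;

    class TopUrls
    {
        static void Main()
        {
            // Switching from one log file to a whole directory only changes the
            // source enumerable, not the query itself.
            var lines = Directory.EnumerateFiles(@"C:\logs", "*.log")
                                 .SelectMany(File.ReadLines);

            // Group requests by URL (assumed to be the 5th whitespace-separated
            // field), count them, and keep the 20 most frequent.
            var top20 = lines.Where(l => !l.StartsWith("#"))
                             .Select(l => l.Split(' '))
                             .Where(f => f.Length > 4)
                             .GroupBy(f => f[4])
                             .Select(g => new { Url = g.Key, Hits = g.Count() })
                             .OrderByDescending(x => x.Hits)
                             .Take(20);

            foreach (var entry in top20)
                Console.WriteLine("{0,8}  {1}", entry.Hits, entry.Url);
        }
    }
    ```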

    Read the article

  • Failing Screen Resize Method

    - by StrongJoshua
    So I want my game to draw to a specific "optimal" size and then be stretched to fit screens that are a different size. I'm using LibGDX and figured that I could just draw everything to a FrameBuffer and then resize that buffer to the appropriate size when drawing it to the actual display. However, my method does not work; it just results in a black screen with the top-right quarter of the screen white. Intermediary is the FBO, interMatrix is a Matrix4 object, and camera is an OrthographicCamera. @Override public void render() { // update actors currentStage.act(); //render to intermediary buffer batch.setProjectionMatrix(interMatrix); intermediary.begin(); batch.begin(); currentStage.draw(); batch.flush(); intermediary.end(); //resize to actual width and height Sprite s = new Sprite(intermediary.getColorBufferTexture()); s.flip(true, false); batch.setProjectionMatrix(camera.combined); batch.draw(s.getTexture(), 0, 0, width, height); batch.end(); } These are the constructors for the above-mentioned objects (GAME_WIDTH and GAME_HEIGHT are the "optimal" settings; width and height are the actual sizes, which are the same when running on desktop). intermediary = new FrameBuffer(Format.RGBA8888, GAME_WIDTH, GAME_HEIGHT, false); interMatrix = new Matrix4(); camera = new OrthographicCamera(width, height); interMatrix.setToOrtho2D(0, 0, GAME_WIDTH, GAME_HEIGHT); Is there a better way of doing this, or is this a viable option, and how do I fix what I have?

    Read the article

  • Text on a model

    - by alecnash
    I am trying to put some text on a Model and I want it to be dynamic. I did some research and came up with drawing the text on a texture and then setting it on the model. I use something like this: public static Texture2D SpriteFontTextToTexture(SpriteFont font, string text, Color backgroundColor, Color textColor) { Size = font.MeasureString(text); RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, (int)Size.X, (int)Size.Y); GraphicsDevice.SetRenderTarget(renderTarget); GraphicsDevice.Clear(Color.Transparent); Spritbatch.Begin(); //have to redo the ColorTexture Spritbatch.Draw(ColorTexture.Create(GraphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White); Spritbatch.DrawString(font, text, Vector2.Zero, textColor); Spritbatch.End(); GraphicsDevice.SetRenderTarget(null); return renderTarget; } When I was working with primitives and not models, everything worked fine because I set the texture exactly where I wanted, but with the model (a RoundedRect 3D button) it now looks like this: Is there a way to have the text centered only on one side?
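    Not the poster's code, but a sketch of one way to get the centering: measure the string and offset it inside the render target, assuming the face you want labeled has UVs spanning the whole target (the square target size passed in is also an assumption):

    ```csharp
    public static Texture2D CenteredTextTexture(GraphicsDevice device, SpriteBatch spriteBatch,
        SpriteFont font, string text, Color backgroundColor, Color textColor, int targetSize)
    {
        RenderTarget2D target = new RenderTarget2D(device, targetSize, targetSize);
        Vector2 textSize = font.MeasureString(text);
        // Offset so the string sits in the middle of the target rather than at (0, 0).
        Vector2 position = new Vector2((targetSize - textSize.X) / 2f,
                                       (targetSize - textSize.Y) / 2f);

        device.SetRenderTarget(target);
        device.Clear(backgroundColor);               // fill the whole face with the background color
        spriteBatch.Begin();
        spriteBatch.DrawString(font, text, position, textColor);
        spriteBatch.End();
        device.SetRenderTarget(null);

        return target;                               // RenderTarget2D derives from Texture2D in XNA 4.0
    }
    ```

    If the button's faces all share one texture, the position would instead be offset into the UV region of the particular face you care about, which depends on the model's UV layout.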

    Read the article

  • How can I make smoother upwards/downwards controls in pygame?

    - by Zolani13
    This is a loop I use to interpret key events in a Python game. # Event Loop for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() if event.type == pygame.KEYDOWN: if event.key == pygame.K_a: my_speed = -10; if event.key == pygame.K_d: my_speed = 10; if event.type == pygame.KEYUP: if event.key == pygame.K_a: my_speed = 0; if event.key == pygame.K_d: my_speed = 0; The 'A' key represents up, while the 'D' key represents down. I use this loop within a larger drawing loop that moves the sprite using this: Paddle1.rect.y += my_speed; I'm just making a simple pong game (as my first real code/non-GameMaker game), but there's a problem when switching between moving upwards and downwards. Essentially, if I hold a button for upwards (or downwards), and then press downwards (or upwards), now holding both buttons, the direction will change, which is a good thing. But if I then release the upward button, the sprite will stop. It won't continue in the direction of my second input. This kind of key pressing is actually common with WASD users when changing directions quickly. Few people remember to let go of the first button before pressing the second, but my program doesn't accommodate that habit. I think I understand the reason, which is that when I let go of my first key, the KEYUP event still triggers, setting the speed to 0. I need to make sure that if a key is released, it only sets the speed to 0 if another key isn't being pressed. But the interpreter will only go through one event at a time, I think, so I can't check whether a key has been pressed while it's only interpreting the commands for a released key. This is my dilemma. I want to set up the key controls so that a player doesn't have to press one button at a time to switch between moving upwards and downwards, making it smoother. How can I do that?

    Read the article

  • What's the right/standard way of achieving separation of concerns?

    - by Ghanima
    Some background: I want to start developing games, and taking some of the advice given on this site, I've started with something simple and familiar, such as pong, tetris, etc. I want to take as much time as needed to make sure that I have the basics right before moving on to something bigger. I have medium programming experience, but I realize making games is a different thing. I find myself wondering about many things, like: should this be in a separate class? Should this module handle this stuff, or is it better to let other modules have that kind of functionality? For example, the bouncing of a ball in pong is right now handled in the ball module, but maybe it's better that some other module did it. Right now I have different modules: one for the graphics, one for the game logic, and others for the objects (depending on the kind of movement required, not all the objects are alike). I know I am asking a lot; any tips you have will be very much appreciated. Short question: What's the right or standard way of separating the modules? What have you found most effective? Is it enough to just keep the drawing (graphics) and the logic separate? Is it necessary to have a lot of classes? (for example for the objects in the game, to handle the movement, etc.)

    Read the article

  • Loading images in XNA 4.0; "Cannot Open File" Problems

    - by user32623
    Okay, I'm writing a game in C#/XNA 4.0 and am utterly stumped at my current juncture: Sprite animation. I understand how it works and have all the code in place, but my ContentLoader won't open my file... Basically, my directory looks like this: //WindowsGame1 - "Game1.cs" - //Classes - "NPC.cs" - Content Reference - //Images - "Monster.png" Inside my NPC class, I have all the essential drawing functions, i.e. LoadContent, Draw, Update. And I can get the game to find the correct file and attempt to open it, but when it tries, it throws an exception and tells me it can't open the file. This is how my code in my NPC class looks: Texture2D NPCImage; Vector2 NPCPosition; Animation NPCAnimation = new Animation(); public void Initialize() { NPCAnimation.Initialize(NPCPosition, new Vector2(4, 4)); } public void LoadContent(ContentManager Content) { NPCImage = Content.Load<Texture2D>("_InsertImageFilePathHere_"); NPCAnimation.AnimationImage = NPCImage; } The rest of the code is irrelevant at this point because I can't even get the image to load. I think it might have to do with a directory problem, but I also know little to nothing about spriting or working with images or animations in my code. Any help is appreciated. Not sure if I provided enough information here, so let me know if more is needed! Also, what would be the correct way to direct that Content.Load to Monster.png given the current directory situation? Right now I just have it using the full path from the C:// drive. Thanks in advance!
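    For reference, a minimal sketch of how the XNA 4.0 content pipeline usually resolves this, assuming Monster.png was added to the content project under an Images folder with the default asset name "Monster" (the folder and asset names here are assumptions about your project layout):

    ```csharp
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class Game1 : Game
    {
        Texture2D monsterTexture;

        public Game1()
        {
            // Asset names are resolved relative to this root (and without the .png
            // extension), so a full C:\ path should not be needed at all.
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            // "Images/Monster" -> Content\Images\Monster.xnb, built by the content pipeline.
            monsterTexture = Content.Load<Texture2D>("Images/Monster");
        }
    }
    ```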

    Read the article

  • How do I impress employers with my resume?

    - by acidzombie24
    I built an entire website from scratch in 10 days which looks and feels professional, with the site being unique. The site has features like logging in, sending activation emails, tag/content search (Lucene.Net), syntax highlighting (prettify), a diff (one of the JS diffs), markup for comments, all on this site, and autocomplete in a textbox (remember, 10 days). I wrote that I have 5+ years of C# experience (I could lie and say more, but smart employers will know it's only 8 years old and 1.1 is very different from what we use now). I had employers REPEATEDLY say they are looking for someone who has more C# experience... wtf. Maybe they don't read my CV, maybe they don't believe it, or they ignore me because I am not yet a graduate. I laughed when I first read Steve Yegge's The Five Essential Phone Screen Questions, as I knew all of that (although I still have never used a graph data structure nor know much about it). I'm pretty sure competency-wise I can do the job. I am also positive no one noticed I have markup, a diff, autocomplete, or email activation/forgot password (I offer a test user account). So maybe my site/example work isn't impressive because you don't realize what is in it. In short, I don't think they read my CV or notice my site. How do I impress employers? PS: The problem is I don't get to the interview. I had one and ruined it by speaking too technically to the PM because I was nervous. The other 25+ jobs either didn't contact me or were kind enough to send a rejection email.

    Read the article

  • Information about rendering, batches, the graphics card, performance, etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for, so here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background information, I'm clueless. I'm using XNA, so it'd be nice if theory could be related to that. So what I actually want to know is: what happens, and where (CPU/GPU), when you do specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over every step? When there's talk about bandwidth, are you talking about a graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process happens, that's the GPU's business; I'm interested in all the overhead that precedes that. I'd like to understand what's going on when I do action X, to adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

    Read the article

  • List<T>.AddRange is causing a brief Update/Draw delay

    - by Justin Skiles
    I have a list of entities which implement an ICollidable interface. This interface is used to resolve collisions between entities. My entities are thus: Players Enemies Projectiles Items Tiles On each game update (about 60 t/s), I am clearing the list and adding the current entities based on the game state. I am accomplishing this via: collidableEntities.Clear(); collidableEntities.AddRange(players); collidableEntities.AddRange(enemies); collidableEntities.AddRange(projectiles); collidableEntities.AddRange(items); collidableEntities.AddRange(camera.VisibleTiles); Everything works fine until I add the visible tiles to the list. The first ~1-2 seconds of running the game loop cause a visible hiccup that delays drawing (so I can see a jitter in the rendering). I can literally remove/add the line that adds the tiles and see the jitter occur and not occur, so I have narrowed it down to that line. My question is, why? The list of VisibleTiles is about 450-500 tiles, so it's really not that much data. Each tile contains a Texture2D (image) and a Vector2 (position) to determine what is rendered and where. I'm going to keep looking, but off the top of my head, I can't understand why only the first 1-2 seconds hiccup and it is then smooth from there on out. Any advice is appreciated.
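    One possible culprit, offered here as a guess rather than a diagnosis: during the first frames the list has to grow repeatedly, and the discarded backing arrays become garbage-collector work; giving it enough capacity once up front avoids that. A sketch against the post's own code, where the capacity figure is an assumption based on the counts above:

    ```csharp
    // Give the list enough capacity once, so the first frames don't repeatedly
    // reallocate and copy its backing array; the old arrays otherwise become GC work.
    // 600 is a guess: players + enemies + projectiles + items + ~500 visible tiles.
    List<ICollidable> collidableEntities = new List<ICollidable>(600);

    // Per frame, Clear() keeps the capacity, so the AddRange calls no longer trigger growth.
    collidableEntities.Clear();
    collidableEntities.AddRange(camera.VisibleTiles);
    ```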

    Read the article

  • Is using a dedicated thread just for sending GPU commands a good idea?

    - by tigrou
    The most basic game loop is like this: while(1) { update(); draw(); swapbuffers(); } This is very simple but has a problem: some drawing commands can block, and the CPU will wait when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU: //first thread while(1) { update(); render(); // use gamestate to generate all needed triangles and commands for gpu // put them in a buffer, no command is sent to the gpu // two buffers will be used, see below pulse(); //signal the other thread data is ready } //second thread while(1) { wait(); // wait for the first thread's data to come send_data_togpu(); // send prepared commands from buffer to graphic card swapbuffers(); } Also: two buffers would be used, so one buffer can be filled with GPU commands while the other is processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially compared to a simpler one (e.g. single-threaded with triple buffering enabled)?
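    A minimal C# sketch of the double-buffered hand-off described above, with the GPU commands represented as plain delegates purely for illustration (a real engine would record its own command structs); the synchronization, not the command format, is the point:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Threading;

    class RenderPipeline
    {
        // Two command buffers: the update thread fills one while the render thread drains the other.
        readonly List<Action>[] buffers = { new List<Action>(), new List<Action>() };
        int writeIndex = 0;
        readonly AutoResetEvent frameReady = new AutoResetEvent(false);
        readonly AutoResetEvent bufferFree = new AutoResetEvent(true);

        public void UpdateThread()
        {
            while (true)
            {
                List<Action> buffer = buffers[writeIndex];
                buffer.Clear();
                Update();                     // game logic
                Render(buffer);               // record commands; nothing is sent to the GPU here
                bufferFree.WaitOne();         // don't hand over until the render thread is done with the other buffer
                writeIndex = 1 - writeIndex;  // swap buffers
                frameReady.Set();             // pulse(): tell the render thread a frame is ready
            }
        }

        public void RenderThread()
        {
            while (true)
            {
                frameReady.WaitOne();                   // wait(): block until the update thread hands over a frame
                List<Action> buffer = buffers[1 - writeIndex];
                foreach (Action command in buffer)      // send_data_togpu(): submit the recorded commands
                    command();
                SwapBuffers();
                bufferFree.Set();                       // this buffer may now be refilled
            }
        }

        void Update() { /* advance the game state */ }
        void Render(List<Action> commands) { /* turn game state into draw commands */ }
        void SwapBuffers() { /* present the back buffer */ }
    }
    ```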

    Read the article

  • How can I determine the first visible tile in an isometric perspective?

    - by alekop
    I am trying to render the visible portion of a diamond-shaped isometric map. The "world" coordinate system is a 2D Cartesian system, with the coordinates increasing diagonally (in terms of the view coordinate system) along the axes. The "view" coordinates are simply mouse offsets relative to the upper left corner of the view. My rendering algorithm works by drawing diagonal spans, starting from the upper right corner of the view and moving diagonally to the right and down, advancing to the next row when it reaches the right view edge. When the rendering loop reaches the lower left corner, it stops. There are functions to convert a point from view coordinates to world coordinates and then to map coordinates. Everything works when rendering from tile 0,0, but as the view scrolls around, the rendering needs to start from a different tile. I can't figure out how to determine which tile is closest to the upper right corner. At the moment I am simply converting the coordinates of the upper right corner to map coordinates. This works as long as the view origin (upper right corner) is inside the world, but when approaching the edges of the map the starting tile coordinate obviously becomes invalid. I guess this boils down to asking "how can I find the intersection between the world X axis and the view X axis?"
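    Not a full answer, but a sketch of the clamping approach, assuming ViewToMap() is the view-to-map conversion the post already has and the map is mapWidth x mapHeight tiles; the span loop would then start from this tile and skip spans that fall entirely outside the map:

    ```csharp
    // Clamp the converted corner so the starting tile stays valid even when the
    // view origin lies outside the world near the map edges.
    // ViewToMap, mapWidth and mapHeight are assumed to exist elsewhere.
    Point ClampedStartTile(Point viewUpperRight)
    {
        Point tile = ViewToMap(viewUpperRight);   // may be out of range near the edges
        int x = Math.Max(0, Math.Min(mapWidth - 1, tile.X));
        int y = Math.Max(0, Math.Min(mapHeight - 1, tile.Y));
        return new Point(x, y);
    }
    ```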

    Read the article

  • Ranking players depending on decision making during a game

    - by tabchas
    How would I go about building a ranking system for players of a game? Basically, looking at video games, players throughout the game make critical decisions that ultimately impact the end-game result. Is there a way to translate some of those factors (leveling up certain skills, purchasing certain items, etc.) into something like a curve that can be plotted on a graph, and how would I go about it? The game in which I would like to implement this is League of Legends. Example: a player is Level 1 in the beginning. He gets a kill very early in the game (he gets gold because of the kill, and it increases his "power curve") and purchases attack damage (which gives him more damage and also increases his "power curve"). However, the player he killed (Player 2) buys armor (which counters attack damage). This slightly increases Player 2's own power curve and reduces Player 1's power curve. There are many factors I would like to take into account. These relative factors (example: BECAUSE Player 2 built armor, and I am mainly attack damage, it lowers my OWN power curve) seem the hardest to implement. My question is this: Is there a certain way to approach this task? Are there similar theoretical concepts behind ranking systems that I should read up on? I've seen the Elo system, but it doesn't seem like what I want, since it only takes wins and losses into account.
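    For reference, since the post mentions it: the standard Elo expected-score and update formulas, which indeed only look at the result (S_A is 1 for a win, 0 for a loss; K is the update weight):

    ```latex
    E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A' = R_A + K\,(S_A - E_A)
    ```

    One hedged way to fold decision-making into such a scheme would be to replace the pure win/loss score S_A with a per-game performance score built from the in-game factors (gold earned, kills, item counters and so on), which is exactly where the hard relative-factor modelling described above comes in.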

    Read the article

  • Is there a standard way to track 2d tile positions both locally and on screen?

    - by Magicked
    I'm building a 2D engine based on 32x32 tiles with OpenGL. OpenGL draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different from a standard graph, where Y coordinates move up as they increase. I'm having trouble determining how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My brain wants to set the world position as the bottom left of the object and track every object this way. The problem with this is I would have to translate it to an on-screen position when rendering. The positive is that I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways of doing this? Should I just suck it up and get used to positions beginning in the top left? Here are the OpenGL calls to start rendering: // enable textures since we're going to use these for our sprites glEnable(GL_TEXTURE_2D); glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // enable alpha blending glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // disable the OpenGL depth test since we're rendering 2D graphics glDisable(GL_DEPTH_TEST); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, WIDTH, HEIGHT, 0, 1, -1); glMatrixMode(GL_MODELVIEW); I assume I need to change: glOrtho(0, WIDTH, HEIGHT, 0, 1, -1); to: glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
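    If you do keep bottom-left world coordinates instead of flipping the projection, the translation at render time is a single subtraction per object; a sketch, where the 32 px tile size and the viewportHeight parameter (window height in pixels) are assumptions:

    ```csharp
    // Convert a world Y (origin bottom-left, Y up) to the screen Y expected by the
    // glOrtho(0, WIDTH, HEIGHT, 0, ...) projection above (origin top-left, Y down).
    const int TILE_SIZE = 32;

    int WorldToScreenY(int worldY, int viewportHeight)
    {
        // worldY is the tile's bottom edge, so its top edge on screen is:
        return viewportHeight - worldY - TILE_SIZE;
    }
    ```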

    Read the article

  • Rendering multiple squares fast?

    - by Sam
    So I'm taking my first steps with OpenGL development on Android, and I'm stuck on some serious performance issues... What I'm trying to do is render a whole grid of single-colored squares onto the screen, and I'm getting framerates of ~7FPS. The squares are 9px in size right now with a one-pixel border in between, so I get a few thousand of them. I have a class "Square", and the Renderer iterates over all Squares every frame and calls the draw() method of each (just the iteration is fast enough; with no OpenGL code the whole thing runs smoothly at 60FPS). Right now the draw() method looks like this: // Prepare the square coordinate data GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, vertexBuffer); // Set color for drawing the square GLES20.glUniform4fv(mColorHandle, 1, color, 0); // Draw the square GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer); So it's actually only 3 OpenGL calls. Everything else (loading shaders, filling buffers, getting appropriate handles, etc.) is done in the constructor, and things like the Program and the handles are also static attributes. What am I missing here? Why is it rendering so slowly? I've also tried loading the buffer data into VBOs, but this is actually slower... Maybe I did something wrong though. Any help greatly appreciated! :)

    Read the article

  • Why are transaction monitors in decline? Or are they?

    - by mrkafk
    http://www.itjobswatch.co.uk/jobs/uk/cics.do http://www.itjobswatch.co.uk/jobs/uk/tuxedo.do Look at the demand for programmers (the % of job ads the keyword appears in), in the first graph under the table. It seems like demand for CICS and Tuxedo has fallen from roughly 2.5% and 1% respectively to almost zero. To me, it seems bizarre: we now have more networked and internet-enabled machines than ever before, and most of them are talking to some kind of database. So it would seem that the use of products whose developers spent the last 20-30 years working on distributing, coordinating, and optimizing transactions should be on the rise. And it appears they're not. I can see a few causes but can't tell whether they are true: we forgot that concurrency and distribution are really hard, and are redoing it all by ourselves, in Java, badly. Erlang killed them all. Projects nowadays have changed character: most business software has already been built, and we're all doing internet services, using stuff like Node.js, Erlang, Haskell. (I've used RabbitMQ, which is written in Erlang, but that was a "small specialized side project" kind of thing.) BigData is the emphasis now, and BigData doesn't need transactions very much (?). None of those explanations seems particularly convincing to me, which is why I'm looking for a better one. Anyone?

    Read the article

  • Can anyone point me to some open source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principles of operation of game engines and rendering engines. That being said, I want to do some experiments rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene graph algorithms and control the rendering pipeline very specifically. My ideal candidate would be either a rendering engine or a game engine with a modular design that might be ready to go out of the box, but would be simple enough in case I need to rip out some of the guts of the rendering management and implement my own. It's a tough call, because I'm right at the level where it's almost better to go from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms, etc. I just want to work with rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported and I'd be less likely to run into some obscure, less-documented problem. But OpenGL might be acceptable if the recommended framework were definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (as part of adding to the density of geometry detail).

    Read the article

  • Implementing Camera Zoom in a 2D Engine

    - by Luke
    I'm currently trying to implement camera scaling/zoom in my 2D engine. Normally I calculate the sprite's drawing size and position similarly to this pseudocode: render() { var x = sprite.x; var y = sprite.y; var sizeX = sprite.width * sprite.scaleX; // width of the sprite on the screen var sizeY = sprite.height * sprite.scaleY; // height of the sprite on the screen } To implement the scaling I changed the code to this: class Camera { var scaleX; var scaleY; var zoom; var finalScaleX; // = scaleX * zoom var finalScaleY; // = scaleY * zoom } render() { var x = sprite.x * Camera.finalScaleX; var y = sprite.y * Camera.finalScaleY; var sizeX = sprite.width * sprite.scaleX * Camera.finalScaleX; var sizeY = sprite.height * sprite.scaleY * Camera.finalScaleY; } The problem is that when the zoom is smaller than 1.0, all sprites are moved toward the top-left corner of the screen. This is expected when looking at the code, but I want the camera to zoom on the center of the screen. Any tips on how to do that are welcome. :)
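    One way to keep the zoom anchored to the screen center is to scale positions relative to that center instead of relative to the origin; a sketch in the same pseudocode style, where halfScreenWidth and halfScreenHeight (half the viewport size) are assumed values:

    ```
    // Translate so the screen center is at (0, 0), scale, then translate back.
    var x = (sprite.x - halfScreenWidth)  * Camera.finalScaleX + halfScreenWidth;
    var y = (sprite.y - halfScreenHeight) * Camera.finalScaleY + halfScreenHeight;

    // The size calculation is unchanged; only the position is re-anchored.
    var sizeX = sprite.width  * sprite.scaleX * Camera.finalScaleX;
    var sizeY = sprite.height * sprite.scaleY * Camera.finalScaleY;
    ```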

    Read the article

  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitizing information has significantly changed the 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view. The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information' such as search engines isn't new. Just think of the once popular 'Who's Who'. What really changed is the price-value ratio. The curve is pushed down, closer to the axis. You pay less for the same, or often even get more for less. If you recall the capabilities I described in relevance of information, you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy', combining existing capabilities in a new way. (Kim W.C. & Mauborgne, R. (2005) Blue Ocean Strategies. Boston: Harvard Business School Publishing.) In addition, the different channels of digital information distribution significantly change the value of information. I will touch on this in one of my next blogs. Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to get a premium for the 'full' content. Offering no freemium content seems to take them out of business, because they are then apparently no longer visible in today's most relevant channels of information consumption. But the more freemium is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gate-keepers', which - to me - somehow does not match the spirit of the WWW and Generation Y's perception of information consumption and exchange.

    Read the article

  • Is application-specific data required for good unit testing?

    - by stinkycheeseman
    I am writing unit tests for a fairly simple function that depends on a fairly complicated set of data. Essentially, the object I am manipulating represents a graph, and this function determines whether to chart a line, bar, or pie chart based on the data that came back from the server. This is a simplified version, using jQuery: setDefaultChartType: function (graphObject) { var prop1 = graphObject.properties.key; var numCols = 0; $.each(graphObject.columns, function (colIndex, column) { numCols++; }); if ( numCols > 6 || ( prop1 > 1 && graphObject.data.length == 1) ) { graphObject.setChartType("line"); } else if ( numCols <=6 && prop1 == 1 ) { graphObject.setChartType("bar"); } else if ( numCols <=6 && prop1 > 1 ) { graphObject.setChartType("pie"); } } My question is: should I use mock data that is procured from the actual database? Or can I just fabricate data that fits the different cases? I'm afraid that fabricating data will not expose bugs arising from changes in the database, but on the other hand, using real data would require a lot more effort to keep the test data up to date, and I'm not sure that effort is necessary.

    Read the article

  • Draw "vision cone" / targeting element onto game world

    - by gkimsey
    I'm wanting to indicate various things using a "pie slice" sort of shape as below, similar to vision cones in stealth-game minimaps, or targeting indicators in RTS-type games for frontal area attacks. Something generic enough to be used for both would be ideal. I need to be able to procedurally (and efficiently) change things like the slice width and length, color, transparency, position in the world, etc. For my particular situation, there's no concern with elevation, funky terrain, or really any third axis at all as far as this element is concerned. I have two initial inclinations on how to accomplish this: 1) Manually generate the vertices for a main triangle (possibly two, superimposed to get the border effect), plus a handful more to approximate the arc at the end, and roll it into a mesh. 2) Use some sort of 2D drawing library to create a circle and mask it off at the right angles, render to texture, and use that. For reference, I have some experience with Ogre3D, but I'm not attached to it, as this is a mostly academic pursuit at the moment. Other technologies that might be better at accomplishing this are more than welcome. Finally, I'm kind of curious about how to do a "flashlight" or similar 3D effect that could produce the same result, but on all surfaces in the lit area.
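    A sketch of option 1, generating the slice as a triangle fan; Vector2 here is a stand-in for whatever 2-component vector type your engine provides, and apex, facing, halfAngle, radius, and segments are the parameters you would drive from game state:

    ```csharp
    using System;
    using System.Collections.Generic;

    static class VisionCone
    {
        // Build the vertices of a "pie slice" as a triangle fan: the apex first,
        // then points along the arc from (facing - halfAngle) to (facing + halfAngle).
        // Width, length and position are all just parameters, so they can change per frame.
        public static List<Vector2> BuildFan(Vector2 apex, float facing, float halfAngle,
                                             float radius, int segments)
        {
            var vertices = new List<Vector2>(segments + 2) { apex };
            for (int i = 0; i <= segments; i++)
            {
                float angle = facing - halfAngle + (2f * halfAngle * i) / segments;
                vertices.Add(new Vector2(apex.X + radius * (float)Math.Cos(angle),
                                         apex.Y + radius * (float)Math.Sin(angle)));
            }
            return vertices;
        }
    }

    // Minimal Vector2 stand-in so the sketch is self-contained; use your engine's own type.
    struct Vector2
    {
        public float X, Y;
        public Vector2(float x, float y) { X = x; Y = y; }
    }
    ```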

    Read the article

  • What is the simplest way for a slippy SVG visualization?

    - by totymedli
    I have a big SVG file representing a complicated graph with hundreds of points. I want to present this on a web page. My idea was that I could make it work the way Google Maps presents its maps: slippy, draggable, movable. I'm looking for an easy and fast JavaScript library which could do the work. What I need for my "map" is the drag/move and zoom ability, and some way to click on the points of the picture which makes a little bit of information appear about that point, like Google Maps markers. I'm looking for a free/open source library. I saw some solutions, but I'm uncertain about them, and none of them seemed to be perfect: Polymaps - I love the technique it uses, but I don't know much about this library. Leaflet - I love the simplicity of it, but I don't know how I could apply it to my SVG. Raphael - I have heard about the awesomeness of this, but it seemed like a lot of work for this task. What would be the best/easiest solution for my problem, and what is your opinion about the above libraries?

    Read the article

  • How can you easily determine the textureRect for tiled maps in SFML 2.0?

    - by ThePlan
    I'm working on creating a 2D map prototype, and I've come to the rendering bit of it. I have a tilesheet with tiles; each tile is 30x30 pixels, and there's a 1px border to delimit them. In SFML the usual method of drawing a part of a tilesheet is declaring an IntRect with the rectangle coordinates, then calling the setTextureRect() method on a sprite. In a small game it would work, but I have well over 45 tiles and am adding more every day. I can't declare 45 IntRects, one for every material; the map is not optimized yet, and it would get even worse if I had to call the setTextureRect() method for each one on top of declaring all those IntRects. How could I simplify this task? All I need is a very simple and flexible solution for extracting a region of the tilesheet. Basically I have a Tile class. I create multiple instances of tiles (vectors), and each tile has a position and a material. I parse a map file, and as I parse it I set the materials of the tiles according to the parsed map file; all I need to do then is render. Basically I need to do something like this: switch(tile.getMaterial()) { case GRASS: material_sprite.setTextureRect(something); window.draw(material_sprite); break; case WATER: material_sprite.setTextureRect(something); window.draw(material_sprite); break; // handle more cases }
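    A sketch of computing the sub-rectangle from the material's index instead of declaring one rectangle per material, written here in C# purely for illustration (the same arithmetic maps directly onto an sf::IntRect); it assumes a row-major sheet with `columns` tiles per row and a 1 px border around and between tiles, which is the part to adjust if your sheet's border layout differs:

    ```csharp
    struct TileRect { public int Left, Top, Width, Height; }

    static class TileSheet
    {
        public static TileRect RectForTile(int tileIndex, int columns)
        {
            const int tileSize = 30;   // tile dimensions from the post
            const int border = 1;      // 1 px gutter assumed around and between tiles

            int col = tileIndex % columns;
            int row = tileIndex / columns;

            return new TileRect
            {
                Left = border + col * (tileSize + border),
                Top = border + row * (tileSize + border),
                Width = tileSize,
                Height = tileSize
            };
        }
    }
    ```

    If the Tile materials form an enum whose values follow the sheet order, the cast (int)tile.getMaterial() can serve as the tile index and the whole switch disappears.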

    Read the article

  • Simple (and fast) dice physics

    - by Markus von Broady
    I'm programming a throw of 5 dice in ActionScript 3 + AwayPhysics (a Bullet physics port). I had a lot of fun tweaking friction, masses, etc., and in the end I found the best results with more physics ticks per frame. Currently I use 10 ticks per frame (1/60 s) and it's OK, though I see a further improvement with 20 ticks. Even though it's only 5 cubes (dice) in a box (or a floor with 3 walls, really), I can't simulate 20 ticks per frame and keep FPS at 60 on a medium-aged PC. That's why I decided to precompute the frames of the animation, finishing around 1700 ticks in 2 seconds. The Flash player is frozen for these 2 seconds, and I'm afraid this will grow to 5 seconds or even more if I simulate multi-threading and compute frames in the background of other heavy processes and CPU drawing (the dice are only a part of this game). Because I want both players to see the dice roll in the same way, I can't compute the physics whenever there are free resources and build a buffer of at least one throw of each type (where type is the number of dice thrown). I'm afraid players will see a "preparing dice..." message too often and for too long. I think the only solution to this problem is replacing the physics engine with something simpler, or creating my own physics engine. Do you have any formulas for cube-cube and cube-wall collision detection, and for calculating how their angular and linear velocities should change after a collision occurs?
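    For the velocity change, the standard impulse-based response (the same model Bullet itself is built on) is a reasonable starting point; for a contact with normal n pointing from body A to body B, contact offsets r from the centers of mass, restitution e, masses m, and inertia tensors I:

    ```latex
    % Relative velocity of the contact point
    \mathbf{v}_{rel} = (\mathbf{v}_B + \boldsymbol{\omega}_B \times \mathbf{r}_B)
                     - (\mathbf{v}_A + \boldsymbol{\omega}_A \times \mathbf{r}_A)

    % Scalar impulse magnitude along the contact normal
    j = \frac{-(1 + e)\,(\mathbf{v}_{rel} \cdot \mathbf{n})}
             {\tfrac{1}{m_A} + \tfrac{1}{m_B}
              + \mathbf{n} \cdot \big( (I_A^{-1} (\mathbf{r}_A \times \mathbf{n})) \times \mathbf{r}_A
                                     + (I_B^{-1} (\mathbf{r}_B \times \mathbf{n})) \times \mathbf{r}_B \big)}

    % Velocity updates; a static wall is the same formula with 1/m and I^{-1} taken as zero
    \mathbf{v}_A' = \mathbf{v}_A - \tfrac{j}{m_A}\,\mathbf{n} \qquad
    \mathbf{v}_B' = \mathbf{v}_B + \tfrac{j}{m_B}\,\mathbf{n}

    \boldsymbol{\omega}_A' = \boldsymbol{\omega}_A - j\, I_A^{-1} (\mathbf{r}_A \times \mathbf{n}) \qquad
    \boldsymbol{\omega}_B' = \boldsymbol{\omega}_B + j\, I_B^{-1} (\mathbf{r}_B \times \mathbf{n})
    ```

    Detection for boxes is the usual separating-axis test (cube versus wall reduces to checking the 8 corners against the wall plane), but friction, resting contacts and stacking are where a hand-rolled engine gets hard, so it may be worth profiling AwayPhysics before rewriting it.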

    Read the article

  • How to better explain complex software processes in software specs?

    - by Lostsoul
    I'm really struggling with my software specs. I am not a professional programmer, but I enjoy programming for fun, and I've made some software that I want to sell later, though I'm not happy with the code quality. So I wanted to hire a real developer to rewrite my software in a more professional way so that it will be maintainable by other developers in the future. I found some sample specs, made my own by applying their structure to my document, and then got my developer friend to read it and give me advice. After an hour and a half he understood exactly what I was trying to do and how I did it (my algorithms, stack, etc.). How can I get better at explaining things to developers? I add many details and explanations for everything (including working code), but I'm unsure of the best way to learn to pass on detailed domain knowledge (my software applies big data, machine learning, and graph theory to finance). My end goal is to get them to understand as much as possible from the document and then ask about anything they do not understand, but right now it seems they need to extract a lot of information from me. How can I get better at communicating domain knowledge to developers?

    Read the article
