Search Results

Search found 25496 results on 1020 pages for 'development fabric'.

  • Efficient skeletal animation

    - by Will
    I am looking at adopting a skeletal animation format (as prompted here) for an RTS game. The individual representation of each model on-screen will be small, but there will be lots of them! In skeletal animation formats such as MD5, each individual vertex can be attached to an arbitrary number of joints. How can you support this efficiently while doing the interpolation in GLSL? Or do engines do their animation on the CPU? Or do engines set an arbitrary limit on joints per vertex and invoke no-op multiplies for the slots a vertex doesn't use? Are there RTS-like games that use skeletal animation, which would suggest I have nothing to worry about on integrated graphics cards if I go the bones route?
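
    For illustration, a minimal sketch of the fixed-limit idea the question describes, done on the CPU in C# (System.Numerics; the SkinnedVertex layout is hypothetical): every vertex carries exactly four joint slots, and unused slots get weight zero, so their multiplies contribute nothing. The same loop maps directly onto a GLSL vertex shader using ivec4/vec4 attributes.

        // Fixed-limit skinning: 4 joint slots per vertex; unused slots have
        // weight 0, so they are effectively no-op multiplies.
        using System.Numerics;

        struct SkinnedVertex
        {
            public Vector3 Position;
            public int J0, J1, J2, J3;      // joint indices (pad unused with 0)
            public float W0, W1, W2, W3;    // weights summing to 1 (pad with 0)
        }

        static class Skinner
        {
            // jointMatrices: current-pose joint transforms, premultiplied by
            // the inverse bind pose.
            public static Vector3 Skin(SkinnedVertex v, Matrix4x4[] jointMatrices)
            {
                Vector3 p = Vector3.Zero;
                p += Vector3.Transform(v.Position, jointMatrices[v.J0]) * v.W0;
                p += Vector3.Transform(v.Position, jointMatrices[v.J1]) * v.W1;
                p += Vector3.Transform(v.Position, jointMatrices[v.J2]) * v.W2;
                p += Vector3.Transform(v.Position, jointMatrices[v.J3]) * v.W3;
                return p;
            }
        }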

  • World to Pixel Transformation

    - by D00d
    My objects have a location in world coordinates (basically 1.0f is a meter). If I simply draw my objects using their world coordinates, each meter will correspond to a pixel. Obviously that's not what I want. Now, I don't want to have to apply a transformation to each and every object's position when I draw them. As I happen to be using XNA, and SpriteBatch allows a Matrix to be passed as an argument in its Begin method, I was wondering if there is a way to pass the world-to-pixel transformation in there. Any suggestions? So far Matrix.CreateScale(new Vector3(zoom, zoom, 1)) puts the objects in their proper spots, but it also scales up the sprites. Is there a way to transform the position without enlarging the sprite? Thanks
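
    One way to get this behavior (a hedged sketch using standard XNA 4.0 SpriteBatch overloads, not necessarily the only approach): keep the world-to-pixel matrix in Begin so positions are transformed, and cancel the scale on the sprite quad itself by drawing each sprite with a per-sprite scale of 1/zoom.

        // Sketch: positions go through the world-to-pixel matrix, while each
        // sprite's own size stays constant because its draw scale is 1/zoom.
        Matrix worldToPixel = Matrix.CreateScale(zoom, zoom, 1f);

        spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, worldToPixel);
        spriteBatch.Draw(
            texture,
            objectWorldPosition,   // in meters; the matrix converts it to pixels
            null,
            Color.White,
            0f,
            Vector2.Zero,
            1f / zoom,             // cancels the matrix scale for the sprite quad
            SpriteEffects.None,
            0f);
        spriteBatch.End();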

  • Graphics hardware

    - by Vanangamudi
    Which vendor provides better GPGPU? My requirements are confined to rendering, utilising the GPU for BSDF building. For example, Intel has started providing the Ivy Bridge chipset GPUs, which are said to be comparably fast to HD5960 cards. I'm not against NVIDIA or AMD, but I'm a fan of Intel. How does it compare to NVIDIA in price and performance? If possible, may I know how all of them perform with OpenCL? I'm not sure if it is right to ask this here, but I don't know where else to ask.

  • Drawing a texture line between two vectors in XNA WP7

    - by Krav
    I want to create a simple graph maker in WP7. The goal is to draw a textured line between two vectors that the user defines with touch. I have already made the rotation, and it works, but not correctly, because it doesn't take the line texture's height into account, and because of that there are too many overlapping textures. So it does draw the line, but with far too many draws. How could I calculate it correctly? Here is the code:

        public void DrawLine(Vector2 st, Vector2 dest, NodeUnit EdgeParent, NodeUnit EdgeChild)
        {
            float d = Vector2.Distance(st, dest);
            float rotate = (float)(Math.Atan2(st.Y - dest.Y, st.X - dest.X));
            direction = new Vector2((dest.X - st.X) / (float)d, (dest.Y - st.Y) / (float)d);
            Vector2 _pos = st;
            World.TheHive.Add(new LineHiveMind(linetexture, _pos, rotate, EdgeParent, EdgeChild, new List<LineUnit>()));
            for (int i = 0; i < d; i++)
            {
                World.TheHive.Last()._lines.Add(new LineUnit(linetexture, _pos, rotate, EdgeParent, EdgeChild));
                _pos += direction;
            }
        }

    d is the distance between st (the starting node) and dest (the destination node), rotate is the rotation, direction is the direction from the starting node to the destination node, and _pos is the stepping position. Thanks for any suggestions/help!
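
    A common alternative (a hedged sketch using the standard XNA SpriteBatch.Draw overload with a Vector2 scale) is to draw the whole line as a single quad stretched to the distance between the two points, instead of stamping one texture per pixel of length:

        // Sketch: one draw call per line. The texture is stretched along X to
        // the distance between the points and kept at its native height.
        void DrawLine(SpriteBatch spriteBatch, Texture2D linetexture, Vector2 st, Vector2 dest)
        {
            float d = Vector2.Distance(st, dest);
            float rotate = (float)Math.Atan2(dest.Y - st.Y, dest.X - st.X);

            spriteBatch.Draw(
                linetexture,
                st,                                          // start point
                null,
                Color.White,
                rotate,
                new Vector2(0f, linetexture.Height / 2f),    // pivot at the line's left-center
                new Vector2(d / linetexture.Width, 1f),      // stretch to the full distance
                SpriteEffects.None,
                0f);
        }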

  • Building (simple) stellar systems

    - by space borg
    Hi. I'm currently looking at how to easily simulate some stellar systems (meaning some central stars and then some planets, with maybe satellites), in order to later allow a space-based strategy game (hence with space ships moving around). This should all be based around time, so the state of each system differs through time. I'm quite struggling with the math behind this topic, for example: ellipse-related math, and creating the path from planet A to B with time in mind (their respective positions will change over time)... Do you know of any resources for that? I wouldn't mind even buying books about it. Thanks in advance, space borg. Side note: how to display all this stuff isn't an issue at this point in time; I have simple plans for that (basically sticking to 2D and a "high level view" with no space ship/planet details, just markers).
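
    For the ellipse math specifically, the standard tool is Kepler's equation: given a time t, compute the mean anomaly, solve numerically for the eccentric anomaly, and convert that to a position on the ellipse. A minimal sketch in C# (parameter names are placeholders):

        using System;

        static class Orbit
        {
            // Position of a body on an elliptical orbit at time t (2D, star at a focus).
            // a = semi-major axis, e = eccentricity (0 <= e < 1), period = orbital period.
            public static (double x, double y) PositionAtTime(double a, double e, double period, double t)
            {
                double M = 2.0 * Math.PI * (t % period) / period;   // mean anomaly

                // Solve Kepler's equation M = E - e*sin(E) for E with Newton's method.
                double E = M;
                for (int i = 0; i < 10; i++)
                    E -= (E - e * Math.Sin(E) - M) / (1.0 - e * Math.Cos(E));

                double x = a * (Math.Cos(E) - e);                   // star sits at the focus
                double y = a * Math.Sqrt(1.0 - e * e) * Math.Sin(E);
                return (x, y);
            }
        }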

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning the OpenGL 3.2 way of doing things and I think it's quite great; I'm actually understanding more about shaders and the non-fixed pipeline in one week than in the two years I spent trying to learn the OpenGL fixed-pipeline functions. But here's my question: from what I think I've understood, the vertex shader is run for each vertex in the VBO, but the fragment shader is run per pixel (is that right?), which is a huge number compared to, say, the 3 vertices of a triangle. Now it seems that the vertex shader's out variables (like colors and stuff) are passed 1-to-1 to the fragment shader. But let's say that I pass the position of the vertex from the vertex shader to the fragment shader. How is this all executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why?
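
    What actually happens (for the default smooth interpolation qualifier) is that no single vertex is chosen: the rasterizer blends all three vertices' outputs for every fragment, weighted by that fragment's barycentric coordinates inside the triangle. Conceptually it is something like this sketch (C#, illustrative only; real hardware also applies perspective correction):

        using System.Numerics;

        static class Interpolation
        {
            // Blend a per-vertex attribute (color, position, ...) for a fragment
            // at barycentric coordinates (wA, wB, wC), where wA + wB + wC == 1.
            public static Vector3 Interpolate(
                Vector3 outA, Vector3 outB, Vector3 outC,
                float wA, float wB, float wC)
            {
                // A fragment exactly on vertex A has weights (1,0,0) and gets
                // A's output; a fragment in the middle gets a mix of all three.
                return outA * wA + outB * wB + outC * wC;
            }
        }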

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, like this:

        ---------. P2
        |
        |
        P1 .---------

    My agents have a velocity, a position, etc. Could you tell me how to implement avoidance for my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                //TODO
            }
            return force;
        }

    I then take all the forces returned by my boid functions and apply them to my agent. I just need to know how to do the same for my walls. Thanks for your help.
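
    One common way to fill in that loop (sketched here in C# with System.Numerics, treating each wall as a 2D segment from P1 to P2): find the closest point on the wall segment to the agent, and if the agent is within some avoidance radius, push it away from that point, scaled by proximity.

        using System.Numerics;

        static class WallAvoidance
        {
            // Repulsion from one wall segment (p1..p2); zero outside the radius.
            public static Vector2 Repulsion(Vector2 agentPos, Vector2 p1, Vector2 p2, float radius)
            {
                // Closest point on the segment to the agent.
                Vector2 seg = p2 - p1;
                float t = Vector2.Dot(agentPos - p1, seg) / Vector2.Dot(seg, seg);
                if (t < 0f) t = 0f; else if (t > 1f) t = 1f;
                Vector2 closest = p1 + seg * t;

                Vector2 away = agentPos - closest;
                float dist = away.Length();
                if (dist >= radius || dist == 0f)
                    return Vector2.Zero;

                // Stronger push the closer the agent is to the wall.
                return Vector2.Normalize(away) * ((radius - dist) / radius);
            }
        }

    Summing this over all walls and adding it to the other steering forces, as the question already does for the other behaviors, should give basic wall avoidance.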

  • Where must I put .xnb files in a MonoGame project using VS2010?

    - by user23899
    Hello there. My problem is described in the "The Content Pipeline" paragraph of http://blogs.msdn.com/b/bobfamiliar/archive/2012/08/07/windows-8-xna-and-monogame-part-3-code-migration-and-windows-8-feature-support.aspx#comments — the author describes how to fix it in VS2012 by putting the .xnb files into the \AppX\Content folder. But I use VS2010 and the MonoGame templates for it, and there are no folders like that, so where must I put these assets to run the game correctly?
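
    As a general rule (a hedged sketch; exact folders can vary by MonoGame template and version), ContentManager resolves asset names relative to Content.RootDirectory next to the built executable, so a Content folder inside the build output directory is where Load looks:

        // Sketch: with RootDirectory = "Content", this loads
        // <output folder>\Content\character.xnb at runtime.
        Content.RootDirectory = "Content";
        Texture2D character = Content.Load<Texture2D>("character");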

  • How can I get my game to show up in the Games Explorer on Windows?

    - by Kraemer
    I want to create an installer for a game which allows an icon to be put in the Games Explorer on Vista and Windows 7. I have created the GDF, then built the script for the project and obtained the .h, .gdf and .rc files. But I can't compile (using Visual Studio 2010) the .rc file into an executable to be used after that to create the installer. I get the following error after I set the executable path: "Could not load file or assembly 'Microsoft.VisualStudio.HpcDebugger.Impl, Version 10.0.0.0, Culture=neutral, PublickKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified." Any ideas what I'm doing wrong?

  • Estimating costs in a GOAP system

    - by fullwall
    I'm currently developing a GOAP system in Java. An explanation of GOAP can be found at http://web.media.mit.edu/~jorkin/goap.html. Essentially, it uses A* to plot between Actions that mutate the world state. To give all Actions and Goals a fair chance to execute, I'm using a heuristic function to estimate the cost of doing something. What is the best way to estimate this cost so that it is comparable to all the other costs? As an example, estimating the cost of running away from an enemy versus attacking it: how should each cost be calculated so the two are comparable?

  • In 3D camera math, calculate what Z depth is pixel unity for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific Z depth, pixels drawn are 1 to 1 with my source textures. So 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a quad of 32x32 pixels, the quad size is 3.2 units wide and tall, and the texcoords are 32 / the size of the texture wide and tall. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera the sprite is pixel for pixel the same as in Photoshop. For 3D games I sometimes still want to be able to do this, but depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
        {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    UPDATE: I'm going to leave this here in case anyone has a different solution, can explain how they do it, or can explain why this works. This is what I figured out. In my system I use a 10th-scale of units to pixels on non-retina displays and a 20th-scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width, then divide that by 2 to get the left and right unit positions. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary convention I made up: 0.1 units = 1 non-retina pixel or 2 retina pixels. I could have made it 0.01 units = 2 pixels, and someday I might switch to that, but for now it's the other. So the width of the screen in units is 32.0, which means the left-most pixel is at -16.0 and the right-most is at +16.0. After messing around a bit I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16, I get the depth required to get 1:1 pixels. I don't know why, and I don't know if the 16 is related to the screen size or just a lucky guess. But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating unityDepth that way. If someone gives me a better answer I'll checkmark it.
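
    For what it's worth, the 16 here is not a coincidence: it is the half-width of the screen in world units. In the matrixPerspective code above, m[0] = 2 * near / (right - left) = 1 / tan(fov / 2), so multiplying m[0] by the half-width gives halfWidth / tan(fov / 2), which is exactly the distance at which the frustum is one screen wide. A hedged check of this in C# (names hypothetical):

        using System;

        static class PixelUnity
        {
            // Depth at which the frustum spans exactly screenWidthUnits wide,
            // i.e. where world units map 1:1 onto screen pixels in this setup.
            // fovDegrees is the horizontal FOV (the 'angle' in matrixPerspective).
            public static double UnityDepth(double screenWidthUnits, double fovDegrees)
            {
                double halfFovRadians = fovDegrees * Math.PI / 360.0;
                return (screenWidthUnits / 2.0) / Math.Tan(halfFovRadians);
            }
        }

        // UnityDepth(32, 45) = 38.63 and UnityDepth(32, 30) = 59.71, matching
        // the measured 38.5 and 59.5 above.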

  • How many textures can I usually bind at once?

    - by Avi
    I'm developing a game engine, and it's only going to work on modern (Shader model 4+) hardware. I figure that, by the time I'm done with it, that won't be such an unreasonable requirement. My question is: how many textures can I bind at once on a modern graphics card? 16 would be sufficient. Can I expect most modern graphics cards to support that amount? My GTX 460 appears to support 32, but I have no idea if that's representative of most modern video cards.
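
    For reference, the per-stage limit is queryable at runtime, and the OpenGL 3.x specifications require GL_MAX_TEXTURE_IMAGE_UNITS to be at least 16 per shader stage, so Shader Model 4+ hardware should cover the 16 needed here. A quick query, sketched with the OpenTK binding in C# (any GL binding exposes the same enums):

        using OpenTK.Graphics.OpenGL;

        // Per-stage limit for the fragment shader (at least 16 on GL 3.x hardware).
        int fragmentUnits = GL.GetInteger(GetPName.MaxTextureImageUnits);

        // Combined limit across all shader stages.
        int combinedUnits = GL.GetInteger(GetPName.MaxCombinedTextureImageUnits);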

  • As an indie, how do you protect your game?

    - by user16829
    As an indie, you might not work in a company, and you may have a great game idea that you feel is going to be a big success. When you release your game, how do you protect it as your own creation, so that someone can't steal the title and publish a "sequel", e.g. Your-Game-Name 2, 3, 4, or even produce by-products (the way Angry Birds has) without your permission? How can we prevent these things from happening by legal methods, like copyrights and trademarks? If a professional could fill us in on this, that would be great.

  • Is this an effective and simple way to display a moving image? (SDL2)

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I'm curious: I was messing around, and is this an effective way to move an image? One problem is that it drags the image along the path it moves through.

        #include "SDL.h"
        #include "SDL_image.h"

        int main(int argc, char* argv[])
        {
            bool exit = false;
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480, SDL_WINDOW_SHOWN);
            SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
                SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

            SDL_Surface *png = IMG_Load("character.png");
            SDL_Rect src;
            src.x = 0;   src.y = 0;   src.w = 161;  src.h = 159;
            SDL_Rect dest;
            dest.x = 50; dest.y = 50; dest.w = 161; dest.h = 159;

            SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
            SDL_FreeSurface(png);

            while (exit == false)
            {
                // Drain pending events so the loop can actually end.
                SDL_Event e;
                while (SDL_PollEvent(&e))
                    if (e.type == SDL_QUIT)
                        exit = true;

                dest.x++;

                // Clear to a known color before drawing, so the previous
                // frame's image doesn't linger and smear behind the sprite.
                SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, &src, &dest);
                SDL_RenderPresent(ren);
            }

            SDL_Delay(5000);
            SDL_DestroyTexture(tex);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
            return 0;
        }

  • XNA GameTime TotalGameTime slower than real time

    - by robasaurus
    I have set up an empty test project consisting of a System.Diagnostics.Stopwatch and this in the Draw method:

        spriteBatch.DrawString(font, gameTime.TotalGameTime.TotalSeconds.ToString(), new Vector2(100, 100), Color.White);
        spriteBatch.DrawString(font, stopwatch.Elapsed.TotalSeconds.ToString(), new Vector2(100, 200), Color.White);

    The GameTime.TotalGameTime displayed is slower than the stopwatch (by about 5 seconds per minute), even though GameTime.IsRunningSlowly is always false. Why is this? The reason this is an issue is that I have a server which uses a Stopwatch, and it runs faster than my client game. For instance, my client notifies the server it has dropped a mine which explodes in one minute. Because the stopwatch is faster, the server state explodes the mine before the client does, and they are out of sync. I don't want to have to notify the client when the server explodes it, as this would use unnecessary bandwidth.
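
    One way to take the drift out of the picture (a sketch of one option, not the only fix): drive the countdown on both client and server from the same kind of wall-clock source, e.g. a Stopwatch on the client too, rather than from accumulated fixed-step GameTime.

        // Sketch: schedule gameplay timers off a wall-clock Stopwatch on both
        // sides, so fixed-step drift cannot desynchronize client and server.
        using System.Diagnostics;

        class MineTimer
        {
            private readonly Stopwatch clock = Stopwatch.StartNew();
            private readonly double explodeAtSeconds;

            public MineTimer(double fuseSeconds)
            {
                explodeAtSeconds = clock.Elapsed.TotalSeconds + fuseSeconds;
            }

            public bool ShouldExplode => clock.Elapsed.TotalSeconds >= explodeAtSeconds;
        }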

  • How can I make a collection of mini-games in XNA where the user can download packs of minigames and the main .exe can run them without being altered?

    - by Pyroka
    I'm currently making a PC game in XNA. It's actually a collection of mini-games (there are 3 mini-games at the moment); however, I plan to make and add more, in downloadable 'packs'. My question is: what's the best way to achieve this? Currently my thoughts are: create a 'game' interface; build the games to this interface, but compile them as .dlls; have the main .exe scan a directory and load the .dlls at runtime. I've not messed around with the idea much, but I know there are applications that use this plug-in approach (Notepad++ seems to), though I'm not sure of any games that do (although I'm sure they must exist). However, it seems that this is a problem that has been solved before, so I'm wondering if there's any form of established best practice.
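
    For what it's worth, the reflection side of that plan is straightforward in .NET. A hedged sketch (IMiniGame and its members are hypothetical names for the shared contract):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        // The shared contract both the .exe and the pack .dlls reference.
        public interface IMiniGame
        {
            void Update();
            void Draw();
        }

        public static class MiniGameLoader
        {
            // Scan a folder for .dlls and instantiate every concrete type
            // that implements IMiniGame.
            public static List<IMiniGame> LoadAll(string folder)
            {
                var games = new List<IMiniGame>();
                foreach (string file in Directory.GetFiles(folder, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(file);
                    foreach (Type type in assembly.GetTypes())
                    {
                        if (typeof(IMiniGame).IsAssignableFrom(type) && !type.IsAbstract)
                            games.Add((IMiniGame)Activator.CreateInstance(type));
                    }
                }
                return games;
            }
        }

    A design note: keeping the interface in its own assembly, referenced by both the main .exe and every pack, avoids type-identity problems when the packs are compiled separately.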

  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, aka DrawableGameComponents, which work separately from one another. I'm now wondering what's the best way to make transitions between them, and how to affect them from other scenes. Let's say I wanted to "push" one scene out to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other: basically the question stays the same, how would you do that without having to handle it in every single scene?

    While writing this I'm realizing it will be the same thing for all kinds of transitions. Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But this wouldn't work if objects that are drawn have their own method and aren't drawn within the scene, or if an effect has to be applied to the whole scene. That means maybe scenes have to be drawn to their own render target? That way one call to the base class after the normal drawing could be enough to apply the effects while drawing it to the main render target. But I once heard there are problems when switching from target to target, back and forth. So is that even a viable option? As you can see, I have some basic ideas of how it might work, but nothing specific. I'd like to learn what's the common way to achieve such things, a general way to apply all kinds of transitions.
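
    The render-target idea at the end is workable in XNA 4.0: each scene draws into its own RenderTarget2D, and the manager composites the targets with whatever offset or tint the current transition needs, so individual scenes never know a transition is happening. A hedged sketch (a fragment from a hypothetical manager's Draw; width, height, progress and the two scenes are assumed fields):

        // Sketch: the manager owns one RenderTarget2D per active scene and
        // composites them; here, a horizontal "push" driven by progress in [0,1].
        RenderTarget2D outgoingRT = new RenderTarget2D(GraphicsDevice, width, height);
        RenderTarget2D incomingRT = new RenderTarget2D(GraphicsDevice, width, height);

        // 1. Let each scene draw itself normally, but into its own target.
        GraphicsDevice.SetRenderTarget(outgoingRT);
        outgoingScene.Draw(gameTime);
        GraphicsDevice.SetRenderTarget(incomingRT);
        incomingScene.Draw(gameTime);
        GraphicsDevice.SetRenderTarget(null);   // back to the backbuffer

        // 2. Composite with the transition applied; a fade would instead vary
        //    the Color alpha while both targets are drawn at Vector2.Zero.
        float offset = progress * width;
        spriteBatch.Begin();
        spriteBatch.Draw(outgoingRT, new Vector2(offset, 0), Color.White);
        spriteBatch.Draw(incomingRT, new Vector2(offset - width, 0), Color.White);
        spriteBatch.End();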

  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: there are two (so far) types of entities that can participate in combat; let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped with another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once.

    I'm trying to write the logic engine for combat by having combatants injected into it, and I want to be able to mock everything for testing. In order to separate logic and data, I want to have two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database, and the other for my mock objects.

    I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue: a combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface, like IEntityProvider, that I inject into the data classes?

    Also, with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit that completely so I don't need to check for it? If someone could help me just lay out a class diagram or something for this, it would help me a lot. Thanks.

    Edit: Also useful to note, the mock data class doesn't really need the Entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.
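
    To make the shape of one possible answer concrete, here is how the pieces could slot together, sketched in C# for brevity (all names hypothetical, following the question's own IEntityProvider idea): the entity lookup lives behind its own small interface, which both the DB-backed and mock data classes receive by injection, so the mocks stay in the loop and the data classes stay ignorant of the player-vs-NPC split.

        // Hypothetical layout: entity access is its own injectable seam, so
        // the data implementations (DB vs. mock) don't fork on player vs. NPC.
        public interface IEntityProvider
        {
            object GetEntity();                 // the player or NPC this combatant wraps
        }

        public interface ICombatantData
        {
            IEntityProvider Entities { get; }   // the getter lives on the data side
            int Health { get; set; }
        }

        public interface ICombatantLogic
        {
            void TakeTurn(ICombatantData self, ICombatantData target);
        }

        // Both the DB-backed and the mock data class just hold whatever
        // provider they are given; a mock can inject a stub provider.
        public sealed class DbCombatantData : ICombatantData
        {
            public DbCombatantData(IEntityProvider entities) { Entities = entities; }
            public IEntityProvider Entities { get; }
            public int Health { get; set; }
        }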

  • Android loading screens blocking, good practice?

    - by Oren
    I've noticed many (if not all) android games don't support the "back" button functionality during their loading screens. Which leads to some frustrating moments when a user accidentally starts up the game and has to wait for the long loading stage to end in order to close it. So my questions are: 1) Why is that ? Is there a good reason to avoid something like asynchronous loading (or some other solution to this problem) in android games ? 2) If there is no good reason not to support this functionality, what would be the best way to accomplish it ?
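
    The usual shape of the fix, sketched in C# here since the pattern is platform-agnostic (on Android the same idea maps to a worker thread or AsyncTask): load assets on a background task that polls a cancellation flag, so the UI loop keeps handling input, including back, while loading runs.

        using System.Threading;
        using System.Threading.Tasks;

        class LoadingScreen
        {
            private readonly CancellationTokenSource cts = new CancellationTokenSource();
            private Task loading;

            public void Begin()
            {
                // LoadAssets is a hypothetical stand-in for the real loader.
                loading = Task.Run(() => LoadAssets(cts.Token), cts.Token);
            }

            public void OnBackPressed()
            {
                cts.Cancel();   // abort loading so the game can exit promptly
            }

            private void LoadAssets(CancellationToken token)
            {
                // For each asset: token.ThrowIfCancellationRequested(), then load it.
            }
        }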

  • Synchronise graphics and logic code

    - by Skeith
    I have a procedural approach to the game loop that runs various classes. It looks like this:

        - continue any in-progress animations
        - check for user input
        - apply AI
        - move things
        - resolve events such as collisions
        - draw it all to screen

    I have seen a lot of posts about how drawing should run separately, as fast as it can, possibly in another thread. My problem is: if the drawing runs as fast as it can, what happens if it tries to draw while I'm still applying the AI or resolving a collision? It could draw the wrong thing on screen. This seems to be a well-established idea, so there must be an explanation for this problem; I just can't get my head around it. The only solution I have is to update the screen so fast that any errors like that get refreshed before we see them, but that sounds hacky. So how does this work, and how would you implement it so that they are in sync but running at different speeds?
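
    A hedged sketch of the usual answer, the fixed-timestep-with-interpolation pattern (popularized by the "Fix Your Timestep" article): logic advances in fixed increments, and rendering runs as often as it likes but never reads half-updated state, because it only blends the two last completed states. GameState, Update, Blend, Render and SecondsSinceLastFrame below are hypothetical stand-ins.

        // Sketch of a fixed-step logic loop with a free-running renderer.
        const double Step = 1.0 / 60.0;          // fixed logic step in seconds
        double accumulator = 0.0;
        GameState previous = initial, current = initial;

        while (running)
        {
            accumulator += SecondsSinceLastFrame();

            while (accumulator >= Step)
            {
                previous = current;               // keep the last completed state
                current = Update(current, Step);  // logic only ever runs here
                accumulator -= Step;
            }

            // Rendering reads only *finished* states, blending between the last
            // two, so it can run at any rate (even on another thread, given a
            // lock or double buffer around the swap) without ever observing a
            // half-applied AI or collision pass.
            double alpha = accumulator / Step;
            Render(Blend(previous, current, alpha));
        }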

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    My render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    Assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs:

    1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated; however, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.

    2. Use a typed output array variable of the expected type. In this case, I think it would be something like this:

        out vec3 [3] output;

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
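
    For reference, the layout(location = n) qualifiers map the fragment outputs to the draw-buffer slots set up on the CPU side, so the attachment order passed to glDrawBuffers is what ties each out variable to its texture. A hedged sketch of that setup (OpenTK/C# binding; the fbo handle and its three attached GL_RGB32F textures are assumed to exist and be complete already):

        using OpenTK.Graphics.OpenGL;

        // Route fragment outputs 0/1/2 to color attachments 0/1/2 of the FBO.
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);

        DrawBuffersEnum[] buffers =
        {
            DrawBuffersEnum.ColorAttachment0,   // layout(location = 0) out WorldPos
            DrawBuffersEnum.ColorAttachment1,   // layout(location = 1) out Diffuse
            DrawBuffersEnum.ColorAttachment2,   // layout(location = 2) out Normal
        };
        GL.DrawBuffers(buffers.Length, buffers);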

  • Creating a steady rhythm for music-based game in XNA

    - by A-Type
    I'm looking to develop a game for Windows Phone to explore an idea I had which involves the user building notes into a sequencer while playing a puzzle game. The issue I'm running into is that, while my implementation is very close to being on-beat, there is the occasional pause between beats which makes the whole thing sound sloppy. I'm just not sure how to get around this inside XNA's infrastructure. Currently I'm running this code in the Update method of my GameBoard:

        public void Update(GameTime gameTime)
        {
            onBeat = IsOnBeat(gameTime);
            [...]
            if (onBeat)
                BeatUpdate();
        }

        private bool IsOnBeat(GameTime gameTime)
        {
            beatTime += gameTime.ElapsedGameTime.TotalSeconds;
            if (Math.Abs(beatTime - beatLength) < 0.0166666)
            {
                beatTime -= beatLength;
                return true;
            }
            return false;
        }

        private void BeatUpdate()
        {
            cursor.BeatUpdate();
            board.CursorPass((int)cursor.CursorPosition % Board.GRID_WIDTH);
        }

    Update checks to see if the time is on beat, and if it is, it calls the BeatUpdate method, which moves the cursor over the board (sequencer). The cursor reports its X position to the board, which then plays any notes that are in that position on the sequencer. Notes are SoundEffectInstances, preloaded and ready to play. Oh, and TargetElapsedTime is set to 166666 ticks, a 60 FPS target. Obviously totaling up the time and then subtracting isn't the most accurate way to go, but I can't figure out a way to work within XNA's system to overcome this issue. This current system is just horribly unstable. Beats lag and fire too early and it's obvious. I thought about perhaps some sort of threaded solution, but I'm not familiar enough with multithreading to figure out how that would work. Any ideas?
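
    One likely source of the instability (hedged, since it depends on frame timing): the tolerance-window test can miss a beat entirely when a frame lands outside the window, and it doesn't carry timing error forward consistently. An accumulator that fires whenever a full beat has elapsed, subtracting the beat length and rolling the remainder into the next beat, never drifts. A sketch:

        // Sketch: fire BeatUpdate as soon as a full beat has accumulated and
        // carry the remainder over, so timing error never compounds.
        private double beatTime;       // seconds accumulated since the last beat
        private double beatLength;     // seconds per beat, e.g. 60.0 / bpm

        public void Update(GameTime gameTime)
        {
            beatTime += gameTime.ElapsedGameTime.TotalSeconds;

            // 'while' (not 'if') catches up cleanly after a long frame.
            while (beatTime >= beatLength)
            {
                beatTime -= beatLength;
                BeatUpdate();
            }
        }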

  • Is white the best base color to start with when planning to shade sprites within Unity?

    - by SpartanDonut
    I'm looking into prototyping a game in Unity which will consist of solid square sprites/tiles. I figure I can represent different types of objects with different colors for each of the tiles in the game, and that I can import a single square sprite and shade it appropriately in Unity, as opposed to importing squares in many different colors. My experience with adjusting hue and saturation in Photoshop shows that white is not an easy color to change: things that are white often stay white. My testing in Unity shows that I can change the "color" of a sprite to anything other than white and the sprite is seemingly shaded appropriately, despite what I would have expected given my Photoshop experience. Since white objects do seem to take on the appropriate color shading when changed within Unity, my gut tells me that this is the best base color to begin with, meaning that I can import a single white square sprite and simply adjust the color to represent different objects and object states. Is a white sprite actually the best base color to begin with, and why does something like this work in Unity when adjusting the hue and saturation in Photoshop doesn't?
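
    The reason (as I understand Unity's default sprite shader, so treat this as a hedged explanation): the tint is a per-channel multiply, not a hue shift. The output color is the texture color times the tint color, so a white texel (1,1,1) becomes exactly the tint, which is what makes white the most flexible base. Photoshop's Hue/Saturation works in HSL instead, where pure white has zero saturation and no hue to rotate, hence it stays white there.

        // Sketch: tinting a white square sprite in Unity. The sprite shader
        // multiplies the texture's RGB by this color, so white * tint == tint.
        using UnityEngine;

        public class TileTint : MonoBehaviour
        {
            void Start()
            {
                var sr = GetComponent<SpriteRenderer>();
                sr.color = new Color(0.2f, 0.6f, 1.0f);  // white base becomes this blue
            }
        }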

  • Are there any reasons to use Legacy (2.X) OpenGL?

    - by user27886
    The benefits of the modern OpenGL 3.X and 4.X APIs are well documented, but I'm wondering if there are ANY benefits to staying with the old OpenGL, or if learning OpenGL 2.X is a complete waste of time now no matter what. In particular, I've wondered whether using the OpenGL 2.X API is appropriate if the target platform has graphics hardware capable of only up to OpenGL 2.X. Would a driver update on said target platform allow programs compiled using the modern OpenGL APIs to be released on this old platform? If they both work, which would be faster? Thanks

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels; it will be run by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display functions. I have access to pass pixel information using a memory-mapped block, so is there any way to do that with pygame instead of passing it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial
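
    In principle, yes (a hedged sketch in Python, since pygame can render without ever touching the display): draw onto an off-screen Surface, pull the raw RGB bytes out of it, and write them into the memory map yourself. The mmap target path and the byte layout the matrix expects are assumptions here.

        import mmap
        import pygame

        WIDTH, HEIGHT = 75, 20

        pygame.init()
        # An off-screen surface; nothing is sent to the video port.
        frame = pygame.Surface((WIDTH, HEIGHT))

        # ... game logic draws onto `frame` instead of the display surface ...
        frame.fill((0, 0, 0))
        pygame.draw.circle(frame, (255, 255, 255), (10, 10), 2)  # e.g. the ball

        # Raw RGB bytes, row by row; adjust ordering to what the matrix wants.
        pixel_bytes = pygame.image.tostring(frame, "RGB")

        with open("/dev/shm/matrix", "r+b") as f:   # hypothetical mmap target
            mm = mmap.mmap(f.fileno(), len(pixel_bytes))
            mm[:len(pixel_bytes)] = pixel_bytes
            mm.close()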
