Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 528 of 1039

  • How can I get my game to show up in the Games Explorer on Windows?

    - by Kraemer
    I want to create an installer for a game which allows an icon to be placed in the Games Explorer on Vista and Windows 7. I have created the GDF, then built the script for the project and obtained the .h, .gdf and .rc files. But I can't compile (using Visual Studio 2010) the .rc file into an executable to be used afterwards to create the installer. I get the following error after I set the executable path: "Could not load file or assembly 'Microsoft.VisualStudio.HpcDebugger.Impl, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified." Any ideas what I'm doing wrong?


  • Where must I put .xnb files in a MonoGame project using VS2010?

    - by user23899
    My problem is described in the "The Content Pipeline" paragraph of http://blogs.msdn.com/b/bobfamiliar/archive/2012/08/07/windows-8-xna-and-monogame-part-3-code-migration-and-windows-8-feature-support.aspx#comments. The author describes how to fix it in VS2012 by putting the .xnb files into the \AppX\Content folder, but I use VS2010 and the MonoGame templates for it, and there is no such folder. Where must I put these assets so the game runs correctly?
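    For illustration, here is the usual XNA/MonoGame loading pattern; the runtime resolves asset names relative to Content.RootDirectory under the executable's output directory, which is why the .xnb files must end up in a matching folder next to the .exe. A minimal sketch ("character" is a hypothetical asset name):

        // In the Game subclass constructor:
        Content.RootDirectory = "Content";

        // In LoadContent(): loads Content\character.xnb from the output directory.
        Texture2D characterTexture = Content.Load<Texture2D>("character");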


  • How to calculate direction from initial point and another point?

    - by Dvole
    I'm making a simple game where I shoot things from a certain point on the screen (A). I tap the screen and shoot the projectile from the initial point (A) to the tap point (B). But I want the projectile to keep moving along the same path and fly out of the bounds of the screen. How do I calculate a point that is on the same line as these two points, but further away? This is simple math, but I can't figure it out.
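    For illustration, a minimal sketch in XNA-style C# (the 2000f "off-screen" distance is an arbitrary assumption; any value larger than the screen diagonal works): normalize the direction from A to B, then step along it as far as needed.

        // direction is a unit vector pointing from A through B.
        Vector2 direction = Vector2.Normalize(tapPoint - origin);

        // Any point origin + direction * d lies on the same line; pick d
        // large enough that the point is guaranteed to be off-screen.
        Vector2 farPoint = origin + direction * 2000f;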


  • Jump pad problem

    - by Pasquale Sada
    I'm trying to make a character jump onto a landing pad that is above him. Here is the formula I've used (everything is pretty much self-explanatory, except maybe character_MaxForce, which is the total force the character can jump with):

        deltaPosition = target - character_position;
        sqrtTerm = Sqrt(2*-gravity.y * deltaPosition.y + MaxYVelocity * character_MaxForce);
        time = (MaxYVelocity - sqrtTerm) / gravity.y;
        speedSq = jumpVelocity.x * jumpVelocity.x + jumpVelocity.z * jumpVelocity.z;

    If speedSq < (character_MaxForce * character_MaxForce), we have the right time, so we can store the values:

        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;

    Otherwise we try the other solution:

        time = (MaxYVelocity + sqrtTerm) / gravity.y;

    and then store it:

        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;
        jumpVelocity.y = MaxYVelocity;
        rigidbody_velocity = jumpVelocity;

    The problem is that the character jumps away from the landing pad, or sometimes jumps too far and never hits the landing pad.
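    For comparison, a minimal sketch of the standard closed-form approach (names like maxYVelocity and maxHorizontalSpeed are hypothetical tuning values; gravityY is assumed negative, e.g. -9.81f): launch at the maximum vertical speed, solve the vertical kinematics for the flight time, then derive the horizontal velocity from it.

        static Vector3? ComputeJumpVelocity(Vector3 start, Vector3 target,
                                            float gravityY, float maxYVelocity,
                                            float maxHorizontalSpeed)
        {
            Vector3 delta = target - start;

            // Vertical motion: delta.Y = vy*t + 0.5*gravityY*t^2, solved for t.
            float vy = maxYVelocity;
            float disc = vy * vy + 2f * gravityY * delta.Y;
            if (disc < 0f)
                return null; // the pad is too high to reach at this launch speed

            // Take the later root: arrive while descending onto the pad.
            float t = (vy + (float)Math.Sqrt(disc)) / -gravityY;

            // Horizontal speed stays constant during the jump.
            float vx = delta.X / t;
            float vz = delta.Z / t;
            if (vx * vx + vz * vz > maxHorizontalSpeed * maxHorizontalSpeed)
                return null; // too far away horizontally

            return new Vector3(vx, vy, vz);
        }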


  • What is the standard technique for shifting the frames of a sprite according to user input?

    - by virtual__
    From my own experience, I have developed two techniques for changing the sprites of a character that's reacting to user input, in the context of a classic 2D platformer. The first is to store all the character's pixmaps in a list, keeping the index of the currently used pixmap in an ordinary variable. Every time the player presses a key (say the right arrow, to move the character forward) the graphics engine looks up the next pixmap to draw, draws it, and increments the index counter. That's a pretty common approach, I believe; the problem is that in this case the animation's quality depends not only on the number of sprites available but also on how often your engine listens for user input. The second technique is to actually play an animation on every key-press event. For this you can use any sort of animation framework you want. It's only necessary to set the timer and the animation steps, and to call the animation's play() method in your key-press event handler. The problem with that approach is that it lacks responsiveness, since the character won't react to any input while the current animation is still being played. What I want to know is whether you are using one of these techniques (or something similar) in your games, or whether there's a standard method for animating sprites out there that's widely known by everybody but me.
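    For illustration, a minimal sketch of a common third option that avoids both problems: input only selects which animation is active, while the frame index advances with elapsed time, so animation speed is independent of how often input is polled. The class name is hypothetical; Texture2D is XNA's.

        class SpriteAnimation
        {
            private readonly Texture2D[] frames;
            private readonly float frameDuration; // seconds per frame
            private float elapsed;
            private int current;

            public SpriteAnimation(Texture2D[] frames, float frameDuration)
            {
                this.frames = frames;
                this.frameDuration = frameDuration;
            }

            public void Update(float deltaSeconds)
            {
                elapsed += deltaSeconds;
                while (elapsed >= frameDuration)
                {
                    elapsed -= frameDuration;
                    current = (current + 1) % frames.Length; // loop the animation
                }
            }

            public Texture2D CurrentFrame { get { return frames[current]; } }
        }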


  • How can I make a collection of mini-games in XNA where the user can download packs of minigames and the main .exe can run them without being altered?

    - by Pyroka
    I'm currently making a PC game in XNA. It's actually a collection of mini-games (there are three mini-games at the moment); however, I plan to make and add more, in downloadable 'packs'. My question is: what's the best way to achieve this? Currently my thoughts are:

    - Create a 'game' interface
    - Build games to this interface, but compile them as .dlls
    - Have the main .exe scan a directory and load in the .dlls at runtime

    I've not messed around with the idea much, but I know there are at least applications that use this plug-in approach (Notepad++ seems to), though I'm not sure of any games that do (although I'm sure they must exist). It seems that this is a problem that has been solved before, so I'm wondering if there's any form of established best practice.
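    For illustration, a minimal sketch of the reflection-based version of that plan (IMiniGame is a hypothetical interface your packs would implement; the "Packs" folder name is an assumption):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        static List<IMiniGame> LoadPacks(string folder)
        {
            var games = new List<IMiniGame>();
            foreach (string path in Directory.GetFiles(folder, "*.dll"))
            {
                Assembly pack = Assembly.LoadFrom(path);
                foreach (Type type in pack.GetTypes())
                {
                    // Instantiate every concrete type that implements the interface.
                    if (typeof(IMiniGame).IsAssignableFrom(type) && !type.IsAbstract)
                        games.Add((IMiniGame)Activator.CreateInstance(type));
                }
            }
            return games;
        }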


  • In 3D camera math, how do I calculate the Z depth at which pixels are 1:1 for a given FOV?

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific Z depth, pixels drawn are 1:1 with my source textures, so 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a 32x32-pixel quad, the quad is 3.2 units wide and tall, and the texcoords are 32 divided by the size of the texture, wide and tall. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera, the sprite is pixel for pixel the same as in Photoshop. For 3D games I sometimes still want to be able to do this, but depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this, my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
        {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size,
                  bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    UPDATE: I'm going to leave this here in case anyone has a different solution, can explain how they do it, or can explain why this works. This is what I figured out. In my system I use a 1/10th-scale mapping from units to pixels on non-retina displays and 1/20th on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width, then divide that by 2 to get the left and right unit positions. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary convention I made up: 0.1 units = 1 non-retina pixel, or 2 retina pixels. I could have made it 0.01 units = 2 pixels, and someday I might switch to that, but for now it's the former. So the width of the screen in units is 32.0, which means the leftmost pixel is at -16.0 and the rightmost at +16.0. After messing around a bit, I figured out that if I take the [0] element of an identity modelViewProjection matrix and multiply it by 16, I get the depth required for 1:1 pixels. I don't know why, and I don't know whether the 16 is related to the screen size or just a lucky guess.
    But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating unityDepth that way. If someone gives me a better answer I'll checkmark it.
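    A closed-form check (a sketch written in C# for consistency with the rest of this page, not iOS code): in the matrixPerspective above, top and bottom are divided by aspect, so angle is the horizontal FOV and the projection element m[0] works out to 1/tan(angle/2). A quad at depth Z spans width * m[0] / Z in normalized device coordinates, and requiring that span to map 1:1 to pixels gives Z = halfScreenWidthInUnits / tan(angle/2). The observed m[0] * 16 works because 16 is exactly the half-width of the screen in world units, not a lucky guess. It matches the measurements: FOV 45 gives 16 / tan(22.5 deg) = 38.6, and FOV 30 gives 16 / tan(15 deg) = 59.7.

        // halfScreenWidthInUnits is 16 in the setup described above.
        static float PixelUnityDepth(float fovXDegrees, float halfScreenWidthInUnits)
        {
            double halfAngle = fovXDegrees * Math.PI / 360.0; // (fov / 2) in radians
            return (float)(halfScreenWidthInUnits / Math.Tan(halfAngle));
        }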


  • Rotating a quad around its center

    - by Trixmix
    How can you rotate a quad around its center? This is what I'm trying to do, but it isn't working:

        GL11.glTranslatef(x - getWidth()/2, y - getHeight()/2, 0);
        GL11.glRotatef(30, 0.0f, 0.0f, 1.0f);
        GL11.glTranslatef(x + getWidth()/2, y + getHeight()/2, 0);
        // DRAW

    My main problem is that it renders the quad off the screen. Draw code:

        GL11.glBegin(GL11.GL_QUADS);
        {
            GL11.glTexCoord2f(0, 0);
            GL11.glVertex2f(0, 0);
            GL11.glTexCoord2f(0, getTexture().getHeight());
            GL11.glVertex2f(0, height);
            GL11.glTexCoord2f(getTexture().getWidth(), getTexture().getHeight());
            GL11.glVertex2f(width, height);
            GL11.glTexCoord2f(getTexture().getWidth(), 0);
            GL11.glVertex2f(width, 0);
        }
        GL11.glEnd();
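    For comparison, a minimal sketch of the usual call order (OpenTK/C# syntax for consistency with this page; LWJGL's GL11 takes the same calls in the same order). The second translate should move back by half the size only; including x and y in both translates applies the position twice, which is what pushes the quad off-screen.

        GL.PushMatrix();
        GL.Translate(x + width / 2f, y + height / 2f, 0f); // pivot at the quad's center
        GL.Rotate(30f, 0f, 0f, 1f);                        // rotate about the Z axis
        GL.Translate(-width / 2f, -height / 2f, 0f);       // back to the top-left corner
        DrawQuad();                                        // draws with its corner at (0, 0)
        GL.PopMatrix();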


  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, aka DrawableGameComponents, which work separately from one another. I'm now wondering what's the best way to make transitions between them, and how to affect them from other scenes. Let's say I wanted to "push" one screen to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other, the question stays the same: how would you do that without having to handle it in every single scene? While writing this I'm realizing it will be the same for all kinds of transitions. Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But this wouldn't work for objects that have their own draw method and aren't drawn within the scene, or if an effect has to be applied to the whole scene. That means maybe scenes have to be drawn to their own render target? That way, one call to the base class after the normal drawing could be enough to apply the effects while drawing it to the main render target. But I once heard there are problems when switching from target to target, back and forth. So is that even a viable option? As you can see, I have some basic ideas of how it might work, but nothing specific. I'd like to learn what's the common way to achieve such things: a general way to apply all kinds of transitions.
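    For illustration, a minimal sketch of the render-target idea in XNA (Scene, Target, progress and the surrounding manager are hypothetical): each scene draws itself into its own RenderTarget2D, and the manager composes the two targets, so individual scenes never know a transition is happening.

        // Capture a scene into its off-screen target.
        GraphicsDevice.SetRenderTarget(scene.Target);   // a RenderTarget2D
        GraphicsDevice.Clear(Color.Transparent);
        scene.Draw(gameTime);
        GraphicsDevice.SetRenderTarget(null);           // back to the back buffer

        // Compose a horizontal "push"; progress runs from 0 to 1.
        int w = GraphicsDevice.Viewport.Width;
        spriteBatch.Begin();
        spriteBatch.Draw(oldScene.Target, new Vector2(-progress * w, 0), Color.White);
        spriteBatch.Draw(newScene.Target, new Vector2(w - progress * w, 0), Color.White);
        spriteBatch.End();

    A fade is the same composition with Color.White * alpha instead of an offset. In XNA 4 switching render targets each frame is fine; just note that a target's contents are discarded when it is re-bound unless it was created with RenderTargetUsage.PreserveContents.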


  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials, and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position at which the shapes are drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1, 0, 0, 1) it makes my quad spin around. If I do glRotatef(1, 0, 0, 0) it makes the quad smaller (further away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.
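    For reference, glRotatef(angle, x, y, z) multiplies the current matrix by a rotation of angle degrees about the axis (x, y, z); the axis must be non-zero, so glRotatef(1, 0, 0, 0) is undefined, which explains the odd results. glTranslatef likewise doesn't move a camera object: both calls transform whatever is drawn after them. A minimal sketch (OpenTK/C# syntax, same semantics as the C calls; DrawQuad is a hypothetical helper):

        GL.LoadIdentity();
        GL.Translate(0f, 0f, -10f);  // push subsequent drawing 10 units away
        GL.Rotate(45f, 0f, 1f, 0f);  // then spin it 45 degrees about the +Y axis
        DrawQuad();                  // emits the quad's vertices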


  • How many textures can I usually bind at once?

    - by Avi
    I'm developing a game engine, and it's only going to work on modern (Shader model 4+) hardware. I figure that, by the time I'm done with it, that won't be such an unreasonable requirement. My question is: how many textures can I bind at once on a modern graphics card? 16 would be sufficient. Can I expect most modern graphics cards to support that amount? My GTX 460 appears to support 32, but I have no idea if that's representative of most modern video cards.
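    For what it's worth, OpenGL 3.x guarantees at least 16 texture units per shader stage (48 combined across stages), and Direct3D 10-class hardware exposes even more resource slots, so 16 is a safe floor for Shader Model 4+ targets. A minimal sketch of querying the real limits at start-up (OpenTK/C# syntax):

        int perStage, combined;
        GL.GetInteger(GetPName.MaxTextureImageUnits, out perStage);          // >= 16 on GL 3.x
        GL.GetInteger(GetPName.MaxCombinedTextureImageUnits, out combined);  // >= 48 on GL 3.x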


  • Building (simple) stellar systems

    - by space borg
    Hi, I'm currently looking at how to easily simulate some stellar systems (meaning some central stars and some planets, possibly with satellites) in order to support a space-based strategy game later (hence with spaceships moving around). This should all be driven by time, so the state of each system changes as time passes. I'm struggling with the math behind this topic, for example: ellipse-related math, and creating the path from planet A to B with time in mind (the respective positions change over time). Do you know of any resources for that? I wouldn't mind buying books about it. Thanks in advance. Side note: how to display all this stuff doesn't matter at this point; I have simple plans for that (basically sticking to 2D and a "high-level view" with no spaceship/planet details, just markers).
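    On the ellipse math, a minimal self-contained sketch (C#; all names are hypothetical): the standard approach is to advance the mean anomaly with time, solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly with a few Newton iterations, and convert to a position in the orbital plane, with the star at one focus.

        static void OrbitPosition(double semiMajorAxis, double eccentricity,
                                  double periodSeconds, double timeSeconds,
                                  out double x, out double y)
        {
            double M = 2.0 * Math.PI * (timeSeconds / periodSeconds); // mean anomaly
            double E = M;                                             // eccentric anomaly
            for (int i = 0; i < 5; i++)                               // Newton's method
                E -= (E - eccentricity * Math.Sin(E) - M)
                     / (1.0 - eccentricity * Math.Cos(E));

            x = semiMajorAxis * (Math.Cos(E) - eccentricity);
            y = semiMajorAxis * Math.Sqrt(1.0 - eccentricity * eccentricity) * Math.Sin(E);
        }

    Sampling this for planet B at the projected arrival time (and refining the estimate once or twice) also gives a workable A-to-B path for slow ships.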


  • Is this an effective, simple way to display a moving image in SDL2?

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I'm curious: I was messing around, and is this an effective way to move an image? One problem is that the image leaves a trail along the path it moves.

        #include "SDL.h"
        #include "SDL_image.h"

        int main(int argc, char* argv[])
        {
            bool exit = false;
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480,
                                               SDL_WINDOW_SHOWN);
            SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
                SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

            SDL_Surface *png = IMG_Load("character.png");
            SDL_Rect src;
            src.x = 0;   src.y = 0;   src.w = 161;  src.h = 159;
            SDL_Rect dest;
            dest.x = 50; dest.y = 50; dest.w = 161; dest.h = 159;
            SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
            SDL_FreeSurface(png);

            while (exit == false)
            {
                dest.x++;
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, &src, &dest);
                SDL_RenderPresent(ren);
            }

            SDL_Delay(5000);
            SDL_DestroyTexture(tex);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
        }


  • How to enable jBullet DebugMode

    - by Kenneth Bray
    I would like to render jBullet's physics world to debug some issues in my game, and I'm not finding much on enabling jBullet's debugDraw. Do I need to write my own debugDraw method, or is there an easier way to draw the physics models to the screen? If there is already a built-in method I would prefer to use that; otherwise I guess I will start writing my own functions to handle this.


  • Material System

    - by Towelie
    I'm designing a Material/Shader System (target APIs are DX10+ and maybe OpenGL 3+; for now, only DX10). I know there have been a lot of topics about this, but I can't find what I need. I don't want any kind of script compilation/parsing at runtime. So there is some artist-created material, written in some analog of Cg. It is compiled to HLSL code and then to the final shader. There are also some hard-coded constant buffers, like:

        cbuffer EveryFrameChanging
        {
            float4x4 matView;
            float time;
            float delta;
        }

    Shaders use shared constant buffers to get their parameters. For each mesh in the scene, we determine what it needs and what it can provide (normals, binormals, etc.) and find the corresponding permutation of the shader, or calculate the missing parts. During the build we also compute render states and a hash of the permutation for each shader, which is later used for sorting (or we even assign IDs from 0 to ShaderCount, without gaps, for sorting). A FinalShader has only one technique and one pass. After that, each mesh gets a shader assigned and is ready to render. Some pseudocode:

        SetConstantBuffer(ConstantBuffer::PerFrame);
        foreach (shader in FinalShaders)
        {
            SetConstantBuffer(ConstantBuffer::PerShader, shader);
            SetRenderState(shader);
            foreach (mesh in shader.GetAllMeshes)
            {
                SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
                SetBuffers(mesh);
                Draw();
            }
        }

        class FinalShader
        {
        public:
            UUID m_ID;
            RenderState m_RenderState;
            CBufferBindings m_BufferBindings;
        }

    But I have no idea how to create this Cg-like language, and do I really need it?


  • How to offset particles from point of origin

    - by Sun
    I'm having trouble offsetting particles from a point of origin. I want my particles to spread out after a certain radius from the point of origin. Right now, all particles are emitted from the point of origin itself; what I want is for particles to be offset from the point of origin by some amount, i.e. to start beyond the circle. What is the best way to achieve this? (Sorry, the original illustrations are not reproduced here.) Edit: I was mistaken; when a particle is created, I have only the point of origin. I can calculate the rotation of the particle in the update method, after it has moved to a new location, using atan2(). This is how I create/manage particles: a new particle is created at the enemy ship's death location, and for every new particle added to the list, Update and Draw are called to update its position, calculate the new angle, and draw it.
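    For illustration, a minimal sketch (XNA-style C#; spawnRadius and speed are hypothetical tuning values): give each particle a random angle, spawn it on the ring of that radius around the origin, and send it outward along the same angle.

        float angle = (float)(random.NextDouble() * Math.PI * 2.0);
        Vector2 dir = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));

        particle.Position = originPoint + dir * spawnRadius; // start on the ring, not the origin
        particle.Velocity = dir * speed;                     // keep moving outward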


  • Do I need the 'w' component in my Vector class?

    - by bobobobo
    Assume you're writing matrix code that handles rotation, translation, etc. for 3D space. Now the transformation matrices have to be 4x4 to fit the translation component in. However, you don't actually need to store a w component in the vector, do you? Even in perspective division, you can simply compute and store w outside of the vector, and perspective-divide before returning from the method. For example:

        // post multiply vec2 = matrix*vector
        Vector operator*( const Matrix & a, const Vector& v )
        {
            Vector r;

            // do matrix mult
            r.x = a._11*v.x + a._12*v.y ...
            real w = a._41*v.x + a._42*v.y ...

            // perspective divide
            r /= w;
            return r;
        }

    Is there a point in storing w in the Vector class?


  • Efficient skeletal animation

    - by Will
    I am looking at adopting a skeletal animation format (as prompted here) for an RTS game. The individual representation of each model on screen will be small, but there will be lots of them! In skeletal animation formats such as MD5 files, each individual vertex can be attached to an arbitrary number of joints. How can you support this efficiently while doing the interpolation in GLSL? Or do engines do their animation on the CPU? Or do engines set an arbitrary limit on joints per vertex and invoke no-op multiplies for the joints that don't use the maximum number? Are there games that use skeletal animation in an RTS-like setting, thus proving that on integrated graphics cards I have nothing to worry about in going the bones route?
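    For reference, the common compromise is the last option: cap influences at four joints per vertex and pad unused slots with zero weights, so the blend loop is fixed-length and branch-free whether it runs in a vertex shader or on the CPU. A minimal sketch of the blend itself (XNA-style C#; names are hypothetical):

        Vector3 Skin(Vector3 position, int[] boneIndices, float[] weights, Matrix[] bonePalette)
        {
            Vector3 result = Vector3.Zero;
            for (int i = 0; i < 4; i++) // always four; padded entries have weight 0
                result += Vector3.Transform(position, bonePalette[boneIndices[i]]) * weights[i];
            return result;
        }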


  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you put a float3 instead of a float4 or something like that, but I've checked over and over and, as far as I can tell, everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader
        linkage error: Signatures between stages are incompatible. The input stage
        requires Semantic/Index (COLOR,0) as input, but it is not provided by the
        output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name                 Index   Mask  Register  SysValue  Format  Used
        // -------------------- -----  ------ --------  --------  ------  ------
        // POSITION             0       xyz      0        NONE    float   xyz
        // NORMAL               0       xyz      1        NONE    float
        // COLOR                0       xyzw     2        NONE    float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal : NORMAL0;
            float4 Color : COLOR0;
        };

    The input layout, from PIX, is: [screenshot not reproduced]

    The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?


  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors in the texels of a texture when I'm accessing texels at coordinates that are not directly at the center of a texel; the catch is that the texels store two bands of spherical harmonics coefficients (i.e. 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mipmapping) without any special considerations? In other words: if I were to first convert the coefficients back to intensity representations and then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?
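    Assuming the conversion to intensity is the plain SH reconstruction (a weighted sum of basis functions, with no nonlinear step), the two orders give the same result, because reconstruction is linear in the coefficients:

        I(\omega) = \sum_{l,m} c_{l,m} \, Y_{l,m}(\omega)

        (1-\alpha)\, I_1(\omega) + \alpha\, I_2(\omega)
            = \sum_{l,m} \bigl[ (1-\alpha)\, c^{(1)}_{l,m} + \alpha\, c^{(2)}_{l,m} \bigr] Y_{l,m}(\omega)

    so linearly filtering the coefficient texture (GL_LINEAR, including trilinear blending across mip levels) is equivalent to filtering the reconstructed intensities.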


  • World to Pixel Transformation

    - by D00d
    My objects have a location in world coordinates (basically 1.0f is a meter). If I simply draw my objects using their world coordinates, each meter will correspond to a pixel. Obviously that's not what I want. Now, I don't want to have to apply a transformation to each and every object's position when I draw them. As I happen to be using XNA, and spritebatch allows a Matrix to be passed in as an argument in it's begin method, I was wondering if there is a way to pass the World to Pixel transformation in there. Any suggestions? So far Matrix.CreateScale(new Vector3(zoom, zoom, 1)) puts the objects in their proper spot, but it also scales up the sprites. Is there a way to transform the position without enlarging the sprite? Thanks
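    One way around that (a sketch, assuming XNA's Vector2.Transform; pixelsPerMeter is a hypothetical scale factor): keep SpriteBatch.Begin without a matrix and transform only the positions yourself, so object placement is scaled but sprite size is not.

        Matrix worldToScreen = Matrix.CreateScale(pixelsPerMeter, pixelsPerMeter, 1f);

        Vector2 screenPos = Vector2.Transform(obj.WorldPosition, worldToScreen);
        spriteBatch.Begin();
        spriteBatch.Draw(obj.Texture, screenPos, Color.White); // position scaled, sprite unscaled
        spriteBatch.End();

    Passing the matrix to Begin applies it to everything SpriteBatch draws, positions and sprite extents alike, which is why the sprites were scaling up.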


  • How many BasicEffects do you have in a Game? What is the best way to render multiple objects/shapes at once?

    - by Deukalion
    I'm trying to understand 3D rendering, and it seems that every time you render a new object (a 3D cube or something) you need a new BasicEffect for each box you render, unless you want the exact same texture. So if I have over a hundred boxes, each with a different texture, do I need at least as many BasicEffects? Won't that be "too much" for the CPU/GPU in the end, or result in lagging? Is there a good way to render multiple objects (cubes or other shapes) at the same time? I've tried changing BasicEffect.Texture with each cube drawn, but it resulted in changing the first cube's texture too. Any suggestions would be really appreciated. I'm really new to 3D in XNA, so I'm trying to wrap my head around the best methods, for example to render a map with objects (of shapes).
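    A single shared BasicEffect is normally enough; the usual pattern is to set the per-object state and then call pass.Apply() before each draw, so the new texture is committed for that draw call. Changing effect.Texture without re-applying the pass is a likely cause of every cube showing the first texture. A minimal sketch (Cube and its fields are hypothetical):

        effect.TextureEnabled = true;
        foreach (Cube cube in cubes)
        {
            effect.World = cube.World;
            effect.Texture = cube.Texture;
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply(); // commits the texture/world change before this draw
                GraphicsDevice.SetVertexBuffer(cube.VertexBuffer);
                GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 12); // 12 triangles per cube
            }
        }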


  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels; it will be run by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display functions. I have access to pass pixel information using a memory-mapped block, so is there any way to do that with pygame instead of passing it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial


  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures, in my case). I have a simple (I think :P) geometry + fragment shader program, and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    My render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    Assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are multiple render targets? From some Googling, I think there are two other ways to output to MRTs:

    1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.
    2. Use a typed output array variable for the expected type. In this case, I think it would be something like this:

        out vec3 [3] output;

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
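    For what it's worth, the layout(location = n) qualifiers shown in the first shader are the approach usually recommended for the 3.3 core profile (gl_FragData is deprecated and generally unavailable outside compatibility contexts), and the locations map to the slots passed to glDrawBuffers, not to texture units; the textures only need to be attached to the FBO, not bound to texture units, while being rendered to. A minimal sketch of the draw-buffer binding (OpenTK/C# syntax):

        // Slot n receives the fragment output declared with layout(location = n).
        DrawBuffersEnum[] buffers =
        {
            DrawBuffersEnum.ColorAttachment0, // WorldPos
            DrawBuffersEnum.ColorAttachment1, // Diffuse
            DrawBuffersEnum.ColorAttachment2  // Normal
        };
        GL.DrawBuffers(buffers.Length, buffers);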


  • Open Source Analysis

    - by BluFire
    There is a lot of code in open-source projects; looking at all of it is time-consuming and can be confusing to a novice like me. Are there any sections of open-source projects that should be focused on? What should I focus on when I look at code? I'm asking this in general because if I asked specifically, the question would only apply to one or two projects rather than an entire group of projects ranging across different types of games and difficulty.

