Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.

  • CreateDXGIFactory Doesn't Let Program Exit

    - by smoth190
    I'm using CreateDXGIFactory to get the graphics adapters and display modes. When I call it, it works fine and I get all the data. However, when I exit my program, the main Win32 thread exits, but something stays alive: the debugger keeps running as if the process never terminated. Does CreateDXGIFactory create an extra thread that I'm not closing? I don't understand. The only thing I suspect is that the documentation says it doesn't work if it's called from DllMain. It is in a DLL, but it's not called from DllMain. And it doesn't fail, either. I'm using DirectX 11.
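
    Not from the original post, but a frequent cause of this exact symptom is COM reference counting: CreateDXGIFactory and EnumAdapters hand back COM interfaces, and any that are never Release()d keep DXGI objects alive after the main thread exits. A minimal sketch of release-correct enumeration (the helper itself is illustrative):

        #include <dxgi.h>

        // Every interface obtained from DXGI is a COM object; each one must be
        // Release()d or it outlives the thread that created it.
        void EnumerateAdapters()
        {
            IDXGIFactory* factory = nullptr;
            if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
                return;

            IDXGIAdapter* adapter = nullptr;
            for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
            {
                // ... read DXGI_ADAPTER_DESC / enumerate display modes here ...
                adapter->Release();  // release each adapter when done with it
                adapter = nullptr;
            }
            factory->Release();      // and finally the factory itself
        }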

  • How to find 2D grid cells swept by a moving circle?

    - by Nevermind
    I'm making a game based on a 2D grid, with some cells passable and some not. Dynamic objects can move continuously, independent of the grid, but need to collide with impassable cells. I wrote an algorithm to trace a ray against the grid that gives me all the cells the ray intersects. However, actual objects are not point-sized; I'm currently representing them as circles, and I can't figure out an effective algorithm to trace a moving circle. Here's a picture of what I need: the numbers show the order in which the circle collides with the grid cells. Does anybody know an algorithm to find these collisions? Preferably in C#. Update: the circle can be bigger than a single grid cell.
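
    One workable approach, a conservative sketch rather than an exact sweep: march the circle's center along its path in steps well below the cell size, and at each step collect every cell whose box overlaps the circle. A HashSet keeps cells unique while the list preserves (approximate) first-touch order. The names and the cell-size parameter are illustrative, and it handles circles larger than a cell:

        using System;
        using System.Collections.Generic;

        static class CircleSweep
        {
            // Circle-vs-cell overlap: clamp the circle's center into the cell's
            // AABB and compare the squared distance with the squared radius.
            static bool Overlaps(float cx, float cy, float r, int ix, int iy, float cell)
            {
                float nx = MathF.Max(ix * cell, MathF.Min(cx, (ix + 1) * cell));
                float ny = MathF.Max(iy * cell, MathF.Min(cy, (iy + 1) * cell));
                float dx = cx - nx, dy = cy - ny;
                return dx * dx + dy * dy <= r * r;
            }

            // Cells touched by a circle of radius r moving from (x0,y0) to (x1,y1),
            // in (approximate) first-touch order.
            public static List<(int X, int Y)> SweptCells(
                float x0, float y0, float x1, float y1, float r, float cell)
            {
                var seen = new HashSet<(int, int)>();
                var ordered = new List<(int X, int Y)>();
                float dx = x1 - x0, dy = y1 - y0;
                float len = MathF.Sqrt(dx * dx + dy * dy);
                // Step well below the cell size so no cell can be skipped over.
                int steps = Math.Max(1, (int)MathF.Ceiling(len / (cell * 0.25f)));
                for (int s = 0; s <= steps; s++)
                {
                    float t = s / (float)steps;
                    float cx = x0 + dx * t, cy = y0 + dy * t;
                    int minX = (int)MathF.Floor((cx - r) / cell);
                    int maxX = (int)MathF.Floor((cx + r) / cell);
                    int minY = (int)MathF.Floor((cy - r) / cell);
                    int maxY = (int)MathF.Floor((cy + r) / cell);
                    for (int ix = minX; ix <= maxX; ix++)
                        for (int iy = minY; iy <= maxY; iy++)
                            if (Overlaps(cx, cy, r, ix, iy, cell) && seen.Add((ix, iy)))
                                ordered.Add((ix, iy));
                }
                return ordered;
            }
        }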

  • Problem texturing with OpenGL

    - by Killrazor
    Hello! I'm having problems making a simple sprite renderer. I load two different textures, then bind these textures and draw two squares, one with each texture. But only the texture of the first rendered object is drawn on both squares. It's as if only one texture were ever in use, or as if glBindTexture didn't work properly. I know that GL is a state machine, but I thought that switching the active texture with glBindTexture is all you need. I load a texture with this method:

        bool CTexture::generate( utils::CImageBuff* img )
        {
            assert(img);
            m_image = img;
            CHECKGL(glGenTextures(1, &m_textureID));
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
            //CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, img->getBpp(), img->getWitdh(), img->getHeight(), 0, img->getFormat(), GL_UNSIGNED_BYTE, img->getImgData()));
            CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->getWitdh(), img->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img->getImgData()));
            return true;
        }

    And I bind textures with this function:

        void CTexture::bind()
        {
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
        }

    Also, I draw sprites with this method:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
            m_texture->bind();
            CHECKGL(glPushMatrix());
            // glGetError (inside CHECKGL) is not allowed between glBegin/glEnd,
            // and the original listing never called glEnd() at all.
            glBegin(GL_QUADS);
                glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t); // 0,0 by default
                glVertex3i(m_position.x, m_position.y, 0);
                glTexCoord2f(m_textureAreaEnd.s, m_textureAreaStart.t);   // 1,0 by default
                glVertex3i(m_position.x + m_dimensions.x, m_position.y, 0);
                glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t);     // 1,1 by default
                glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0);
                glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t);   // 0,1 by default
                glVertex3i(m_position.x, m_position.y + m_dimensions.y, 0);
            glEnd();
            CHECKGL(glPopMatrix());
            CHECKGL(glDisable(GL_BLEND));
        }

    Could you help me? All help is welcome. Thanks!!

  • Moving in the wrong direction

    - by Will
    Solution: to move a unit forward:

        forward = Quaternion(0, 0, 0, 1)
        rotation.normalize()  # occasionally
        ...
        pos += ((rotation * forward) * rotation.conjugated()).xyz().normalized() * speed

    I think the trouble stemmed from how the Euclid math library does Quaternion * Vector3 multiplication, although I can't see where. I have a vec3 position, a quaternion for rotation and a speed. I compute the player position like this:

        rot *= Quaternion().rotate_euler(0., roll_speed, pitch_speed)
        rot.normalize()
        pos += rot.conjugated() * Vector3(0., 0., -speed)

    However, printing pos to the console, I can see that I only ever seem to travel on the x-axis. When I draw the scene using the rot quaternion to rotate my camera, it shows the proper orientation. What am I doing wrong? Here's an example: You start off with an identity rotation quaternion: w=1, x=0, y=0, z=0. You move forward; the code correctly decrements Z. You then pitch right over to face the other way; if you spin only 175 degrees it'll still go in the right direction; you have to spin past 180 degrees. It doesn't matter which direction you spin in, up or down, though. Your quaternion can then be something like w=0.1, x=0.1, y=0, z=0, and moving forward, you actually move backward?! (I am using the euclid Python module, but its conjugate works the same as in every other library.) The code can be tried online at http://williame.github.com/ludum_dare_24_evolution/ The only keys that adjust the speed are W and S; the arrow keys only adjust the pitch/roll. At first you can fly OK, but after a bit of weaving around you end up getting sucked towards one of the sides. The code is https://github.com/williame/ludum_dare_24_evolution/blob/cbacf61a7159d2c83a2187af5f2015b2dde28687/tiny1web.py#L102
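
    For reference, rotating a direction vector v by a unit quaternion q is q * (0, v) * conj(q); a dependency-free sketch of that product (names are illustrative, not the euclid API), useful for cross-checking what the library returns:

        def quat_mul(a, b):
            # Hamilton product of quaternions given as (w, x, y, z) tuples.
            aw, ax, ay, az = a
            bw, bx, by, bz = b
            return (aw*bw - ax*bx - ay*by - az*bz,
                    aw*bx + ax*bw + ay*bz - az*by,
                    aw*by - ax*bz + ay*bw + az*bx,
                    aw*bz + ax*by - ay*bx + az*bw)

        def rotate(q, v):
            # v' = q * (0, v) * conjugate(q); q must be normalized.
            w, x, y, z = q
            qv = quat_mul(quat_mul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))
            return qv[1:]  # drop the scalar part

        q = (1.0, 0.0, 0.0, 0.0)                 # identity rotation
        forward = rotate(q, (0.0, 0.0, -1.0))    # -> (0.0, 0.0, -1.0)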

  • Retrieve the coordinates of the *occluding* (closest/drawn) pixels during 3D overlap, using OpenGL?

    - by Big Rich
    Hi, sorry if the question is not worded well; I'm new to both 3D and OpenGL. How could I go about obtaining the 3D coordinates of the occluding object, at the point where the occlusion is happening (i.e. the 'intersection' of the object in front/closest to the screen)? Just to offer a [very] rudimentary visual example: if you were to form an index-finger cross, with your right hand closest to your face, I'd like to know the coordinates of the part of your right finger which obscures the other finger (obviously back within the OpenGL context - no jokers ;-) ). If there is a way to find out about both the occluder (hider) and the occluded (hidden) objects in OpenGL, then that would be of great use, also. Cheers Rich
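
    If the scene is rendered with the fixed-function pipeline, one way to get the occluder's 3D coordinates at a given pixel is to read the depth buffer there and unproject it: the depth test has already stored the closest (drawn) surface's depth at every pixel. A sketch using standard GL/GLU calls (finding out *which* object that point belongs to still needs a picking pass or per-object IDs):

        #include <GL/glu.h>

        // Sketch: recover the world-space point of whatever "won" the depth
        // test at window pixel (winX, winY).
        void worldPointAtPixel(int winX, int winY, GLdouble out[3])
        {
            GLint viewport[4];
            GLdouble modelview[16], projection[16];
            GLfloat depth;

            glGetIntegerv(GL_VIEWPORT, viewport);
            glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
            glGetDoublev(GL_PROJECTION_MATRIX, projection);

            // Note: GL's window origin is bottom-left, unlike most mouse coordinates.
            glReadPixels(winX, viewport[3] - winY, 1, 1,
                         GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

            gluUnProject(winX, viewport[3] - winY, depth,
                         modelview, projection, viewport,
                         &out[0], &out[1], &out[2]);
        }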

  • Free movement in a tile-based isometric game

    - by xtr486
    Is there a reasonably easy way to implement free movement in a tile-based isometric game? Meaning that the player wouldn't just instantly jump from one tile to another, nor be "snapped" to the grid (for example, if the movement between tiles were animated but you were locked from doing anything before the animation finishes). I'm a complete beginner at anything related to game programming, but with the help of this site and some other resources it was quite easy to do the basic stuff; I just haven't been able to find any useful resources for this particular problem. Currently I've improvised something close to this: http://jsfiddle.net/KwW5b/4/ (WASD movement). The idea for the movement was to use the mouse map to detect when the player has moved to a different tile and then flip the offsets, and for the most part it works correctly (each corner makes the player move to the wrong location: see http://www.youtube.com/watch?v=0xr15IaOhrI, which is probably because I couldn't get the full mouse map working properly), but I have no illusions that it is even close to a good/sane solution. And anyway, it's mostly just to demonstrate what kind of a thing I'd like to implement.
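
    A common pattern for this (a sketch, not tied to the fiddle's code): store the player's position as continuous world coordinates and move it by velocity * dt each frame, using the tile grid only for collision; the isometric conversion is applied at draw time, so nothing ever snaps to a tile.

        // World position in tile units (floats, so movement is continuous).
        // 'passable', TILE_WIDTH and TILE_HEIGHT are assumed to exist.
        var player = { x: 5.0, y: 5.0, speed: 3.0 };  // speed in tiles/second

        function update(dt, dirX, dirY) {
            var nx = player.x + dirX * player.speed * dt;
            var ny = player.y + dirY * player.speed * dt;
            // No snapping; the grid is consulted only to refuse impassable tiles.
            if (passable[Math.floor(ny)][Math.floor(nx)]) {
                player.x = nx;
                player.y = ny;
            }
        }

        // The isometric projection happens only at render time.
        function worldToScreen(wx, wy) {
            return {
                sx: (wx - wy) * TILE_WIDTH / 2,
                sy: (wx + wy) * TILE_HEIGHT / 2
            };
        }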

  • glutPostRedisplay() does not update display

    - by A D
    I am currently drawing a rectangle to the screen and would like to move it by using the arrow keys. However, when I press an arrow key, the vertex data changes but the display does not refresh to reflect these changes, even though I am calling glutPostRedisplay(). Is there something else that I must do? My code:

        #include <GL/glew.h>
        #include <GL/freeglut.h>
        #include <GL/freeglut_ext.h>
        #include <iostream>
        #include "Shaders.h"

        using namespace std;

        const int NUM_VERTICES = 6;
        const GLfloat POS_Y = -0.1;
        const GLfloat NEG_Y = -0.01;

        struct Vertex
        {
            GLfloat x;
            GLfloat y;
            Vertex() : x(0), y(0) {}
            Vertex(GLfloat givenX, GLfloat givenY) : x(givenX), y(givenY) {}
        };

        Vertex left_paddle[NUM_VERTICES];

        void init()
        {
            glClearColor(1.0f, 1.0f, 1.0f, 0.0f);

            left_paddle[0] = Vertex(-0.95f, 0.95f);
            left_paddle[1] = Vertex(-0.95f, 0.0f);
            left_paddle[2] = Vertex(-0.85f, 0.95f);
            left_paddle[3] = Vertex(-0.85f, 0.95f);
            left_paddle[4] = Vertex(-0.95f, 0.0f);
            left_paddle[5] = Vertex(-0.85f, 0.0f);

            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            GLuint buffer;
            glGenBuffers(1, &buffer);
            glBindBuffer(GL_ARRAY_BUFFER, buffer);
            // Allocates the buffer but uploads no data (NULL), and nothing
            // ever writes into it later.
            glBufferData(GL_ARRAY_BUFFER, sizeof(left_paddle), NULL, GL_STATIC_DRAW);

            GLuint program = init_shaders("vshader.glsl", "fshader.glsl");
            glUseProgram(program);

            GLuint loc = glGetAttribLocation(program, "vPosition");
            glEnableVertexAttribArray(loc);
            glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, 0);
            glBindVertexArray(vao);
        }

        void movePaddle(Vertex* array, GLfloat change)
        {
            // Changes only the CPU-side copy of the vertices.
            for (int i = 0; i < NUM_VERTICES; i++)
            {
                array[i].y = array[i].y + change;
            }
            glutPostRedisplay();
        }

        void special(int key, int x, int y)
        {
            switch (key)
            {
            case GLUT_KEY_DOWN:
                movePaddle(left_paddle, NEG_Y);
                break;
            }
        }

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glDrawArrays(GL_TRIANGLES, 0, 6);
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutCreateWindow("Rectangle");
            glewInit();
            init();
            glutDisplayFunc(display);
            glutSpecialFunc(special);
            glutMainLoop();
            return 0;
        }
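
    glutPostRedisplay() only schedules another call to display(); it cannot know the vertices changed. In the listing above, the VBO is allocated with NULL data and never written, so the GPU never sees left_paddle at all, no matter how often the scene is redrawn. A hedged fix, assuming the buffer id is kept reachable (e.g. as a global): upload the array after changing it.

        // Sketch: re-upload the CPU-side vertices into the VBO whenever they
        // change; 'buffer' must be kept as a global/member for this to work.
        void movePaddle(Vertex* array, GLfloat change)
        {
            for (int i = 0; i < NUM_VERTICES; i++)
                array[i].y += change;

            glBindBuffer(GL_ARRAY_BUFFER, buffer);
            glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(left_paddle), left_paddle);
            glutPostRedisplay();
        }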

  • How can a pygame image be colored?

    - by Juicy
    I'm writing a 2d particle system for a game in Pygame[1]. For the particles, I have an image surface loaded from a file -- basically a white primitive drawn over a transparent background. I'd like the particle engine to emit variously colored particles, but I'm not sure how to tell Pygame to color the surface. I've looked through what passes for documentation, but I'm having trouble finding anything. [1] Yeah, I don't really like Pygame, but my course insists I write this project in Python.
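
    One approach that fits white-on-transparent sprites (a sketch; the multiplicative blend flag is a real pygame feature, the helper is illustrative): copy the base surface and multiply it by the particle's colour, so white pixels become exactly that colour and transparent pixels stay transparent.

        import pygame

        def tinted(base, color):
            """Return a copy of 'base' multiplied by 'color' (an RGBA tuple).

            White pixels become exactly 'color'; transparent pixels stay
            transparent. Assumes 'base' was loaded with convert_alpha().
            """
            surf = base.copy()
            surf.fill(color, special_flags=pygame.BLEND_RGBA_MULT)
            return surf

        # e.g. red_particle = tinted(particle_image, (255, 64, 64, 255))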

  • Player position triggering teleports

    - by jSherz
    I'm developing a Minecraft plugin (bukkit) in which a server admin can create 'portals' - a small region that will teleport any players who enter it. I have the teleportation sorted and I know how I could define areas that the player's position could be tested against. This would involve an ArrayList containing the zones and then hooking the PlayerMoveEvent so that the ArrayList is searched each time for a matching portal region. Although this method would work, I doubt that it would be very efficient when 100+ players are all moving around at the same time. Is there a better way of checking a player position against a set of 'zones' / regions?
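
    One standard answer is a spatial hash: key each portal region by the block cells it covers, and a move event becomes a single HashMap lookup instead of a scan over every region. A sketch (the Portal type and the event hook are illustrative, not exact Bukkit API); checking only when the player's block coordinates actually change between the event's 'from' and 'to' avoids most lookups entirely.

        import java.util.HashMap;
        import java.util.Map;

        public class PortalIndex {
            // Key = block coordinates packed into one long; value = the portal there.
            // 'Portal' is whatever type describes a destination (illustrative).
            private final Map<Long, Portal> cells = new HashMap<>();

            private static long key(int x, int y, int z) {
                // 21 bits per axis is plenty for a Minecraft world.
                return ((long)(x & 0x1FFFFF) << 42)
                     | ((long)(y & 0x1FFFFF) << 21)
                     |  (long)(z & 0x1FFFFF);
            }

            public void addPortalCell(int x, int y, int z, Portal p) {
                cells.put(key(x, y, z), p);
            }

            // Call from the PlayerMoveEvent handler with the player's block position.
            public Portal portalAt(int x, int y, int z) {
                return cells.get(key(x, y, z));  // O(1) however many portals exist
            }
        }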

  • How do I convert mouse co-ordinates in Slick2D (Java)?

    - by Trycon
    I'm really new to Java and I want to know how to convert mouse clicks to in-game co-ordinates. My game moves its images around so that the camera stays with the character (I followed thenewboston's tutorials and have been modifying the code for smoother gameplay, and searching the web for more). This is one of the code snippets I found:

        PosGameX = MouseX + 0;
        PosGameY = MouseY + 0;

    I have not tried this code, but I really don't think it will work, and the website I found it on doesn't seem reliable. The gameplay idea is that when the mouse clicks on a position, the game gets the mouse co-ordinates and converts them to game co-ordinates. So: how do I convert mouse clicks to game co-ordinates? FOR MORE INFO, my searches so far: "How do I translate game co-ordinates?", "How do I translate mouse to game co-ordinates?" AND PLEASE! Do not give me algebra. I have really forgotten it all.
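
    Since the world is drawn shifted by the camera, the usual conversion is simply to undo that shift; no algebra beyond one addition (a sketch; camX/camY stand for whatever offsets the rendering code subtracts when drawing):

        // If rendering draws everything at (worldX - camX, worldY - camY),
        // then a mouse position converts back to world space by adding them.
        public static float[] screenToWorld(int mouseX, int mouseY, float camX, float camY) {
            return new float[] { mouseX + camX, mouseY + camY };
        }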

  • Change density of the body dynamically

    - by Siddharth
    In my game, I want to change the density of my body object when it collides with other objects. I found something like the following to change the density, but I could not find any further hints, so please help:

        Fixture fixture = goldenBoxArrayList.get(i)
                .getGoldenBoxBody()
                .getFixtureList().get(0);
        fixture.setDensity(0.5f);

    After setting the fixture's density I was not able to apply it to the body.
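
    In Box2D, a fixture's new density has no effect until the body's mass data is recomputed. A sketch of the missing step (resetMassData is the method name in the Java ports of Box2D; hedged for whichever wrapper this is):

        Fixture fixture = goldenBoxArrayList.get(i).getGoldenBoxBody()
                                            .getFixtureList().get(0);
        fixture.setDensity(0.5f);
        // Recompute the body's mass and inertia from its fixtures'
        // (changed) densities; without this the body keeps its old mass.
        goldenBoxArrayList.get(i).getGoldenBoxBody().resetMassData();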

  • Translating multiple objects in GUI based on average position?

    - by user1423893
    I use this method to move a single object in 3D space. It accounts for a local offset based on where the cursor ray hits the widget relative to the widget's center:

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All:
                goalPosition = position;
                break;
            case AxisSelected.None:
                break;
            case AxisSelected.X:
                goalPosition.X = position.X;
                break;
            case AxisSelected.Y:
                goalPosition.Y = position.Y;
                break;
            case AxisSelected.Z:
                goalPosition.Z = position.Z;
                break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;
        objectSelected.Position = p;

    I would like to move multiple objects on the same principle, using a widget located at the average position of all the objects currently selected. I thought I would have to translate each object based on its offset from the average point and then include the local offset:

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // ... same axis-constraint switch as above ...

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;

        int numSelectedObjects = objectSelectedList.Count;
        for (int i = 0; i < numSelectedObjects; ++i)
        {
            objectSelectedList[i].Position = (objectSelectedList[i].Position - translationWidget.Position) + p;
        }

    This doesn't work: the objects start shaking, which I think is because I haven't accounted for the new offset correctly. Where have I gone wrong?
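
    The shaking is consistent with a feedback loop: each frame the new positions are computed from positions that were already modified the frame before. A sketch of one fix (names are illustrative, matching the style above): capture each object's offset from the widget once, when the drag starts, and reuse those fixed offsets every frame.

        // On drag start: remember where each object sits relative to the widget.
        List<Vector3> dragOffsets = new List<Vector3>();
        foreach (var obj in objectSelectedList)
            dragOffsets.Add(obj.Position - translationWidget.Position);

        // Every frame while dragging: place each object from its *cached*
        // offset, so this frame's writes never feed into next frame's math.
        for (int i = 0; i < objectSelectedList.Count; ++i)
            objectSelectedList[i].Position = p + dragOffsets[i];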

  • Existing JS libs for tileset / map loading and rendering?

    - by ylluminate
    I'm building an RTS-style overhead tileset game in JavaScript (particularly using the Ember.js framework as a base). The map is so large that I'd very much like to be able to load and render the board and its layered items in a Google Maps-esque way. I'm curious whether there are existing, well-thought-out libs for this, versus trying to reinvent the wheel. Are there any such libraries or code examples that would be useful in this area of board/map management?

  • I need advice on creating 3D walk cycles in XNA

    - by Zetar
    I want to purchase a number of 3D models from TurboSquid and animate them in an XNA game. I wrote a lot of games from 1985-1999 and have recently become involved with XNA, and now I would like to port one of my old games to the Xbox. I do have a background in 3D animation, but that was years ago. What is the current method for animating a walk cycle with a 3D model and using it inside XNA? Is there a book, software package or tutorial that you can recommend? Thanks in advance, and sorry for such a broad and admittedly naive question.

  • Unity, Unrealistic Sphere On Inclined Plane

    - by user1086516
    So I am trying to model a ball rolling down an inclined surface in Unity, based on what I observe in real life, but the result is still quite far off:

    - In Unity it takes the ball about 3 seconds to travel from one specified place to another, where in real life it only takes 1 second.
    - The ball isn't as quick to react to the incline as in real life (even though I have tried giving the ball and surface low or zero friction values).
    - The ball does not accelerate nearly as fast as it does in real life.

    What do I do to give the ball more realistic behavior? I have tried messing around with mass, physics materials, drag, and angular drag on the ball and the surface, but it doesn't seem to help.
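
    Two things worth checking in code (a hedged sketch, not a known fix for this exact scene): PhysX assumes 1 unit = 1 metre, so an oversized ball falls in apparent slow motion, and Rigidbody.maxAngularVelocity defaults to 7 rad/s, which visibly caps how fast a sphere can roll.

        using UnityEngine;

        public class BallSetup : MonoBehaviour
        {
            void Start()
            {
                // 1 Unity unit should be 1 metre; if the ball is several units
                // across, gravity looks far too weak and everything seems slow.
                float diameter = GetComponent<SphereCollider>().radius * 2f * transform.lossyScale.x;
                Debug.Log("Ball diameter in metres: " + diameter);

                // Default gravity; raise it only as a last resort after fixing scale.
                Physics.gravity = new Vector3(0f, -9.81f, 0f);

                // Lift the angular velocity cap so the ball can actually spin
                // fast enough to roll at speed down the incline.
                GetComponent<Rigidbody>().maxAngularVelocity = 50f;
            }
        }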

  • Damageable ground similar to Pocket Tanks or Arcanists [closed]

    - by XenElement
    Possible Duplicate: Implementing a 2D destructible landscape (like Worms). A really cool feature in both the iPhone game Pocket Tanks and the online Jagex game Arcanists is ground that can be blown up. When a projectile collides with the ground, the area where the blast radius overlaps the ground is removed. It's strictly two-dimensional, but it makes the experience that much more dynamic, since you can dig a hole under your opponents or yourself. How is this implemented?
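
    The usual implementation is bitmap terrain: the ground is a pixel (or small-cell) mask, and an explosion clears every entry inside the blast circle; rendering and collision both consult the same mask. A minimal sketch:

        class Terrain
        {
            readonly int width, height;
            readonly bool[,] solid;   // true where ground exists

            public Terrain(int w, int h) { width = w; height = h; solid = new bool[w, h]; }

            public void Explode(int cx, int cy, int radius)
            {
                int r2 = radius * radius;
                for (int y = cy - radius; y <= cy + radius; y++)
                {
                    if (y < 0 || y >= height) continue;
                    for (int x = cx - radius; x <= cx + radius; x++)
                    {
                        if (x < 0 || x >= width) continue;
                        int dx = x - cx, dy = y - cy;
                        if (dx * dx + dy * dy <= r2)
                            solid[x, y] = false;   // carve the crater
                    }
                }
            }
        }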

  • Where can I get a list or database of light reflectance values for different materials?

    - by mikidelux
    I'm implementing lighting for a WebGL app, but I'm not an artist, so I don't know how to generate, or where to obtain, a list of materials with their values (diffuse, specular, ambient and shininess). I've been searching a lot but with no luck. Is there any list or DB I might have overlooked? Any common repository or something similar? Thanks in advance. Note: English is not my main language; let me know if you don't understand something and I'll try to rephrase it.

  • C++ Parallel Asynchronous Task

    - by Doodlemeat
    I am currently building a randomly generated terrain game where terrain is created automatically around the player. I am experiencing lag while generation is active, as I am running quite heavy tasks with post-processing and creating physics bodies. It occurred to me to use a parallel asynchronous task to do the post-processing for me, but I have no idea how. I have looked at C++ std::async, but I believe that is not what I want: in the examples I found, a task returned something, whereas I want the task to change objects in the main program. This is what I want:

        // Main program
        // Chunks that need to be processed.
        // NOTE! These chunks are already generated, but need post-processing only!
        std::vector<Chunk*> unprocessedChunks;

    And then my task could look something like this, running like a loop that constantly checks whether there are chunks to process:

        // Asynced task (pseudocode: std::vector has no pop())
        if (unprocessedChunks.size() > 0)
        {
            processChunk(unprocessedChunks.pop());
        }

    I know it's not as easy as I wrote it, but it would be a huge help if you could push me in the right direction. In Java, I could type something like this:

        asynced_task = startAsyncTask(new PostProcessTask());

    And that task would run until I do this:

        asynced_task.cancel();
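
    A persistent worker thread with a mutex-guarded queue matches this "keep processing whatever appears" pattern better than std::async, which is geared toward tasks that return a value. A minimal sketch (Chunk and processChunk are assumed from the main program; everything else is illustrative):

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <thread>

        std::queue<Chunk*> pending;
        std::mutex mtx;
        std::condition_variable cv;
        bool quitting = false;

        void workerLoop()
        {
            for (;;)
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [] { return quitting || !pending.empty(); });
                if (quitting) return;
                Chunk* c = pending.front();
                pending.pop();
                lock.unlock();    // do the heavy work outside the lock
                processChunk(c);  // must only touch data the main thread isn't using
            }
        }

        // Main thread: hand a generated-but-unprocessed chunk to the worker.
        void enqueueChunk(Chunk* c)
        {
            { std::lock_guard<std::mutex> lock(mtx); pending.push(c); }
            cv.notify_one();
        }

        // Startup:  std::thread worker(workerLoop);
        // Shutdown: set 'quitting' under the lock, cv.notify_one(), worker.join().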

  • How to Use Text in Unity3d

    - by ZiG-ZaG
    How can I create text in Unity3D? I use "3D Text", but it's always on top of everything! Can you suggest anything? I'm creating a 2D game, so it doesn't necessarily have to be 3D text. Edit: because I'm building a 2D game, my scene is full of planes in front of the camera, and I want my text to sit over one of the planes; when a plane moves over it, the text should appear behind that plane. But when I use "3D Text" it's always in front of everything. Sorry for my bad English...

  • Confusing Callbacks

    - by SullY
    I'm now trying to program a "game", and started with the EmptyProject that's provided by the DirectX SDK. The problem is that the callbacks are confusing me. Can someone please explain them? Edit:

        // Rejects D3D9 devices that aren't acceptable to the app by
        // examining their caps (so yes: this is where the caps go).
        DXUTSetCallbackD3D9DeviceAcceptable( IsD3D9DeviceAcceptable );
        // Called when the D3D9 device is lost; release device-dependent
        // resources here so they can be recreated on reset.
        DXUTSetCallbackD3D9DeviceLost( OnD3D9LostDevice );
        // Called just before the device is created or changed, giving the
        // app a chance to modify the device settings.
        DXUTSetCallbackDeviceChanging( ModifyDeviceSettings );
        // Called once per frame to update the scene (animation, input, logic).
        DXUTSetCallbackFrameMove( OnFrameMove );

  • Size of an image imported with FreeImage

    - by KaiserJohaan
    I'm having a bit of a brainfart and I can't quite grasp what I'm doing wrong. It's quite simple: I am importing an image with FreeImage (http://freeimage.sourceforge.net/), which has a method FreeImage_GetBits that returns a pointer to the first byte of the image data. I then try to load all the data into memory using (bitsPerPixel / 8) * pixelWidth * pixelHeight, like this:

        uint32_t bitsPerPixel = FreeImage_GetBPP(bitmap);       // resolves to 24
        uint32_t widthInPixels = FreeImage_GetWidth(bitmap);    // resolves to 1024
        uint32_t heightInPixels = FreeImage_GetHeight(bitmap);  // resolves to 1024

        // container is a std::vector<uint8_t>
        pkgMaterial.mTextureData.insert(pkgMaterial.mTextureData.begin(),
                                        FreeImage_GetBits(bitmap),
                                        FreeImage_GetBits(bitmap) + ((bitsPerPixel / 8) * widthInPixels * heightInPixels));

    I have a jpg which is 31 kilobytes in size on disc. Yet when I load it using the above formula, I see the vector is filled with 3145728 bytes, which is approx 3145 kilobytes. What am I doing wrong?
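
    For reference, a quick check (assuming the values reported above) shows the byte count is exactly the size of the decompressed image, so the copy itself is doing what the formula says:

        1024 px * 1024 px * (24 bpp / 8) = 3,145,728 bytes

    The 31 KB on disc is the JPEG-compressed size; FreeImage_GetBits returns the decoded raw pixels, which is why the two numbers differ.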

  • How to move a rigidbody to the position of the mouse on release

    - by Edvin
    I'm making a "Can Knockdown" game and I need the rigidbody to move where the player released the mouse(OnMouseUp). Momentarily the Ball moves OnMouseUp because of rigidbody.AddForce(force * factor); and It moves toward the mousePosition but doesn't end up where the mousePosition is. Here's what I have so far in the script. var factor = 20.0; var minSwipeDistY : float; private var startTime : float; private var startPos : Vector3; function OnMouseDown(){ startTime = Time.time; startPos = Input.mousePosition; startPos.z = transform.position.z - Camera.main.transform.position.z; startPos = Camera.main.ScreenToWorldPoint(startPos); } function OnMouseUp(){ var endPos = Input.mousePosition; endPos.z = transform.position.z - Camera.main.transform.position.z; endPos = Camera.main.ScreenToWorldPoint(endPos); var force = endPos - startPos; force.z = force.magnitude; force /= (Time.time - startTime); rigidbody.AddForce(force * factor); }

  • Number of iterations to real time

    - by Ivansek
    I have an animation of traffic: 20 cars in a road network, each with a start node and an end node. Each car knows how much distance it needs to travel to reach its end node. I move each car every 20 ms by 10 px. To move all cars from their start nodes to their end nodes I need 60 iterations, that is, 60 * 20 ms = 1200 ms. Now I want to convert this time, or use the data that I have, so that the animation matches real time where a car moves at 50 km/h. How can I do that? Any ideas?
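
    A sketch of the conversion, assuming the numbers above: equate the animation speed to the target speed and the map scale falls out; everything else follows from it.

        animation speed : 10 px / 20 ms             = 500 px/s
        target speed    : 50 km/h = 50000 m/3600 s  ~ 13.89 m/s
        implied scale   : 500 px/s / 13.89 m/s      ~ 36 px per metre
        route length    : 60 iterations * 10 px     = 600 px ~ 16.7 m
        real-time check : 16.7 m / 13.89 m/s        ~ 1.2 s (the 1200 ms above)

    So at a scale of roughly 36 px per metre the animation already plays in real time. If the map's true scale is known instead, invert the reasoning: one 10 px step covers 10 / pxPerMetre metres, so the timer interval for 50 km/h should be (10 / pxPerMetre) / 13.89 seconds.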

  • Really weird GL Behaviour, uniform not "hitting" proper mesh? LibGdx

    - by HaMMeReD
    OK, I have some code where you select blocks on a grid. The selection works: I can modify the blocks to be raised when selected, and the correct one shows. I set a color which I use in the shader. However, when I try to change the color before rendering the geometry, the last rendered geometry (in the sequence) is the one rendered light. To debug the logic I decided to move the selected block up and make it white, in which case one block moves up and a different block becomes white. I checked all my logic: it knows the correct one is selected, shows it in the correct place, and renders it correctly. When there is only one block it works properly. Video of the bug in action: note how the highlighted and elevated blocks are not the same block. The code for the color and my renderer is here (for the items being drawn):

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String)e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    // Note: the matrix uniform is uploaded *here*, before
                    // mEntityMatrix is rebuilt below, so the shader pairs the
                    // previous entity's transform with this entity's color.
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    // Same ordering issue here: matrix uploaded before being set.
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }

  • How does the process of updating code with Continuous Integration work?

    - by BleakCabalist
    I want to draw a model of the process of updating source code with the use of Continuous Integration. The main issue is that I don't really understand how it works when several programmers are working on various aspects of the code at the same time; I can't visualize it in my mind. Here's what I know, though I might be wrong: New code is sent to the repository. The Continuous Integration server asks the Version Control System whether there is new code in the repository. If there is, the CI server executes the tests on the code. If the tests show there are problems, the CI server orders the VCS to revert back to the working version of the code and communicates this to the programmer. If the tests pass, it compiles the repository code and makes a new build of the game? A new build is made not after every single change, but at the end of the day, I believe? Are my assumptions above correct? If yes, does it also work when several programmers update the repository at once? Is this enough to draw a model of the process, in your opinion, or did I miss something? Also, what software would I need for the above process? Can you give examples of CI server software and VCS software, and whatever else is needed? Does CI software perform the code tests, or do I need another tool for that and integrate it with the CI server? Is there repository software?
