Search Results

Search found 21194 results on 848 pages for 'game state'.


  • 2D Array of 2D Arrays (C# / XNA) [on hold]

    - by Lemoncreme
    I want to create a 2D array that contains many other 2D arrays. The problem is I'm not quite sure what I'm doing, but this is the initialization code I have:

        int[,][,] chunk = new int[64, 64][32, 32];

    For some reason Visual Studio doesn't like this and says it's an 'invalid rank specifier'. Also, I'm not sure how to use the nested arrays once I've declared them... Some help and some insight, please?
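
    For what it's worth, here is a minimal C# sketch of one way this declaration could look: the element arrays are left unsized in the new expression and allocated per cell (the 64x64 and 32x32 dimensions come from the question; variable names are mine):

        // A 2D array whose elements are themselves 2D arrays.
        // Only the outer ranks get sizes in the new expression.
        int[,][,] chunk = new int[64, 64][,];

        for (int x = 0; x < 64; x++)
        {
            for (int y = 0; y < 64; y++)
            {
                chunk[x, y] = new int[32, 32]; // allocate each nested 2D array
            }
        }

        chunk[3, 5][10, 20] = 42; // outer indices first, then inner indices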

    Read the article

  • Control diamond square algorithm to generate islands/pangea.

    - by Gabriel A. Zorrilla
    I generated a height map with the diamond square algorithm. The thing is, I do not manage to create islands, that is, to restrict the above-water-level heights to a region in the center of the map. I manually seeded a circle in the middle of the map, but the rest of the map still receives heights over the water level. I don't fully understand the Perlin noise algorithm, so I'd like to keep working with my current implementation of the diamond square algorithm, which took me 3 days to interpret and code in PHP. :P
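
    For what it's worth, a common way to force an island without touching the noise itself is to multiply the finished height map by a radial falloff mask, so heights sink below water level toward the edges. A hedged PHP sketch (the $map layout, $size, and the falloff curve are assumptions for illustration):

        <?php
        // Multiply each height by a gradient that is 1.0 at the center
        // and falls to 0.0 at the map edge, pushing borders below water.
        function applyIslandMask(array $map, int $size): array {
            $cx = $size / 2.0;
            $cy = $size / 2.0;
            $maxDist = sqrt($cx * $cx + $cy * $cy);
            for ($x = 0; $x < $size; $x++) {
                for ($y = 0; $y < $size; $y++) {
                    $d = sqrt(($x - $cx) ** 2 + ($y - $cy) ** 2) / $maxDist;
                    $map[$x][$y] *= max(0.0, 1.0 - $d * $d); // quadratic falloff
                }
            }
            return $map;
        }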

    Read the article

  • Non-perfect maze generation algorithm

    - by Shylux
    I want to generate a maze with the following properties:

    - The maze is non-perfect, meaning it has loops and multiple ways to reach the exit.
    - The maze should be random: the algorithm should output different mazes for different input parameters.
    - The maze doesn't have to be braided, meaning dead-ends are allowed and appreciated.

    I just can't find the right resources on Google. The closest I found was this description of the different types of algorithms: http://www.astrolog.org/labyrnth/algrithm.htm. All other algorithms were for perfect mazes. Can anyone give me a website where I can look this up, or maybe an algorithm directly?
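
    In case it helps, a common recipe for non-perfect mazes (my suggestion, not something from the question) is to carve a perfect maze first and then knock out a few extra walls; every removed wall adds a loop, and therefore an alternative route, while existing dead-ends survive. A Python sketch:

        import random

        def make_maze(w, h, extra_openings, seed=None):
            """Perfect maze via recursive backtracker, then punch extra holes for loops."""
            rng = random.Random(seed)  # same seed + parameters -> same maze

            def neighbours(c):
                x, y = c
                return [(nx, ny) for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                        if 0 <= nx < w and 0 <= ny < h]

            # Every edge between adjacent cells starts out as a wall.
            walls = {frozenset((c, n))
                     for x in range(w) for y in range(h)
                     for c in [(x, y)] for n in neighbours(c)}

            # Carve a perfect maze (a spanning tree: exactly one route anywhere).
            stack, visited = [(0, 0)], {(0, 0)}
            while stack:
                unvisited = [n for n in neighbours(stack[-1]) if n not in visited]
                if not unvisited:
                    stack.pop()
                    continue
                nxt = rng.choice(unvisited)
                walls.discard(frozenset((stack[-1], nxt)))
                visited.add(nxt)
                stack.append(nxt)

            # Make it non-perfect: each extra removed wall adds one loop.
            removable = sorted(walls, key=sorted)  # stable order for reproducibility
            for wall in rng.sample(removable, min(extra_openings, len(removable))):
                walls.discard(wall)

            return walls  # the walls still standing define the maze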

    Read the article

  • Spritebatch not working in winforms

    - by CodingMadeEasy
    I'm using the WinForms sample on the App Hub and everything is working fine, except my SpriteBatch won't draw anything unless I call Invalidate in the Draw method. I have this in my Initialize method:

        Application.Idle += delegate { Invalidate(); };

    I used a breakpoint and it is indeed invalidating my program and calling my Draw method. I get no errors with the SpriteBatch and all the textures are loaded; I just don't see anything on the screen. Here's the code I have:

        protected override void Draw()
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            tileSheet.Draw(spriteBatch);
            foreach (Image img in selector)
                img.Draw(spriteBatch);
            spriteBatch.End();
        }

    But when I do this:

        protected override void Draw()
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            tileSheet.Draw(spriteBatch);
            foreach (Image img in selector)
                img.Draw(spriteBatch);
            spriteBatch.End();
            Invalidate();
        }

    then all of a sudden the drawing starts to work! But the problem is that it freezes everything else, and only that control gets updated. What can I do to fix this? It's really frustrating.

    Read the article

  • DirectX11 Swap Chain RGBA vs BGRA Format

    - by Nathan
    I was wondering if anyone could elaborate any further on something that's been bugging me. In DirectX 9 the main supported back buffer formats were D3DFMT_X8R8G8B8 and D3DFMT_A8R8G8B8 (both being BGRA in layout). http://msdn.microsoft.com/en-us/library/windows/desktop/bb174314(v=vs.85).aspx With the initial version of DirectX 10 there was no support for BGRA, and all the textbooks and online tutorials recommend DXGI_FORMAT_R8G8B8A8_UNORM (being RGBA in layout). Now with DirectX 11 BGRA is supported again, and it seems as if Microsoft recommends using a BGRA format as the back buffer format. http://msdn.microsoft.com/en-us/library/windows/apps/hh465096.aspx Are there any suggestions, or are there performance implications of using one or the other? (I assume not, as obviously by specifying the format of the underlying resource the runtime knows which bits you're passing through and can then infer how to utilise them based on the format.)

    Read the article

  • Z axis in isometric tilemap

    - by gyhgowvi
    I'm experimenting with isometric tile maps in JavaScript and HTML5 Canvas. I'm storing the tile map data in a JavaScript 2D array:

        // 0 - Grass
        // 1 - Dirt
        // ...
        var mapData = [
            [0, 0, 0, 0, 0],
            [0, 0, 1, 0, 0, ...]
        ];

    and draw it with:

        for (var i = 0; i < mapData.length; i++) {
            for (var j = 0; j < mapData[i].length; j++) {
                var p = iso2screen(i, j, 0); // X, Y, Z
                context.drawImage(tileArray[mapData[i][j]], p.x, p.y);
            }
        }

    but this means every tile's Z axis is equal to zero: var p = iso2screen(i, j, 0); Does anyone have an idea how to make, say, mapData[0][0] have a Z axis equal to 3 and mapData[5][5] a Z axis equal to 5? My idea: write a function for grass, dirt, etc., store those functions in the 2D array, draw them, and later call mapData[0][0].setZ(3); But is it a good idea to write a function for each tile?
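
    One lightweight alternative to a function per tile (an assumption on my part, not from the question) is to store each tile as a small object carrying its type and height, and pass the height straight into iso2screen:

        // Each cell holds {type, z} instead of a bare type id.
        var mapData = [
            [{type: 0, z: 3}, {type: 0, z: 0}],
            [{type: 1, z: 0}, {type: 0, z: 5}]
        ];

        for (var i = 0; i < mapData.length; i++) {
            for (var j = 0; j < mapData[i].length; j++) {
                var tile = mapData[i][j];
                var p = iso2screen(i, j, tile.z); // per-tile Z instead of a constant 0
                context.drawImage(tileArray[tile.type], p.x, p.y);
            }
        }

        mapData[0][0].z = 4; // changing a tile's height is a plain assignment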

    Read the article

  • How to implement RLE into a tilemap?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, where the third dimension comes in when moving down into caves and whatnot. This is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that it seems my tiles cannot occupy the same space as a tile on the same z level. My current structure means that each block has its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    however now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages.

    Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array, the 4th dimension being something which controls how many to skip? Or would the x-axis simply change altogether and have large gaps in between - for example, [5][y][z] would skip 5 tiles? Is there something really obvious here which I am missing? The number of z levels I'm trying to have is around 66; it would be preferable to have up to or more than 1000 in x and y.
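
    For context on the RLE idea: instead of a fourth array dimension, each vertical column of z-values is usually compressed into (count, blockId) runs. A hedged Java sketch (the class and method names are made up for illustration):

        import java.util.ArrayList;
        import java.util.List;

        /** One vertical column of tiles, stored as (count, blockId) runs. */
        class RleColumn {
            private static class Run {
                int count;   // how many consecutive z-levels share this block
                int blockId; // e.g. 0 = air, 1 = dirt, 2 = stone
                Run(int count, int blockId) { this.count = count; this.blockId = blockId; }
            }

            private final List<Run> runs = new ArrayList<>();

            /** Append runs bottom-up, e.g. (60, STONE), (5, DIRT), (1, GRASS). */
            void addRun(int count, int blockId) {
                runs.add(new Run(count, blockId));
            }

            /** Walk the runs to find which block occupies a given z-level. */
            int blockAt(int z) {
                int base = 0;
                for (Run r : runs) {
                    if (z < base + r.count) return r.blockId;
                    base += r.count;
                }
                return 0; // above the recorded column: air
            }
        }

        // A 1000 x 1000 x 66 world then stays a 2D array, of columns:
        // RleColumn[][] map = new RleColumn[1000][1000];
        // int block = map[x][y].blockAt(z);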

    Read the article

  • Polygonal Triangulation - algorithm with O(n log n) complexity

    - by Arthur Wulf White
    I wish to triangulate a polygon of which I only have the outline (p0, p1, p2 ... pn), as described in this question: polygon triangulation algorithm, and this webpage: http://cgm.cs.mcgill.ca/~godfried/teaching/cg-projects/97/Ian/algorithm2.html I do not wish to learn the subject and have a deep understanding of it at the moment; I only want to see an effective algorithm that can be used out of the box. The one described on the site seems to be of somewhat high complexity, O(n) for finding one ear. I heard this could be done in O(n log n) time. Is there any well-known, easy-to-use algorithm that I can translate/port to use in my engine that runs with somewhat reasonable complexity? The reason I need to triangulate is that I wish to fill out a 2D outline and render it in 3D, much like we fill out a 2D outline in Paint. I could use sprites, but this would not serve, because I am planning to play with the resulting model on the z-axis, giving it different heights in the different areas. I would love to try the books that were mentioned, although I suspect that is not the answer most readers are hoping for when they read this Q & A format. Mostly I'd like to see a code snippet I can cut and paste with some modifications and start running.

    Read the article

  • How can I get penetration depth from Minkowski Portal Refinement / Xenocollide?

    - by Raven Dreamer
    I recently got an implementation of Minkowski Portal Refinement (MPR) successfully detecting collision. Even better, my implementation returns a good estimate (local minimum) direction for the minimum penetration depth. So I took a stab at adjusting the algorithm to return the penetration depth in an arbitrary direction, and was modestly successful - my altered method works splendidly for face-edge collision resolution! What it doesn't currently do is correctly provide the minimum penetration depth for edge-edge scenarios, such as the case on the right: What I perceive to be happening is that my current method returns the minimum penetration depth to the nearest vertex - which works fine when the collision is actually occurring on the plane of that vertex, but not when the collision happens along an edge. Is there a way I can alter my method to return the penetration depth to the point of collision, rather than the nearest vertex? Here's the method that's supposed to return the minimum penetration distance along a specific direction:

        public static Vector3 CalcMinDistance(List<Vector3> shape1, List<Vector3> shape2, Vector3 dir)
        {
            //holding variables
            Vector3 n = Vector3.zero;
            Vector3 swap = Vector3.zero;

            // v0 = center of Minkowski sum
            v0 = Vector3.zero;

            // Avoid case where centers overlap -- any direction is fine in this case
            //if (v0 == Vector3.zero) return Vector3.zero; //always pass in a valid direction.

            // v1 = support in direction of origin
            n = -dir;

            //get the difference of the minkowski sum
            Vector3 v11 = GetSupport(shape1, -n);
            Vector3 v12 = GetSupport(shape2, n);
            v1 = v12 - v11;

            //if the support point is not in the direction of the origin
            if (v1.Dot(n) <= 0)
            {
                //Debug.Log("Could find no points this direction");
                return Vector3.zero;
            }

            // v2 - support perpendicular to v1,v0
            n = v1.Cross(v0);
            if (n == Vector3.zero)
            {
                //v1 and v0 are parallel, which means
                //the direction leads directly to an endpoint
                n = v1 - v0;
                //shortest distance is just n
                //Debug.Log("2 point return");
                return n;
            }

            //get the new support point
            Vector3 v21 = GetSupport(shape1, -n);
            Vector3 v22 = GetSupport(shape2, n);
            v2 = v22 - v21;
            if (v2.Dot(n) <= 0)
            {
                //can't reach the origin in this direction, ergo, no collision
                //Debug.Log("Could not reach edge?");
                return Vector2.zero;
            }

            // Determine whether origin is on + or - side of plane (v1,v0,v2)
            //tests linesegments v0v1 and v0v2
            n = (v1 - v0).Cross(v2 - v0);
            float dist = n.Dot(v0);

            // If the origin is on the - side of the plane, reverse the direction of the plane
            if (dist > 0)
            {
                //swap the winding order of v1 and v2
                swap = v1;
                v1 = v2;
                v2 = swap;

                //swap the winding order of v11 and v12
                swap = v12;
                v12 = v11;
                v11 = swap;

                //swap the winding order of v21 and v22
                swap = v22;
                v22 = v21;
                v21 = swap;

                //and swap the plane normal
                n = -n;
            }

            ///
            // Phase One: Identify a portal
            while (true)
            {
                // Obtain the support point in a direction perpendicular to the existing plane
                // Note: This point is guaranteed to lie off the plane
                Vector3 v31 = GetSupport(shape1, -n);
                Vector3 v32 = GetSupport(shape2, n);
                v3 = v32 - v31;
                if (v3.Dot(n) <= 0)
                {
                    //can't enclose the origin within our tetrahedron
                    //Debug.Log("Could not reach edge after portal?");
                    return Vector3.zero;
                }

                // If origin is outside (v1,v0,v3), then eliminate v2 and loop
                if (v1.Cross(v3).Dot(v0) < 0)
                {
                    //failed to enclose the origin, adjust points;
                    v2 = v3;
                    v21 = v31;
                    v22 = v32;
                    n = (v1 - v0).Cross(v3 - v0);
                    continue;
                }

                // If origin is outside (v3,v0,v2), then eliminate v1 and loop
                if (v3.Cross(v2).Dot(v0) < 0)
                {
                    //failed to enclose the origin, adjust points;
                    v1 = v3;
                    v11 = v31;
                    v12 = v32;
                    n = (v3 - v0).Cross(v2 - v0);
                    continue;
                }

                bool hit = false;

                ///
                // Phase Two: Refine the portal
                int phase2 = 0;

                // We are now inside of a wedge...
                while (phase2 < 20)
                {
                    phase2++;

                    // Compute normal of the wedge face
                    n = (v2 - v1).Cross(v3 - v1);
                    n.Normalize();

                    // Compute distance from origin to wedge face
                    float d = n.Dot(v1);

                    // If the origin is inside the wedge, we have a hit
                    if (d > 0)
                    {
                        //Debug.Log("Do plane test here");
                        float T = n.Dot(v2) / n.Dot(dir);
                        Vector3 pointInPlane = (dir * T);
                        return pointInPlane;
                    }

                    // Find the support point in the direction of the wedge face
                    Vector3 v41 = GetSupport(shape1, -n);
                    Vector3 v42 = GetSupport(shape2, n);
                    v4 = v42 - v41;

                    float delta = (v4 - v3).Dot(n);
                    float separation = -(v4.Dot(n));
                    if (delta <= kCollideEpsilon || separation >= 0)
                    {
                        //Debug.Log("Non-convergence detected");
                        //Debug.Log("Do plane test here");
                        return Vector3.zero;
                    }

                    // Compute the tetrahedron dividing face (v4,v0,v1)
                    float d1 = v4.Cross(v1).Dot(v0);
                    // Compute the tetrahedron dividing face (v4,v0,v2)
                    float d2 = v4.Cross(v2).Dot(v0);
                    // Compute the tetrahedron dividing face (v4,v0,v3)
                    float d3 = v4.Cross(v3).Dot(v0);

                    if (d1 < 0)
                    {
                        if (d2 < 0)
                        {
                            // Inside d1 & inside d2 ==> eliminate v1
                            v1 = v4; v11 = v41; v12 = v42;
                        }
                        else
                        {
                            // Inside d1 & outside d2 ==> eliminate v3
                            v3 = v4; v31 = v41; v32 = v42;
                        }
                    }
                    else
                    {
                        if (d3 < 0)
                        {
                            // Outside d1 & inside d3 ==> eliminate v2
                            v2 = v4; v21 = v41; v22 = v42;
                        }
                        else
                        {
                            // Outside d1 & outside d3 ==> eliminate v1
                            v1 = v4; v11 = v41; v12 = v42;
                        }
                    }
                }
                return Vector3.zero;
            }
        }

    Read the article

  • Problem converting FBX file into XNB

    - by Dado
    I created a MonoGame content project to convert assets into XNB. For an FBX file without a texture there is no problem: the file is correctly converted, and when I load the XNB into my project everything is OK. The problem occurs when the FBX file has an associated texture map: in this case both the FBX and PNG files are converted to XNB, but when I try to load these XNB files into my project the following problem occurs: "ContentLoadException: Could not load Models/maze1 asset as a non-content file!" Note: maze1 is the XNB file that was converted from the FBX. How can I solve this problem? Thank you in advance.

    Read the article

  • libgdx setOrigin and setPosition not working as expected?

    - by shino
    I create a camera:

        camera = new OrthographicCamera(5.0f, 5.0f * h/w);

    Create a sprite:

        ballTexture = new Texture(Gdx.files.internal("data/ball.png"));
        ballTexture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
        TextureRegion region = new TextureRegion(ballTexture, 0, 0,
            ballTexture.getWidth(), ballTexture.getHeight());
        ball = new Sprite(region);

    Set the origin, size, and position:

        ball.setOrigin(ball.getWidth()/2, ball.getHeight()/2);
        ball.setSize(0.5f, 0.5f * ball.getHeight()/ball.getWidth());
        ball.setPosition(0.0f, 0.0f);

    Then render it:

        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        ball.draw(batch);
        batch.end();

    But when I render it, the bottom left of my ball sprite is at (0, 0), not the center of it, as I would expect it to be because I set the origin to the center of the sprite. What am I missing?
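
    If I recall the libgdx API correctly, Sprite.setPosition always places the bottom-left corner of the sprite; setOrigin only affects rotation and scaling. A hedged Java sketch of centering the sprite on a point instead (centerBallAt is a made-up helper, and setSize is assumed to have been called already so getWidth/getHeight return the final size):

        // Center the sprite on (x, y) by offsetting by half its size;
        // setPosition positions the bottom-left corner, not the origin.
        void centerBallAt(Sprite ball, float x, float y) {
            ball.setPosition(x - ball.getWidth() / 2f, y - ball.getHeight() / 2f);
        }

        // Usage: centerBallAt(ball, 0.0f, 0.0f);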

    Read the article

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project.

    Product Backlog and the Product Owner

    Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog.

    How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog.

    Sprints and the Sprint Backlog

    So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints.

    In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths such as one-month Sprints, two-week Sprints, and one-week Sprints.
    For high-stress, time-critical projects, teams typically choose shorter Sprints, such as one-week Sprints. For more mature projects, longer one-month Sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent. You should pick a Sprint length and stick with it.

    Daily Scrum

    During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time.

    In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

    1. What have you done since yesterday?
    2. What do you plan to do today?
    3. Any impediments in your way?

    During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

    Stories and Tasks

    Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:

    · Enable users to add and remove items from the shopping cart
    · Persist the shopping cart to the database between visits
    · Redirect the user to the checkout page when the Checkout button is clicked

    During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow then the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks.
    Neglecting to break stories into tasks can lead to “Never Ending Stories.” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed because you don’t have a clear idea about the implementation steps required to complete the story.

    Scrumboard

    During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing.

    Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard.

    Estimating Stories and Tasks

    Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project.

    Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take.

    So developers hate to provide estimates, but the Product Owner and other product stakeholders have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series.
    These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team.

    When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don’t create a task until you are just about ready to start working on a task. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story.

    Burndown Charts

    In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts. You can either represent the remaining work in a Sprint with Story Points or with task hours (the following image, taken from Wikipedia, uses hours).

    When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts typically do not always slope down over time. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill then you know that you are in trouble.

    Team Velocity

    Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints.
    Knowing the Team Velocity is important during the Sprint Planning meeting when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity then you can avoid committing to do more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

    Scrum Master

    There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I’ve already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I’ve also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master.

    The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

    Using SonicAgile

    SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

    Summary

    In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of tasks. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the daily scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • Three.js Collada import animation not working

    - by Peter Vasilev
    I've been trying to export a Collada animated model to three.js. Here is the model: http://bayesianconspiracy.com/files/model.dae It is imported properly (I can see the model), but I can't get it to animate. I've been using the two Collada examples that come with three.js. I've tried just replacing the path with the path to my model, but it doesn't work. I've also tried tweaking some stuff, but to no avail. When the model is loaded, I've checked the 'object.animations' object, which seems to be loaded fine (can't tell for sure, but there is lots of stuff in it). I've also tried the three.js editor: http://threejs.org/editor/ which loads the model properly again, but I can't play the animation :( I am using three.js r62 and Blender 2.68. Any help appreciated!!

    Read the article

  • working in external actionscript file does not show anything on the screen?

    - by XNA
    I'm writing this code in Flash Builder and I tested the file in Flash, but nothing appears in the SWF file (no text shows on the screen; I don't know why). Is there any missing property in the code? Also, when I create a text field or movie clip with the Flash tools on the stage and give it an instance name, Flash Builder doesn't seem to recognize it in the ActionScript code.

        package
        {
            import flash.display.MovieClip;
            import flash.text.TextField;

            public class mark extends MovieClip
            {
                public function mark()
                {
                    super();
                    public var d:TextField = new TextField();
                    d.text = "Hello world";
                    d.x = 250;
                    d.y = 300;
                    addChild(d);
                }
            }
        }
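
    For what it's worth, 'public var' is not legal inside a function body in ActionScript 3, so the class as posted should fail to compile. A hedged sketch of the likely-intended version, with the field moved to class scope:

        package
        {
            import flash.display.MovieClip;
            import flash.text.TextField;

            public class mark extends MovieClip
            {
                public var d:TextField = new TextField(); // class-level, so 'public' is allowed

                public function mark()
                {
                    super();
                    d.text = "Hello world";
                    d.x = 250;
                    d.y = 300;
                    addChild(d); // the text field only renders once on the display list
                }
            }
        }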

    Read the article

  • Fragment shader seems to floor() imprecisely

    - by Peter K.
    I'm trying to interpolate coordinates in my fragment shader. Unfortunately, if close to the upper edge, the interpolated value of fVertexInteger seems to be rounded up instead of being floored. This happens above approximately fVertexInteger >= x.97. Example:

        floor(64.7)  returns 64.0 -- correct
        floor(64.98) returns 65.0 -- incorrect

    The same happens on ceiling close above x.0, where ceil(65.02) returns 65.0 instead of 66.0. Q: Any ideas how to solve this? Notes:

    - GL ES 2.0 with GLSL 1.0
    - highp floats are not supported in fragment shaders on my hardware
    - flat varying hasn't been a solution, because I'm drawing TRIANGLE_STRIP and can't redeclare the provoking vertex (only OpenGL 3.2+)

    Fragment shader:

        varying float fVertexInteger;
        varying float fVertexFraction;

        void main()
        {
            // Fix vertex integer
            float fixedVertexInteger = floor(fVertexInteger);

            // Fragment color
            gl_FragColor = vec4(
                fixedVertexInteger / 65025.0,
                fract(fixedVertexInteger / 255.0),
                fVertexFraction,
                1.0
            );
        }

    Read the article

  • Objective-C or C++ for iOS games?

    - by Martin Wickman
    I'm pretty confident programming in Objective-C and C++, but I find Objective-C to be somewhat easier to use and more flexible and dynamic in nature. What would be the pros and cons when using C++ instead of Obj-C for writing games in iOS? Or rather, are there any known problems with using Obj-C as compared to C++? For instance, I suspect there might be performance issues with Obj-C compared to code written in C/C++.

    Read the article

  • How do I draw an OpenGL point sprite using libgdx for Android?

    - by nbolton
    Here's a few snippets of what I have so far...

        void create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
        }

        void render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
        }

        void renderSprite() {
            int handle = tiles.getTextureObjectHandle();
            Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle);
            Gdx.gl.glEnable(GL.GL_POINT_SPRITE);
            Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);

            renderer.begin(GL.GL_POINTS);
            renderer.vertex(pos.x, pos.y, pos.z);
            renderer.end();
        }

    create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort-of 3D cube. Unfortunately, this just renders a few white dots... I suppose the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites on anything other than the 0 z-axis, they do not appear -- I read that I need to increase my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.

    Read the article

  • Quick 2D sight area calculation algorithm?

    - by Rogach
    I have a matrix of tiles, and on some of those tiles there are objects. I want to calculate which tiles are visible to the player and which are not, and I need to do it quite efficiently (so it computes fast enough even with big matrices (100x100) and lots of objects). I tried to do it with Bresenham's algorithm, but it was slow. Also, it gave me some errors:

        ----XXX-        ----X**-        ----XXX-
        -@------        -@------        -@------
        ----XXX-        ----X**-        ----XXX-

        (raw version)   (Bresenham)     (correct, since tunnel walls are
                                         still visible at distance)

    (@ is the player, X is an obstacle, * is invisible, - is visible) I'm sure this can be done - after all, we have NetHack, Zangband, and they all dealt with this problem somehow :) What algorithm can you recommend for this?

    EDIT: Definition of visible (in my opinion): a tile is visible when at least a part (e.g. a corner) of the tile can be connected to the center of the player's tile with a straight line which does not intersect any obstacles.

    Read the article

  • Doubt about texture waves in CG Ocean Shader

    - by Alexandre
    I'm new to graphics programming, and I'm having some trouble understanding the ocean shader described in "Effective Water Simulation from Physical Models" from GPU Gems. The source code associated with this article is here. My problem has been understanding the concept of texture waves. First of all, what is achieved by texture waves? I'm having a hard time trying to figure out their usefulness. In section 1.2.4, the article says that the waves summed into the texture have the same parametrization as the waves used for vertex positioning. Does that mean I can't use the texture provided by the source code if I change the parameters of the waves, or add more waves to the sum? And in section 1.4.1, it is said that we can assume there is no rotation between texture space and world space if the texture coordinates for our normal map are implicit. What does it mean for the normal map to be implicit? And why do I need a rotation between texture and world space if the normal map is not implicit? I would be very grateful for any help on this.

    Read the article

  • Opengl glVertexAttrib4fv doesn't work?

    - by Naor
    This is my vertex shader:

        static const GLchar * vertex_shader_source[] =
        {
            "#version 430 core                                                  \n"
            "layout (location = 0) in vec4 offset;                              \n"
            "void main(void)                                                    \n"
            "{                                                                  \n"
            "    const vec4 vertices[3] = vec4[3](vec4( 0.25, -0.25, 0.5, 1.0), \n"
            "                                     vec4(-0.25, -0.25, 0.5, 1.0), \n"
            "                                     vec4( 0.25,  0.25, 0.5, 1.0));\n"
            "    gl_Position = vertices[gl_VertexID] + offset;                  \n"
            "}                                                                  \n"
        };

    and this is what I'm trying to do:

        glUseProgram(rendering_program);

        GLfloat attrib[] = { (float)sin(currentTime) * 0.5f,
                             (float)cos(currentTime) * 0.6f,
                             0.0f, 0.0f };
        glVertexAttrib4fv(0, attrib);
        glDrawArrays(GL_TRIANGLES, 0, 3);

    currentTime - the number of seconds since the program started. Expected result: a triangle moving around the window. It's from the SuperBible book (sixth edition); this is the full code: http://pastebin.com/xA3eCKz1 The triangle should move across the screen, but it doesn't.

    Read the article

  • How to track deleted self-tracking entities in ObservableCollection without memory leaks

    - by Yannick M.
    In our multi-tier business application we have ObservableCollections of self-tracking entities that are returned from service calls. The idea is we want to be able to get entities, add, update and remove them from the collection client side, and then send these changes to the server side, where they will be persisted to the database. Self-tracking entities, as their name might suggest, track their state themselves. When a new STE is created, it has the Added state; when you modify a property, it sets the Modified state; it can also have a Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior you need to code it yourself. In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along, so Entity Framework knows to delete them. Something along the lines of:

        protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>();

        protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
        {
            var deletedEntities = new List<TEntity>();
            DeletedCollections[collection.GetHashCode()] = deletedEntities;

            collection.CollectionChanged += (o, a) =>
            {
                if (a.OldItems != null)
                {
                    deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
                }
            };
        }

    Now if the user decides to save his changes to the server, I can get the list of removed items and send them along:

        ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers();
        customers.RemoveAt(0);
        MyServiceProxy.UpdateCustomers(customers);

    At this point the UpdateCustomers method will check my shadow collection for any removed items and send them along to the server side. This approach works fine, until you start to think about the life-cycle of these shadow collections. Basically, when the ObservableCollection is garbage collected, there is no way of knowing that we need to remove the shadow collection from our dictionary. I came up with a complicated solution that basically does manual memory management in this case: I keep a WeakReference to the ObservableCollection and every few seconds I check to see if the reference is inactive, in which case I remove the shadow collection. But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better solution. Thanks!
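
    One direction worth considering (my suggestion, not part of the question) is System.Runtime.CompilerServices.ConditionalWeakTable, which ties each shadow list's lifetime to its collection, so the list becomes collectable as soon as the ObservableCollection does, and avoids keying on GetHashCode(), which is not unique per instance. A C# sketch:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;
        using System.Runtime.CompilerServices;

        public class DeletionTracker<TEntity> where TEntity : class
        {
            // Entries vanish automatically when the key (the collection) is GC'd;
            // no timers or manual WeakReference polling required.
            private readonly ConditionalWeakTable<ObservableCollection<TEntity>, List<TEntity>>
                _deleted = new ConditionalWeakTable<ObservableCollection<TEntity>, List<TEntity>>();

            public void Subscribe(ObservableCollection<TEntity> collection)
            {
                var shadow = _deleted.GetOrCreateValue(collection);
                collection.CollectionChanged += (o, a) =>
                {
                    if (a.OldItems != null)
                    {
                        shadow.AddRange(a.OldItems.Cast<TEntity>());
                    }
                };
            }

            public IList<TEntity> GetDeleted(ObservableCollection<TEntity> collection)
            {
                List<TEntity> shadow;
                return _deleted.TryGetValue(collection, out shadow)
                    ? shadow
                    : new List<TEntity>();
            }
        }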

    Read the article

  • Blender mesh mirroring screws up normals when importing in Unity

    - by Shivan Dragon
    My issue is as follows: I've modeled a robot in Blender 2.6. It's a mech-like biped, or if you prefer, it kinda looks like a chicken. Since it's symmetrical on the XZ plane, I've decided to mirror some of its parts instead of re-modeling them. The problem is, those mirrored meshes look fine in Blender (faces all show up properly and light falls on them as it should), but in Unity, faces and lighting on those very same mirrored meshes are wrong. What also stumps me is the fact that even if I flip normals in Blender, I still get bad results in Unity for those meshes (though now I get different bad results than before). Here are the details:

    Here's a Blender screenshot of the robot. I've taken two pictures and slightly rotated the camera around so the geometry in question can be clearly seen. Now, the selected cog-wheel-like piece is the mirrored mesh obtained from mirroring the other cog-wheel on the far side of the robot torso. Back-face culling is turned off here, so it's actually showing the faces as dictated by their normals. As you can see, it looks OK: faces are oriented correctly and light falls on it as it should (as it does on the original cog-wheel from which it was mirrored).

    Now, if I export this as FBX and then import it into Unity, it looks all screwy. It looks like the normals are in the wrong direction. This is already very strange, because while in Blender the original cog-wheel and its mirrored counterpart both had normals facing one way, when importing this into Unity the original cog-wheel still looks OK (like in Blender) but the mirrored one now has inverted normals.

    The first thing I tried was to go "OK, so I'll flip normals in Blender for the mirrored cog-wheel, and then it'll display OK in Unity, and that's that". So I went back to Blender, flipped the normals on that mesh (so now it looks bad in Blender), re-exported as FBX with the same settings as before, and re-imported into Unity. Sure enough, the cog-wheel now looks OK in Unity, in the sense that the faces show up properly, but if you look closely you'll notice that light and shadows are now wrong. In Unity, even though the light comes from the back of the robot, the cog-wheel in question acts as if light was coming from somewhere else: its faces which should be in shadow are lit up, and those that should be lit up are dark.

    Here are some things I've tried which didn't do anything:

    - In Blender I tried mirroring the mesh in two ways: first by using the scale to -1 trick, then by using the mirroring tool (select mesh, hit ctrl-m, select mirror axis); both ways yield the exact same result in Unity.
    - I've tried playing around with the prefab import settings like "Normals: Import/Calculate", "Tangents: Import/Calculate".
    - I've also tried not exporting as FBX manually from Blender, but just dropping the .blend file in the assets folder inside the Unity project.

    So, my question is: is there a way to actually mirror a mesh in Blender and then have it imported into Unity so that it displays properly (as it does in Blender)? If yes, how? Thank you, and please excuse the TL;DR style.

    Read the article

  • Determine arc-length of a Catmull-Rom spline

    - by Wouter
    I have a path that is defined by a concatenation of Catmull-Rom splines. I use the static method Vector2.CatmullRom in XNA that allows for interpolation between points with a value going from 0 to 1. Not every spline in this path has the same length. This causes speed differences if I advance the interpolation weight at a constant rate for every spline while proceeding along the path. I can remedy this by making the speed of the weight dependent on the length of the spline. How can I determine the length of such a spline? Should I just approximate it by cutting the spline into 10 straight lines and summing their lengths? I'm using this for dynamic texture mapping on a generated mesh defined by splines.
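
    Chord sampling, as suggested above, is the standard approach, since a Catmull-Rom segment has no simple closed-form arc length. A hedged C#/XNA sketch (the 16-segment default is an arbitrary trade-off between accuracy and cost; more segments give a better estimate):

        // Approximate the arc length of one Catmull-Rom segment (p1 -> p2,
        // with p0 and p3 as the outer control points) by summing chord lengths.
        static float CatmullRomLength(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3,
                                      int segments = 16)
        {
            float length = 0f;
            Vector2 previous = p1; // CatmullRom(..., 0) returns p1
            for (int i = 1; i <= segments; i++)
            {
                float t = i / (float)segments;
                Vector2 current = Vector2.CatmullRom(p0, p1, p2, p3, t);
                length += Vector2.Distance(previous, current);
                previous = current;
            }
            return length;
        }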

    Read the article

  • OpenGL 2 on Android: native window

    - by ThreaderSlash
    According to the OpenGL ES specification, we have the following definition:

        EGLSurface eglCreateWindowSurface(EGLDisplay display, EGLConfig config,
                                          NativeWindowType native_window,
                                          EGLint const * attrib_list)

    More details here: http://www.khronos.org/opengles/documentation/opengles1_0/html/eglCreateWindowSurface.html

    And also by definition:

        int32_t ANativeWindow_setBuffersGeometry(ANativeWindow* window, int32_t width,
                                                 int32_t height, int32_t format);

    More details here: http://mobilepearls.com/labs/native-android-api

    I am running an Android native app on OpenGL ES 2 and debugging it on a Samsung Nexus device. For setting up the 3D scene graph environment, the following variables are defined:

        struct android_app {
            ...
            ANativeWindow* window;
        };

        android_app* mApplication;
        ...
        mApplication = &pApplication;

    And to initialize the app, we run the commands in the code:

        ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat);
        mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL);

    Funnily enough, the command ANativeWindow_setBuffersGeometry behaves as expected and works fine according to its definition, accepting all the parameters sent to it. But eglCreateWindowSurface does not accept the parameter mApplication->window, as it should according to its definition. Instead, it looks for the following input:

        EGLNativeWindowType hWnd;
        mSurface = eglCreateWindowSurface(mDisplay, lConfig, hWnd, NULL);

    As an alternative, I considered using instead:

        NativeWindowType hWnd = android_createDisplaySurface();

    But the debugger says:

        Function 'android_createDisplaySurface' could not be resolved

    Can someone tell me if there is a way to convert mApplication->window, so that the data from the android_app gets accepted by the window surface?

    Read the article

  • Pygame surface rotation, rect rotation or sprite rotation?

    - by Alan
    I seem to have a conceptual misunderstanding of the Surface and Rect objects in pygame. I currently view these objects this way:

    - Surface: just the loaded image
    - Rect: the 'hard' representation of the in-game object (sprite), used to simplify object movement and collision detection
    - Sprite: rect and surface grouped together

    What I want to do is rotate my sprite. The only available method I found for rotation is pygame.transform.rotate. How do I rotate the rectangle, or even better, the whole sprite? Below is the image of how I visualize this problem.
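
    For reference, the usual pygame pattern (an assumption about the intended setup, not code from the question) is to keep the unrotated surface around, rotate a fresh copy each time, and rebuild the rect from the rotated image so the sprite stays centered:

        import pygame

        def rotate_sprite(sprite, angle):
            """Rotate sprite.image around its center, keeping sprite.rect in sync.

            Assumes the sprite keeps its unrotated art in 'original_image';
            rotating the already-rotated image would degrade it cumulatively.
            """
            center = sprite.rect.center
            sprite.image = pygame.transform.rotate(sprite.original_image, angle)
            sprite.rect = sprite.image.get_rect(center=center)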

    Read the article
