Search Results

Search found 13878 results on 556 pages for 'field codes'.

Page 343 of 556

  • 2 year degree plus experience vs 4 year degree

    - by CenterOrbit
    Alright, I have searched around a bit on this site and found two somewhat similar questions: Computer Science Programming Certificate vs. Computer Science Degree? Is it possible/likely to be paid fairly without a college degree? But these do not provide an answer specifically to what I am seeking. I have my 2 year A.A.S. Degree in computer programming, along with a networking certificate from a technical college. I also have been working at a small educational game development company for 3 years now in various positions, but steadily moving up and now as a lead programmer on a few projects. Some of the higher programmers I work with claim that no matter how much experience I develop it still will not mean as much as someone with a 4 year degree. Their argument is that most employers will look over my resume because of the common '4 yr' minimum requirement. I have also heard people state (not as many though) that experience is everything and that an employer would rather have someone that has worked in the field instead of a rookie fresh out of college. I have heard both sides of this argument, but am looking for a general consensus, or more arguments from both sides from the people who have been there, or are there.

    Read the article

  • Fastest way to group units that can see each other?

    - by mac
    In the 2D game I'm working on, the game engine is able to give me, for each unit, the list of other units that are in its view range. I would like to know if there is an established algorithm to sort the units into groups, where each group is defined by all those units which are "connected" to each other (even through others). An example might help understand the question better (E = enemy, O = own unit). First, the data that I would get from the game engine:

      E1 can see E2, E3, O5
      E2 can see E1
      E3 can see E1
      E4 can see O5
      E5 can see O2
      E6 can see E7, O9, O1
      E7 can see E6
      O1 can see E6
      O2 can see O5, E5
      O5 can see E1, E4, O2
      O9 can see E6

    Then I should compute the groups as follows:

      G1 = E1, E2, E3, E4, E5, O2, O5
      G2 = O1, O9, E6, E7

    It can be safely assumed that the field of view is symmetric: if A sees B, then B sees A. Just to clarify: I already wrote a naïve implementation that loops on each row of the game engine info, but from the look of it, it seems a problem general enough to have been studied in depth and to have various established algorithms (maybe passing through some tree-like structure?). My problem is that I couldn't find a way to describe my problem that returned useful Google hits. Thank you in advance for your help!
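
    What is described here is the classic connected-components problem on an undirected graph whose edges are the "can see" pairs; breadth-first search or a union-find (disjoint-set) structure are the standard ways to compute the groups. Below is a minimal union-find sketch in Python using the example data from the question; the function names and the dictionary format are just for illustration, not from any particular engine.

      # Group mutually-visible units as connected components with union-find.
      # The "sees" relation is treated as symmetric, so each edge is unioned once.

      def find(parent, x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]  # path halving keeps trees shallow
              x = parent[x]
          return x

      def union(parent, a, b):
          ra, rb = find(parent, a), find(parent, b)
          if ra != rb:
              parent[rb] = ra

      def group_units(visibility):
          """visibility: dict mapping each unit to the units it can see."""
          parent = {u: u for u in visibility}
          for u, seen in visibility.items():
              for v in seen:
                  union(parent, u, v)
          groups = {}
          for u in visibility:
              groups.setdefault(find(parent, u), []).append(u)
          return list(groups.values())

      # Example from the question:
      visibility = {
          "E1": ["E2", "E3", "O5"], "E2": ["E1"], "E3": ["E1"],
          "E4": ["O5"], "E5": ["O2"], "E6": ["E7", "O9", "O1"],
          "E7": ["E6"], "O1": ["E6"], "O2": ["O5", "E5"],
          "O5": ["E1", "E4", "O2"], "O9": ["E6"],
      }
      print(group_units(visibility))
      # -> two groups: {E1, E2, E3, E4, E5, O2, O5} and {E6, E7, O9, O1}
      #    (ordering within and between groups may vary)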

    Read the article

  • I need some career guidance, please.

    - by user18956
    Hi, I have been a teacher of guitar and music theory for the last ten years or so, and I have decided to get out of it and pursue something involving computers, but I am very confused about it all. I have no training related to programming besides a knowledge of xhtml and css - which I realize are not even programming languages. My problem is that I know I want to do something with either making video games, computer/online applications, or some other programming job, but I haven't a clue how to begin. I picked up a book from the Head First series entitled, Head First Programming that uses Python to teach programming concepts, but after that, I don't really know what is a good direction for me in terms of balancing career satisfaction with job availability and acceptable pay. I am not looking for a huge salary, I just want to be able to survive doing something I love, and which challenges me. I don't know even a single person involved in a related field, so I am in need of guidance. The first thing I would like to know is whether pursuing a career as a programmer for video games is a realistic option. I love video games, and play them all the time, and I have always wanted to make them. If this is an option, what would be the recommended course of action? What is a good language or technology to get involved in for the job market now? I have read that PHP/MySQL is a good place to find a job for some. Can I find a job without school, or do I need to go to college? Also, will the Python I learn in this book translate into any other language I need to learn? If it is anything like music, then I am sure it will, but I don't know much about programming - yet. And last, yet perhaps most important, is thirty years old too old to take such a radical redirection in careers? Thank you for any help you can offer. I really need it.

    Read the article

  • My boss has a different idea of a website's UX [migrated]

    - by NicoJuicy
    Let me explain the situation. I started transforming an old (.NET 2.0) application into a web application. The problem is that no one here is really acquainted with the UX of a website (simple, efficient). Even so, I still have to allow for the website being tailored to a customer's needs through parameters (yeah, I know :s ). For example: I wanted a layout similar to invoicemachine (= as simple as possible) -- he wants a Ribbon toolbar. Going to a supplier should give the list of suppliers -- he wants to display the "Create Supplier" screen, where you can use wildcards in a certain textbox to search for a specific supplier and then get the list of suppliers. Also, I need 4 search/filter mechanisms; people can:

      - search per field with wildcards
      - filter the suppliers
      - search a keyword through all the data of a supplier
      - filter the "list Suppliers" page by the first letter of the name

      LIST Suppliers | A | D | Z
      Adam Wrincle        ADD | EDIT | Delete
      Damzel InDistress   ADD | EDIT | Delete
      Zorro               ADD | EDIT | Delete

    I can't seem to get through to him that the UX of a website needs to be different from that of a Windows application. If he wants to bring all the logic of the Windows app into a website, why let me build a website at all? Stick to the old solution. Am I so mistaken, or how could I convince/show him that an online solution is something different from the offline solution? He already "saw" online solutions of other applications to get an idea, but if I suggest something he won't listen (if it's GUI/UX related, that is).

    Read the article

  • How Can I Know Whether I Am a Good Programmer?

    - by Kristopher Johnson
    Like most people, I think of myself as being a bit above average in my field. I get paid well, I've gotten promotions, and I've never had a real problem getting good references or getting a job. But I've been around enough to notice that many of the worst programmers I've worked with thought they were some of the best. Bad programmers who are surrounded by other bad programmers seem to be the most self-deluded. I'm certainly not perfect. I do make mistakes. I do miss deadlines. But I think I make about the same number of bonehead moves that "other good programmers" do. The problem is that I define "other good programmers" to mean "people who are like me." So, I wonder, is there any way a programmer can make some sort of reasonable self-evaluation? How do we know whether we are good or bad at our jobs? Or, if terms like good and bad are too ill-defined, how can programmers honestly identify their own strengths and weaknesses, so that they can take advantage of the former and work to improve the latter?

    Read the article

  • Which problem(s) do YOU want to see solved?

    - by buu700
    My team and I are meeting tonight to come up with a business plan and some community input would be amazing. I've been mulling over this issue for the past few months and bouncing ideas off of others, and now I'd finally like some input from the community. I have come up with a fair selection of ideas, but most of those amount to either fun projects which could potentially be profitable, or otherwise solid business models that have one or two major hurdles (usually related to resources or legality). For our team meeting tonight, my idea is to take inventory of our available skills, resources, and compelling problems which interest us. The last is where I would greatly appreciate some community input. Hell, even entire business ideas/plans would be appreciated. No matter how big or small your thoughts, any input would be appreciated. We're a team of computer scientists, so our business will be primarily based around software/technology/Web solutions. Among my relevant available resources (entire Internet aside), I have the following:

      - A pretty reliable connection to an SEO company and a large production company.
      - A stash of fairly powerful server hardware.
      - A fast network with static IPs.
      - The backend for Hackswipe, which includes credit card payment processing and a Google Voice-based SMS gateway.
      - This work in progress design for something completely unrelated but which is backed by some fairly decent infrastructure.
      - Direct access to the experts in just about any relevant field (on-campus Carnegie Mellon professors).
      - A sexual relationship with the baron of a small nation.
      - For further down the line, some investor relationships.
      - Not likely to be so relevant, but a decent social media presence (Stack Overflow reputation, modship in some major reddits, various tech forums).
      - The source code for Eugene fucking McCabe.

    Pooled with the other team members, the list of projects we can build off of would be longer (including an Android app). So, what are your thoughts? Crossposted to reddit

    Read the article

  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing. But I've had an idea to make it more appropriate for my field of use, and it's not clear whether something like this already exists and, if so, what it would be called. Ordinary unit tests combine the test logic and the expected outcome. In essence, the testing framework only checks booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions and also makes the result values explicit, like so:

      unit::assert( test_me() == 17 )

    What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. For example:

      unit::probe( test_me() )

    Here the probe actually doubles as a collector on the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere. What is this scheme called? Or what would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD; it's strictly for regression testing. Also obviously, it cannot be used for all cases: only the simpler test subjects can be analyzed that way; for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be accomplished manually by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm asking about scripting languages, not about Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is. By the way, I've discovered one test execution framework which allows simple test notations to be shortened to:

      test_me(); // 17

    While the result data is thus no longer coded in (it's a comment), that's still not a complete separation, and of course it would work only for scalar results.
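
    For what it's worth, recording the result on a first run and verifying against it afterwards is nowadays usually called snapshot testing, golden-master testing, or approval testing (characterization testing in legacy-code terminology). Below is a minimal sketch of such a probe in Python, assuming the recorded values live in a JSON file next to the tests and that each probe is given an explicit name (real frameworks usually derive the name from the call site):

      import json, os

      SNAPSHOT_FILE = "snapshots.json"   # assumed storage location for the example

      def probe(name, value):
          snapshots = {}
          if os.path.exists(SNAPSHOT_FILE):
              with open(SNAPSHOT_FILE) as f:
                  snapshots = json.load(f)
          if name not in snapshots:            # first run: collect the result
              snapshots[name] = value
              with open(SNAPSHOT_FILE, "w") as f:
                  json.dump(snapshots, f, indent=2)
          else:                                # later runs: verify against it
              assert snapshots[name] == value, (
                  f"{name}: expected {snapshots[name]!r}, got {value!r}")

      # Usage: the test names only the tested logic; the expected value lives
      # in snapshots.json rather than in the test code.
      # probe("test_me", test_me())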

    Read the article

  • libgdx intersection problem between rectangle and circle

    - by Chris
    My collision detection in libgdx is somehow buggy. player.png is 20*80px and ball.png 25*25px. Code:

      @Override
      public void create() {
          // ...
          batch = new SpriteBatch();
          playerTex = new Texture(Gdx.files.internal("data/player.png"));
          ballTex = new Texture(Gdx.files.internal("data/ball.png"));

          player = new Rectangle();
          player.width = 20;
          player.height = 80;
          player.x = Gdx.graphics.getWidth() - player.width - 10;
          player.y = 300;

          ball = new Circle();
          ball.x = Gdx.graphics.getWidth() / 2;
          ball.y = Gdx.graphics.getHeight() / 2;
          ball.radius = ballTex.getWidth() / 2;
      }

      @Override
      public void render() {
          Gdx.gl.glClearColor(1, 1, 1, 1);
          Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
          camera.update();

          // draw player, ball
          batch.setProjectionMatrix(camera.combined);
          batch.begin();
          batch.draw(ballTex, ball.x, ball.y);
          batch.draw(playerTex, player.x, player.y);
          batch.end();

          // update player position
          if (Gdx.input.isKeyPressed(Keys.DOWN)) player.y -= 250 * Gdx.graphics.getDeltaTime();
          if (Gdx.input.isKeyPressed(Keys.UP)) player.y += 250 * Gdx.graphics.getDeltaTime();
          if (Gdx.input.isKeyPressed(Keys.LEFT)) player.x -= 250 * Gdx.graphics.getDeltaTime();
          if (Gdx.input.isKeyPressed(Keys.RIGHT)) player.x += 250 * Gdx.graphics.getDeltaTime();

          // don't let the player leave the field
          if (player.y < 0) player.y = 0;
          if (player.y > 600 - 80) player.y = 600 - 80;

          // check collision
          if (Intersector.overlaps(ball, player)) Gdx.app.log("overlaps", "yes");
      }

    Read the article

  • ArchBeat Link-o-Rama Top 20 for April 1-9, 2012

    - by Bob Rhubart
    The top 20 most popular items shared via my social networks for the week of April 1 - 8, 2012.

      - Webcast: Oracle Maximum Availability Architecture Best Practices w/Tom Kyte - April 12
      - Oracle Cloud Conference: dates and locations worldwide
      - Bad Practice Use Case for LOV Performance Implementation in ADF BC | Oracle ACE Director Andresjus Baranovskis
      - How to create a Global Rule that stores a document's folder path in a custom metadata field | Nicolas Montoya
      - MySQL Cluster 7.2 GA Released
      - How to deal with transport level security policy with OSB | Jian Liang
      - Webcast Series: Data Warehousing Best Practices http://bit.ly/I0yUx1
      - Interactive Webcast and Live Chat: Oracle Enterprise Manager Ops Center 12c Launch - April 12
      - Is This How the Execs React to Your Recommendations? | Rick Ramsey
      - Unsolicited login with OAM 11g | Chris Johnson
      - Event: OTN Developer Day: MySQL - New York - May 2
      - OTN Member discounts for April: Save up to 40% on titles from Oracle Press, Pearson, O'Reilly, Apress, and more
      - Get Proactive with Fusion Middleware | Daniel Mortimer
      - How to use the Human WorkFlow Web Services | Oracle ACE Edwin Biemond
      - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH
      - IOUG Real World Performance Tour, w/Tom Kyte, Andrew Holdsworth, Graham Wood
      - WebLogic Server Performance and Tuning: Part I - Tuning JVM | Gokhan Gungor
      - Crawling a Content Folio | Kyle Hatlestad
      - The Java EE 6 Example - Galleria - Part 1 | Oracle ACE Director Markus Eisele
      - Reminder: JavaOne Call For Papers Closing April 9th, 11:59pm | Arun Gupta

    Thought for the Day: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." — Leslie Lamport

    Read the article

  • Keeping the Screen Aspect Ratio While Staying Centered

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* on how to make a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and displayed it on screen. The blueberry example image is perfectly round. When I ran it on the Google Nexus 7, however, the circle turned into a slightly oblong shape, as if it had been flattened a bit: the blueberry was almost, but not quite, perfectly round. Now, when I tried the suggested code for aspect ratio, the perfect circle was retained, but another problem occurred: I was expecting the view to be centered, but instead it is offset to the right, leaving half the screen black. Here is my code using the suggested screen aspect ratio approach:

      Class fields:

        // Ingredients Needed for Screen Aspect Ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH) / ((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;

      render():

        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);
        // Reseting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height);
        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

      show():

        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

    Is this code adequate for fixing the aspect-ratio proportion, or is it statically dependent on the actual device's width and height?

    *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317

    Read the article

  • Oracle SPARC SuperCluster and US DoD Security guidelines

    - by user12611852
    I've worked in the past to help our government customers understand how best to secure Solaris. For my customer base that means complying with Security Technical Implementation Guides (STIGs) from the Defense Information Systems Agency (DISA). I recently worked with a team to apply both the Solaris and Oracle 11gR2 database STIGs to a SPARC SuperCluster. The results have been published in an Oracle white paper. The SPARC SuperCluster is a highly available, high-performance platform that incorporates:

      - SPARC T4-4 servers
      - Exadata Storage Servers and software
      - ZFS Storage Appliance
      - InfiniBand interconnect
      - Flash cache
      - Oracle Solaris 11
      - Oracle VM for SPARC
      - Oracle Database 11gR2

    It is targeted towards large, mission-critical database, middleware and general-purpose workloads. Using the Oracle Solution Center we configured an SSC, applied DoD security guidance, and confirmed the functionality and performance of the system. The white paper reviews our findings and includes a number of security recommendations. In addition, customers can contact me for the itemized spreadsheets with our detailed STIG reports. Some notes:

      - There is no DISA STIG documentation for Solaris 11. Oracle is working to help DISA create one using their new process. As a result, our report follows the Solaris 10 STIG document and applies it to Solaris 11 where applicable.
      - In my conversations over the years with the DISA Field Security Office they have repeatedly told me, "The absence of a DISA written STIG should not prevent a product from being used. Customer may apply vendor or industry security recommendations to receive accreditation."

    Thanks to the core team: Kevin Rohan, Gary Jensen and Rich Qualls, as well as the staff of the Oracle Solution Center and Glenn Brunette, for their help in creating the document.

    Read the article

  • Formatting: Group Multiline Alignment - Added

    - by Petr
    One week ago I added two new properties for formatting PHP code in NetBeans 7.1. In the Alignment category there are new properties for Group Multiline Alignment - Assignment and Array Initializer. The Assignment property influences the position of the '=' character in a group of lines with assignments. In the screenshots, the Assignment property is off on the left side and on on the right side. As you can see, when the property is switched on, the assignment character '=' is placed after the longest identifier in the group. A group is defined as a number of lines that contain the same type of assignment; a group ends at an empty line, a line containing only a comment, a different kind of expression, or the end of a block. This formatting option works for variable assignments, field initializations and constants. The second new property is for Array Initializer. Both properties are switched off by default. If you play with them, please file any problems in our Bugzilla.

    Read the article

  • Platformer Enemy AI

    - by hayer
    I'm currently developing a platformer shooter. The game is multiplayer, and while my net code could use some real work I have put that off for now, so currently I'm trying to implement the AI. The game is pretty simple: players run around on a map filled with X zombies that try to eat their brains - classic and overused, I know. Weapons spawn at random intervals around the map. The problem is that when the zombies find their prey they have to follow it for a while, and running the AI navigation code seems to take forever. So here are the ideas I have come up with so far (see the sketch below):

      1. Have the AI update at different intervals, with a maximum of Y ms with no updates.
      2. Have the zombies assigned to groups. One is appointed the leader of the group, who finds the way to the player - the rest just follow the leader. If the leader dies, another zombie in the group is appointed president of the zombie swarm. If there are fewer than five zombies in a group, they try to meet up with other zombies (i.e. they are assigned to a different group and therefore a new leader).
      3. Multi-threading option one or two?

    For navigation I have some kind of navmesh (since the game is not tile-based) that tells the zombies where they can walk, etc. If anyone else has ideas on how to do navigation I would love some input. For LoS (zombie - player) I have split the map into grid cells. If the player's cell is connected to the zombie's cell (if I go with option two I would only need to check whether the leader zombie's cell is connected to the player's, i.e. fewer checks), and more than 250 ms have passed since the last check, do a raytrace. This is my first time programming AI, so input on any of this is appreciated.
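
    A rough Python-flavoured sketch of idea 1 plus the 250 ms line-of-sight throttle mentioned at the end; the engine bits (raytrace, navmesh, grid connectivity) are stubbed out as placeholders that a real game would replace with its own routines:

      import random

      UPDATE_INTERVAL_MS = 200   # the "maximum Y ms" between navigation updates per zombie
      LOS_INTERVAL_MS = 250      # minimum time between raytraces per zombie

      def raytrace_clear(zombie, player):        # placeholder for the real LoS raytrace
          return True

      def navigate_towards(zombie, player):      # placeholder for the real navmesh query
          pass

      def cells_connected(zombie, player):       # placeholder for the grid connectivity check
          return True

      class Zombie:
          def __init__(self):
              # Spread the first updates out so all zombies don't think on the same frame.
              self.next_nav_update = random.uniform(0, UPDATE_INTERVAL_MS)
              self.next_los_check = 0.0
              self.sees_player = False

          def update(self, now_ms, player):
              if now_ms >= self.next_los_check and cells_connected(self, player):
                  self.sees_player = raytrace_clear(self, player)
                  self.next_los_check = now_ms + LOS_INTERVAL_MS
              if self.sees_player and now_ms >= self.next_nav_update:
                  navigate_towards(self, player)
                  self.next_nav_update = now_ms + UPDATE_INTERVAL_MS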

    Read the article

  • Convenience of mySQL over xml

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as a list of specified characters, scenes and music. I also use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is loaded to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file (with their extensions omitted), zipped in a folder that gets encrypted. At first, using XML was working fine; however, as the skill list grows I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL might possibly be a better move. Can anyone inform me of the key differences and the pros and cons of each? I guess I'm looking for the way to store the data that is most convenient and easiest to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concatenate into a single field and parse later. Edit: If I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kyro or EclipseLink JAXB (MOXy)?

    Read the article

  • Application shortcut reappears on restart

    - by Nathan Friesen
    I have an application that I have built a .msi installer for through Microsoft Visual Studio 2010. I recently made some updates, including changing the version number, and rebuilt the installer with these updates. The installer includes shortcuts on both the desktop and in the Start menu. Running the installer appears to work fine, and both of these shortcuts work. After restarting my computer I've found that the shortcuts are changed to have a Target type of Application (Installs on first use) and the Start In: field is changed to a location that doesn't exist. Once this happens, every time you use that shortcut it tries to install the application again and fails. I have also changed the name of the shortcut that the installer creates. This appears to work, and the shortcut still works after a restart. After the restart, though, the shortcut with the old name that doesn't work also appears on the desktop and in the Start menu. Does anyone have any ideas what I may have set up wrong, or what I need to change to get the shortcuts to behave properly?

    Read the article

  • How can a large, Fortran-based number crunching codebase be modernized?

    - by Dave Mateer
    A friend in academia asked me for advice (I'm a C# business application developer). He has a legacy codebase which he wrote in Fortran in the medical imaging field. It does a huge amount of number crunching using vectors. He uses a cluster (30ish cores) and has now moved towards a single workstation with 500ish GPUs in it. The question is where to go next with the codebase so that:

      - other people can maintain it over the next 10-year cycle;
      - tweaking the software gets faster;
      - it can run on different infrastructures without recompiles.

    After some research from me (this is a super interesting area) some options are:

      - use Python and CUDA from Nvidia;
      - rewrite in a functional language, for example F# or Haskell;
      - go cloud-based and use something like Hadoop and Java;
      - learn C.

    What has been your experience with this? What should my friend be looking at to modernize his codebase?

    UPDATE: Thanks @Mark and everyone who has answered. The reason my friend is asking this question is that it's a perfect time in the project's lifecycle to do a review, and bringing research assistants up to speed in Fortran takes time (I like C#, especially the tooling, and can't imagine going back to older languages!!). I liked the suggestion of keeping the pure number crunching in Fortran but wrapping it in something newer - perhaps Python, as that seems to be gaining a stronghold in academia as a general-purpose programming language that is fairly easy to pick up (see the sketch below). See also Medical Imaging and a guy who has written a Fortran wrapper for CUDA: "Can I legally publish my Fortran 90 wrappers to Nvidia's CUFFT library (from the CUDA SDK)?"
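
    If the "keep the number crunching in Fortran, wrap it in Python" route is taken, one commonly used bridge is f2py, which ships with NumPy and compiles Fortran sources into importable Python extension modules. A minimal, hedged sketch follows; the file norms.f90 and the routine it is assumed to contain are hypothetical, and the module has to be built once (the shell command in the comment) before the Python side will run:

      # Assumption: a Fortran file norms.f90 defines a double precision
      # function vec_norm(x, n), where x is an intent(in) double precision
      # array of length n. Build it once into a Python extension module:
      #
      #     python -m numpy.f2py -c norms.f90 -m norms
      #
      # f2py makes the length argument optional (it defaults to len(x)).
      import numpy as np
      import norms  # the f2py-generated module (hypothetical name)

      x = np.random.rand(1_000_000)
      print(norms.vec_norm(x))  # the heavy lifting stays in compiled Fortran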

    Read the article

  • [Dear Recruiter] I developed in Mo'Fusion

    - by refuctored
    Foreword: Sometimes I really feel like technology recruiters have no experience or knowledge of the field they are recruiting for. A warning to those companies hiring technical recruiters: ensure that the technical recruiters you hire to fill a position are actually technical. Here's proof below, where I make up completely ridiculous technologies but still have interest from the recruiter for an interview.

    Letter to me:

      Hello - Your name came up as a possible match for a long term contract Cold Fusion Developer role I have in Bothell, WA. This role requires you to be onsite in Bothell, WA. This is a tough role to fill so I was hoping you might have someone you can recommend? Unfortunately no telecommute. Thank you! Sincerly, Mindy Recruiter

    My response:

      Mindy -- Wow, I'm super-excited that you took the time to contact me about this position! Let me tell you, you won't be disappointed with my skill set! Firstly, I've been developing in ColdFusion since 1993, before it was owned by Adobe and when it was operating under the code name "Hot-Jack". Recently I started developing under the Domain-View-Driven-Domain-Model (DVDDM), integrating client-side CF on Moobuntu. Not only do I have a boatload of ColdFusion EXP, I also have a ton of experience in the open source community's lesser-known derivative of CF, Mo'Fusion (MF). I've also invested thousands of hours of my time learning esoteric programming languages. Look forward to working with you! George

    And her response:

      Hi George – just left you a message. Give me a call at your convenience. The role does require someone to be onsite here.. are you able to relocate yourself? Mindy

    [Sigh]

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information and more is available in TFS. Although all of this traceability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play to knit this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:

      select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful

    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build is created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that enables the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, associate the Changesets with the Build, and change the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it not only makes it difficult for them to reject a Requirement when it passes all of the tests, but also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second scenario is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test and the test results of a Unit Test designed to prove the existence of a Bug and then guard against its recurrence.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only Affect Requirements and not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

      - Business Analysts manage Requirements and Change Requests
      - Programmers manage Tasks and check in against Change Requests and Bugs
      - Testers manage Bugs and Test Cases
      - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Form Validation Options

    The steps involved in transmitting form data from the client to the Web server (see the sketch below):

      1. The user loads the web form.
      2. The user enters data into the web form fields.
      3. The user clicks submit.
      4. On submit, the page validates the fields using JavaScript. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
      5. If the form passes the data validation process, the browser URL-encodes the values of every field and posts them to the server.
      6. The server reads the posted data from the request and then validates the data again, to ensure data consistency and to prevent any non-validated data (for example, because JavaScript was turned off in the client's browser) from being inserted into a database or passed on to other processes.
      7. If the data passes the second validation check, the server-side code continues with the requested processes.

    In my opinion, it is mandatory to validate data using both client-side and server-side validation, as a failover process. Client-side validation allows users to correct any errors before they are sent to the web server for processing, and this allows for an immediate response back to the user regarding data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server and will free up the server over time compared to doing only server-side validation. Server validation is the last line of defense, because you can check that the user's data is correct before it is used in a business process or stored in a database. Honestly, I cannot foresee a scenario where I would want to use only one form of validation over the other, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.
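
    As a toy illustration of the server-side half of this double validation, here is a framework-agnostic Python sketch; the field names and rules are invented for the example, and the matching client-side JavaScript check is omitted:

      import re

      EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

      def validate_registration(form):
          """Re-validate posted data even if client-side JavaScript already did."""
          errors = {}
          if not form.get("name", "").strip():
              errors["name"] = "Name is required."
          if not EMAIL_RE.match(form.get("email", "")):
              errors["email"] = "A valid e-mail address is required."
          return errors   # an empty dict means the data may proceed to business logic

      # Example: validate_registration({"name": "Ann", "email": "ann@example.com"}) -> {}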

    Read the article

  • Jagran Prakashan Increases Staff Productivity by 40%

    - by Michael Snow
    Jagran Prakashan Increases Staff Productivity by 40%, Launches New IT Projects up to 4x Faster, Enables Mobile Service, and Improves Business Agility

      Oracle Customer: JPL
      Location: Uttar Pradesh, India
      Industry: Media and Entertainment
      Employees: 10,000
      Annual Revenue: $100 to $500 Million

    Jagran Prakashan Ltd. (JPL) is one of India's premier media and communications groups, with interests spanning print, advertising, event management, and mobile services for weather, cricket scores, and educational activities. It is a major media enterprise, with 300 locations across 15 states. Its impressive stable of print publications includes Dainik Jagran, the world's most widely read daily newspaper––with a readership of over 55 million––the country's leading afternoon dailies, and a range of popular local, bilingual, and English language newspapers.

    JPL was using multiple systems to manage its business processes. Users were resistant to using multiple passwords for various applications, preferring to continue their less efficient, legacy work practices. In addition, there was no single repository for sharing documents across the organization, such as company announcements or project documents. The company relied on e-mail to disseminate up-to-date company information, often missing employees. It was also time-consuming and difficult for managers to track the status of ongoing assignments or projects because collaboration and document sharing was inefficient and ineffective. With diverse businesses and many geographic locations, JPL needed to implement a centralized and user-friendly enterprise portal to improve document sharing and collaboration and increase business agility. The company implemented Oracle WebCenter Portal to create a dynamic, secure, and intuitive self-service enterprise portal to improve the user experience and increase operating efficiency. It improved staff productivity by 40%, accelerated new IT projects by up to 4x, boosted staff morale, and increased business agility.

    Increases Staff Productivity by 40%, Launches New Products up to 2x Faster

    A word from JPL:

      "With Oracle WebCenter Portal, we gained a dynamic, secure, and intuitive self-service enterprise portal that provided an exceptional user experience and enabled us to engage employees in a collaborative environment. It increased IT staff productivity by 40%, delivered new projects up to 4x faster, and enabled mobile service to improve our business agility."
      Sarbani Bhatia, Vice President IT, Jagran Prakashan Ltd

    Before implementing Oracle WebCenter Portal, JPL stored project-critical information, such as page planning of daily newspaper editions and the launch of new editions or supplements, on individual laptops or in the e-mail system. Collaboration between colleagues was limited to physical meetings, telephone discussions, and e-mail. It was difficult to trace and recover important project documents when a staff member resigned, which represented a significant risk to business continuity. Employees were also averse to multiple passwords and resisted using the systems, affecting staff productivity. With Oracle WebCenter Portal, JPL created a dynamic, secure, and intuitive self-service enterprise portal with business activity streams. The portal allowed users to navigate, discover, and access information, such as advertising rates, requisition approvals, ad-hoc queries, and employee surveys, from a single entry point with a single password.
    Managers can also upload important documents, such as new pricing for advertisers or newspaper distributors, and share them through the information and instruction section in the portal. In addition, managers can now easily track and review timelines for projects online rather than gathering information from meetings and e-mails. The company gained the ability to centrally manage information, ensured business continuity, and improved staff productivity by 40%.

    "In the media industry, news has a very short shelf life, so speed is crucial. Information delayed is like information lost," said Sarbani Bhatia, vice president IT, Jagran Prakashan Ltd. "Thanks to Oracle WebCenter Portal's contextual collaboration tools, we can provide and share feedback for new project launches, such as career or education supplements, up to 2x faster through discussion forums or knowledge groups. Tasks that previously required four months, we now complete in one month."

    In addition, the company can broadcast announcements, flash employee birthdays, and promote important events through the message section on the webpage, instead of using the e-mail system. The company can also conduct opinion polls to gauge employee response to organizational issues and improve management decision-making.

    "With over 10,000 employees across 300 locations, it is critical for management to hear the voice of employees and develop a cohesive organizational culture. Oracle WebCenter Portal enables employees to engage with business processes and systems in a collaborative environment, providing users with an exceptional experience," Bhatia said.

    Enables Mobility Access and Increases Business Agility

    Newspaper advertisements generate the majority of JPL's revenue. With most sales staff on the move, the company needed to ensure timely approval of print advertisement discounts for specific clients and meet tight publication deadlines. By integrating Oracle WebCenter Portal seamlessly with its enterprise resource planning (ERP) system and other applications, such as the organizational mass mailing system, business intelligence, and the management information system, JPL embedded its approval workflow processes into the enterprise portal and provided users with an integrated and intuitive interface. About 30% of JPL's sales staff members now have tablets and receive advertising discount approval from managers while in the field, and no longer need to return to the office, which has significantly improved efficiency and increased business agility.

    "Application mobility was critical for sales representatives in the field to meet stringent auditing requirements for online accountability, particularly for our newspaper advertising business. Staff member satisfaction has improved significantly now that the sales team can use tablets to access the portal––a capability we will extend to smart phones in the second stage of the implementation," Bhatia said.

    Accelerates Application Development by up to 4x and Cuts Costs by up to 60%

    With Oracle WebCenter Portal, users can easily create, modify, and upload information to their personalized webpages without IT assistance. By seamlessly integrating Oracle WebCenter Portal with the payroll database, managers can decide which members of their team can access the page and with whom they will share information, a decision based on role or geographical location.
    A sales representative selling advertising space for a local language daily newspaper, for example, can upload an updated advertising rate relevant only to that particular publication. Users can also easily adapt to the new platform, thanks to its intuitive design and look, reducing the need for training and lowering resistance to using the system. Using Oracle WebCenter Portal's out-of-the-box reusable components, such as portal pages and templates, provided JPL's developers with a comprehensive and flexible user experience platform and increased the speed of application development. In less than five months, JPL developed more than 55 workflows. The IT team accelerated deployment of new applications by up to 4x, as they do not need to install them on individual machines now that they have a web-based environment.

    "Previously, we would have spent a whole day deploying a new application for each department or location. With a browser-based environment, we have cut costs by up to 60% by reducing deployment time to zero, because our IT team can roll out a new application from a single point, thanks to Oracle WebCenter Portal," Bhatia said.

    Challenges

      - Provide a dynamic, secure, and intuitive self-service enterprise portal to improve staff productivity and ensure business continuity
      - Enable seamless integration with multiple enterprise applications to improve workflow efficiency—including approval of print advertisement discounts—and increase business agility
      - Improve engagement with employees and enable collaboration to enhance management decision-making
      - Accelerate time-to-market for new services, such as new advertising programs

    Solutions

    Oracle Product and Services: Oracle WebCenter Portal 11g

      - Increased staff productivity by 40% and enhanced user satisfaction by enabling employees to easily navigate, discover, and access information from a single, self-service enterprise portal without IT assistance
      - Launched new products, such as career or education supplements, up to 2x faster by enabling peer collaboration and incorporating feedback generated through discussion forums, thanks to Oracle WebCenter Portal's out-of-the-box collaboration tools
      - Accelerated application development up to 4x by enabling developers to optimize reusable components for managing and deploying new applications in a browser-based environment rather than spending one day to install applications for each department, cutting costs by up to 60%
      - Ensured business continuity by enabling managers to easily track and review project timelines online rather than storing important documents on individual laptops or relying on the e-mail system
      - Increased business agility and operational efficiency by seamlessly integrating with the in-house ERP system and embedding business processes into a single portal
      - Boosted company revenue by enabling sales team members to submit print-advertising discount requests through mobile devices instead of waiting to return to the office, ensuring timely approval from managers to meet tight publication deadlines
      - Improved management decision-making by enabling employees to easily share and access feedback through opinion polls or forums, boosting staff morale
      - Introduced the single sign-on capability and enhanced security by enabling managers to decide the access level for staff members based on role or geographical location
      - Reduced the need for staff training and minimized user resistance to the systems by providing a dynamic and intuitive user experience

    Why Oracle
    JPL did not consider other products because the company was already using Oracle Database, Enterprise Edition with Real Application Clusters, and had a positive experience with Oracle. JPL chose Oracle WebCenter Portal to ensure no compatibility issues for integration with its existing Oracle products and to take advantage of the experience and support of a reputable vendor to ensure business continuity. "We chose Oracle because we knew we could rely on its support and experience. In addition, Oracle WebCenter Portal's speed, agility, and mobile access features were a perfect fit for our business requirements," Bhatia said.

    Implementation Process

    JPL launched the enterprise portal to 500 users in the first phase of the project, and plans to extend this to 2,000 users when the portal is fully launched. Oracle partner PricewaterhouseCoopers used Oracle Application Development Framework for the initial set-up and user training, and to develop and design sample workflows. JPL's internal IT staff then took charge of the implementation, bringing it to completion on budget.

    Partner

    Oracle Partner: PricewaterhouseCoopers (India)

    Read the article

  • "Don't do programming after a few years of starting career" Is this a fair advice?

    - by Muhammad Yasir
    I am a moderately experienced developer with around 5 years of experience in PHP, somewhat less in Java and C#, and I am trying to learn some Python nowadays. Since the start of my career as a programmer I have been told every now and then by fellow programmers that programming is suitable for the first few years of a career (most of them take it as 5 years) and that one must change direction after that. The reason they give is the headaches and pressure associated with programming. They also say that programmers are less social, don't usually like to give time to their families, etc., and especially: "Oh come on, you cannot do programming your entire life!" I am somewhat confused here and need to ask others about it. If I leave programming, then what do I do?! I guess teaching may be a good option in this case, but it would perhaps require first earning a PhD. It may also be noteworthy that in my country (Pakistan) the life of a programmer is not very good, in that they normally must put in 2-3 extra hours in the office to accomplish urgent programming tasks. I have a sense that the situation is somewhat similar in other countries and regions as well. So the question is: do you think it is fair advice to change career from programming to something else after spending 5 years in this field? Thanks for sharing your thoughts!

    Read the article

  • Coarse Collision Detection in highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with A LOT of dynamic objects that are all moving (there is pretty much no static environment). I have the collision detection and resolution working just fine, but I am now trying to optimize the coarse collision detection (which is currently an O(N^2) check of all pairs). I thought about multiple options: a bounding volume hierarchy, a binary space partitioning (BSP) tree, an octree, or a grid. I do, however, need some help deciding what's best for my situation. A grid seems unfeasible simply due to the space requirements and cache coherence problems. Since everything is so dynamic, however, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I have never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated. To give some more background: you're flying a spaceship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets. EDIT: I came across the "Sweep and Prune" algorithm, which seems like the right thing for my purposes (see the sketch below). It appears to be the right mixture of fast building of the data structures involved and detailed-enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf If anyone has any suggestions on whether or not I'm going in the right direction, please let me know.
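
    For reference, a toy single-axis sweep-and-prune pass looks roughly like this in Python (illustrative only; a real implementation keeps the sorted list and the active intervals incrementally between frames, exploiting the frame-to-frame coherence of the motion, and combines the axes before a narrow-phase test):

      def sweep_and_prune(objects):
          """objects: list of (id, min_x, max_x). Returns candidate id pairs
          whose x-intervals overlap; candidates still need a narrow-phase test."""
          candidates = []
          active = []                                        # intervals that may still overlap
          for obj in sorted(objects, key=lambda o: o[1]):    # sweep by increasing min_x
              obj_id, min_x, max_x = obj
              active = [a for a in active if a[2] >= min_x]  # drop intervals that have ended
              for a in active:
                  candidates.append((a[0], obj_id))
              active.append(obj)
          return candidates

      print(sweep_and_prune([("ship", 0, 2), ("rock1", 1, 3), ("rock2", 5, 6)]))
      # -> [('ship', 'rock1')]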

    Read the article

  • DI and hypothetical readonly setters in C#

    - by Luis Ferrao
    Sometimes I would like to declare a property like this:

      public string Name { get; readonly set; }

    I am wondering if anyone sees a reason why such a syntax shouldn't exist. I believe that because it is a subset of "get; private set;", it could only make code more robust. My feeling is that such setters would be extremely DI friendly, but of course I'm more interested in hearing your opinions than my own, so what do you think? I am aware of 'public readonly' fields, but those are not interface friendly so I don't even consider them. That said, I don't mind if you bring them up in the discussion.

    Edit: I realize, reading the comments, that perhaps my idea is a little confusing. The ultimate purpose of this new syntax would be to have an automatic property syntax that specifies that the backing private field should be readonly. Basically, declaring a property using my hypothetical syntax

      public string Name { get; readonly set; }

    would be interpreted by C# as:

      private readonly string name;

      public string Name
      {
          get { return this.name; }
      }

    And the reason I say this would be DI friendly is that when we rely heavily on constructor injection, I believe it is good practice to declare our constructor-injected fields as readonly.

    Read the article

  • How can I make an MMORPG appeal to casual players?

    - by Philipp
    I believe that there is a significant market of players who would enjoy the exploration and interaction aspects of MMORPGs, but simply don't have the time for the endless grinding marathons which are part of the average MMORPG. MMORPGs are all about interaction between players. But when different players have different amounts of time to invest in a game, those with less time to spend will soon lag behind their power-leveling friends and won't be able to interact with them anymore. One way to solve this would be to limit the progress a player can achieve per day, so that it simply doesn't make sense to play more than one or two hours a day. But even the busiest casual players sometimes like to spend a whole Sunday afternoon playing a video game. Just stopping them after two hours would be really frustrating. It also creates pressure to use the daily progress limit every day, because otherwise the player would feel like they were wasting something. This pressure would be detrimental to casual gamers. What else could be done to level the playing field between those players who play 40+ hours a week and those who can't play more than 10?

    Read the article

  • Subsumption architecture vs. perceptual control theory

    - by Yasir G.
    I'm new to the AI field and I have to research and compare 2 different architectures for a thesis I'm writing. Before you scream (homework thread): I've been reading on these 2 topics only to find that I'm confusing myself more, so let me first state briefly what I know so far. Subsumption is based on the idea that the targets of a system differ in sophistication, which requires them to be added as layers; each layer can suppress (modify) the command of the layers below it, and there are inhibitors to stop signals from executing, let's say. PCT stresses that there are nodes to handle environmental changes (negative feedback), so the inputs coming from an environment go through a comparator node and then an action is generated by that node. HPCT (Hierarchical PCT) is based on nesting these cycles inside each other, so a small cycle to avoid crashing would be nested in a more sophisticated cycle that targets a certain location, for example. My questions: am I getting this right? Am I missing any critical understanding of these 2 models? Also, any idea where I can find simplified explanations for each theory (so far I've been struggling to understand the papers from Google Scholar :< )? /Y
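
    To make the layering/suppression idea concrete, here is a toy Python sketch of a subsumption-style controller as described above; the behaviours and the sensor dictionary are invented purely for illustration, and real subsumption networks also use timed suppression and inhibition on individual wires rather than a simple priority scan:

      class AvoidObstacle:
          def command(self, sensors):
              if sensors["obstacle_close"]:
                  return "turn_away"
              return None                      # nothing to say, defer to lower layers

      class GoToTarget:
          def command(self, sensors):
              return "move_toward_target"

      def subsumption_step(sensors, layers):
          """layers are ordered from highest priority to lowest; the first layer
          that produces a command suppresses everything below it."""
          for layer in layers:
              cmd = layer.command(sensors)
              if cmd is not None:
                  return cmd
          return "idle"

      print(subsumption_step({"obstacle_close": True}, [AvoidObstacle(), GoToTarget()]))
      # -> turn_away (the avoidance layer suppresses the go-to-target layer)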

    Read the article
