Search Results

Search found 2017 results on 81 pages for 'stage'.


  • One Year Oracle SocialChat - The Movie

    - by mprove
    You’ve just watched – hopefully – my first short movie. Thank you! Here is a bit of the backstage story. About 6 weeks ago colleagues from SNBC (Social Network and Business Collaboration) announced a Social Use Case Competition. Entrants were expected to submit a video of 2 to 5 minutes’ duration on the Social Enterprise (our internal phrase for Enterprise 2.0). Hmm – I had a few vague ideas, but no script – no actors – no experience in film making. Really the best conditions to try something! I chose our weekly SocialChats as my main topic. But if you don’t do Danish Dogma cinema, you still need a script. Hence I played around with the SocialChat’s archive, and all of a sudden a script and even the actors appeared in front of me. The words that you have just seen are weekly topics. Slightly abridged and rearranged to form a story. Exciting, next phase. How to get it on digital celluloid? I have to confess I am still impressed by epic. (Keep in mind, epic was done in 2004.) And my actors – words – call for a typographic style already. The main part was done over a weekend with Apple Keynote. And I even found a wonderful matching soundtrack among my albums: Didge Goes World by Delago. I picked parts of Second Day and Seventh Day. Literally, the rhythm was set, and I "just" had to complete the movie. Tools used – apart from trial and error: Keynote, Pixelmator, GarageBand, iMovie. Finally I want to mention that I am extremely thankful to BSC Music for granting permission to use the tracks for this short film! Without this sound it would have been just an ordinary slide show. – Internal note: The next SocialChat is on Death by PowerPoint vs. Presentation Zen. CU this Friday 3pm Greenwich / 7am Pacific.

    Read the article

  • MatheMagics - Guess My Age Method 1

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. MatheMagic – Guess My Age – Method 1. The Mathemagician stands on the stage and asks an adult to do the following: · Do the next few steps on your calculator, or the calculator in your phone, or even on a piece of paper. · Do it silently! Don’t tell me the results until I ask for them directly. · Compute a single digit multiple of 9 – any one of 9, 18, 27, … all the way to 81, will do. · Now multiply your age by 10. · Subtract the 9 multiple from this number. · Tell me the result. Notice that I don’t know which multiple of 9 you subtracted from 10 times your age. I will nonetheless immediately tell you what your age is. How do I do this? Let’s do the algebra. 10X – 9Y = 10X – 10Y + Y = 10(X – Y) + Y. Now remember, you asked an adult, so his/her age is a two-digit number (maybe even 3 digits), thus reducing it by the single-digit multiple of nine still leaves a positive number – the lowest it can be is 100 – 81, which yields 19. Now make two numbers out of the result: the last digit, and the number before it. This number is X – Y, or the age minus the single digit you selected. The last digit is this very single digit. This is always so regardless of the digit you selected. So… add this digit to the other number and you get back the age! Q.E.D. Example: I am 76 years old and here is what happens when I do the steps: 76 x 10 = 760; 760 – 18 = 742, made of 74 and 2. My age is 74 + 2. 760 – 81 = 679, made of 67 and 9. My age is 67 + 9. A note to the socially aware mathemagician – it is safer to do it with a man. The chances of a veracious answer are much, much higher! The trick may be accomplished on any 2 or 3 digit number, not just one’s age, but if you want to know your date’s age, it’s a good way to elicit it. That’s All Folks. PS: for more ageless “Age” mathemagics go to www.mgsltns.com/games.htm and also here: http://geekswithblogs.net/PointsToShare/archive/2011/11/15/mathemagics---guess-my-age---method-2.aspx
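    A quick way to convince yourself the algebra holds in every case is to brute-force it. The short Python sketch below is not from the article, just an illustration; it checks the identity 10X – 9Y = 10(X – Y) + Y for all two- and three-digit ages and every single-digit multiple of 9.

        # Verify the mathemagic: for any age X (10..999) and any single digit Y (1..9),
        # 10*X - 9*Y always splits into a "front" number (X - Y) and a last digit Y,
        # and adding the two parts back together recovers the age.
        for age in range(10, 1000):
            for y in range(1, 10):
                result = age * 10 - 9 * y
                front, last_digit = divmod(result, 10)
                assert last_digit == y              # the last digit is the chosen digit
                assert front + last_digit == age    # front + last digit gives back the age
        print("Trick verified for all 2- and 3-digit ages")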

    Read the article

  • Inline template efficiency

    - by Darryl Gove
    I like inline templates, and use them quite extensively. Whenever I write code with them I'm always careful to check the disassembly to see that the resulting output is efficient. Here's a potential cause of inefficiency. Suppose we want to use the mis-named Leading Zero Detect (LZD) instruction on T4 (this instruction does a count of the number of leading zero bits in an integer register - so it should really be called leading zero count). So we put together an inline template called lzd.il looking like: .inline lzd lzd %o0,%o0 .end And we throw together some code that uses it: int lzd(int); int a; int c=0; int main() { for(a=0; a<1000; a++) { c=lzd(c); } return 0; } We compile the code with some amount of optimisation, and look at the resulting code: $ cc -O -xtarget=T4 -S lzd.c lzd.il $ more lzd.s .L77000018: /* 0x001c 11 */ lzd %o0,%o0 /* 0x0020 9 */ ld [%i1],%i3 /* 0x0024 11 */ st %o0,[%i2] /* 0x0028 9 */ add %i3,1,%i0 /* 0x002c */ cmp %i0,999 /* 0x0030 */ ble,pt %icc,.L77000018 /* 0x0034 */ st %i0,[%i1] What is surprising is that we're seeing a number of loads and stores in the code. Everything could be held in registers, so why is this happening? The problem is that the code is only inlined at the code generation stage - when the actual instructions are generated. Earlier compiler phases see a function call. The called functions can do all kinds of nastiness to global variables (like 'a' in this code) so we need to load them from memory after the function call, and store them to memory before the function call. Fortunately we can use a #pragma directive to tell the compiler that the routine lzd() has no side effects - meaning that it does not read or write to memory. The directive to do that is #pragma no_side_effect(<routine name), and it needs to be placed after the declaration of the function. The new code looks like: int lzd(int); #pragma no_side_effect(lzd) int a; int c=0; int main() { for(a=0; a<1000; a++) { c=lzd(c); } return 0; } Now the loop looks much neater: /* 0x0014 10 */ add %i1,1,%i1 ! 11 ! { ! 12 ! c=lzd(c); /* 0x0018 12 */ lzd %o0,%o0 /* 0x001c 10 */ cmp %i1,999 /* 0x0020 */ ble,pt %icc,.L77000018 /* 0x0024 */ nop

    Read the article

  • Dadaism and Agility

    - by alexhildyard
    We all have our little bugbears, and something that has given me particular pause over the years is the place of Agility in the software development life cycle. While I have seen it used successfully on both small and Enterprise-level projects, I have also seen many instances in which long-standing technical debt has also originated under its watch. Ironically the problem in such cases seems to me not that the practitioners in question have failed to follow due process (Test, Develop, Refactor -- a common "what" of Agile), but basically that they have missed the point (the "why" of Agile). It's probably a sign of my age that I'm much more interested in the "why" than the "what", since I feel that the latter falls out naturally from the former, but that this is not a reciprocal relationship. Consider Dadaism, precursor to the Surrealist movement in the early part of the twentieth century. Anyone could stand up and proclaim he or she was Dada; anyone could write cut-ups, or pull words out of a hat, or produce gibberish on duelling typewriters under the inspiration of Dada. And all that took place at such performances was a manifestation of Dada, and all the artefacts that resulted were also Dada. Hence one commentator's enigmatic observation that 'when one speaks of Dada, then one speaks of Dada. But when one does not speak of Dada, one still speaks of Dada.' What is Dada? Literally, Dada is what you say it is. But that's also missing the point. Dada is about erecting a framework within which utterances like this are valid; Dada is about preparing a stage for itself. Dadaism exemplifies the purity of a process-driven ideology -- in fact an ideology that is almost pure process, with nothing extraneous in the way of formal method, and while perhaps Agile delivery should not embrace the liberties of Dadaism too literally, some of the similarities nevertheless are salutary. Agile -- like Dada -- is an attitude; it is about *being* agile; it is not really about doing a specific set of things that are somehow *part* of being Agile. It is an abstract base rather than an implementation, a characteristic rather than a factor. It is the pragmatic response to the need for change in the face of partial information, ephemeral requirements and a healthy dose of systematic uncertainty. In practice this will usually mean repeatedly making the smallest useful changes to a system, recognising that systems evolve, and that all change carries risk. It will usually mean that instead of investing effort in future-proofing a system against a known technology roadmap, one instead invests one's energies in the daily repetition and incremental development of processes best designed to accommodate change quickly. But though it may mean these things in practice, it isn't actually *about* either of these things; it's about the mindset, the attitude that conceives of such responses as sensible solutions given the larger and ultimately unclassifiable thing that constitutes the development lifecycle of a specific project.

    Read the article

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so while also managing to keep my nose poked into other projects (in a read-only learning capacity of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them, as it had obviously worked on one and so was easily moved to a new project (probably by the same developer who originally wrote it). I mentioned this fact in one of our programmer meetings we have occasionally and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into this possibility. However, I've come across a stumbling block pretty early on. Our team primarily focuses on MVC at the moment and we have projects mainly in 2.0 but are starting to branch to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods. Initially when creating this DLL I included some shared classes that could be used across any project type (Web, Client etc) but then I started looking at adding some shared modules that would be useful in our MVC applications only. However this meant I had to include a reference to some Microsoft Web DLL's in order to leverage some of the classes I was creating (at this stage MVC 2.0). Now my issue is that we have a shared DLL that has references to web specific libraries that could also possibly be used in a client application. Not only that, our DLL initially referenced MVC 2.0 and we will eventually move onto MVC 3.0 for all projects. But a lot of the classes in this library I expect to still be relevant to MVC 3, etc. Our code within this DLL is separated into its own namespaces such as: CompanyDLL.Primitives CompanyDLL.Web.Mvc CompanyDLL.Helpers etc etc So, my questions are: Is it OK to do a shared library like this, or if we have web specific features in it should we create a separate web DLL only targeted at a specific framework or MVC version? If it's OK, what kind of issues might we face when using the library that references MVC 2 in an MVC 3 project, for example? I would be thinking that we might run into some sort of compatibility issue, or even issues where the developers using the library don't realize they need MVC 2.0 libraries. They might only want to use some of the generic classes, etc. The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. But the number of times I've seen copied classes and methods across projects because they are proven tested code is a bit unnerving to be perfectly honest!

    Read the article

  • How do I maintain a really poorly written code base?

    - by onlineapplab.com
    Recently I got hired to work on an existing web application. Because of an NDA I'm not at liberty to disclose any details, but this application is working online in a sort of beta testing stage before official launch. We have a few hundred users right now but this number is supposed to significantly increase after official launch. The application is written in PHP (but it is irrelevant to my question) and is running on a standalone dual-Xeon server with severe performance problems. I have seen a lot of bad PHP code but this really sets new standards, especially knowing how much time and money was invested in developing it. It is as badly coded as possible: PHP, HTML and SQL are mixed together and code is repeated wherever it is necessary (especially SQL queries). There are no functions used, not to mention any OOP. There are four versions of the app (desktop, iPhone, Android + other mobile); each version has pretty much the same functionality but was created by copying the whole code base, so now there are some differences between each version and it is really hard to maintain. The database is really badly designed, which is causing severe performance problems. Also, for fixing some errors in the PHP code, there are a lot of database triggers used which are updating data on SELECT and on INSERT, so any testing is a nightmare. Basically, any sin of bad programming you can imagine is there; for example it is not only possible to use SQL injections in literally every place, but you can log into the app if you use a login which doesn't exist and an empty password. The team which created this app is not working on it any more and there is an outsourced team which suggested that there are some problems but was never willing to deal with the elephant in the room, partially because they've got a very comfortable contract and partially due to lack of skills (just my opinion). My job was supposed to be fixing some performance problems and extending existing functionality but the first thing I was asked to do was a review of the existing code base. I've made my review and it was quite a shock for the management, but my conclusions were after some time finally confirmed by other programmers. Management made it clear that it is not possible to start rewriting this app from scratch (which in my opinion should be done). We have to maintain its operable state and at the same time fix performance errors and extend the functionality. My question is, as I don't want just to patch the existing code, how to transform this into a properly written app while keeping the existing code working at the same time? My plan is: Unify the four existing versions into a common code base (fixing only the most obvious errors). Redesign the db and use triggers to populate it with data (so data will be maintained in two formats at the same time). All new functionality will be written as a separate project. Step by step, transfer existing functionality into the new project. After some time everything will be in the new project. Some explanation about #2: right now it is practically impossible to make any updates in the existing db; any change requires reviewing the whole code and making changes in many places. Is such a plan feasible at all? Another solution is to walk away and leave the headache to someone else.

    Read the article

  • Avoiding the Black Hole of Leads

    - by Charles Knapp
    Sales says, "Marketing doesn’t deliver enough qualified leads. So, we generate 90% of our own leads." Meanwhile, Marketing says, "We generate most of the leads. But, Sales doesn’t contact them quickly enough, while the lead is still interested." According to Sirius Decisions: Up to 90% of leads never make it to closure Sales works on only 11% of the leads supplied by Marketing Only 18% of the leads Sales accepts convert to opportunities Yet, 45% of prospects typically buy a product from someone within 12 months The root cause of these commonplace complaints is a disconnect between the funnels of marketing and sales. Unfortunately, we often see companies with an assortment of poorly integrated marketing tools. It takes too long and too many people to move the data around, scrub it, upload it from one system to another, and get it routed to the right sales teams. As a result, leads fall through the cracks, contextual information is lost, and by the time sales actually contacts a customer it may be too late. Sales automation alone is not enough. Marketing automation (including social) is not enough. Sales and Marketing must work together. It’s time to connect the silos of marketing and sales pipelines and analytics. It’s time for integrated Sales and Marketing automation. Integrated pipelines improve lead quality and timeliness. Marketing systems can track a rich set of contextual information about a prospect–self-disclosed information about interests, content viewed, and so on. This insight can equip the sales rep with rich information to make a face-to-face conversation more relevant and more likely to convert to the next stage in the sales process. Integrated lead to revenue (LTR) management provides end-to-end visibility, enabling the company to measure what is working. Marketing can measure its impact on revenue and other business outcomes, and sales can harness and redirect marketing investments to areas where they most help achieve sales objectives. It’s a win-win play. Marketing delivers more leads that are qualified, cuts cost per lead, and demonstrates a strong Return on Marketing Investment (ROMI). Sales spends more time with warm leads and less time on cold calls, achieves higher close rates, and delivers more revenue. Learn more by attending our Integrated Sales and Marketing session at the upcoming CloudWorld conferences. Or, visit our Sales and Marketing Cloud Service site for videos and other learning resources.

    Read the article

  • Analysing SQLBits Feedback

    - by jamiet
    Earlier this week I received all the feedback that people offered on my session at SQLBits 7 in York – “SSIS Dataflow Performance Tuning” (the video is available online if you wish to see it). As you may have gathered from previous posts on this blog and my less-SQLy-focused Wordpress blog I am a big fan of collecting and tracking both personal and public data and session feedback lends itself very well to tracking because it is quantitative rather than qualitative; by that I mean attendees are invited to provide marks out of ten rather than (or, in the case of SQLBits, as well as) written comments. The SQLBits feedback is also useful because they use a consistent format – the same questions are asked each time – this means it is particularly easy to to track whether the scores that people give are trending up or down. I suspect that somewhere the SQLBits organisers have a big Analysis Services cube (ok, perhaps its an Excel pivot table) that allows them to analyse these scores per conference, speaker, track etc.… and there’s no reason that we as session speakers cannot do the same thing. To that end I have started to store my feedback in an Excel spreadsheet of my own which in the interests of transparency is available for public viewing (only a web browser required) on SkyDrive at http://cid-550f681dad532637.office.live.com/view.aspx/Public/Misc/Personal%20SQLBits%20Session%20Feedback.xlsx. I have used a pivot table to aggregate all that feedback and here is a screenshot: I am hereby making a public plea to the SQLBits organisers (on the off-chance that they are reading) to please continue to keep the feedback format consistent in the future and I encourage them to publish all of the feedback in an anonymised form. I would also encourage anyone doing conference speaking to track their conference feedback in the same way that I am doing so that you get an insight into whether or not you are improving over time. It is not difficult to setup and maintaining it as you do more sessions takes very little effort. Storing feedback data like this leads me to wider thoughts about well-known conventions and data format standardisation. Let’s imagine a utopia where there were a standard set of questions for capturing session feedback that were leveraged at every conference regardless of subject matter, location or culture; that would give rise to immense cross-conference and cross-discipline analysis – the data analyst in me goes giddy at the thought of it. It is scenarios like this that drive my interest both in data formats such as iCalendar, microformats and RDF, and in emerging movements such as the semantic web and linked data, all things which I have written about in the past. I don’t know whether we will ever reach the stage where every piece of data has structured, descriptive metadata associated with it but I live in hope. @Jamiet
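    For anyone who would rather script the aggregation than maintain a pivot table by hand, a throwaway sketch like the one below produces the same kind of per-conference averages. It is purely an illustration and assumes a hypothetical feedback.csv with conference, question and score columns, not the actual SQLBits export format.

        import csv
        from collections import defaultdict

        # Assumed layout: conference,question,score   e.g.  "SQLBits 7,Speaker knowledge,9"
        totals = defaultdict(lambda: [0, 0])   # (conference, question) -> [sum, count]
        with open("feedback.csv", newline="") as f:
            for row in csv.DictReader(f):
                key = (row["conference"], row["question"])
                totals[key][0] += int(row["score"])
                totals[key][1] += 1

        for (conference, question), (total, count) in sorted(totals.items()):
            print(f"{conference} | {question}: {total / count:.2f} (n={count})")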

    Read the article

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to eventually succumb. I was also a little surprised at the failure of various tools at removing it. The virus would redirect the browser to websites including ‘licosearch’, ‘hugosearch’ and ‘facebook’, and the disk would be thrashing away infecting dlls in some way. I had the full up-to-date version of McAfee installed. This identified that there was an issue in some dlls on the system and was able to ‘fix’ them. But they kept getting re-infected. So I installed Microsoft Security Essentials and this too was able to identify and ‘fix’ the infected dlls. The system scans take forever and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos to no avail. Eventually I thought I’d investigate myself. It turned out that on reboot, the virus would start 3 instances of Firefox.exe which I’m guessing would do bad things including infecting as many dlls on the system as possible. I removed Firefox and the virus cleverly then launched 3 instances of Chrome! So I uninstalled Chrome and yes, it then started to launch 3 instances of iexplore.exe. If I’m honest, by this stage I was just seeing if it would be able to use any of the browsers! As it was starting these on reboot, I looked in my User Startup folder and there was a <randomly named>.exe and several log files. I deleted these and rebooted. When I looked they had been recreated. So I then looked in the registry Run and RunOnce entries: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough there were entries to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted this and rebooted and it was fixed. I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe. So I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon and this entry had also been changed. Finally I ran a full system scan to clean up any infected dlls. I hope it’s gone for good!
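    If you ever need to audit those start-up locations yourself, a small read-only sketch like the one below (Python on Windows, using the standard winreg module and the registry paths mentioned above) will list what is registered to run at logon. It only prints; it changes nothing, and it is an illustration rather than a removal tool.

        import winreg

        RUN_KEYS = [
            r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run",
            r"SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce",
        ]

        for path in RUN_KEYS:
            print(f"HKLM\\{path}")
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
                i = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, i)
                        print(f"  {name} = {value}")
                        i += 1
                    except OSError:      # no more values under this key
                        break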

    Read the article

  • Broken package manager due to incorrect Banshee package

    - by user54974
    Sup, so, I'm not familiar with Linux at all so help is much appreciated. I've been trying to boot my PC up from a live CD unsuccessfully. I get to the stage at which there are the options to test without installing, install, and so on, where I select 'Install Ubuntu.' Here it relays through some fast DOS commands until it reaches 'end trace' and then, eventually, 'Killed.' I have already got a functional 11.10 version installed, could this be a problem? The reason I am attempting a reinstall is because the package system is damaged inside 11.10, a problem I can't seem to solve. If I try to install any new software from within the software centre it tells me that two banshee extensions must be removed. I try to remove these from inside the terminal, using apt-get remove, which results in: You might want to run apt-get -f install to correct these: The following packages have unmet dependencies. banshee-extension-ubuntuonemusicstore : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). The software centre suggests that I disable all third party repositories and run apt-get install -f. I have done so, but the package system remains damaged and apt-get install -f attempts to install banshee 2.2.1 but returns: Errors were encountered while processing: /var/cache/apt/archives/banshee_2.2.1-1ubuntu3_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I have also tried apt-get update (runs fine) and apt-get upgrade. The upgrade command apt-get upgrade results in: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. banshee-extension-soundmenu : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is installed banshee-extension-ubuntuonemusicstore : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is installed E: Unmet dependencies. Try using -f. I seem to be going round and round in circles here! If only I could reinstall successfully. Only proposed updates (oneiric proposed) is not enabled.

    Read the article

  • Investment scheme for a PC game project

    - by Alex Kamen
    Good day everyone, I am working on a PC game project that has 3 phases planned: micro, macro and mmo versions [if confused, see a brief description at the bottom]. I have found a potential investor for the micro version of the game, but naturally, he requested a detailed plan of how the game will pay back. And the problem is that the micro version itself is not supposed to be monetized much, other than some ads and limited in-game currency utilization. The idea is that with this combat demo already at hand, it should be possible to get a large enough investment (millions of dollars) and use it to pay back the initial small one (thousands of dollars) and take the project into the macro phase, which will really make profit. This way, everybody is going to win, provided that I can deliver the end-product. Yet while I am confident that both the conception of the macro version and the real game-play of the micro version are going to be appealing, I don’t know how to obtain any guarantee that I will be able to get funded once I have the prototype ready. And without that, I won’t receive the funds for the prototype in the first place! To summarize, my question is: how to figure out my future possibilities of getting funded once I have the combat demo out, basically “whom to write to and what”. Ideally, I would like some sort of a preliminary agreement with a game publisher, something that would basically state “If the developer provides the product in time and in quality corresponding to the specifications given, the publisher guarantees to allocate funds for distribution and further development, thereby acquiring the right to X part of all future profits”. Does this sound sane? It’s just that I don’t want to sell all of my rights out straight away by taking a big outside investment while the project is at such an early stage. I would appreciate it if you would share your thoughts on this kind of scheme, and be sure to ask questions as I am sure I must have forgotten to mention a ton of important things, like the fact that the initial funds are going to be spent on outsourcing (living in Siberia is really just great). [here’s a brief outline of what each version will feature] [micro] 1) turn based tactical combat rules 2) character development 3) arena/tournament system [macro] 4) ai-ruled dynamic interactive worlds 5) global map adventuring 6) strategic rpg + god simulator gameplay [mmo] 7) Persistent worlds system 8) Social structures system (“guilds/clans”) 9) god-simulation on the mmo scale P.S. Obviously, these features are incremental, so that the mmo version has all 9.

    Read the article

  • The Minimalist's Approach to Content Governance

    - by Kellsey Ruppel
    This week on the blog, we want to focus on the content lifecycle and how important it is to have the tools in place to be able to properly manage all the phases of the content lifecycle. John Brunswick has some great advice when it comes to this topic, so expect to hear a lot from him this week! Originally posted by John Brunswick. Let's be honest - content governance is far from an exciting topic. BUT the potential of a very small intranet team creating and maintaining a platform that provides an organization with relevant, high value information, helping workers to get their jobs done with greater accuracy and in less time is exciting. It is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use on the third week and during the third year. What can be done to bridge this gap? Over the next few blog entries let's take a pragmatic, minimalistic view of a process that can help any team manage a wealth of unstructured information. Based on an earlier article that I wrote around Portal Governance, I am going to focus on using technology as much as possible to support the governance of content with minimal involvement from users. The only certainty about content production is that business users are not fans of maintaining content. Maintenance is overhead and is a long-term investment whose value will possibly not be realized under the current content creator's watch. To add context to how we will use technical tools in this process, each post will highlight one section of the content lifecycle process as outlined below. Content Lifecycle Stages: 1. Request - Understand the education, purpose, resource and success criteria for content 2. Create - Determine access and workflow for content 3. Manage - Understand ownership and review cycles 4. Retire - Act on thresholds established during the request stage. Within each stage we will also elaborate as to: 1. Why - why would we entertain doing this? 2. How - the steps that are needed to make it happen 3. Impact - what is the net benefit or loss based on the process. Over the course of this week, we will dive deep into the stages and the minimal amount of time, effort and process within each to make some meaningful gains in the improvement of user experience and productivity in their search for information. It might be a stretch to say that we can make content governance exciting, but hopefully it can end up being painless and paying dividends. And if you'd like to hear first hand from a customer that is managing their content lifecycle with Oracle WebCenter, be sure to join us on Wednesday for this webcast "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter"!
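    To make the Retire stage a little more concrete, here is a small illustrative sketch (my own, not from the WebCenter tooling) of how a scheduled job could act on review thresholds captured back at the Request stage. The record fields and thresholds are invented for the example.

        from datetime import date, timedelta

        # Hypothetical content records: review_months is the threshold agreed at request time.
        content_items = [
            {"title": "Travel policy",   "owner": "hr",  "last_review": date(2011, 3, 1), "review_months": 12},
            {"title": "Release runbook", "owner": "ops", "last_review": date(2012, 5, 1), "review_months": 6},
        ]

        def flag_for_review(items, today=None):
            today = today or date.today()
            for item in items:
                due = item["last_review"] + timedelta(days=30 * item["review_months"])
                if today > due:
                    # In a real system this would open a workflow task for the owner
                    # or mark the item for retirement instead of just printing.
                    yield item["title"], item["owner"], due

        for title, owner, due in flag_for_review(content_items):
            print(f"'{title}' owned by {owner} was due for review on {due}")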

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this: I create a new branch (with GIT, in my case), I write a few unit tests (with JUnit, for example), I write my code until it passes my tests. So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant so I cannot have my preferred language/platform/toolkit. I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to "learn by doing" (actually, programming by "trial and error") and this makes me anxious. Please note that, at some point in the learning process, usually I already have: read one or more five-star books, followed one or more web tutorials (writing working code a line at a time), created a couple of small experimental projects with my IDE (IntelliJ IDEA, at the moment. I use Eclipse, Netbeans and others, as well.) Despite all my efforts, at this point usually I have just a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for this small tool. In the same way, I cannot use an independent small project for my experiments any more because I need to try the new code in place. I can just write my code in place and rely on GIT for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices or something like that?
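    One low-risk habit that fits the branch-plus-unit-test loop described above is to keep "learning tests" in their own throwaway module, so the experiments pin down how the unfamiliar toolkit behaves without ever touching production code. A sketch of the idea follows, shown here with pytest and the standard zoneinfo module purely as an illustration rather than a prescription.

        # tests/test_learning_zoneinfo.py -- disposable "learning tests" that document
        # how an unfamiliar library behaves; they live on a branch and never ship.
        from datetime import datetime
        from zoneinfo import ZoneInfo

        def test_conversion_keeps_the_instant():
            utc = datetime(2024, 6, 1, 12, 0, tzinfo=ZoneInfo("UTC"))
            local = utc.astimezone(ZoneInfo("Europe/Rome"))
            # Encodes what I *believe* the toolkit does; if the assert fails,
            # my mental model was wrong and I learned something cheaply.
            assert local.hour == 14
            assert local.utcoffset().total_seconds() == 2 * 3600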

    Read the article

  • Attaching new animations onto skeleton via props, a good idea?

    - by Cardin
    I'm thinking of coding a game with an idea of mine. I've coded 2D games before, but I'm new to 3D programming, so I'd like to ask if this idea of mine is feasible or out of my depth. I'm making a game where there are many different characters for the player to choose from (JRPG style). So to save time, I have an idea of creating many different varied characters using a completely naked body mesh and animation skeleton, standardised across all characters. For example, by placing different hair, boots, armor props on the character mesh, new characters can be formed. Kinda like playing dress-up with a barbie doll. I'm thinking this can be done by having a bone on the prop that I can programmically attach to the main mesh. Also, I plan to have some props add new animations to the base skeleton, so equipping some particular props would give it new attack, damage, idle animations. This is because I can't expect the character to have the same swinging animation if he had a big sword or an axe. I think this might be possible if the prop has its own instance of the animation skeleton with just only the new animations, and parenting the base body mesh to this new skeleton. So all the base body mesh has are just the basic animations, other animations come from the props. My concerns are, 1) the props might not attach to the mesh properly and jitter a lot, 2) since prop and body are animated differently, the props and base mesh will cause visual artefacts, like the naked thighs showing through the pants when the character walks, 3) a custom pipeline have to be developed to export skeletons without mesh, and also to attach the base body mesh to a new skeleton during runtime in the game. So my question: are these features considered 'easy' to code? Or am I trying to do something few have ever succeeded with on their own? It feels like all these can be done given enough time and I know I definitely have to do a bit of bone matrix calculations, but I really don't want to drag out the development timeline unnecessarily from coding mathematically intense things or analyzing how to parse 3D export formats. I'm currently only at the Game Design stage, so if these features aren't a good idea, I can simply change the design of the game. (Unrelated to question) I could always, as last resort, have the characters have predetermined outfit and weapon selections so as to animate everything manually.
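    For the attachment itself, the core of the technique is just composing transforms each frame: the prop's world matrix is the animated bone's world matrix times a fixed local offset, recomputed after the skeleton pose has been updated (which is also where most jitter problems come from). A tiny engine-agnostic numpy sketch of that idea, with made-up numbers, purely as an illustration:

        import numpy as np

        def translation(x, y, z):
            m = np.eye(4)
            m[:3, 3] = (x, y, z)
            return m

        # Pretend this came from the animation system this frame:
        # the hand bone sits at (0, 1.5, 0.2) in world space.
        hand_bone_world = translation(0.0, 1.5, 0.2)

        # Fixed offset authored with the prop (e.g. sword grip relative to the hand bone).
        sword_local_offset = translation(0.05, 0.0, 0.0)

        # Recomputed every frame, *after* the skeleton pose is updated,
        # so the prop follows the bone without lagging or jittering.
        sword_world = hand_bone_world @ sword_local_offset
        print(sword_world[:3, 3])   # roughly [0.05 1.5 0.2]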

    Read the article

  • In Technology, Ignorance is NOT Bliss

    - by Tanu Sood
    Author: Debra Lilley, ACE Director, UK Proof I’m not technical -  I’ve just finished a Latin America tour with OTN and a funny thing happened that I want to share with you; because it is quite a good analogy for how many of us use technology today and you know how I love analogies. In Costa Rica we had a really long journey up through the mountains to where our conference was to be. The road was windy and narrow and once it got dark there was no scenery to see, boredom set in. At one stage I looked at my watch to see the time, but in the dark I couldn’t make it out, so I thought I would be clever and use the torch in my smartphone! Even though as soon as I switched on the phone it showed the time, I ignored it and used the torch to read my watch. That’s us when we pay maintenance on software, ask for enhancements, and either chose not to upgrade or as I have seen so many times, upgrade but don’t use the new features. I know there are always other factors not least the upgrade costs themselves but in the later releases of all the Oracle family of applications Oracle have done a lot to make the interoperability of them with Oracle Fusion Middleware more successful and in many cases for the first time. My heritage is Oracle E Business Suite (EBS) and the availability of Oracle Weblogic for EBS is fantastic for an Oracle powered organisation that can move away from supporting multiple flavours of application server. The same release made available  - the no downtime patching that Oracle Database 11g introduced with Edition Based Redefinition. I am not saying you must use these features but you must be aware of what each release of your application brings and make a business based decision as to whether it is for you or not. I like to have a simple spreadsheet of features with no-value, nice-to-have, must-have ratings, but make the spreadsheet cumulative so that when you do upgrade you have all the features listed you previously didn’t take up. That way you can avoid the ‘using your phone to read your watch’ scenario. About the Author: Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Lilley has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts  

    Read the article

  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org made interesting session in TechEd Europe 2012 - Implementing Scrum Using Team Foundation Server 2012. One of interesting things for me was how Cone of Uncertainty looks like in agile projects (or how agile methodologies distort the cone we know from waterfall projects). This posting illustrates two cones – one for waterfall and one for agile world. Cone of Uncertainty Cone of Uncertainty was introduced to software development community by Steve McConnell and it visualizes how accurate are our estimates over project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF). Cone of Uncertainty. Taken from MSDN Library page Estimating. The closer we are to project end the more accurate are our estimates. When project ends we know exactly how much every task took time. As we can see then cone is wide when we usually have to give our estimates – it happens somewhere between Initial Project Concept and Requirements Complete. Don’t ask me why Initial Project Concept is the stage where some companies give their best estimates – they just do it every time and doesn’t learn a thing later. This cone is inevitable for software development and agile methodologies that try to make software world better are also able to change the cone. Cone of Uncertainty in agile projects Agile methodologies usually try to avoid BDUF, waterfalls and other things that make all our mistakes highly expensive. Of course, we are not the only ones who make mistakes – don’t also forget our dear customers. Agile methodologies take development as creational work and focus on making it better. One main trick is to focus on small and short iterations. What it means? We are estimating functionalities that are easier for us to understand and implement. Therefore our estimates are more accurate. As we move from few big iterations to many small iterations we also distort and slice Cone of Uncertainty. This is how cone looks when agile methodologies are used. Cone of Uncertainty in agile projects. We have more cones to live with but they are way smaller. I don’t have any numbers to put here because I found any but still this “chart” should give you the point: more smaller iterations cause more but way smaller cones of uncertainty. We can handle these small uncertainties because steps we take to complete small tasks are more predictable and doesn’t grow very often above our heads. One more note. Consider that both of charts given in this posting describe exactly the same phase of same project – just uncertainties are different.

    Read the article

  • How to get the lookahead symbol when constructing the LR(1) NFA for a parser?

    - by greenoldman
    I am reading an explanation (awesome "Parsing Techniques" by D.Grune and C.J.H.Jacobs; p.292 in the 2nd edition) about how to construct an LR(1) parser, and I am at the stage of building the initial NFA. What I don't understand is how to get/compute a lookahead symbol. Here is the example from the book, the grammar: S -> E E -> E - T E -> T T -> ( E ) T -> n n is a terminal. The "weird" transitions for me are the sequence: 1) S -> . E eof 2) E -> . E - T eof 3) E -> . E - T - 4) E -> E . - T - 5) E -> E - . T - (Note: In the above table, the state numbers are in front and the lookahead symbol is at the end.) What puzzles me is that the transition from (4) to (5) means reading the - token, right? So how is it that - is still a lookahead symbol and, even more importantly, why is it that eof is no longer a lookahead symbol? After all, in an input such as n - n eof there is only one - symbol. My naive thinking tells me (5) should be written as: 5) E -> E - . T - eof And another thing -- n is a terminal. Why is it not used at all as a lookahead symbol? I mean -- we expect to see - or (, it is ok, but does the lack of n mean we are sure it won't appear in the input? Update: after more reading I am only more confused ;-) I.e. what is really a lookahead? Because I see such a state as (p.292, 2nd column, 2nd row): E -> E . - T eof Lookahead says eof but the incoming input says -. Isn't it a contradiction? And it is not only in this book.
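    A point that often helps here: an item's lookahead is fixed when the item is created by the closure step, where [A -> alpha . B beta, a] contributes [B -> . gamma, b] for every b in FIRST(beta a), and it does not change as the dot moves across the production. For the item E -> . E - T created inside the closure of [S -> . E, eof], beta is "- T", so FIRST(- T eof) = {-}, which is where the - lookahead comes from. A throwaway Python sketch of that FIRST computation for this exact grammar (my own illustration, not from the book):

        # FIRST of a symbol string for the grammar S->E, E->E-T | T, T->(E) | n.
        # None of the nonterminals derive the empty string, which keeps this simple.
        GRAMMAR = {
            "S": [["E"]],
            "E": [["E", "-", "T"], ["T"]],
            "T": [["(", "E", ")"], ["n"]],
        }
        TERMINALS = {"-", "(", ")", "n", "eof"}

        def first_of_string(symbols, _seen=None):
            """FIRST of a sentential form such as ['-', 'T', 'eof']."""
            _seen = _seen or set()
            head = symbols[0]
            if head in TERMINALS:
                return {head}
            result = set()
            for production in GRAMMAR[head]:
                if production[0] not in _seen:          # guard against direct left recursion
                    result |= first_of_string(production + symbols[1:], _seen | {head})
            return result

        # Closing the item [E -> . E - T, eof]: the new items' lookaheads are
        # FIRST(beta a) with beta = "- T" and a = "eof".
        print(first_of_string(["-", "T", "eof"]))   # {'-'}  -- eof drops out here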

    Read the article

  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a Database Administrator, or even a developer is required to wear a spy’s hat. This necessity oftentimes is dictated by a need to take a glimpse into a black-box application for reasons varying from a performance issue to an unauthorized access to data or resources, or as in my most recent case, a closed source custom application that was abandoned by a deserted contractor without source code. It may not be news or unknown to most IT people that SQL Server has always provided means of back-door access to everything connecting to its database. This indispensible tool is SQL Server Profiler. This “gem” is always quietly sitting in the Start – Programs – SQL Server <product version> – Performance Tools folder (yes, it is for performance analysis mostly, but not limited to) ready to help you! So, to the action, let’s start it up. Once ready click on the File – New Trace button, or using Ctrl-N with your keyboard. The standard connection dialog you have seen in SSMS comes up where you connect the standard way: One side note here, you will be able to connect only if your account belongs to the sysadmin or alter trace fixed server role. Upon a successful connection you must be able to see this initial dialog: At this stage I will give a hint: you will have a wide variety of predefined templates: But to shorten your time to results you would need to opt for using the TSQL_Grouped template. Now you need to set it up. In some cases, you will know the principal’s login name (account) that needs to be monitored in advance, and in some (like in mine), you will not. But it is VERY helpful to monitor just a particular account to minimize the amount of results returned. So if you know it you can already go to the Event Section tab, then click the Column Filters button which would bring a dialog below where you key in the account being monitored without any mask (or whildcard):  If you do not know the principal name then you will need to poke around and look around for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation, an application had an app.config (XML) file with the connection string in it not encrypted: This made my endeavor very easy. So after I entered the account to monitor I clicked on Run button and also started my black-box application. Voilà, in a under a minute of time I had the SQL statement captured:

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBO's and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program and I'd like to write its Fragment Shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so: #version 330 in vec3 WorldPos0; in vec2 TexCoord0; in vec3 Normal0; in vec3 Tangent0; layout(location = 0) out vec3 WorldPos; layout(location = 1) out vec3 Diffuse; layout(location = 2) out vec3 Normal; uniform sampler2D gColorMap; uniform sampler2D gNormalMap; vec3 CalcBumpedNormal() { vec3 Normal = normalize(Normal0); vec3 Tangent = normalize(Tangent0); Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal); vec3 Bitangent = cross(Tangent, Normal); vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz; BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0); vec3 NewNormal; mat3 TBN = mat3(Tangent, Bitangent, Normal); NewNormal = TBN * BumpMapNormal; NewNormal = normalize(NewNormal); return NewNormal; } void main() { WorldPos = WorldPos0; Diffuse = texture(gColorMap, TexCoord0).xyz; Normal = CalcBumpedNormal(); } If my render target textures are configured as: RT1:(GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0) RT2:(GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1) RT3:(GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2) And assuming that each texture has an internal format capable of contaning the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are Multiple Render Targets? From some Googling, I think there are two other ways to output to MRTs: 1: Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach. 2: Use a typed output array variable for the expected type. In this case, I think it would be something like this: out vec3 [3] output; void main() { output[0] = WorldPos0; output[1] = texture(gColorMap, TexCoord0).xyz; output[2] = CalcBumpedNormal(); } So which is then the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
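    For reference, the piece that ties layout(location = N) outputs to attachments is glDrawBuffers: while the FBO is bound, fragment output location N is written to the buffer at index N of the array you pass in. Below is a rough sketch of that setup, written with PyOpenGL purely for brevity (the C/C++ calls carry the same names); the sizes are invented and an existing 3.3 core context is assumed, so treat it as an illustration rather than a drop-in implementation.

        from OpenGL.GL import *  # assumes a 3.3 core context is already current

        WIDTH, HEIGHT = 1280, 720

        fbo = glGenFramebuffers(1)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo)

        attachments = []
        for i in range(3):                      # world position, diffuse, normal
            tex = glGenTextures(1)
            glBindTexture(GL_TEXTURE_2D, tex)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, WIDTH, HEIGHT, 0, GL_RGB, GL_FLOAT, None)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                   GL_TEXTURE_2D, tex, 0)
            attachments.append(GL_COLOR_ATTACHMENT0 + i)

        # Output location 0 -> attachment 0, 1 -> 1, 2 -> 2 while this FBO is bound.
        glDrawBuffers(3, attachments)

        assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE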

    Read the article

  • How to refactor a method which breaks "The law of Demeter" principle?

    - by dreza
    I often find myself breaking this principle (not intentionally, just through bad design). However recently I've seen a bit of code that I'm not sure of the best approach for. I have a number of classes. For simplicity I've taken out the bulk of the classes' methods, etc. public class Paddock { public SoilType Soil { get; private set; } // a whole bunch of other properties around paddock information } public class SoilType { public SoilDrainageType Drainage { get; private set; } // a whole bunch of other properties around soil types } public class SoilDrainageType { // a whole bunch of public properties that expose soil drainage values public double GetProportionOfDrainage(SoilType soil, double blockRatio) { // This method does a number of calculations using public properties // exposed off SoilType as well as the blockRatio value in some conditions } } In the code I have seen in a number of places calls like so paddock.Soil.Drainage.GetProportionOfDrainage(paddock.Soil, paddock.GetBlockRatio()); or within the block object itself in places it's Soil.Drainage.GetProportionOfDrainage(this.Soil, this.GetBlockRatio()); Upon reading, this seems to break "The Law of Demeter" in that I'm chaining together these properties to access the method I want. So my thought in order to adjust this was to create public methods on SoilType and Paddock that contain wrappers, i.e. on Paddock it would be public class Paddock { public double GetProportionOfDrainage() { return Soil.GetProportionOfDrainage(this.GetBlockRatio()); } } on the SoilType it would be public class SoilType { public double GetProportionOfDrainage(double blockRatio) { return Drainage.GetProportionOfDrainage(this, blockRatio); } } so now calls where it's used would simply be // used outside of paddock class where we can access instances of Paddock paddock.GetProportionOfDrainage() or this.GetProportionOfDrainage(); // if used within Paddock class This seemed like a nice alternative. However now I have a concern over how I would enforce this usage and stop anyone else from writing code such as paddock.Soil.Drainage.GetProportionOfDrainage(paddock.Soil, paddock.GetBlockRatio()); rather than just paddock.GetProportionOfDrainage(); I need the properties to remain public at this stage as they are too ingrained in usage throughout the code block. However I don't really want a mixture of accessing the method on DrainageType directly as that seems to defeat the purpose altogether. What would be the appropriate design approach in this situation? I can provide more information as required to better help in answers. Are my thoughts on refactoring this even appropriate, or is it best to leave it as is and use the property chaining to access the method as and when required?

    Read the article

  • AS3 - At exactly 23 empty alpha channels, images below stop drawing

    - by user46851
    I noticed, while trying to draw large numbers of circles, that occasionally, there would be some kind of visual bug where some circles wouldn't draw properly. Well, I narrowed it down, and have noticed that if there is 23 or more objects with 00 for an alpha value on the same spot, then the objects below don't draw. It appears to be on a pixel-by-pixel basis, since parts of the image still draw. Originally, this problem was noticed with a class that inherited Sprite. It was confirmed to also be a problem with Sprites, and now Bitmaps, too. If anyone can find a lower-level class than Bitmap which doesn't have this problem, please speak up so we can try to find the origin of the problem. I prepared a small test class that demonstrates what I mean. You can change the integer value at line 20 in order to see the three tests I came up with to clearly show the problem. Is there any kind of workaround, or is this just a limit that I have to work with? Has anyone experienced this before? Is it possible I'm doing something wrong, despite the bare-bones implementation? package { import flash.display.Sprite; import flash.events.Event; import flash.display.Bitmap; import flash.display.BitmapData; public class Main extends Sprite { public function Main():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event = null):void { removeEventListener(Event.ADDED_TO_STAGE, init); // entry point Test(3); } private function Test(testInt:int):void { if(testInt==1){ addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000))); for (var i:int = 0; i < 22; i++) { addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000))); } } if(testInt==2){ addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000))); for (var j:int = 0; j < 23; j++) { addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000))); } } if(testInt==3){ addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000))); for (var k:int = 0; k < 22; k++) { addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000))); } var M:Bitmap = new Bitmap(new BitmapData(100, 100, true, 0x00000000)); M.x += 50; M.y += 50; addChild(M); } } } }

    Read the article

  • what would be a good way to implement/render a 2d tiled map for a browser game?

    - by jj_
    I've made this little rpg ruby game I did while learning and now I'd like to make it into a browser game. I've already set up Sinatra framework to serve it, so what I am looking for, before everything else, is a way to represent the game map in browser (location attributes are stored in db). A new map is randomly generated by code for each new game at each game start. For now forget db, and let's say a map (say 100x100 "squares") is stored as a tridimensional array. (x,y, ...) Last "dimension" of array stores who & what is at that map cell: a player, a building, whatever. So all I have to do is render those "squares" or array cells to a 2d tiled map in the browser. The map does not need to refresh or be dynamically fetched as you scroll it, (at least at this stage of development) but, a technology which would allow me to do so in future would be a good reason for choosing it. Things that I thought of: html tables, html5 canvas, some js framework which is designed exactly with this purpose (which I do not know of = please advice). Yes I know about gamequery-js framework, but I've never used it, and I don't know if it's going to slow down everything down to inusability as I'm adding new features (scrolling, ajax). I really don't know of any other alternatives.. maybe there are lighter approaches? Easier or more minimalistic ways ? More targeted js framework which is the right tool for the job? Maybe just some html canvas code, or even simple image maps, or images with absolute positioning will be enough? The thing is I'd like to start simple, and then gradually make it better, so, as I said before, I'd prefer something that will give me room for improvement or is headed toward new web tendencies but which will also give me a bit of gratification in the beginning :) So.. advices are needed! And appreciated! :) Thanks p.s. Flash is excluded because I don't like it.

    Read the article

  • Working with Windows and Unix

    - by user554629
    Beware of new line characters. One of the most frequent issues we encounter in Tech Support is the corruption of files that are transferred between Windows and Unix. The transfer can occur at any stage, but ultimately involves a transfer of a file using an ftp client that is running on Windows; it could be ftp or filezilla. Windows uses two characters to mark the end of a line in a text file (CR/LF), carriage return plus linefeed. Unix uses a single character (LF). In all situations, it is best to use binary mode transfer for all files, including ascii text files. Common problems: upload a core file from unix to windows using ftp in ascii mode. The file is going to be larger on Windows than on Unix. ftp doesn't know if this is a text file with real line-ends; it takes every ascii LF and transmits two ascii characters, CR/LF. The core file, tar file, library ... will be corrupted when transferred to Oracle. download a shell script to Windows, and transfer it to Unix using ftp. If the file is edited on Windows, the unix script line-end chars will be doubled. Unix doesn't know how to handle that, and will likely tell you the script is not executable. Why? The first line of a shell script ( called the "sh-bang" ) identifies the command interpreter the unix shell should use for this script. Common examples: #!/bin/sh #!/bin/ksh #!/bin/bash #!/bin/perl #!/bin/sh^M # will not be understood. #!/bin/env ksh # special syntax. Find ksh and run it. dos2unix is a common utility found on most unix platforms that repairs the issue of Windows line-end characters in unix script files. I've written my own flavor of this utility for use in Tech Support and build environments, that is a bit easier to use, and has some nice side-effects: accepts a list of files (dos2unix *.sh); repairs the file in-place, so it doesn't generate a new file you have to name; retains the same timestamp (it is the encoding that changed, not the file content). Here are the versions of dos2unix for each of the environments we work in. They are compressed with gzip, to avoid the ftp ascii transfer trap, and because I am quite limited in the number of files I can upload to this blog. AIX Linux Solaris sparc Windows
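    For comparison, the in-place, timestamp-preserving behaviour described above fits in a few lines of Python. This is just a sketch of the same idea (binary-safe, accepts a list of files), not the author's utility.

        import os, sys

        def dos2unix(path):
            st = os.stat(path)                       # remember times before rewriting
            with open(path, "rb") as f:
                data = f.read()
            fixed = data.replace(b"\r\n", b"\n")
            if fixed != data:                        # rewrite only when needed
                with open(path, "wb") as f:
                    f.write(fixed)
                os.utime(path, (st.st_atime, st.st_mtime))   # keep the original timestamp
                print(f"fixed   {path}")
            else:
                print(f"ok      {path}")

        if __name__ == "__main__":
            for name in sys.argv[1:]:                # e.g.  python dos2unix.py *.sh
                dos2unix(name)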

    Read the article

  • Application workflow

    - by manseuk
    I am in the planning process for a new application, the application will be written in PHP (using the Symfony 2 framework) but I'm not sure how relevant that is. The application will be browser based, although there will eventually be API access for other systems to interact with the data stored within the application, again probably not relavent at this point. The application manages SIM cards for lots of different providers - each SIM card belongs to a single provider but a single customer might have many SIM cards across many providers. The application allows the user to perform actions against the SIM card - for example Activate it, Barr it, Check on its status etc Some of the providers provide an API for doing this - so a single access point with multiple methods eg activateSIM, getStatus, barrSIM etc. The method names differ for each provider and some providers offer methods for extra functions that others don't. Some providers don't have APIs but do offer these methods by sending emails with attachments - the attachments are normally a CSV file that contains the SIM reference and action required - the email is processed by the provider and replied to once the action has been complete. To give you an example - the front end of my application will provide a customer with a list of SIM cards they own and give them access to the actions that are provided by the provider of each specific SIM card - some methods may require extra data which will either be stored in the backend or collected from the user frontend. Once the user has selected their action and added any required data I will handle the process in the backend and provide either instant feedback, in the case of the providers with APIs, or start the process off by sending an email and waiting for its reply before processing it and updating the backend so that next time the user checks the SIM card its status is correct (ie updated by a backend process). My reason for creating this question is because I'm stuck !! I'm confused about how to approach the actual workflow logic. I was thinking about creating a Provider Interface with the most common methods getStatus, activateSIM and barrSIM and then implementing that interface for each provider. So class Provider1 implements Provider - Then use a Factory to create the required class depending on user selected SIM card and invoking the method selected. This would work fine if all providers offered the same methods but they don't - there are a subset which are common but some providers offer extra methods - how can I implement that flexibly ? How can I deal with the processes where the workflow is different - ie some methods require and API call and value returned and some require an email to be sent and the next stage of the process doesn't start until the email reply is recieved ... Please help ! (I hope this is a readable question and that this is the correct place to be asking) Update I guess what I'm trying to avoid is a big if or switch / case statement - some design pattern that gives me a flexible approach to implementing this kind of fluid workflow .. anyone ?
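    One way to avoid the big if/switch mentioned in the update is to let each provider declare the actions it supports and whether it completes them synchronously (API call) or asynchronously (email round-trip). The rough Python sketch below is illustrative only; the real project is PHP/Symfony, and every name here is invented.

        from abc import ABC, abstractmethod

        class Provider(ABC):
            """Common actions every provider must offer."""
            @abstractmethod
            def activate_sim(self, sim_ref): ...
            @abstractmethod
            def get_status(self, sim_ref): ...
            @abstractmethod
            def barr_sim(self, sim_ref): ...

            def supported_actions(self):
                # Extra, provider-specific actions are discovered rather than hard-coded.
                return sorted(m for m in dir(self)
                              if not m.startswith("_") and m != "supported_actions"
                              and callable(getattr(self, m)))

        class ApiProvider(Provider):
            def activate_sim(self, sim_ref): return {"state": "done"}       # instant API reply
            def get_status(self, sim_ref):   return {"state": "done", "status": "active"}
            def barr_sim(self, sim_ref):     return {"state": "done"}
            def swap_sim(self, old_ref, new_ref): return {"state": "done"}  # extra action

        class EmailProvider(Provider):
            def activate_sim(self, sim_ref): return {"state": "pending"}    # CSV emailed, reply awaited
            def get_status(self, sim_ref):   return {"state": "pending"}
            def barr_sim(self, sim_ref):     return {"state": "pending"}

        PROVIDERS = {"acme-api": ApiProvider, "initech-email": EmailProvider}

        def provider_for(sim_card):
            return PROVIDERS[sim_card["provider"]]()   # simple factory keyed off the SIM's provider

    The "pending" result is what lets the front end show an awaiting-reply state while the email round-trip completes, with a backend job updating the stored status once the provider's reply is processed.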

    Read the article

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is that we have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production] At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer has it all wrong ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc. I have been searching the internet for a few days trying to find how the process is accomplished anywhere else -- I have read a bit about builds, automated testing, various ALM products, etc. but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources or attempt to explain (generically) how a change could/should be tracked and moved though the development lifecycle? I'd be very appreciative. To keep things consistent, let's say that we have a project called Calculator, which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due diligence testing and we can suppose that any tools are available for use (if that helps to illustrate the process). To start things off, I think I understand this much: we document the requirements, e.g.: support sine, cosine and tangent functions we create some type of change request/work order to assign to programming coding takes place, commits are made to version control peer review commences programmer marks the work order as completed? ... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

    Read the article
