Search Results

Search found 30312 results on 1213 pages for 'tactial test team'.


  • Jasmine BDD vs Integration Tests

    - by lfender6445
    Let's say I need to write a test for the front end. A user visits buysomething.com, saves something to their wishlist, and a saved-item count is updated; the DOM gets manipulated. In my heart I feel this is better suited to an integration test, but my team is currently using Jasmine to load fixtures and test such interactions. This leads to extremely brittle tests, as they rely on a static fixture instead of the actual markup. Are we misusing Jasmine here?
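    To make the brittleness concrete, here is a minimal sketch (TypeScript with Jasmine; the wishlist module, fixture markup, and DOM ids are all hypothetical, invented for illustration) of the fixture-driven style of spec described above. The spec passes or fails against a hand-maintained copy of the markup, so it can keep passing while the real page breaks:

        // Hedged sketch of a fixture-based Jasmine spec. "addToWishlist" and the
        // DOM ids are assumptions, not the poster's real code.
        import { addToWishlist } from "./wishlist"; // hypothetical module under test

        describe("wishlist counter", () => {
          beforeEach(() => {
            // Static fixture: a hand-maintained copy of the page markup. If the
            // production markup drifts, this spec never notices.
            document.body.innerHTML = `
              <button id="save-item">Save</button>
              <span id="wishlist-count">0</span>`;
          });

          it("increments the saved-item count", () => {
            addToWishlist("sku-123");
            const count = document.getElementById("wishlist-count")!;
            expect(count.textContent).toBe("1");
          });
        });

    An integration test driving the real page (via Selenium or a similar browser driver) would exercise the actual markup instead of a copy, which is exactly the gap the question is pointing at.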

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company. Everything that we build (in source form) gets put into our source control management system. And I'm not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain. We even script out our Scheduled Jobs and keep a copy of those under source control.

    The UI and middle-tier stuff has long been easy to manage, as we mostly use Visual Studio, which has integration with source control systems built in. But the SQL code has been a little harder to deal with. I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled and allowed me to do my database development in my tool of choice: Query Analyzer in days gone by, and now SQL Server Management Studio. It just makes sense to me that if I'm going to do database development, let's use the database tool set. (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.) So as I was saying, I had developed a methodology that worked well for us (which I'll probably outline in a future post), but it could use some improvement.

    When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get the same experience that we have in Visual Studio. Well, let's say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version. So much for that idea.

    Then I came across SQL Source Control from Red-Gate. I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor. SQL Prompt is worth its weight in gold, and the others are great, too. Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection. I thought this might really be the golden ticket I was looking for. But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories. We have been using SourceGear's Vault and Fortress products for years, and I wholeheartedly endorse them. So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on the Red-Gate feedback forum (as did I), so I had hope that maybe next year I could look at it again.

    But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces. So far, I really like what I see, and I have been quite impressed with Red-Gate's responsiveness when I have contacted them with any issues or concerns. I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive.

    I must say that development with SQL Source Control is very different from what I have been used to.
    This post is getting long enough, so I'll save some of the details for a separate write-up, but the short story is that in my regular mode, it's all about the script files. Script files are King, and you dare not make a change to the database other than by way of a script file, or you are in deep trouble. With SQL Source Control, you make your changes to your development database however you like. I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control "manages" the script for you.

    Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control. So, if you needed to, you could still do a GET from your source control repository and build the database from scratch. But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts. I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, this will actually make us more efficient.

    And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync). This is similar in concept to Microsoft's DACPAC, if you're familiar with that.

    If you are not currently keeping your database development efforts under source control, definitely examine this tool. If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach. You may find it more efficient. But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution. I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine. However, I can verify that that bug has been fixed in a more recent build (did I mention Red-Gate's responsiveness?).

    Read the article

  • How should I manage "reverting" a branch done with bookmarks in mercurial?

    - by Earlz
    I have an open source project on Bitbucket. Recently, I've been working on an experimental branch which I (for whatever reason) didn't make an actual branch for. Instead, I used bookmarks. So I made two bookmarks at the same revision:

    - test: the new code I worked on, which should now be abandoned (the experiment failed)
    - main: the stable old code that works

    I worked in test. I also pushed from test to my server, which ended up moving the tip tag to the new unstable code, when I really would rather it had stayed at main. I "switched" back to the main bookmark by doing an hg update main and then committing an insignificant change. So, I pushed this with hg push -f and now my source control is "correct" on the server. I know that there should be a cleaner way to "switch" branches. What should I do in the future for this kind of operation?

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever-popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… CRAP. No, not fecal material. But crap code. Crap SQL. Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time.

    The challenge for me is to look back on my SQL Server career and find something that WASN'T crap. Well, there's a lot that wasn't, but for some reason I don't remember those that well. So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out. Let's see if this outline fits the bill:

    - An ETL process on text files;
    - That had to interface between SQL Server and an AS/400 system;
    - That didn't use SSIS (should have) or BizTalk (ummm, no), but command-line scripting, using Unix utilities(!) via xp_cmdshell;
    - That had to email reports and financial data, some of it sensitive.

    Yep, the smell is coming back to me now, as if it were yesterday…

    As to why SSIS and BizTalk were not options: basically, I didn't know either of them well enough to get the job done (and I still don't). I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so there was no time to learn them. And seeing how screwed up the rest of the process was:

    - Payment files came from multiple vendors in multiple formats;
    - Sent via FTP, PGP-encrypted email, or some other wizardry;
    - Manually opened/downloaded and saved to a particular set of folders (couldn't change this);
    - Once processed, they had to be placed BACK in the same folders with the originals archived;
    - Two divisions had to run separately;
    - Plus an additional vendor file in another format on a completely different schedule;
    - So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible).

    I didn't feel so bad about the solution I came up with, which was naturally:

    - Copy the payment files to the local SQL Server drives, using xp_cmdshell;
    - Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before PowerShell);
    - Use other Unix utilities (join, split, grep, wc) to process the parsed files and generate metadata (size, date, checksum, line count);
    - Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison;
    - bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file;
    - Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata;
    - Process payment batches and log which division and vendor they belong to;
    - Email the payment details to the finance group (since it was too hard for them to run a web report with the same data… which they ran anyway to compare against the emailed file… which always matched, surprisingly);
    - Email another report showing unmatched payments so they could manually void them… about 3 months afterward;
    - All in "Excel" format, using xp_sendmail (it was a SQL 2000 system);
    - Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked).

    If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command. Like passionately. Like fairy-tale love. So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.
    I think there was one section that had 4 or 5 nested for commands. It was wrong, disturbed, and completely un-maintainable by anyone, even myself. Months, even a year, after I left the company, I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code. (for you Star Trek TOS fans)

    The funniest part of this, well, one of the funniest, is that I made the deadline… sort of, I was only a day late… and the DAMN THING WORKED practically unchanged for 3 years. Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or files saved to the wrong folders. I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with. Fortunately, as far as I know, it's no longer in use and someone has written a proper replacement. Today I would knuckle down and do it in SSIS or PowerShell, even if it took me weeks to get it right.

    The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE. sed scripts full of regular expressions don't fit those criteria in any way. If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT. Stop and consider long-term maintainability, not just for yourself but for others on your team. If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed. And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.

    P.S. If you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma and was deemed the best possible way. Phew! Talk about stink!

    Read the article

  • LiveCD does not work on my desktop

    - by Boris
    I've installed Oneiric on my laptop without any issue, using the LiveCD downloaded here (from the French Ubuntu community server). But on my desktop, weird things happen. During the first try booting with the LiveCD on my desktop, my 2-year-old child hit the keyboard, and after several error messages the desktop loaded and I was able to test Oneiric. But I wanted to redo a boot before installing Oneiric, to avoid mistakes. So on the second try to boot with the LiveCD, I couldn't get to the point where I could choose to test or install. Before trying a third time, I "cleaned the system" from the System Settings. But after that I'm still not able to get to the point where I can choose to test or install: now it stops every time on a black screen. I do not understand why several boot attempts with the same CD give different results. So I wonder: can the state of my current 11.04 installation affect booting from my 11.10 CD?

    Read the article

  • Installation of Ubuntu 12.04 LTS is Crashing from Live CD

    - by Daniel Evans
    Hardware: Dell Inspiron 1545

    Steps are as follows: insert the Ubuntu 12.04 disc, then boot the computer. Output is as follows:

        error: unexpectedly disconnected from boot status daemon
        Generating locales...
          en_US.UTF-8... done
        Generation complete.
        ***MEMORY-ERROR***: glib-compile-schemas[569]: GSlice: assertion failed: aligned_memory == (gpointer) addr
        Aborted
        pwconv: failed to change the mode of /etc/passwd- to 0600
        ***MEMORY-ERROR***: [996]: GSlice: assertion failed: aligned_memory == (gpointer) addr
        ***MEMORY-ERROR***: glib-compile-schemas[1034]: GSlice: assertion failed: aligned_memory == (gpointer) addr
        Aborted
        /usr/lib/python2.7/dist-packages/LanguageSelector/LocaleInfo.py:256: UserWarning: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
          warnings.warn(msg.args[0].encode('UTF-8'))
        Using CD-ROM mount point /cdrom
        ... etc etc...

    I end up at a prompt line: ubuntu@ubuntu:~$

    The computer's self-tests have given the following three errors (so far):

        Error Code 0F00:065D, Msg: DISK-DST Self-test read error, SATA Disk S/N=...., Confidence Test Fail
        Error Code 0F00:1332, Msg: DISK - Block 418047942: Interrupt Request (IRQ)
        Error Code 0142: HD0 self-test unsuccessful, Status 79

    Read the article

  • Bayes Misclassification error and plot : pattern recognition [closed]

    - by user1214586
    Below is Matlab code for a Bayes classifier which classifies arbitrary numbers into their classes:

        training = [3;5;17;19;24;27;31;38;45;48;52;56;66;69;73;78;84;88];
        target_class = [0;0;10;10;20;20;30;30;40;40;50;50;60;60;70;70;80;80];
        test = [1:2:90]';
        class = classify(test, training, target_class, 'diaglinear'); % Naive Bayes classifier
        [test class]

    Could someone provide code snippets for calculating the Bayes misclassification error and accuracy? Also, is it possible to plot a scatter plot and a histogram indicating the number of data points belonging to different classes? Thank you.
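    No answer is given in the excerpt, but the misclassification error and accuracy being asked about are just counts over a labeled test set: compare the true class of each test point with the predicted class and take the fraction that disagree. A hedged sketch of that computation (shown in TypeScript here rather than the poster's Matlab; in Matlab the same idea is roughly mean(predicted ~= actual)):

        // Minimal sketch: empirical misclassification error and accuracy,
        // assuming you hold predicted and true labels for the same test points.
        function misclassificationError(predicted: number[], actual: number[]): number {
          if (predicted.length !== actual.length) {
            throw new Error("label vectors must have equal length");
          }
          const wrong = predicted.filter((p, i) => p !== actual[i]).length;
          return wrong / predicted.length; // fraction of points misclassified
        }

        // Hypothetical labels, just to exercise the function.
        const predicted = [0, 10, 10, 20, 30];
        const actual = [0, 10, 20, 20, 30];
        const error = misclassificationError(predicted, actual);
        console.log(`error = ${error}, accuracy = ${1 - error}`); // error = 0.2, accuracy = 0.8

    Note that this is the empirical error of the classifier, which requires true labels for the test points; the theoretical Bayes error is a property of the class distributions and cannot be computed from predictions alone.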

    Read the article

  • Link form correct, or, punishable by search engines?

    - by w0rldart
    I have the following dilemma with the links of a WordPress blog that I work with: I don't know whether the way it creates the links to the images is OK or not. For example:

    Article URL: http://test.com/prima-de-riesgo/
    Image URL belonging to the article: http://test.com/prima-de-riesgo/europa/

    So what I'm worried about is the repeating "prima-de-riesgo" part. Should I, or shouldn't I?

    UPDATE: Wow, I can't believe that you took test.com for the real domain, hehe!

    Article URL: http://queaprendemoshoy.com/prima-de-riesgo-y-otras-graficas-interesantes-del-ano-2011-deuda-publica-pib-vs-empleo-y-precio-del-oro/
    Image URL belonging to the article: http://queaprendemoshoy.com/prima-de-riesgo-y-otras-graficas-interesantes-del-ano-2011-deuda-publica-pib-vs-empleo-y-precio-del-oro/deuda-publica-eurozona/

    So, as I mentioned, I'm worried that prima-de-riesgo-y-otras-graficas-interesantes-del-ano-2011-deuda-publica-pib-vs-empleo-y-precio-del-oro, the common factor of the article URL and the image URL, can be considered duplicate content or anything else punishable by search engines.

    Read the article

  • TOMORROW! UPK for Testing Webinar

    - by Karen Rihs
    UPK Webinar: UPK for Testing
    September 13, 2012, 10 am Pacific / 1 pm Eastern

    As an implementation and enablement tool, Oracle's User Productivity Kit (UPK) provides value throughout the software lifecycle. Application testing is one area where customers like Northern Illinois University (NIU) are finding huge value in UPK and are using it to validate their systems. Join us for an OAUG-sponsored event on Sept 13th to hear Beth Renstrom, UPK Product Manager, and Bettylynne Gregg, NIU ERP Coordinator, discuss how the Test It Mode, Test Scripts, and Test Cases of UPK can be used to facilitate applications testing. Click Here to Register.

    Read the article

  • Benchmark for website speed optimization

    - by gowri
    I'm working on website speed optimization, and I mostly use three tools for analyzing the results:

    - Google PageSpeed tool
    - YSlow Firefox extension
    - Web Page Performance Test

    I measure performance using the above tools and benchmark the results before and after.

    Before optimization:
    - Google PageSpeed Insights score: 53/100
    - Web Page Performance Test: 55/100 (First view: 10.710s, Repeat view: 6.387s)
    - Yahoo overall performance score: 68

    Stage 1, after optimization:
    - Google PageSpeed Insights score: 88/100
    - Web Page Performance Test: 88/100 (First view: 6.733s, Repeat view: 1.908s)
    - Yahoo overall performance score: 80

    My questions: Am I doing this the correct way? What is the best way to benchmark speed optimization? Is there any standard? Is there any better tool for analyzing speed?

    Read the article

  • Testing loses its effectiveness if all programmers don't use them

    - by Jeff O
    Let's assume you are convinced that the extra time spent unit testing has merit and improves production. Does that still hold up when not everyone working on the same code uses the tests? This question makes me wonder whether fixing tests that not everyone uses is a waste of time. If you correct a test so the new code will pass, you're assuming the new code is correct. The person updating the test had better have a firm understanding of the reasoning behind the code change, and decide whether the test or the new code needs to be fixed. This much inconsistency in a team when it comes to testing is probably an indication of other problems as well. There is a certain amount of risk that someone else on the team will alter code that is covered by testing. Is this the point where testing becomes counter-productive?

    Read the article

  • Deep in the Heart of Texas

    - by Applications User Experience
    Author: Erika Webb, Manager, Fusion Applications UX User Assistance

    When I was first working in the usability field, the only way I could consider conducting a usability study was to bring a potential user to a lab environment where I could show them whatever I was interested in learning more about and ask them questions. While I hate to reveal just how long I have been working in this field, let's just say that pads of paper and a stopwatch were key tools for any test I conducted. Over the years, I have worked in simple labs with basic videotaping equipment and not much else, and I have worked in corporate environments with sophisticated usability labs and state-of-the-art equipment.

    Years ago, we conducted all usability studies at the location of the user. If we wanted to see if there were any differences between users in New York, Chicago, and Los Angeles, we went to those places to run the test. A lab environment is very useful for many test situations. However, there has always been a debate in the usability field about whether bringing someone into a lab environment, however friendly we make it, somehow intrinsically changes the behavior of the user as compared to having them work in their own environment, at their own desk, and on their own computer. We developed systems to create a portable usability lab, so that we could go to the users that we needed to test. Do lab environments change user behavior patterns?

    Then 9/11 hit. You may not remember, but no planes flew for weeks afterwards. Companies all over the world couldn't fly in employees for meetings. Suddenly, traveling to the location of the users had an additional difficulty. The company I was working for at the time had usability specialists stuck in New York for days before they could finally rent a car and drive home to Colorado. This changed the world pretty suddenly, and technology jumped on the change. Companies offering Internet meeting tools were struggling until no one could travel. The Internet boomed with collaboration tools that enabled people to work together wherever they happened to be.

    This change in technology has made a huge difference in my world. We use collaborative tools to bring our product concepts and ideas to the user across the Internet. As a global company, we benefit from having users from all over the world inform our designs. We now run usability studies with users all over the world in a single day, a feat we couldn't have accomplished 10 years ago by plane! Other technology companies have started to do more of this type of usability testing, since the tools have improved so dramatically. Plus, in our busy world, it's not always easy to find users who can take the time away from their jobs to come to our labs. Reaching users where it is convenient for them greatly improves the odds that people do participate.

    I manage a team of usability specialists who live in India and California, while I live in Colorado. We have wonderful labs that we bring users into to show them our products. But very often, we run our studies remotely. We used to take the lab to the users; now we use the labs, but we let the users stay where they are. We gain users who might not have been able to leave work to come to our labs, and they get to use the system they are familiar with. And we gain users nearly anywhere that we can set up an Internet connection, as long as the users have a phone, a broadband connection, and a compatible Web browser (with no pop-up blockers).
    After we recruit participants in a traditional manner, we send them an invitation to participate through a telephone conference call and Web conferencing tool. At Oracle, we use Oracle Web Conference, part of Oracle Collaboration Suite, which enables us to give the user control of the mouse while we present a prototype or wireframe pictures. We can record the sessions over the Web and phone conference. We send the users instructions, plus tips to ensure that we won't have problems sharing screens. In some cases, when time is tight, we even run a five-minute "test session" with users a day in advance to be sure that we can connect. Prior to the test, we send users a participant script that contains information about the study, including any questionnaires. This is exactly the same script we give to participants who come to the labs. We ask users to print this before the beginning of the session. We generally run these studies by having a usability engineer in our usability labs, so that we can record the session as though the user were in the lab with us. Roughly 80% of our application software usability testing at Oracle is performed using remote methods. The probability of getting a remote test participant decreases the higher up the person is in the target organization. We have a methodology checklist available to help our usability engineers work through the remote processes.

    Read the article

  • A tale of two dev accounts

    - by TechTwaddle
    Note: I am currently in the process of relocating my blog from http://www.geekswithblogs.net/techtwaddle to my new address at http://www.techtwaddle.net. I suggest you point your feed readers to the new address as I slowly transition to my new shared-hosted, ad-free WordPress blog :)

    You probably remember my rant from a while back about my Windows Mobile developer account having problems with the new AppHub; well, there have been a few developments and I thought I should share them with you. First up, the issue isn't fixed yet. I still cannot log in to AppHub using my Windows Mobile 6.x developer account and can't view details of my Minesweeper app. Who knows how many copies it's sold. I had numerous exchanges with Microsoft's support team on the AppHub forums and via email as well (support ticket), but somehow we never managed to get to the root of it. In fact, the support team itself grew so tired of the problem that they suggested I create a new dev account. I grew impatient, and it was really frustrating to have an app ready for submission but not be able to do anything with it. Eventually, the frustration had to show somewhere, and it was on this forum thread:

    Prabhu Kumar, in reply to Nick:

    Nick, I feel for you and totally understand the frustration. Since day one I have been getting the XBOX profile linking error, "We encountered an issue connecting your App Hub account with your Xbox Live Profile. Please visit Xbox.com and update your contact information. After you have updated your contact information, please return to the App Hub (https://users.create.msdn.com/Register) to continue." I have had an app published on the Windows Mobile 6.x marketplace since Aug; now I can't view the details of this app. I completed work on my WP7 application 1.5 months ago and the first version is ready for submission to the marketplace, if only I could log in. You can imagine how frustrating all this can be; the issue has taken far too long to be fixed, and this has drained all my motivation. I have exchanged numerous mails with the Microsoft support team on this issue, and from the looks of it they really are trying their best; unfortunately, their best is not good enough for some of us.

    During the first week of December I was told that there would be an update happening to AppHub around the middle of December. I was hoping that the issue would be fixed, but it wasn't. After the update, the only change I noticed is that the xbox.com link on the error page now takes me to the correct link. Previously, this link used to take me to the 404 page you mentioned above. Out of desperation, I am now considering creating another developer account on AppHub with a new Live ID; even this I am not 100% sure will work. I asked the support team when the next update to AppHub was planned and got this reply: "We do not have a release date to announce for the next App Hub update at this time. In regards to the login issue you are experiencing, at this point the only solution would be to create a new account with a different Live ID, but make sure to go to xbox.com beforehand to get all the information in order on that side." I know it's an extra $99, and it's not that I can't afford it, but it doesn't feel right and I shouldn't have to be doing it in the first place. I have lost all hope of this issue being resolved.
    I went ahead and created a new dev account, and the ID verification was in progress when Shaun Taulbee of Microsoft, who has been really helpful in the forums, replied, saying: "If you find it necessary to pay again to create a new account due to a Microsoft problem, send in a support request asking for a refund and we'll review it (and likely approve it given the circumstances)." The thought of a refund made me happy, but I had my doubts. So once my second account was verified by GeoTrust, I applied for a refund through the developer dashboard by creating a support ticket. A couple of days later I got an email from Microsoft saying that the refund had been approved! Yay! A few days later the refund showed up on my bill. Well, thank you, Microsoft; it means a lot. I am glad it's over now. The new account works flawlessly.

    I would still like to get my first account working again and look at my app numbers for Win Mo 6.x, and probably transfer the credits to the new account somehow, but I'll save that for another day. If you've had similar problems with the AppHub, and had to create a new account to submit your app, I suggest you contact the support team and get your dollars refunded!

    Read the article

  • Death March

    - by Nick Harrison
    It is a horrible sight to watch a project fail. There are few things as bad. Watching a project fail, regardless of the reason, is almost like sitting in a room with a Dementor from Harry Potter. It will suck all of the life and joy out of the room. Nearly every project that I have seen fail has failed because of political or management challenges. Sometimes there are technical challenges that bring a project to its knees, but usually projects fail for less technical reasons. Here are a few observations about projects failing for political reasons.

    Both the client and the consultants have to be committed to seeing the project succeed. Put simply, you cannot solve a problem when the primary stakeholders do not truly want it solved. This could come from a consultant being more interested in extending the engagement. It could come from a client being afraid of what will happen to them once the problem is solved. It could come from disenfranchised stakeholders. Sometimes a project is beset on all sides. When you find yourself working on a project that has this kind of threat, do all that you can to constrain the disruptive influences of the bad apples. If their influence cannot be constrained, you truly have no choice but to move on to a new project.

    Tough choices have to be made to make a project successful. These choices will affect everyone involved in the project. They may involve users not getting a change request through that they want. Developers may not get to use the tools that they want. Everyone may have to put in more hours than they originally planned. Steps may be skipped. Compromises will be made, but if everyone stays committed to the end goal, you can still be successful. If individuals start feeling disgruntled or resentful of the compromises reached, the project can easily be derailed. When everyone is not working towards a common goal, it is like driving with one foot on the brake and one foot on the accelerator. Not only will you not get to where you are planning, you will also damage the car, and possibly the passengers as well.

    It is important to always keep the end result in mind. Regardless of the development methodology being followed, the end goal is not comprehensive documentation. In all cases, it is working software. Comprehensive documentation is nice, but useless if the software doesn't work. You can never get so distracted by the next goal that you fail to meet the current goal. Most projects are ultimately marathons. This means that the pace must be sustainable. Regardless of the temptations, you cannot burn the team alive. Processes will fail. Technology will get outdated. Requirements will change, but your people will adapt and learn and grow. If everyone on the team, from the most senior analyst to the most junior recruit, trusts and respects each other, there is no challenge that they cannot overcome. When everyone involved faces challenges with the attitude "This is my project and I will not let it fail" and "You are my teammate and I will not let you fail", you will in fact not fail. When you find a team that embraces this attitude, protect it at all cost.

    Edward Yourdon wrote a book called Death March. In it, he included a graph for categorizing Death March project types based on the happiness of the team and the chances of success. Chances are we have all worked on Death March projects. We will all most likely work on more Death March projects in the future.
    To a certain extent, they seem to be inevitable, but they should never be "suicide" or "ugly" projects. Ideally, they can all be "Mission Impossible" projects, where everyone works hard, has fun, and knows that there is a good chance that they will succeed. If you are ever lucky enough to work on such a project, you will know the sense of pride that comes from the eventual success. You will recognize a profound bond with the team that you worked with. Chances are it will change your life, or at least your outlook on life. If you have not already read this book, get a copy and study it closely. It will help you survive and make the most out of your next Death March project.

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently from other places where I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could have been a long time ago. The software is highly configurable, and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact, they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer), they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical-devices software background myself, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in.

    The director knows this is a problem, as the company is starting to grow all of a sudden, with some big clients coming on board and more smaller ones. I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases.

    I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job, Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches.

    With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made… and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration, it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev.

    I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released.
    Other feature branches can grab changes from other branches when they need or want them, so eventually all changes get absorbed into everyone else's. This fits very much the pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes that never get pulled across to other branches, and developers complaining about merge disasters… What are people's thoughts on this?

    A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is just about not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected, and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • Release build vs nightly build

    - by Tuomas Hietanen
    Hi! A typical solution is to have a CI (continuous integration) build running on a build server: it will analyze the source code, make a build (in debug mode), run tests, measure test coverage, etc. Another commonly known build type is the "nightly build": do the slow stuff, like creating code documentation, making a setup package, deploying to the test environment, and running automatic (smoke or acceptance) tests against the test environment, etc. Now, the question: Is it better to have a third, separate "release build"? Or is it better to run the nightly build in release mode and use that as the release? What are you using in your company? (The release build should also add some kind of tag to source control for the potential product version.)

    Read the article

  • Sarg Daily Report not generated

    - by Suleman
    Ubuntu 12.04 LTS, Squid 3, Sarg 2.3.2. The following is my crontab configuration:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    This does not generate the daily HTML report.

    Read the article

  • How to encrypt php folder under /var/www?

    - by sirchaos
    I need to encrypt the folder /var/www/test, which contains PHP files. The goal is to prevent any user from reading the PHP content; if the HD is mounted on another computer, /var/www/test should stay encrypted; and if the computer boots up without any user logged in, I would like no one to be able to access the data in /var/www/test. What is the correct approach for this? I've tried "ecryptfs-setup-private" as advised in How to encrypt /var/www?, yet it didn't work for me. I might have missed something: I tested by booting from an Ubuntu 12.04 installation disk and mounting the drive, and I was then able to access the /var/www/test content, which is exactly what I want to prevent. gnome-encfs isn't the way to go, since its decryption happens when a user logs on to the system, and I would like the system to keep working after a power failure etc. without anyone logged in. Please advise.

    Read the article

  • Agile Executives

    - by Robert May
    Over the years, I have experienced many different styles of software development. In the early days, most development was waterfall development. In the last few years, I've become an advocate of Scrum. As I talked about last month, many people have misconceptions about what Scrum really is. The reason why we do Scrum at Veracity is the difference it makes in the lives of the team doing Scrum. Software is for people, and happy, motivated people will build better software. However, not all executives understand Scrum and how to get the information they need from development teams that use Scrum. I think that these executives need a support system for managing Agile teams.

    Historical Software Management

    When Henry Ford pioneered the assembly line, I doubt he realized the impact he'd have on management through the ages. Historically, management was about managing the process of building things. The people were just cogs in that process. Like all cogs, they were replaceable. Unfortunately, most of the software industry followed this same style of management. Many of today's senior managers learned how to manage companies before software was a significant influence on how the company did business. Software development is a very creative process, but too many managers have treated it like an assembly line. Ideas go in, working software comes out, and we just have to figure out how to make sure that the ideas going in are perfect; then the software will be perfect.

    Lean Manufacturing

    In the manufacturing industry, Lean manufacturing has revolutionized Henry Ford's assembly line. Derived from the Toyota process, Lean places emphasis on always providing value for the customer. Anything the customer wouldn't be willing to pay for is wasteful. Agile is based on similar principles. We're building software for people, and anything that isn't useful to them doesn't add value. Waterfall development would have teams build reams and reams of documentation about how the software should work. Agile development dispenses with this work because excessive documentation doesn't add value. Instead, teams focus on building documentation only when it truly adds value to the customer. Many other Agile principles are similar.

    Playing Catch-up

    Just like in the manufacturing industry, many managers in the software industry have yet to understand the value of the principles of Lean and Agile. They think they can wrap the uncertainties of software development up in a nice little package and then just execute, usually followed by failure. They spend a great deal of time and money trying to exactly predict the future. That expenditure of time and money doesn't add value to the customer. Managers who understand Agile know that there is a better way. They will instead focus on the priorities of the near term in detail, and leave the future to take care of itself. They have very detailed two-week plans with less detailed quarterly plans. These plans are guided by a general corporate strategy that doesn't focus on the exact implementation details. These managers also think in smaller features rather than large functionality. This adds a great deal of value for customers, since the features that matter most are the ones that the team focuses on in the near term and is then able to deliver to the customers who are paying for them. Agile managers also realize that stale software is very costly.
    They know that keeping the technology in their software current is much less expensive and risky than large rewrites that occur infrequently, and they schedule time in each release for refactoring the existing software.

    Agile Executives

    Even though Agile is a better way, I've still seen failures using the Agile process. While some of these failures can be attributed to the team, most of them are caused by managers, not the team. Managers fail to understand what Agile is, how it works, and how to get the information that they need to make good business decisions. I think this is a shame. I'm very pleased that Veracity understands this problem and is trying to do something about it. Veracity is a key sponsor of Agile Executives. In fact, Galen is this year's acting president for Agile Executives. The purpose of Agile Executives is to help managers better manage Agile teams and see better success. Agile Executives is trying to build a community of executives that ranges from managers interested in Agile to managers who have successfully adopted Agile. Together, these managers can form a community of support and ideas that will help make Agile teams more successful.

    Helping Your Team

    You can help too! Talk with your manager and get them involved in Agile Executives. Help Veracity build the community. If your manager understands Agile better, he'll understand how to help his teams, which will result in software that adds more value for customers. If you have any questions about how you can be involved, please let me know.

    Read the article

  • Creating symlink for Postgres

    - by Edwin
    As a developer, I often SSH right into my local database, just to test my application before pushing my code. However, I find that every time I want to access Postgres, I have to type:

        postgres@ubuntu:~$ /usr/local/pgsql/bin/psql test

    whereas on my work machine, all I have to do is type:

        postgres@ubuntu:~$ psql --dbname=test --username=user

    I tried creating a symlink, which was successful, but whenever I try connecting through this shortcut, I get the following error message:

        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    How do I get this to work? In case it makes any difference, I'm using a self-compiled version of the 9.1.x series.

    Read the article

  • Unit testing - getting started

    - by higgenkreuz
    I am just getting started with unit testing, but I am not sure I really understand the point of it all. I have read tutorials and books on it, but I just have two quick questions. First, I thought the purpose of unit testing was to test the code we actually wrote. However, it seems that in order to just be able to run the test, we have to alter the original code, at which point we are not really testing the code we wrote, but rather the code we wrote for testing. Second, most of our code relies on external sources. Upon refactoring our code, however, even if the change would break things against the real external sources, our tests would still pass, since those sources are just mock-ups inside our test cases. Doesn't that defeat the purpose of unit testing? Sorry if I sound dumb here, but I thought someone could enlighten me a bit. Thanks in advance.
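    On the second question, the usual answer is that the seam for the test double is designed in rather than hacked in: the code under test depends on an abstraction, production wires in the real external source, and the test wires in a fake. A minimal sketch (TypeScript with a Jasmine spy; all names are invented for illustration):

        // The function depends on the PriceSource abstraction, so nothing in it
        // changes between production (a real HTTP client) and test (a spy).
        interface PriceSource {
          fetchPrice(sku: string): number;
        }

        function priceWithTax(source: PriceSource, sku: string, rate: number): number {
          return source.fetchPrice(sku) * (1 + rate);
        }

        describe("priceWithTax", () => {
          it("applies the tax rate to the fetched price", () => {
            const source = jasmine.createSpyObj<PriceSource>("PriceSource", ["fetchPrice"]);
            source.fetchPrice.and.returnValue(100);

            expect(priceWithTax(source, "sku-1", 0.2)).toBe(120);
            expect(source.fetchPrice).toHaveBeenCalledWith("sku-1");
          });
        });

    The unit test only pins down the arithmetic and the contract with the external source; whether the real source honors that contract is exactly what integration tests are for, so the two kinds of test complement rather than replace each other.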

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
    The yield operator (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it's not as widely used as it could (or should) be.

    I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let's look at it from the clean-code point of view, and see if it really helps us to keep our code understandable, reusable and testable.

    Let's say we want to iterate a tree and do something with its nodes, for instance calculate a sum of their values. The most elegant way would seem to be a recursive method performing a classic depth traversal and returning the sum:

        private int CalculateTreeSum(Node top)
        {
            int sumOfChildNodes = 0;
            foreach (Node childNode in top.ChildNodes)
            {
                sumOfChildNodes += CalculateTreeSum(childNode);
            }
            return top.Value + sumOfChildNodes;
        }

    "Do One Thing"

    Nevertheless, it violates one of the most important rules: "Do One Thing". Our method CalculateTreeSum does two things at the same time: it travels inside the tree, and it performs some computation (in this case, it calculates a sum). Doing two things in one method is definitely a bad thing, for several reasons:

    - Understandability: readability / refactoring.
    - Reusability: when overriding, there is no way to override the computation without copying the iteration code, and vice versa.
    - Testability: you are not able to test the computation without constructing the tree, and you are not able to test the correctness of the tree iteration on its own.

    I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two in one: computation and iteration? The only chance is to construct a test tree and assert the result of the method call (in our case, the sum) against our expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. To top it all off: according to Murphy's Law, the iteration will have a bug as well as the calculation, and both bugs in combination may cause the sum to be accidentally exactly the one you expect, so the test will PASS.

    Ok, let's use yield!

    That's why it is generally a very good idea not to mix "things" but to isolate them. So let's use yield:

        private int CalculateTreeSumClean(Node top)
        {
            IEnumerable<Node> treeNodes = GetTreeNodes(top);
            return CalculateSum(treeNodes);
        }

        private int CalculateSum(IEnumerable<Node> nodes)
        {
            int sumOfNodes = 0;
            foreach (Node node in nodes)
            {
                sumOfNodes += node.Value;
            }
            return sumOfNodes;
        }

        private IEnumerable<Node> GetTreeNodes(Node top)
        {
            yield return top;
            foreach (Node childNode in top.ChildNodes)
            {
                foreach (Node currentNode in GetTreeNodes(childNode))
                {
                    yield return currentNode;
                }
            }
        }

    The two methods do not know anything about each other. One contains only the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm (switch from depth traversal to breadth traversal, or use a stack or the visitor pattern instead of recursion), and this will not influence your calculation logic.
    And vice versa, you can replace the sum with a product, or do whatever you want with the node values; the calculation algorithm is not aware of working on some tree or graph.

    How about not using yield?

    Now let's ask the question: what if we did not have the yield operator? A brief look at the generated code gives us an answer. The compiler generates a roughly 150-line class to implement the iteration logic:

        [CompilerGenerated]
        private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
        {
            ...
            150 lines of generated code
            ...
        }

    Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature: we are lazy. Knowing and using a sexy construct like yield allows us to be lazy, write very few lines of code, and at the same time stay clean and do one thing per method. That's why I generally welcome using stuff like that.

    Note: The recursive depth-traversal algorithm used above is possibly the most compact one, but not the best from the performance and memory-utilization point of view. It was chosen to emphasize the other, primary aspects of this post.
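    The same iterate/compute split is available in other languages with generators; for readers who do not work in C#, here is a hedged sketch of the equivalent in TypeScript (the TreeNode shape is invented for illustration):

        // Iteration logic only: a depth-first generator over the tree.
        interface TreeNode {
          value: number;
          children: TreeNode[];
        }

        function* treeNodes(top: TreeNode): Generator<TreeNode> {
          yield top;
          for (const child of top.children) {
            yield* treeNodes(child); // recurse, re-yielding each descendant
          }
        }

        // Computation logic only: knows nothing about trees.
        function sumValues(nodes: Iterable<TreeNode>): number {
          let sum = 0;
          for (const node of nodes) {
            sum += node.value;
          }
          return sum;
        }

        const tree: TreeNode = {
          value: 1,
          children: [{ value: 2, children: [] }, { value: 3, children: [] }],
        };
        console.log(sumValues(treeNodes(tree))); // 6

    As in the C# version, either half can be tested or replaced in isolation: swap the generator for a breadth-first walk, or hand the same sequence to a different fold.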

    Read the article

  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite a while since we last blogged. This blog is written by Leif Lourie, a curriculum developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a curriculum developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years, and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release of one of the Service Delivery Platform products. Thanks to Leif, not only for this blog, but for making the VM described in the blog available.

    Cheryl Lander, Oracle Communications InfoDev Senior Director

    Being able to download, install and test a product within a day is often very important for the people doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test, that will probably happen anyway, but then maybe with a focus on integration instead of product features.

    We have a long tradition of creating complex software that is easy to install and test, and we have often been praised for the ease of getting our products up and running. One key to this has been that there has always been an installer for Windows, as well as for the production environments, which are usually Unix and Linux. And the Windows installer has, in most cases, been released for development and testing purposes.

    Lately, this has changed. Our products are now very seldom released for the Windows platform at all, and even the Linux versions are almost always released for 64-bit systems. This is creating problems for many of the people who want to try out our products, since few have access to a 64-bit Linux system of the right platform. Most of us are using a laptop with Windows or Mac OS. Some of us are using Linux or Solaris, but probably a non-certified distribution for the product you want to test.

    My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products. For this reason I have been using virtual machines for many years. I have a ready-made base system, with the necessary tools installed for all the products I create hands-on practices for. Whenever I start working on hands-on practices for a new product or a new version, I just copy the base system and start working with a clean slate. This saves me a lot of time!

    Now, I would like to start saving time for my favorite student: you! If you are using our products and regularly test new versions, you might benefit from the virtual machine that is now available on Oracle Technology Network: the Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop. VirtualBox can be installed on top of any of these platforms and gives you the ability to run virtual machines on your laptop.
    After downloading and starting the virtual machine you will also need to download the installation files for the product you want to test; for example, Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products. The freely available courses are listed in the Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by, we will make this collection bigger. Also, the goal is to update the virtual machine about one to two times per year, so you will always be able to get a well-maintained virtual machine for the Service Delivery Platform products from us.

    We Value Your Feedback

    If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels:

    - Email [email protected].
    - Post a comment on this blog.

    Thanks for reading!

    Read the article

  • Why isn't cron running my script?

    - by Jingqiang Zhang
    I want to use the Backup and Whenever gems to automatically back up my database. When I connect to the server by SSH as an added user and run backup perform -t my_backup, it works well. But the cron entry:

        0 22 * * * /bin/bash -l -c 'backup perform -t my_backup'

    does not run at 22:00. When I use cat /etc/crontab to check cron's config file, it contains:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    The shells /bin/bash and /bin/sh are different. What's the reason, and what should I do?

    Read the article

  • TFS 2008 to TFS 2010 moves and some issues

    - by Enrique Lima
    There have been many things going on this year around TFS. Most of them had to do with migrations (I don't call them upgrades, for the most part, since they involved new hardware and such). Many were implementations using the Conchango SfTS template (now EMC), but there were others that were CMMI or Agile 4.0. Everything would move just fine, no issues. That was, until you attempted to run Test Case Management or run the last configuration steps for Lab Management. There is an error stating that a project is not ready to run or integrate with Test or Lab Management. And while there was some documentation on how to adjust and update the Agile WITs to work with it, there was still some disconnect in making it work with CMMI. Now there is a great post on how to run the "fix" from end to end. Check the post here: TFS 2010: Enable Test Case Management for upgraded Team Projects

    Read the article
