Search Results

Search found 502 results on 21 pages for 'disaster'.

Page 13/21

  • How do I reattach to Ubuntu Server's 'do-release-upgrade' process?

    - by Alex Leach
    I accidentally pressed Ctrl+C during Ubuntu Server's do-release-upgrade process. I'd dropped to a shell to compare a .conf file in /etc/. When I pressed Ctrl+C, it asked whether I wanted to try to reattach to the upgrade process, but it failed to do so. So I quit, and now there's a hanging dpkg process holding onto the apt lock. This is a virtualised server with no GUI frontend... Is it possible to recover the upgrade process, or do I have to kill the dpkg process and start again? UPDATE: AFAICT, there was no way to reattach to the upgrade process. However, it wasn't a disaster at all. I killed the hanging dpkg process, then ran dpkg --configure -a. This walks you through reconfiguring all packages already installed on the system, tidying up any problems as it does so. After that, I used aptitude to upgrade the remaining packages, which had already been downloaded but hadn't been installed or configured.
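
    A minimal sketch of that recovery sequence, assuming the hung dpkg process has been identified (the PID is a placeholder):

        ps aux | grep dpkg                 # find the hung dpkg process holding the apt lock
        sudo kill <pid-of-hung-dpkg>       # placeholder PID from the line above
        sudo dpkg --configure -a           # reconfigure all unpacked-but-unconfigured packages
        sudo aptitude full-upgrade         # install the already-downloaded remainder of the upgrade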

    Read the article

  • Demoted domain controller and now IIS has permissions issues

    - by tladuke
    I have a machine that was a domain controller and everything else, including an IIS site. I built two new domain controllers, transferred the FSMO roles, waited a day, and then demoted the original domain controller. Now the IIS site says: HTTP Error 401.2 - Unauthorized: You are not authorized to view this page due to invalid authentication headers. I have a call in with the web app vendor, but maybe someone can guess what I need to fix. I haven't looked at IIS since this was installed and am pretty lost. I thought about restoring the machine from a backup, but I think that would be an Active Directory disaster, right? The server is Windows 2008 (not R2); the new DCs are 2008 R2.
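
    If the site was relying on an authentication scheme tied to the old DC, one hedged place to start looking is the site's authentication settings with appcmd (which ships with IIS 7); "Default Web Site" below is an assumed site name:

        REM "Default Web Site" is an assumed name; substitute your site's
        %windir%\system32\inetsrv\appcmd list config "Default Web Site" /section:system.webServer/security/authentication/anonymousAuthentication
        %windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost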

    Read the article

  • Run Wave Trusted Drive Manager from a bootable CD, recover a crashed encrypted SSD?

    - by TigerInCanada
    Is there a way to run Wave Trusted Drive Manager from a live CD to access a non-bootable SSD with Full Disk Encryption? http://www.wave.com/products/tdm.asp The crashed disk is a Samsung SSD PB22-JS3, 128 GB. It has bad blocks at 128-block intervals. If the SSD password could be unset, is sending the unit for disaster recovery possible? What might cause a nearly new SSD to crash in this way, and what is the probability of it happening again? We have other units in service and I could do without every laptop disk in the company crashing...

    Read the article

  • iPhone 3GS to Laptop Syncing

    - by Chris
    I use my iPhone calendar to store all my appointments and customer information etc., rather than my Outlook calendar on my laptop, most of the time. To the best of my knowledge I have never lost any info before, but after syncing the two devices today my calendar is now empty on both. I had thought that syncing meant taking the information on both devices, amalgamating it and populating both devices with the same info? Two questions: a) Is there any way of retrieving the lost information? (or am I sunk?) b) Is there any way of averting such a disaster in the future? Whilst I suspect that "Read the manual - Idiot!" would be the correct response, any help really is greatly appreciated!

    Read the article

  • PCI Compliance Book Suggestion

    - by Joel Weise
    I am always looking for good books on security, compliance and, of course, PCI. Here is one I think you will find very useful: "PCI Compliance, Third Edition: Understand and Implement Effective PCI Data Security Standard Compliance" by Branden Williams and Anton Chuvakin. [Fair disclosure - Branden and I work together on the Information Systems Security Association Journal's editorial board.] The primary reason I like this book is that the authors take a holistic, architectural approach to PCI compliance, and that to me is the safest and sanest way to approach PCI. Using such an architectural approach to PCI is, in my humble opinion, the underlying intent of PCI. Don't create a checklist of the PCI DSS and then map a solution to each item; that is a recipe for disaster. Instead, look at how the different components and their configurations work together in a synergistic fashion. In short, create a security architecture and governance framework (the ISO 27000 series is a good place to start) that begins with an evaluation of the requirements laid down in the PCI DSS, as well as your other applicable compliance, business and technical requirements. By developing an integrated security architecture you should be able to not only address current requirements, but also be in a position to quickly address future ones as well.

    Read the article

  • Uninstall SQL Server 2005 Express after Demoting the DC

    - by Walter Aman
    A Windows Server 2003 SP2 machine hosting a now-orphaned installation of SQL Server 2005 Workgroup was pressed into service as a DC in a disaster recovery scenario. It has since been demoted. The server also hosts legacy apps for which we lack reinstallation resources, hence our desire to preserve it as close to intact as possible while removing the orphaned roles. All efforts to remove SQL 2005 through Control Panel and ARPWrapper /remove fail with error 29528. Should I abandon this and leave the orphaned SQL dormant, or is it reasonable to remove it post-demote?
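
    If removal does stay on the agenda, one generic fallback sketch (not specific to error 29528) is to uninstall each SQL 2005 component directly through Windows Installer with verbose logging; the GUID below is a placeholder taken from the first command's output:

        wmic product where "name like '%SQL Server 2005%'" get name, identifyingnumber
        REM {PRODUCT-GUID-FROM-ABOVE} is a placeholder for the IdentifyingNumber listed
        msiexec /x {PRODUCT-GUID-FROM-ABOVE} /l*v %temp%\sql2005_remove.log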

    Read the article

  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5 GB), and after 20 years of (mostly) happy times using VS they have removed macros, one of the handiest features. The first thing I wanted to do when I upgraded my VS2010 project was to add a #elseif block to each file. This would usually be a simple case of Find in Files for the previous #elseif, then Ctrl+Shift+R to record a macro which would be: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V to paste the new code. Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I used macros every day, but it was a very useful feature. To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now I have used it myself, I find it difficult to tell one grey area of screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters. On the plus side, they have now added the ability to see WPF properties in the debugger... a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code. Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart device OS?), it does not float my boat. Rant over.

    Read the article

  • Does Windows incremental backup include system state backup?

    - by Kossel
    I'm managing my very small office server with Windows Server 2008. Since I have only one server, and the user group is really small, I made the first hard disk into two partitions: one (C:) for Windows and Active Directory, another (D:) for Tomcat and the database. I'm doing incremental backups of C: and D: daily to a second hard disk (E:) using Windows Server Backup. Is this enough to let me fully restore my server in case of disaster? I ask because I have read there is also a system state backup; do I also have to do that periodically in order to get AD back, or can I do a full bare-metal recovery from the incremental/full backups alone?
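
    For the AD piece specifically, Windows Server 2008 exposes a dedicated system state backup through wbadmin; a minimal sketch, assuming E: is acceptable as a target on your build:

        REM back up the system state (includes Active Directory) to drive E:
        wbadmin start systemstatebackup -backupTarget:E:
        REM list the recoverable backup versions afterwards
        wbadmin get versions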

    Read the article

  • Dell PowerEdge 6850 Degraded HDD

    - by Matt
    Good morning. We have a Dell PowerEdge 6850 with a degraded drive in the RAID array. I have never had to recover from such an issue, so any help or advice would be welcome. It hasn't affected the server at an operating-system level, but it has slowed down performance. I have a replacement drive in hand, but as this is our main database server I want to proceed with extreme caution. My options as I see them are: 1) just hot-swap the degraded drive with the new one, and the data will automatically re-sync and we are all back to normal (presumably this depends on the current RAID configuration?); 2) reading various comments online, I may need to re-configure the RAID array and re-build the broken drive, which screams disaster to me, the main worry being that I wipe other data. Option 1 would of course make my day. Thanks in advance.
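
    Before pulling anything, it may be worth confirming the array's RAID level and exactly which physical disk is degraded; a sketch assuming Dell OpenManage Server Administrator is installed and the array sits on controller 0:

        REM controller=0 is an assumption; adjust to your controller ID
        omreport storage vdisk controller=0
        omreport storage pdisk controller=0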

    Read the article

  • Clouds Around the World

    - by user12608550
    At the NIST Cloud Computing Workshop this week, representatives from Canada, China, and Japan presented on their cloud computing efforts. Some interesting points made: Canada is building a "Service Canada" cloud for all citizen services, but raised the issue of data location: cloud data must stay within Canada's borders, so they will not focus on public clouds where they don't know or can't control data location. Japan, in response to the massive destruction of the Great East Japan Earthquake, is building nationwide cloud services to support disaster relief, data recovery, and support for rebuilding new communities. US Ambassador Philip Verveer discussed the need for international cooperation and standards development to enable interoperability of cloud services, keeping in mind cultural and political differences. Additionally, an industry panel reported on cloud standards development, including some actual interoperability testing at http://www.cloudplugfest.org. Much of the first two days of the workshop covered progress and action plans around the 10 High-Priority Requirements to Further USG Agency Cloud Computing Adoption. Thursday's sessions will cover the work of the various NIST Cloud Computing Working Groups: Reference Architecture and Taxonomy; Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC); Cloud Security; Standards Roadmap; and Business Use Cases (see Working Groups of NIST Cloud Computing).

    Read the article

  • Caption Competition 2: The Captioning

    - by Simple-Talk Editorial Team
    Caption competition time again! What’s going on here then?   Some suggestions to get your comedy juices flowing: “So long chaps, hope you can continue to cope without a written disaster plan!” – said the only DBA “These shoes cost a lot of money, I’m not muddying them in the SAN Admin waters!” “Down Devs, down. Stay away from my database.” It had taken a lot of time and work, but finally Trevor’s out of office setup had the sense of occasion he needed. “Could you just add one small feature?” shouted upper management, hurtling by. Add your suggestion in the comments for a chance to win $50 in Amazon vouchers. Anything computer-related will help, but feel free to suggest anything. The competition runs until 5 p.m. (BST) on Friday the 16th of May.

    Read the article

  • Is SQL Azure useful without Windows Azure?

    - by KallDrexx
    I am currently doing some research to get preliminary IT cost projections for a project, and I was looking at Azure. Since this is a startup, I do not want to deal with the IT operations myself and instead am looking at having it all professionally hosted. I am looking at Azure due to the SLA assurances, the disaster recovery operations already in place, and the reliability. I'm playing with some numbers, and I am wondering if hosting my database on SQL Azure is an option while hosting the actual web page on another host, until I need the frontend scalability of Azure. Is this actually feasible, or will the latency in requests between the web host and Azure be too much, so I would be better off hosting both on the same service?
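
    One way to answer the latency question empirically is to run a trivial query against SQL Azure from the prospective web host and look at sqlcmd's timing statistics (the -p flag); the server, user, and database names below are placeholders:

        REM run from the prospective web host; all names are placeholders
        sqlcmd -S yourserver.database.windows.net -U youruser@yourserver -P yourpassword -d yourdb -Q "SELECT 1" -p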

    Read the article

  • What is the best way to exploit multiple cores when making multithreaded games?

    - by Keeper
    Many people suggest writing a program and then starting to optimize it. But I think that when it comes to multithreading on multicore systems, a little thinking ahead is required. I've read about using threads, and experienced it myself during some courses at the university (still a student). The big question is simple, but a bit abstract: what thread-related steps in game design do I need to take before implementation? Now trying to be more specific. Let's say, as an example, that I'm making a small board game (like Monopoly) that I want to be multithreaded. My goal is that this multithreaded game will exploit the best of the multicore system, let's say 4-6 cores (like in i7 processors). My answer to this question at the moment is one thread for each of these four basic components: GUI; user input/output; AI (the computer rival); and other game-related calculations (like the shortest path from A to B, or a level-up status change). I'm not an expert (yet!), and I'm sure there are better answers out there. Any suggestion, answer, or different approach will be helpful. Some thoughts: maybe splitting the main database is a good way... (or a total disaster...)

    Read the article

  • Twinview broken on upgrade to Ubuntu 10.10

    - by mapkyca
    I have been on 9.10 for over a year on the grounds that if it ain't broke, don't fix it. However, I had a spare weekend and figured it was probably about time... I performed an upgrade to 10.04, and everything seemed to proceed smoothly, so I took the plunge and went for 10.10. Disaster. My TwinView Nvidia display, which had been working perfectly, is now broken. On boot everything seems fine, but when X starts and the second monitor springs into life, the primary winks out and switches off, almost as if it's been put into an unsupported display mode. The system seems to think there's a second monitor (the Nvidia logo is split across the two screens), but it can't seem to start. Things I've tried: swapping the monitors (one is older than the other, and it's definitely the port, not the actual monitor); rolling back to an old xorg.conf from prior to the upgrade; installing a non-beta driver direct from Nvidia (this seems to start both monitors but then apparently stops boot and causes the second display to 'wink'; TwinView seems non-functional, both displays are mirrors); disabling EDID; disabling TwinView, logging in and attempting to use the Nvidia config tool to re-detect the monitors (the second monitor is falsely detected and won't go higher than 1024x768; selecting 'apply' causes one screen to go blank and the other to display garbage); googling for about 5 hours looking for similar problems, with none of the offered solutions working. I'm at a loss, and it is looking very much like I'm going to have to go through a time-consuming reinstall to downgrade back to the working 10.04. Any thoughts?
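
    One low-risk experiment, sketched under the assumption that the proprietary driver itself is healthy: back up the current config, let nvidia-xconfig regenerate a fresh TwinView layout, and keep the backup handy for a console-mode restore:

        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.pre-rebuild
        sudo nvidia-xconfig --twinview
        # if X still misbehaves, restore from a console:
        # sudo cp /etc/X11/xorg.conf.pre-rebuild /etc/X11/xorg.conf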

    Read the article

  • Fun programming or something else?

    - by gion_13
    I've recently heard about Android's isUserAGoat method and I didn't know what to think. At first I laughed my brains out, then I was embarrassed by my lack of professionalism and tried to look into it and see if it makes any normal sense. As it turns out it is a joke (as stated here), and it appears that other languages/APIs have these sorts of easter eggs implemented in their core. While I personally like them and feel they can be a breath of fresh air sometimes, I think they can also be both frustrating and confusing (and you begin to ask yourself: "can users be goats?" or "I get it! "goat" is slang for.... wait.."). My question is: are there any other examples of these kinds of programming jokes, and what are their intents? Should they be considered harmless or not (how do programmers feel about them)? Do they reach their goal (if any, other than a laugh)? Where do you draw the line between a good joke and a disaster? (What if the method was called isUserStupid?)

    Read the article

  • What is OpenSVC?

    - by sh-beta
    OpenSVC was just ported to the FreeBSD platform. The little blurb in that announcement intrigued me so I went to the OpenSVC website and found this: OpenSVC is a 'service' manager, as in clustered service manager, designed for real-world heterogeneous datacenters and large-scale operations orchestrator (disaster recovery, for example). Services are collections of resources (virtual machine, ip, disk groups, filesystems, file synchronizations, and application launchers). Services can be started, stopped and queried for status, providing a consistent command set for wildly different service integration types. Service configurations, status and logs are pushed to a central database coupled to a web front-end (collector). Services can be administered using the stand-alone GPLv2 software stack deployed on the nodes (nodeware), or through the web-front end. Plus some UML-type graphics. Which is all neat, but I still don't understand: what does it do? Am I just being dense? What's the use case for this system?

    Read the article

  • Determine compression ratio for Windows compressed drive

    - by munrobasher
    Is there a native Windows 7 way to display the overall compression ratio on a Windows compressed drive? As part of our disaster recovery process, we're copying some key system folders onto a 2 TB external hard drive, encrypted using TrueCrypt and copied using robocopy. The drive is compressed, and I'd like to see what kind of compression ratio we're getting and whether it's actually worth the performance overhead. I know that TreeSize can possibly do this (as mentioned in another post) but want an OS-native way if possible. Thanks, Rob.
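
    For NTFS compression (as opposed to TrueCrypt's), compact.exe is the built-in way to get the figure; its summary reports the overall ratio for everything under the path. A sketch, assuming the external drive is E::

        compact /s:E:\ > %temp%\compression_report.txt
        REM the report's final lines give total bytes of data vs. bytes stored,
        REM e.g. "The compression ratio is 1.8 to 1."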

    Read the article

  • Offsite Backup

    - by Grant Fritchey
    There was a recent weather event in the United States that seriously impacted our power grid and our physical well-being. Lots of businesses found that they couldn't get to their building or that their building was gone. Many of them got to do a full test of their disaster recovery processes. A big part of DR is having the ability to get yourself back online in a different location. Now, most of us are not going to be paying for multiple sites, but we need the ability to move to one if needed. The best thing you can do to start setting this up is to have an off-site backup. Want an easy way to automate that? I mean, yeah, you can go to tape or to a portable drive (much more likely these days) and then carry that home, but we've all got access to offsite storage these days: SkyDrive, DropBox, S3, etc. How about just backing up to there? I agree. Great idea. That's why Red Gate is setting up some methods around it. Want to take part in the early access program? Go here and try it out.
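
    A minimal sketch of the idea, assuming a local SQL Server and a Dropbox (or SkyDrive) client already syncing a folder offsite; the database name and path are placeholders:

        REM back up straight into a cloud-synced folder; names and paths are placeholders
        sqlcmd -Q "BACKUP DATABASE [YourDB] TO DISK = N'C:\Users\you\Dropbox\backups\YourDB.bak' WITH INIT"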

    Read the article

  • Latest (5 June 14) Updates to 10.04 Causing Multiple Problems

    - by user291780
    Apologies, the questions are very short, but the background isn't. I received a routine notification from the update manager a few days ago (I believe June 5th). I took a look and there was lots of Linux stuff, headers, etc., nothing obviously unusual. I'd received and applied a more extensive package set, kernel and all, a few weeks ago with no problem. On June 6, I pushed the upgrade button on the June 5th batch; nothing unusual, it needed a reboot, which I did after a full power down, and it came up fine. Gedit worked, the calculator worked. I started up Firefox, it came up, I selected the Bookmarks menu, and blam, it hesitated and then greyed out; when I tried to close it, I got the "process not responding" message. Undaunted, I tried to fire up Google Chrome... nothing on the screen or the process bar. I fired up the system monitor and indeed there were some "sleeping" Chrome processes "running". I powered down several times, but the same problems persist. Similar but worse story when I tried to fire up one of my virtual machines: VirtualBox came up fine, but when I tried to start one of my virtual machines I got a progress popup that I'd never seen before, which showed no progress past 20%. I uninstalled Oracle VirtualBox, reinstalled the latest and greatest, same result. I was also unable to log out or shut down once a virtual machine exhibited this behavior; I powered down manually, end of story. I've never seen such a bad result after an update. I'm running Ubuntu 10.04 LTS (Lucid Lynx), as I have been for a number of years. Please don't reply with "why don't you run some other version of Ubuntu"; that doesn't answer the questions below. Questions: Will there be a subsequent update that fixes this, and if so, when? If not, is there a way for me to get back to where I was before this disaster?
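
    While waiting on answers, one hedged way to see exactly what the June 5th batch changed, and to step a suspect package back, is the dpkg log plus a pinned downgrade (the package name and version below are placeholders):

        grep ' upgrade ' /var/log/dpkg.log | tail -n 50    # what the batch upgraded, with old and new versions
        sudo apt-get install somepackage=1.2.3-0ubuntu1    # placeholder name/version: reinstall a specific earlier build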

    Read the article

  • How to find dom0 name from hosted domU

    - by svigan
    I'm currently testing RHEL 5.3 with Xen between two servers in order to have a disaster recovery solution, so I'm playing with moving my domU from one dom0 server to the other. Unfortunately, when somebody else moves the domU, I don't have any clue where my domain is hosted. I'm wondering how I can find the dom0's name from inside the domU. I'm looking for something like the zonename command in a Solaris sparse zone. I checked inside /proc/xen but I don't see anything special except the dom0 kernel release. Does anybody know a way to find this?
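
    Xen doesn't publish the dom0 hostname to guests by default, but dom0 can drop it into each guest's XenStore area for the guest to read; a sketch, assuming the xenstore tools are installed on both sides (the domain ID is a placeholder):

        # on the dom0, for each running guest (<domid> is a placeholder):
        xenstore-write /local/domain/<domid>/data/dom0name "$(hostname)"
        # inside the domU (relative paths resolve to the guest's own XenStore home):
        xenstore-read data/dom0name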

    Read the article

  • Recover deleted files on Windows 2008 file server

    - by aniga
    We have recently been hit by a weird virus which marked all files and folders as system files/folders and hid them all, apart from some weird ones it created, including: ..exe, porn.exe, secret.exe, password.exe, etc. We have managed to restore the files with the attrib command, unhiding them and unmarking them as system files; however, we have noticed that we are missing some 4 to 5 folders, of which (based on my luck) two belong to the two most important clients we have. I am not sure if these folders were deleted by the worm/virus or by my colleagues who are not owning up to it, but the files are now gone. Worst of all, we do not have any backup whatsoever. (Yes, I know we should not have done that, but it is a lesson learned, and since last night we have created two forms of backup, one to an external device and one in the cloud; but I doubt any of that will help us now.) We have one Windows 2008 file server and four client computers running Windows 7. I would be grateful if anyone can help us recover from this disaster, which could potentially put us out of business.
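
    One long shot before giving up: if Shadow Copies were ever enabled on the data volume, the deleted folders may still be reachable through Previous Versions. A quick check, assuming the data lived on D::

        vssadmin list shadows /for=D:
        REM if snapshots are listed, open the parent folder's Properties -> Previous Versions to restore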

    Read the article

  • WiFi problems on several Ubuntu installations

    - by Rickyfresh
    Okay, this is the first time I have ever had to ask a question, as usually the Ubuntu community has answered everything already. But on this occasion many people are asking for the answer and not one good solution has become available so far, so someone please help, or I will have to install Windows on my son's and my girlfriend's PCs, and that would be a disaster as I am trying to help convince people to move from Windows. I installed 12.04 on three computers on the same day: a Dell Inspiron (works perfectly), a Toshiba Satellite, and a home-built desktop. The Dell works perfectly, but the other two either keep losing the connection to the wireless network or, even when they are connected, stop connecting to websites; for some reason Google searches work fine, but websites will not load when a link is clicked. So far people have recommended in other forums: removing Network Manager and installing wicd (didn't solve it); changing the MTU in the wireless settings (didn't solve it); all sorts of messing about with Firefox settings (this doesn't solve it, and even if it did, it would leave most average PC users scratching their heads and wishing they had stuck to Windows). The problem exists on two very different machines with different wireless cards, so I doubt it's a driver or hardware issue; also, many other Ubuntu users are having the same problem with a vast array of different machines and wireless cards. Can someone please give a good solution to this, as it's going to turn a lot of people away from Ubuntu if they cannot get it sorted. I would give some PC specs, but the two machines are vastly different, and the other people complaining of this problem also have very different systems, all showing the same problem.
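
    To help narrow down whether it is a driver or firmware issue after all, a quick diagnostic pass on each affected machine might look like this (all standard tools on 12.04):

        lspci -nnk | grep -iA3 network                   # which wireless chipset, and which driver is bound to it
        dmesg | grep -iE 'wlan|firmware' | tail -n 20    # recent driver or firmware errors
        nm-tool                                          # NetworkManager's view of the device and connection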

    Read the article

  • My partitions have been lost

    - by Mahmoud20070
    The problem begins here: http://askubuntu.com/questions/118224/i-have-big-problem-with-mount-partition. I tried to solve the problem by myself, but I messed up. I opened GParted from Parted Magic, opened my hard disk, and chose Create Partition Table, and the disaster was done. All the partitions on my hard disk (500 GB) have been removed. What can I do? I tried to use PTDD Partition Table Doctor 3.5, but it didn't help. This drive had important data that I need to get back.
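
    Since Create Partition Table only overwrites the partition table, not the data itself, TestDisk can usually rediscover and rewrite the old partition entries; a sketch, assuming the disk is /dev/sda:

        sudo apt-get install testdisk
        sudo testdisk /dev/sda     # /dev/sda is an assumption; check with sudo fdisk -l
        # in the menus: Analyse -> Quick Search (then Deeper Search if needed),
        # select the found partitions, then Write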

    Read the article

  • Redesigning my website has destroyed my SEO

    - by user20721
    Unfortunately I read an article on how to avoid destroying your website's SEO with a redesign AFTER it was too late! Here is the article: http://www.searchenginejournal.com/how-to-avoid-seo-disaster-during-a-website-redesign/42824/. On 20 November '12 we completely redesigned our www.retromodern.com.au. We get ALL our customers from our website, as we do not have a shop. Since that dreaded day a month ago, the phone has pretty much stopped, there are basically no emails, Google rankings are down, and Google Analytics traffic has halved. Yesterday I did some research into it, as I had no idea that a redesign of a website could have such a damaging effect; yes, I am a novice and use a WYSIWYG-type web builder. There is lots of info on how to AVOID this happening, BUT what do I do now that I have already made the mistake? Yesterday I reloaded my OLD site with my new pages in the background, hoping this would be a start. I really have no idea how to get out of this mess. Please please help. Thanks in advance. Monique
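
    The standard repair in articles like the one cited is a permanent (301) redirect from every old URL to its new equivalent, so Google transfers the old pages' rankings to the new ones; a hedged Apache .htaccess sketch, with made-up example paths:

        # .htaccess on an Apache host; the old/new paths below are hypothetical examples
        Redirect 301 /old-products.html http://www.retromodern.com.au/products/
        Redirect 301 /about-us.html http://www.retromodern.com.au/about/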

    Read the article

  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT (Oracle), by Alison Weiss

    Handle with Care

    Sound project management helps AmeriCares bring international aid to those in need. The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.” Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.” While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions.

    Efficiency Saves Lives

    One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely.
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
    Setting a Standard

    Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears.

    A Cultural Change

    Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott.
    “Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.” AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.” The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country. Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.” With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors.

    Mining Historical Data

    Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates. The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement.
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.” Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.” And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”

    Read the article
