Search Results

Search found 20993 results on 840 pages for 'automatic storage management asm diskgroup redundancy metadata'.

Page 21/840 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Temporary file storage - script for webserver

    - by Chris
    I'm looking for a script (preferably PHP) which I can use to temporarily exchange files. I had such a script before (it was written in Flash), but I just can't find it anymore. Here are the features I'm looking for: I have a logon which allows me to create "upload spaces". For each "upload space" I define how long the space will be available (until the files are deleted) and who can access it - by typing in an email address, so that the recipient gets a link to the "online space" along with a user ID and password. The other user then clicks on the link and uploads the file, and I (preferably) get an email whenever files change, plus an email before the content gets deleted. Three additional things: it should be open source, run on Linux (preferably LAMP), and no, I don't want to use Dropbox or similar :p Thanks in advance everybody, Cheers Chris
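
    A minimal sketch (not the off-the-shelf PHP script the question asks for) of the kind of data model such an "upload space" service needs - Python/SQLite with purely illustrative table, column, and URL names:

      # Minimal sketch of the "upload space" data model described above (hypothetical
      # schema and names; the question asks for PHP, this only illustrates the idea).
      import sqlite3, secrets
      from datetime import datetime, timedelta

      db = sqlite3.connect("upload_spaces.db")
      db.execute("""
          CREATE TABLE IF NOT EXISTS spaces (
              token      TEXT PRIMARY KEY,   -- random ID embedded in the emailed link
              owner      TEXT NOT NULL,      -- logon that created the space
              guest      TEXT NOT NULL,      -- email address invited to upload
              password   TEXT NOT NULL,      -- password sent to the guest
              expires_at TEXT NOT NULL       -- when the space and its files are deleted
          )""")

      def create_space(owner, guest_email, days_available):
          """Create an upload space and return the values to email to the guest."""
          token = secrets.token_urlsafe(16)
          password = secrets.token_urlsafe(8)
          expires = (datetime.utcnow() + timedelta(days=days_available)).isoformat()
          db.execute("INSERT INTO spaces VALUES (?, ?, ?, ?, ?)",
                     (token, owner, guest_email, password, expires))
          db.commit()
          return {"link": f"https://example.org/space/{token}", "password": password}

      def expired_spaces():
          """Spaces whose deadline has passed; a cron job would delete their files
          and send the 'content is about to be deleted' notification beforehand."""
          now = datetime.utcnow().isoformat()
          return db.execute("SELECT token, owner FROM spaces WHERE expires_at < ?",
                            (now,)).fetchall()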

    Read the article

  • Image storage social network (Host plan)

    - by Samir
    I'm wondering what the best way is to host images on a social network site. Let's say that I expect my social network to reach 500,000 users in 2 years' time. That would mean that if every user uploaded about 100 images and every image is 1 MB, I would need: 500,000 * 100 * 1 MB = 50,000,000 MB, which is 50 terabytes. I'm not sure how best to set up my hosting plan in order to have a solid basis for storing my images and eventually video files as well. Which hosting plan would you recommend starting with, and how can I expand the plan later?
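
    A quick back-of-the-envelope check of the figures quoted above (the numbers are the question's own assumptions, not measurements):

      # Back-of-the-envelope check of the storage estimate above.
      users = 500_000
      images_per_user = 100
      mb_per_image = 1

      total_mb = users * images_per_user * mb_per_image
      total_tb = total_mb / 1_000_000          # 1 TB = 1,000,000 MB (decimal units)

      print(f"{total_mb:,} MB ~= {total_tb:.0f} TB")          # 50,000,000 MB ~= 50 TB

      # Spread over 24 months, this is roughly the monthly growth to plan for:
      print(f"~{total_tb / 24:.1f} TB of new image data per month")   # ~2.1 TB/month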

    Read the article

  • media storage social network (Host plan)

    - by Samir
    I'm wondering what the best way is to host media for a social network site. Let's say that I expect my social network to reach 500,000 users in 2 years' time. I'm not sure how best to set up my hosting plan in order to have a solid basis for storing media files. Which hosting plan would you recommend starting with, and how can I expand the plan later?

    Read the article

  • Page allocation failures on iSCSI storage

    - by Dave
    We have a CentOS 6.3 iscsi server (16GB RAM) running on Infiniband bus (ipoib). When the load is high I can see multiple errors: Sep 3 23:22:20 stor4 kernel: tgtd: page allocation failure. order:2, mode:0x20 Sep 3 23:22:20 stor4 kernel: Pid: 3637, comm: tgtd Not tainted 2.6.32 #1 Sep 3 23:22:20 stor4 kernel: Call Trace: Sep 3 23:22:20 stor4 kernel: [] ? __alloc_pages_nodemask+0x77f/0x940 Sep 3 23:22:20 stor4 kernel: [] ? kmem_getpages+0x62/0x170 Sep 3 23:22:20 stor4 kernel: [] ? fallback_alloc+0x1ba/0x270 Sep 3 23:22:20 stor4 kernel: [] ? cache_grow+0x2cf/0x320 Sep 3 23:22:20 stor4 kernel: [] ? ____cache_alloc_node+0x99/0x160 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __kmalloc+0x189/0x220 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __pskb_pull_tail+0x2aa/0x360 Sep 3 23:22:20 stor4 kernel: [] ? tcp_init_tso_segs+0x37/0x50 Sep 3 23:22:20 stor4 kernel: [] ? dev_queue_xmit+0x4bb/0x6f0 Sep 3 23:22:20 stor4 kernel: [] ? neigh_connected_output+0xbd/0x100 Sep 3 23:22:20 stor4 kernel: [] ? ip_finish_output+0x237/0x310 Sep 3 23:22:20 stor4 kernel: [] ? ip_output+0xb8/0xc0 Sep 3 23:22:20 stor4 kernel: [] ? __ip_local_out+0x9f/0xb0 Sep 3 23:22:20 stor4 kernel: [] ? ip_local_out+0x25/0x30 Sep 3 23:22:20 stor4 kernel: [] ? ip_queue_xmit+0x190/0x420 Sep 3 23:22:20 stor4 kernel: [] ? sock_aio_write+0x167/0x180 Sep 3 23:22:20 stor4 kernel: [] ? tcp_transmit_skb+0x3fe/0x7b0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_write_xmit+0x1fb/0xa20 Sep 3 23:22:20 stor4 kernel: [] ? __tcp_push_pending_frames+0x30/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_push_pending_frames+0x33/0x40 Sep 3 23:22:20 stor4 kernel: [] ? do_tcp_setsockopt+0x3d6/0x480 Sep 3 23:22:20 stor4 kernel: [] ? tcp_setsockopt+0x2a/0x30 Sep 3 23:22:20 stor4 kernel: [] ? sock_common_setsockopt+0x14/0x20 Sep 3 23:22:20 stor4 kernel: [] ? sys_setsockopt+0x7f/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? 
system_call_fastpath+0x16/0x1b Sep 3 23:22:20 stor4 kernel: Mem-Info: Sep 3 23:22:20 stor4 kernel: Node 0 DMA per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 23 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 181 Sep 3 23:22:20 stor4 kernel: Node 0 Normal per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 171 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 29 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: active_anon:1875 inactive_anon:2473 isolated_anon:0 Sep 3 23:22:20 stor4 kernel: active_file:1243637 inactive_file:2505055 isolated_file:0 Sep 3 23:22:20 stor4 kernel: unevictable:0 dirty:268338 writeback:0 unstable:0 Sep 3 23:22:20 stor4 kernel: free:86050 slab_reclaimable:132377 slab_unreclaimable:23744 Sep 3 23:22:20 stor4 kernel: mapped:1293 shmem:222 pagetables:720 bounce:0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA free:15732kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15332kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 2172 16060 16060 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 free:107544kB min:18268kB low:22832kB high:27400kB active_anon:468kB inactive_anon:2364kB active_file:566208kB inactive_file:976112kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2224900kB mlocked:0kB dirty:96816kB writeback:0kB mapped:908kB shmem:12kB slab_reclaimable:176940kB slab_unreclaimable:968kB kernel_stack:64kB pagetables:192kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 13887 13887 Sep 3 23:22:20 stor4 kernel: Node 0 Normal free:220924kB min:116772kB low:145964kB high:175156kB active_anon:7032kB inactive_anon:7528kB active_file:4408340kB inactive_file:9044108kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14220800kB mlocked:0kB dirty:976536kB writeback:0kB mapped:4264kB shmem:876kB slab_reclaimable:352568kB slab_unreclaimable:94008kB kernel_stack:2048kB pagetables:2688kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 0 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA: 1*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15732kB Sep 3 23:22:20 stor4 kernel: Node 0 DMA32: 16305*4kB 4381*8kB 353*16kB 8*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 107900kB Sep 3 23:22:20 stor4 kernel: Node 0 Normal: 14548*4kB 14808*8kB 2420*16kB 31*32kB 5*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 220784kB Sep 3 23:22:20 stor4 kernel: 3748822 total pagecache pages Sep 3 23:22:20 stor4 kernel: 0 pages in swap cache Sep 3 23:22:20 stor4 kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 3 23:22:20 stor4 kernel: Free swap = 975864kB Sep 3 23:22:20 stor4 kernel: Total swap = 975864kB Sep 3 23:22:20 stor4 kernel: 4194303 pages RAM Sep 3 23:22:20 stor4 kernel: 126915 pages reserved Sep 3 23:22:20 stor4 kernel: 3753534 pages shared Sep 3 23:22:20 stor4 kernel: 213500 pages non-shared TCP stack and VM config: net.core.rmem_max = 83886080 net.core.wmem_max = 83886080 net.core.rmem_default = 65536 net.core.wmem_default = 65536 net.ipv4.tcp_rmem = 40960 1048560 4194304 net.ipv4.tcp_wmem = 40960 196608 4194304 net.ipv4.tcp_mem = 16388608 16388608 16388608 vm.min_free_kbytes=135168 Additional tweaks: /sbin/blockdev --setra 16384 /dev/sdb echo 2048 /sys/block/sdb/queue/nr_requests Where might the problem be? Thank you.
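
    For what it's worth, the failures above are order-2 allocations (2^2 contiguous pages), so one thing worth watching is how fragmented each zone's free memory gets over time; a small monitoring sketch in Python that reads /proc/buddyinfo:

      # Quick fragmentation check related to the "order:2" failures above: each column
      # of /proc/buddyinfo is the number of free blocks of size 2^order pages, so
      # near-zero counts in the order>=2 columns mean order-2 allocations can fail
      # even though plenty of total memory is free. (Monitoring sketch only.)
      def read_buddyinfo(path="/proc/buddyinfo"):
          zones = []
          with open(path) as f:
              for line in f:
                  # e.g. "Node 0, zone   Normal  14548 14808 2420 31 5 0 0 0 0 0 1"
                  node, rest = line.split("zone", 1)
                  fields = rest.split()
                  zone = fields[0]
                  free = [int(c) for c in fields[1:]]
                  zones.append((node.strip().rstrip(","), zone, free))
          return zones

      for node, zone, free in read_buddyinfo():
          order2_plus = sum(free[2:])
          print(f"{node} {zone:>8}: free blocks of order 2 or higher = {order2_plus:6d} "
                f"(per-order counts: {free})")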

    Read the article

  • What are some solutions for cloud storage [closed]

    - by jaja
    I don't want to move my HDDs around much, as that would sooner or later result in one of them being dropped and failing... they are also not portable (there are many of them) and inconvenient to use with a laptop. So, this is all I really need: 1) the ability to access files as if the HDDs were directly connected to the computer, meaning I don't have to "transfer" files before using them (more like streaming them), and easy access in general; 2) low cost.

    Read the article

  • Creating bootable Fedora USB with persistent storage

    - by dooffas
    I am attempting to write the full Fedora 19 x86_64 DVD ISO to a USB drive and have a separate partition on it for a kickstart file / other media that will be installed during the kickstart process. With the Ubuntu Server 12 ISO, you can simply dd the ISO to the USB drive: dd if=/path/to/iso of=/dev/sdb Once the ISO has been written, open GParted and create an ext2 partition in the unallocated space. However, this does not seem to work with the Fedora ISO. When loading the USB drive in GParted I get a warning and an error: Warning: The driver descriptor says the physical block size is 2048 bytes, but Linux says it is 512 bytes. Error: The partition's data region doesn't occupy the entire partition. Ignoring both of these errors allows GParted to load the USB drive, however it shows a blank drive with no partition table. Has anyone come across this before? From what I have found, it may have something to do with the fact that Fedora uses isohybrid.
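
    A quick way to see the isohybrid layout the poster suspects is to look at the first sector of the ISO itself - if dd copies an MBR partition table onto the stick, that is what GParted is reacting to. A minimal Python sketch (reads the ISO, changes nothing):

      # Sketch: check whether an ISO is an isohybrid image, i.e. whether it already
      # carries an MBR boot signature and partition table that dd will copy verbatim
      # onto the USB stick. Reads only the first 512-byte sector.
      import struct, sys

      def describe_mbr(path):
          with open(path, "rb") as f:
              sector = f.read(512)
          if len(sector) < 512 or sector[510:512] != b"\x55\xaa":
              print("No MBR boot signature - not an isohybrid image")
              return
          print("MBR boot signature found (isohybrid layout)")
          for i in range(4):                              # four primary partition slots
              entry = sector[446 + i * 16: 446 + (i + 1) * 16]
              ptype = entry[4]
              start, size = struct.unpack("<II", entry[8:16])
              if ptype:
                  print(f"  partition {i + 1}: type 0x{ptype:02x}, "
                        f"start LBA {start}, {size} sectors")

      describe_mbr(sys.argv[1])   # e.g. python mbr_check.py Fedora-19-x86_64-DVD.iso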

    Read the article

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics.) The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is no consensus about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("Code Smells", etc.). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"? So that's my question: What would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

    Read the article

  • What are the best ways to store Graphs in persistent storage

    - by nicoslepicos
    I am wondering what the best ways are to store graphs in persistent storage, for later analysis, search, clustering, etc. I see Neo4j as one option, and I am curious whether there are other graph databases available. Does anyone have any insight into how larger social networks store their graph-based data (or how other sites that need to store graph-like models, e.g. RDF, do it)? What about options like Cassandra or MySQL?
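
    For reference, the plain relational fallback hinted at by the MySQL option is a node table plus an edge list; a minimal Python/SQLite sketch with illustrative names:

      # Minimal relational fallback for graph persistence (sketch): a node table plus
      # an edge list, which is what the MySQL option in the question boils down to.
      import sqlite3

      db = sqlite3.connect("graph.db")
      db.executescript("""
          CREATE TABLE IF NOT EXISTS nodes (
              id    INTEGER PRIMARY KEY,
              label TEXT
          );
          CREATE TABLE IF NOT EXISTS edges (
              src  INTEGER REFERENCES nodes(id),
              dst  INTEGER REFERENCES nodes(id),
              kind TEXT,                         -- e.g. 'follows', or an RDF predicate
              PRIMARY KEY (src, dst, kind)
          );
      """)

      db.executemany("INSERT OR IGNORE INTO nodes VALUES (?, ?)",
                     [(1, "alice"), (2, "bob"), (3, "carol")])
      db.executemany("INSERT OR IGNORE INTO edges VALUES (?, ?, ?)",
                     [(1, 2, "follows"), (2, 3, "follows"), (1, 3, "follows")])
      db.commit()

      # One-hop neighbourhood query - fine for lookups, but multi-hop traversals are
      # where dedicated graph stores such as Neo4j start to pay off.
      rows = db.execute("""SELECT n.label FROM edges e
                           JOIN nodes n ON n.id = e.dst
                           WHERE e.src = ?""", (1,)).fetchall()
      print([r[0] for r in rows])   # ['bob', 'carol']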

    Read the article

  • Efficient storage in C#.net App

    - by Tommy
    I'm looking for the fastest, least memory-consuming, stand-alone storage method available for large amounts of data for my C# app. My initial thoughts: SQL: no, not stand-alone. XML in a flat file: no, it takes too long to parse large amounts of data. Other options? Basically what I'm looking for is a way to load the data when my application loads, keep all the data in my app, and, when the data in my app changes, just update the storage location.

    Read the article

  • android internal phone storage

    - by John
    How can you retrieve your phone's internal storage from an app? I found MemoryInfo, but it seems that it returns information about the memory used by your currently running tasks. I am trying to get my app to report how much internal phone storage is available.

    Read the article

  • Efficient storage in .Net App

    - by Tommy
    I'm looking for the fastest, least memory-consuming, stand-alone storage method available for large amounts of data for my C# app. My initial thoughts: SQL: no, not stand-alone. XML in a flat file: no, it takes too long to parse large amounts of data. Other options? Basically what I'm looking for is a way to load the data when my application loads, keep all the data in my app, and, when the data in my app changes, just update the storage location.

    Read the article

  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss Handle with Care Sound project management helps AmeriCares bring international aid to those in need. The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.” Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.” While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions. Efficiency Saves Lives One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely. 
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
Setting a Standard Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears. A Cultural Change Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott. 
“Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.” AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.” The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country. Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.” With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors. Mining Historical Data Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates. The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement. 
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.” Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.” And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”

    Read the article

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based upon your experience, and also which ones you would not recommend based upon your experience. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients. The issue we are currently facing is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution. Thus very large and complicated solutions like System Center are most likely out. I have recently been looking at Dell's Kace K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for environments like mine. However, I'm not sure if it is the best solution. I've also looked a bit at Shavlik's Netchk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product. However, it looks like they might have a very good patch database. My question is this: What are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with both Mac and Windows.

    Read the article

  • SQL SERVER – Another lesser known feature of SQL Server Management Studio 2012 – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund ( blog | twitter | facebook ). He presented a fantastic session at our last UG meeting and there were lots of requests from attendees that he blog about it. Well, here is the blog post about that very popular UG session. Let us read the entire blog post in Balmukund's own voice. In one of my previous guest blogs on SQL Authority, I wrote about the “Additional Connection Parameter” tab of the login screen in SQL Server Management Studio (a.k.a. SSMS). Along similar lines, this blog is going to show a little-known new feature of the main login screen (“Connect to Server”) of SSMS 2012. You might have seen the screen below countless times and might wonder what there is to blog about in such a simple screen. Well, continue reading and you will get the answer. Many times, DBAs have to log in to a production server from a non-regular machine, maybe a developer's workstation. You log in to SQL, do your work, and close Management Studio. Did you know that your server name is saved by Management Studio? Of course, it is a very useful feature, because you may not want to type the server name/IP address every time. Whichever servers you have connected to are stored by Management Studio. But sometimes it's annoying! What would you do if you wanted SQL Server Management Studio to forget “all” the servers listed in the Server name drop-down? To do that, you need to know how and where the list is stored. You can use one of my favorite tools from Sysinternals, called Process Monitor (also known as ProcMon), and easily figure out that it is stored in a file under your Windows user profile. Below is the file for SQL 2008 R2 Management Studio: %appdata%\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin For SQL Server 2012, here is what we can see in ProcMon, so the path is %appdata%\Microsoft\Microsoft SQL Server\110\Tools\Shell\SqlStudio.bin So far, you might wonder: where is the new feature? Many users have asked me how to delete entries from the SSMS “Connect to Server” server name list. Well, unofficially, you can delete the file directly, which we found via ProcMon. Note that deleting the file to get rid of the server list is not officially supported by Microsoft. A better way to achieve this is provided in SSMS 2012. To delete servers from the list, highlight the name you want to delete (via keyboard or mouse) and then press the Delete key. Multi-select is not possible, so entries have to be deleted one by one, but you can delete as many as you want. I have deleted a few from the first screenshot taken, and here is the modified version. This is not available in SQL 2008 R2 and earlier versions; it came from feedback given to the SQL Server product group. Hope you have learned something new today! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
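
    For anyone scripting the unofficial route the post mentions (deleting SqlStudio.bin rather than using the new SSMS 2012 delete-key feature), here is a small Python sketch; the paths are the ones quoted above, and backing the file up first is an extra precaution, not something the post prescribes:

      # Sketch of the unofficial "make SSMS forget everything" route: remove
      # SqlStudio.bin from the user profile (close SSMS first; a backup is kept).
      import os, shutil

      CANDIDATES = [
          r"%APPDATA%\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin",  # SSMS 2008 R2
          r"%APPDATA%\Microsoft\Microsoft SQL Server\110\Tools\Shell\SqlStudio.bin",  # SSMS 2012
      ]

      for raw in CANDIDATES:
          path = os.path.expandvars(raw)
          if os.path.exists(path):
              shutil.copy2(path, path + ".bak")   # keep a backup of the saved server list
              os.remove(path)
              print("removed", path)
          else:
              print("not found", path)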

    Read the article

  • How to refactor while keeping accuracy and redundancy?

    - by jluzwick
    Before I ask this question I will preface it with our environment. My team consists of 3 programmers working on multiple different projects, so our use of testing is mostly limited to very general black-box testing. Also take the following assumptions: unit tests will eventually be written, but I'm under strict orders to refactor first; common Test-Driven Development techniques don't apply given this environment; and my time is limited. I understand that if this were done correctly, our team would actually save money in the long term by building unit tests beforehand. I'm about to refactor a fairly large, critical portion of the code. While I believe my code will work accurately when done and after our black-box testing, I realize there will be new data that the new code might not be able to handle. What I want to know is how to keep the old code, which functions 98% of the time, so that we can call those subroutines in case the new code doesn't work properly. Right now I'm thinking of separating the old code into a separate class file and adding a variable to our config that tells the program which code to use. Is there a better way to handle this? NOTE: We do use revision control and we have archived builds, so the client could always revert to a previous build, but I would like to see if there is a decent way of doing this besides reverting - I want the client to still be able to use the other new functionality delivered in the new build. Edit: While I agree I will need to write unit tests for this, I don't believe I will capture everything with them. I'm looking for ways to easily revert to the old, functional code should anything happen. While I know this is a poor practice, I'm planning on removing the fallback after our team can guarantee that the new code works to the same standard as the old.
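
    The config-variable idea described in the question, sketched in Python with illustrative names (the same shape works in most languages): prefer the new code, keep the legacy routine callable, and fall back when the new code fails:

      # Sketch of the config-switch-plus-fallback idea from the question: the names
      # are illustrative, not from the actual code base.
      import logging

      USE_NEW_IMPLEMENTATION = True   # the "variable in our config"

      def process_legacy(record):
          """Old subroutine that works ~98% of the time; kept until retired."""
          return {"id": record["id"], "total": sum(record["values"])}

      def process_refactored(record):
          """New implementation produced by the refactoring effort."""
          return {"id": record["id"],
                  "total": sum(v for v in record["values"] if v is not None)}

      def process(record):
          if not USE_NEW_IMPLEMENTATION:
              return process_legacy(record)
          try:
              return process_refactored(record)
          except Exception:
              # Log loudly so every fallback becomes a bug report against the new code.
              logging.exception("new code failed for %r, falling back to legacy", record["id"])
              return process_legacy(record)

      print(process({"id": 1, "values": [1, 2, None]}))   # handled by the new code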

    Read the article

  • Why shibboleth IdP idp-metadata.xml recommends 8443 for SOAP?

    - by toma
    After running install.sh for the 2.4.0 Shibboleth Identity Provider, the idp-metadata.xml file is created with the endpoints below. Why is that? Is the standard HTTPS/443 port not secure enough? <ArtifactResolutionService Binding="urn:oasis:names:tc:SAML:1.0:bindings:SOAP-binding" Location="https://idp.example.com:8443/idp/profile/SAML1/SOAP/ArtifactResolution" index="1"/> <ArtifactResolutionService Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP" Location="https://idp.example.com:8443/idp/profile/SAML2/SOAP/ArtifactResolution" index="2"/> <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP" Location="https://idp.example.com:8443/idp/profile/SAML2/SOAP/SLO" /> <AttributeService Binding="urn:oasis:names:tc:SAML:1.0:bindings:SOAP-binding" Location="https://idp.example.com:8443/idp/profile/SAML1/SOAP/AttributeQuery"/> <AttributeService Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP" Location="https://idp.example.com:8443/idp/profile/SAML2/SOAP/AttributeQuery"/> Thanks, Tamas

    Read the article

  • Resolving data redundancy up front

    - by okeofs
    Introduction As all of us do when confronted with a problem, the resource of choice is to ‘Google it’. This is where the plot thickens. Recently I was asked to stage data from numerous databases which were to be loaded into a data warehouse. To make a long story short, I was looking for a way to obtain the table names from each database, to ascertain potential overlap. As the source data comes from a SQL database created from dumps of a third-party product, one could say that there were +/- 95 tables for each database. Yes, I know the first instinct is to use the system stored procedure "exec sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'". However, if one stops to think about this, it would be nice to have all the results in a temporary or disc-based table, which in itself implies additional labour. This said, I decided to ‘re-invent’ the wheel. The full code sample may be found at the bottom of this article. Define a few temporary tables and variables declare @SQL varchar(max); declare @databasename varchar(75) /* drop table ##rawdata3 drop table #rawdata1 drop table #rawdata11 */ -- A temp table to hold the names of my databases CREATE TABLE #rawdata1 ( database_name varchar(50), database_size varchar(50), remarks Varchar(50) ) -- A temp table with the same database names as above, HOWEVER using an identity number (recNo) as a loop variable. You will note below that I loop through until I reach 25, as at that point the system databases, the reporting server database, etc. begin; 1-24 are user databases, which are really what I was looking for. Whilst NOT the best solution, it works, and the code was meant as a quick-and-dirty exercise. CREATE TABLE #rawdata11 ( recNo int identity(1,1), database_name varchar(50), database_size varchar(50), remarks Varchar(50) ) -- My output table showing the database name and table name CREATE TABLE ##rawdata3 ( database_name varchar(75), table_name varchar(75) ) Insert the database names into a temporary table I pull the database names using the system stored procedure sp_databases: INSERT INTO #rawdata1 EXEC sp_databases Go Insert the results from #rawdata1 into a table containing a record number (#rawdata11) so that I can LOOP through the extract: INSERT into #rawdata11 select * from #rawdata1 We now declare 3 more variables: @kounter is used to keep track of our position within the loop. @databasename is used to keep track of the ‘current’ database name being used in the current pass of the loop, as in order to obtain the tables for that database we need to issue a ‘USE’ statement, an insert command and other related code parts. This is the challenging part. @sql is a varchar(max) variable used to contain the ‘USE’ statement PLUS the ‘insert’ code statements. We now initialize @kounter to 1. declare @kounter int; declare @databasename varchar(75); declare @sql varchar(max); set @kounter = 1 The Loop The astute reader will remember that the temporary table #rawdata11 contains our database names and each ‘database row’ has a record number (recNo). I am only interested in record numbers under 25. I now set the value of the variable @DatabaseName (see below). Note that I used the row number as part of the predicate. Now, knowing the database name, I can create dynamic T-SQL to be executed using the sp_sqlexec stored procedure (see the code below). 
    Finally, after all the tables for that given database have been placed in the temporary table ##rawdata3, I increment the counter and continue on. Note that I used a global temporary table to ensure that the result set persists after the termination of the run. At some stage, I plan to redo this part of the code, as global temporary tables are not really an ideal solution. WHILE (@kounter < 25) BEGIN select @DatabaseName = database_name from #rawdata11 where recNo = @kounter set @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 ' + ' SELECT table_catalog, Table_name FROM information_schema.tables' exec sp_sqlexec @Sql SET @kounter = @kounter + 1 END The full code extract Here is the full code sample. declare @SQL varchar(max); declare @databasename varchar(75) /* drop table ##rawdata3 drop table #rawdata1 drop table #rawdata11 */ CREATE TABLE #rawdata1 ( database_name varchar(50), database_size varchar(50), remarks Varchar(50) ) CREATE TABLE #rawdata11 ( recNo int identity(1,1), database_name varchar(50), database_size varchar(50), remarks Varchar(50) ) CREATE TABLE ##rawdata3 ( database_name varchar(75), table_name varchar(75) ) INSERT INTO #rawdata1 EXEC sp_databases go INSERT into #rawdata11 select * from #rawdata1 declare @kounter int; declare @databasename varchar(75); declare @sql varchar(max); set @kounter = 1 WHILE (@kounter < 25) BEGIN select @databasename = database_name from #rawdata11 where recNo = @kounter set @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 ' + ' SELECT table_catalog, Table_name FROM information_schema.tables' exec sp_sqlexec @Sql SET @kounter = @kounter + 1 END select * from ##rawdata3 where table_name like '%SalesOrderHeader%'

    Read the article

  • Evidence for automatic browsing - Log file analysis

    - by Nilani Algiriyage
    I'm analyzing web server logs in both Apache and IIS log formats. I want to find evidence of automatic browsing by web robots, spiders, bots, etc. I used the Python robot-detection 0.2.8 package for detecting robots in my log files, but I know there may be other robots (automated programs) which have traversed the web site that robot-detection cannot identify. So I want to ask: are there any specific clues in log files that human users do not leave but automated software does? Do bots follow a specific navigation pattern? I saw some requests for favicon.ico - does that indicate automated browsing? I found this article and this question with some valuable points.
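
    A sketch of a few heuristics commonly suggested for this kind of question - robots.txt requests, bot-like user agents, and referer behaviour - for the Apache combined log format; the thresholds and signal words are illustrative, not a reliable classifier:

      # Bot-signal heuristics for the Apache combined log format (sketch only;
      # well-behaved bots also identify themselves outright in the User-Agent).
      import re
      from collections import Counter

      LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" '
                        r'\d{3} \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')
      BOT_WORDS = ("bot", "crawler", "spider", "slurp", "curl", "wget", "python-requests")

      hits, robots_txt, bot_agents, no_referer = Counter(), set(), set(), Counter()

      with open("access.log") as log:
          for line in log:
              m = LINE.match(line)
              if not m:
                  continue
              ip, request, referer, agent = m.group("ip", "request", "referer", "agent")
              hits[ip] += 1
              if "/robots.txt" in request:
                  robots_txt.add(ip)                 # human visitors rarely fetch robots.txt
              if any(w in agent.lower() for w in BOT_WORDS):
                  bot_agents.add(ip)
              if referer in ("-", ""):
                  no_referer[ip] += 1                # bots often send no referer at all

      for ip, n in hits.most_common(20):
          flags = []
          if ip in robots_txt:
              flags.append("requested robots.txt")
          if ip in bot_agents:
              flags.append("bot-like user agent")
          if n > 50 and no_referer[ip] > 0.9 * n:
              flags.append("almost never sends a referer")
          print(ip, n, "requests |", ", ".join(flags) or "no obvious bot signals")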

    Read the article

  • Why can't the IT industry deliver large, faultless projects quickly as in other industries?

    - by MainMa
    After watching National Geographic's MegaStructures series, I was surprised how fast large projects are completed. Once the preliminary work (design, specifications, etc.) is done on paper, the realization of huge projects takes just a few years or sometimes a few months. For example, the Airbus A380 was "formally launched on Dec. 19, 2000", and by early March 2005 the aircraft was already being tested. The same goes for huge oil tankers, skyscrapers, etc. Comparing this to the delays in the software industry, I can't help wondering why most IT projects are so slow, or more precisely, why they cannot be as fast and faultless, at the same scale, given enough people. Projects such as the Airbus A380 present both: Major unforeseen risks: while this is not the first aircraft built, it still pushes the limits of the technology, and things which worked well for smaller airliners may not work for the larger one due to physical constraints; in the same way, new technologies are used which have not been used before, because, for example, they were not available in 1969 when the Boeing 747 was designed. Risks related to human resources and management in general: people quitting in the middle of the project, inability to reach a person because she's on vacation, ordinary human errors, etc. Despite those risks, people still complete projects like those large airliners in a very short period of time, and despite the delivery delays, those projects are still hugely successful and of high quality. When it comes to software development, the projects are hardly as large and complicated as an airliner (both technically and in terms of management), and have slightly fewer unforeseen risks from the real world. Still, most IT projects are slow and late, and adding more developers to the project is not a solution (going from a team of ten developers to two thousand will sometimes allow the project to be delivered faster, sometimes not, and sometimes will only harm the project and increase the risk of not finishing it at all). Those which are delivered often contain a lot of bugs, requiring consecutive service packs and regular updates (imagine "installing updates" on every Airbus A380 twice per week to patch the bugs in the original product and prevent the aircraft from crashing). How can such differences be explained? Is it due exclusively to the fact that the software development industry is too young to be able to manage thousands of people on a single project in order to deliver large-scale, nearly faultless products quickly?

    Read the article

  • Windows Server Configuration Management Best Practices

    - by Anton Gogolev
    Chef/Puppet/Ansible are cool and all, but they are second-class citizens on Windows at best. We have a bunch of "snowflake" (one-of-a-kind) machines (bare-metal and virtual) that nobody really knows what's going on with. What I want is to start establishing basic configuration management for said servers, starting from installing Windows, installing and enabling various Roles and Features, setting up Services, Shares, and Users, and deploying web apps. PowerShell DSC looks promising, but it's not here yet and appears to be over-engineered, and Puppet and the like are, again, not first-class. There's a bunch of tools and TLAs like Windows ADK, DISM, OCSetup, etc., and it seems to me that the "Configuration Management" story on Windows is not precisely rainbows and unicorns. What I want is a Puppet/Chef-like, lightweight tool (no System Center Configuration Manager, please) which would allow us to "version-control our server infrastructure" and bring all the benefits of CM. So, where do I look for a tool that does this kind of thing?

    Read the article

  • How to reduce MDX code redundancy in SQL Server Analysis Services (SSAS)

    To query an Analysis Services cube, MDX is used as the query language. In most business settings, one finds a set of queries that are common across a number of user query requirements. Because of this, even with a modest-sized IT team, there is a good chance that the same queries are developed redundantly, either within an SSAS MDX script or repeatedly in an ad hoc manner in client applications. In this tip we look at how to reuse queries without redeveloping them over and over.

    Read the article

  • How to batch rename files based on file header/metadata in Windows?

    - by Infraded
    I have a directory full of randomly named files of different types, all with no file extensions. Most are images, with some videos and some plain text. I've used one of the Windows ports of file to confirm that the files can all be identified by their headers/metadata, but I would like to automate the naming as there are roughly 2400 files. I don't care so much about the filename as much as just having the appropriate extension for each file's type. Is anyone aware of a program or script that can do this?
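
    One way to script this (a sketch, not a polished tool): sniff a handful of well-known magic numbers and append the matching extension; a production run would lean on a fuller detector such as file itself or python-magic:

      # Sketch: rename extension-less files by sniffing a few well-known magic numbers.
      # The signature table is deliberately small and illustrative.
      import os, sys

      SIGNATURES = [
          (b"\xff\xd8\xff", ".jpg"),
          (b"\x89PNG\r\n\x1a\n", ".png"),
          (b"GIF87a", ".gif"),
          (b"GIF89a", ".gif"),
          (b"BM", ".bmp"),
          (b"%PDF", ".pdf"),
          (b"PK\x03\x04", ".zip"),
      ]

      def guess_extension(path):
          with open(path, "rb") as f:
              head = f.read(16)
          for magic, ext in SIGNATURES:
              if head.startswith(magic):
                  return ext
          # Crude plain-text fallback: decodes as UTF-8 and has no NUL bytes.
          if b"\x00" not in head:
              try:
                  head.decode("utf-8")
                  return ".txt"
              except UnicodeDecodeError:
                  pass
          return None

      directory = sys.argv[1]
      for name in os.listdir(directory):
          path = os.path.join(directory, name)
          if os.path.isfile(path) and not os.path.splitext(name)[1]:
              ext = guess_extension(path)
              if ext:
                  os.rename(path, path + ext)
                  print(f"{name} -> {name}{ext}")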

    Read the article
