Search Results

Search found 1433 results on 58 pages for 'consistent'.

Page 17/58

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: a recent customer survey reveals the deleterious effects of data fragmentation. By Trevor Naidoo, December 2010.
    Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions or to decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only added to the complexity. Data fragmentation has become a key inhibitor to delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey over the past two years assessing customers' master data management (MDM) capabilities to get a sense of where they stand. The responses, from 27 respondents across six industries, reveal five key areas in which customers need to improve their data management in order to get better financial results.
    1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data and have a roadmap to address missing data domains. Examples of master data domains are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you store your customer information, or whether a customer's address is up to date in each source. In fact, more than 55 percent of the respondents manage their data quality on an ad hoc basis. It is important for organizations to document their inventory of data sources and then profile those sources to ensure there is a consistent definition of key data entities throughout the organization. Some questions to ask: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross-reference for all other sources and ensures consistent, high-quality master data throughout the organization.
    2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all its interactions with customers as they move through the sales cycle, the service department tracks its interactions with the same customers independently, and the finance department has yet another perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with products already purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create the linkages between customer, product, site, supplier and financial data that make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact.
    3. Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync. Even without a clear integration strategy (see #2 above), the data still needs to be synced and cross-referenced to keep business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst pulls all the data into Excel, manually creates a cross-reference for that product, and then aggregates the sales. The exact same procedure has to be repeated if the same report is needed the following month. A well-defined consolidation strategy ensures that a central cross-reference is maintained, with updates in any one application propagated to all the other systems so that data stays synchronized and up to date. This can be done in real time or in batch mode using integration technology.
    4. Approximately 50 percent of respondents spend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; a size attribute, for instance, can be spelled out in inches or entered with the inch symbol ("). These variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, then, that we receive communications about products we don't own, at addresses where we don't live, through channels (like direct mail) we don't like. An all-too-common example: customers end up receiving duplicate communications, which not only hurts customer satisfaction but also incurs additional mailing costs. Cleansing, normalizing and standardizing data will address most of these issues.
    5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place to profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, the enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data but also to share it back to the source systems, as well as to other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.
    Characteristics of Stellar MDM
    When deciding on the right master data management technology, organizations should look for solutions with four main characteristics:
    - enterprise-grade MDM performance
    - complete technology that can be rapidly deployed and addresses multiple business issues
    - end-to-end MDM process management with data quality monitoring and assurance
    - pre-built, business-relevant MDM applications with data stores and workflows
    These master data management capabilities will help organizations move closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities that come from better understanding your customers.
    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.
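
    As a toy illustration of the standardization and duplicate-detection step described in point 4 above, the kind of folding an MDM tool automates can be sketched in a couple of shell commands. The file name, record layout and abbreviation list here are assumptions, not anything from the article:

        # Fold a couple of common street abbreviations to one form, lower-case everything,
        # then surface records that collapse to the same line as candidate duplicates.
        sed -E 's/\b[Aa][Vv]\b\.?/ave/g; s/\b[Ss][Tt]\b\.?/st/g' customers.csv |
            tr '[:upper:]' '[:lower:]' |
            sort | uniq -d    # any line printed here appears more than once

    Real matching engines go far beyond this (fuzzy matching, survivorship rules), but the sketch shows why purely manual cleansing does not scale.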

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model, because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement, and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."
    Case in point: for Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.
    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.
    Supply Chain Management Systems in a Virtual Enterprise
    Historically, enterprise software was constructed to automate the ERP, and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP-agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise -- most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.
    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
    Case in point: almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, involving thousands of trading partners, require a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive, custom solution to tackle this problem, as no product solution was available at the time. Now, other companies that want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).
    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gave the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of its highly complex legacy systems. Many retailers face the challenge of dealing with brick-and-mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick-and-mortar retail operation added an online business, it turned to Oracle Fusion DOO to bring these entities together.
    Disturbing the Peace with Acquisitions
    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business, as Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.
    The Flip Side of Fulfillment
    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how do you manage complex supply in a flexible way when there are multiple trading parties involved? How do you manage the supply to suppliers? How do you manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.
    The Clear Front Runner
    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • HTTP response time profiling

    - by Sparsh Gupta
    Hello, I have an nginx reverse proxy. The server is close to serving 600-700 requests per second. I have a Munin HTTP load time plugin which is outputting this: http://monitor.wingify.com/munin/visualwebsiteoptimizer.com/lb1.visualwebsiteoptimizer.com-http_loadtime.html Now, the problem is I am seeing some spikes in the graph. Expected response times should always be under 200ms. I am keeping an eye on syslog and messages, but I am unable to figure out the actual cause of this. I was wondering if there is any good HTTP response time profiling system which I can install or embed with this nginx server to get detailed reports/logs breaking down where the time is spent and what exactly is causing the spikes. The profiling system would also help me understand bottlenecks and how I can further optimize latency. Most important right now is to investigate the cause of the spikes in the HTTP load time graphs (a similar pattern is reported by external monitors - Pingdom) and to fix it so we get consistent response times. Thanks
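
    One low-overhead way to get per-request timings out of nginx itself is to log $request_time and $upstream_response_time and then summarize the access log offline. A minimal sketch, assuming a custom log format whose last two columns are those timing fields (log name, path and the 200ms threshold are assumptions to adjust):

        # nginx.conf (http block) -- add a timed log format, e.g.:
        #   log_format timed '$remote_addr [$time_local] "$request" $status '
        #                    '$body_bytes_sent $request_time $upstream_response_time';
        #   access_log /var/log/nginx/access_timed.log timed;
        # then pull out recent requests slower than the 200ms target:
        tail -n 10000 /var/log/nginx/access_timed.log |
            awk '$(NF-1) + 0 > 0.2 { print $(NF-1), $NF, $0 }' |
            sort -rn | head -20

    Comparing $request_time (the whole request, client included) against $upstream_response_time (time spent waiting on the backend) usually shows whether the spikes come from the application behind nginx or from the network/proxy layer.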

    Read the article

  • Ubuntu 11.04 VM shows a black screen in VMware Player

    - by Roel Veldhuizen
    I have an Ubuntu Server 11.04 64-bit VM running on VMware Player 3.1.4 that only shows a black screen. No matter what I try, the screen remains black. The VM worked the first time. When I reset the machine, it shows the VMware loader and a flickering _ for about a second. Then the screen turns black again. VM settings: Memory: 512 MB; Processors: 1; HD: 20 GB; CD: auto detect; Floppy: auto detect; Network adapter: NAT; USB controller: present; Sound card: auto detect; Printer: present; Display: auto detect. I just created a fresh VM and the same thing happens, so the problem seems to be consistent.

    Read the article

  • Duplicity Full Backup Lifetime and Efficiency

    - by Tim Lytle
    I'm trying to work up a backup strategy for some clients, and am leaning towards duplicity for remote backup (we already use rdiff-backup for internal/on-location backups). Is it reasonable to want a full backup every so often? Since duplicity increments forward, each incremental backup relies on the previous increment, and all of them rely heavily on the last full backup. Should that become corrupt, bad things happen. A related question: does duplicity test the incremental backups for consistency? Assuming I do want a full backup every so often, how efficiently does duplicity create it? Can/does it check file signatures and copy unchanged data from previous full backups/increments -- basically creating a new 'full' archive by transferring only new/changed data and merging in existing unchanged data? Right now my concern is that periodic full backups are needed, but the consistently high bandwidth use of full backups will make this unreasonable for some clients.
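
    For what it's worth, duplicity can force periodic fulls and prune old chains from the command line; a hedged sketch of how that rotation tends to look (the source path and sftp target are placeholders):

        # start a new full chain automatically once the last full is a month old,
        # otherwise do an incremental
        duplicity --full-if-older-than 1M /home/clientdata \
            sftp://backup@backup.example.com//backups/clientdata

        # keep only the two most recent full chains (and their incrementals)
        duplicity remove-all-but-n-full 2 --force \
            sftp://backup@backup.example.com//backups/clientdata

        # spot-check the archive against the live files
        duplicity verify sftp://backup@backup.example.com//backups/clientdata /home/clientdata

    On the efficiency question: as far as I can tell, a duplicity full re-uploads everything rather than reusing unchanged volumes from earlier chains, so the bandwidth concern is real; the usual compromise is less frequent fulls plus remove-all-but-n-full to cap storage.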

    Read the article

  • Oracle 11gR2 exp does not export some tables

    - by Tilo Prütz
    I have an Oracle 11g (11.2.0.1) database running on Linux (x64). Within the database I have a schema with 33 tables. When I log in via sqlplus I can list all the tables via SELECT OBJECT_NAME FROM USER_OBJECTS WHERE OBJECT_TYPE = 'TABLE'; But when I export the tablespace using exp ... BUFFER=65536 FULL=N COMPRESS=N CONSISTENT=Y TABLESPACES=... FILE=... it only exports 24 of the 33 tables. I have tried to export the missing tables via exp ... TABLES=<missing_table> ... but then I get an error: EXP-00011: NPSMIGRO2_CM.DEFAULT_USR_ATTR_VALUES does not exist How can I find out what's wrong here? How can I export all the tables?
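
    One hedged guess worth checking on 11.2: with deferred segment creation enabled (the 11gR2 default), tables that have never had a row inserted own no segment yet, and the classic exp utility skips them, which matches the EXP-00011 "does not exist" error above. A quick check from the shell (credentials and connect string are placeholders):

        # list tables that still have no segment and would be skipped by exp
        echo "SELECT table_name FROM user_tables WHERE segment_created = 'NO';" |
            sqlplus -s npsmigro2_cm/secret@ORCL

        # materialize a segment for one of the skipped tables, then re-run exp
        echo "ALTER TABLE default_usr_attr_values ALLOCATE EXTENT;" |
            sqlplus -s npsmigro2_cm/secret@ORCL

    Alternatively, the Data Pump export (expdp) does not have this limitation, so switching from exp to expdp is another way around it.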

    Read the article

  • Ubuntu keyboard detection from bash script

    - by Ryan Brubaker
    Excuse my ignorance of Linux OS/hardware issues... I'm just a programmer :) I have an application that calls out to some bash scripts to launch external applications, in this case Firefox. The application runs on a kiosk with touch screen capability. When launching Firefox, I also launch a virtual keyboard application that allows the user to have keyboard input. However, the kiosk also has both PS/2 and USB slots that would allow a user to plug in a keyboard. If a keyboard were plugged in, it would be nice if I could skip launching the virtual keyboard and give the Firefox window more screen space. Is there a way for me to detect from the bash script whether a keyboard is plugged in? Would it show up in /dev, and if so, at a consistent location? Would it make a difference whether the user used a PS/2 or USB keyboard? Thanks!
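
    Input devices do show up in reasonably consistent places: USB keyboards usually get a *-kbd symlink under /dev/input/by-id/, and both USB and PS/2 keyboards are listed in /proc/bus/input/devices. A rough sketch for the launcher script (the on-screen keyboard command and URL are placeholders):

        #!/bin/bash
        # return success if a physical keyboard appears to be attached
        has_keyboard() {
            # USB keyboards normally get a persistent *-kbd symlink
            ls /dev/input/by-id/*-kbd >/dev/null 2>&1 && return 0
            # PS/2 (and most USB) keyboards list themselves here with "keyboard" in the name
            grep -iq keyboard /proc/bus/input/devices && return 0
            return 1
        }

        if ! has_keyboard; then
            onboard &                      # placeholder for the kiosk's virtual keyboard
        fi
        firefox "http://kiosk.example.com" &

    One caveat: some touchscreen controllers also report a name containing "keyboard", so it is worth checking what this particular kiosk hardware lists in /proc/bus/input/devices before trusting the grep.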

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions:
    Amazon EC2 Backup Strategy
    Amazon EC2 + EBS:: Regular backup plan?
    Simple Backup Strategy for Amazon EC2 instances / volumes?
    And this suggestion: http://alestic.com/2009/09/ec2-consistent-snapshot I tried using the command line + crontab (the command line works, but crontab, for some reason, doesn't). But I'm still pretty lost. All I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks of history, but delete old snapshots as new ones arrive, for cost control). And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple: is there a way, using a tool like Puppet, to do this without a painful learning curve (or via other tools like http://ylastic.com)? If yes, how?
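
    Puppet itself doesn't take snapshots; in practice it (or plain cron) just manages a small script plus the cron entry that runs it. A hedged sketch of a rolling-snapshot script, written against today's AWS CLI rather than the older ec2-api-tools (the volume ID and retention window are placeholders):

        #!/bin/bash
        set -euo pipefail
        VOLUME_ID="vol-0123456789abcdef0"      # placeholder
        CUTOFF=$(date -d '28 days ago' +%F)    # keep roughly 4 weeks of history

        # take tonight's snapshot
        aws ec2 create-snapshot --volume-id "$VOLUME_ID" \
            --description "rolling backup $(date +%F)"

        # delete snapshots of this volume older than the cutoff
        aws ec2 describe-snapshots --owner-ids self \
            --filters "Name=volume-id,Values=$VOLUME_ID" \
            --query 'Snapshots[].[SnapshotId,StartTime]' --output text |
        while read -r snap_id start_time; do
            # ISO-8601 dates compare correctly as plain strings
            if [[ "${start_time%%T*}" < "$CUTOFF" ]]; then
                aws ec2 delete-snapshot --snapshot-id "$snap_id"
            fi
        done

    With Puppet, the usual pattern is a file resource for the script and a cron resource to run it nightly. For application-consistent snapshots of a live volume, the ec2-consistent-snapshot tool linked above (which wraps the snapshot in a filesystem freeze / database lock) covers the piece this sketch leaves out.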

    Read the article

  • How much effort is SQL Server 2008 Administration?

    - by Adrian Grigore
    Hi, I am looking for a suitable hosting environment for an ASP.NET MVC application. One of the options I have is renting a Hyper-V server and installing my license of SQL Server 2008 on it. I'm a bit wary of shared hosting, since the one I have tried so far did not seem to have very consistent performance. One potential problem is that I do not know much about SQL Server administration, so I am not sure if this is a good option. I've been running a failover cluster of two Linux dedicated servers for over 5 years now and MySQL never gave me any trouble. But that was Linux, and it might be different with a Windows system. Is running a reasonably efficient MS SQL Server 2008 instance difficult? Does it require any in-depth administration knowledge? Or perhaps recurring administration effort (such as keeping the server up to date with the latest patches)? Or is it rather an "install and forget" experience similar to MySQL?

    Read the article

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, assembled as /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2GB). I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from those snapshots, but specified the volume size as 2GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the 2 volumes mentioned above and mounted it at /vol. df -hT showed /vol as 2GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2GB (instead of the expected 4GB). I rebooted, remounted and reassembled the RAID, but it still reports the old size. Any clue as to what went wrong?
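
    xfs_growfs can only grow into space that the block device underneath actually exposes, so the first thing to verify is whether /dev/md0 itself got bigger. A few hedged checks (the md device is the one from the question; the member device names are assumptions):

        cat /proc/mdstat                                # reported array size in blocks
        sudo mdadm --detail /dev/md0                    # compare "Array Size" with "Used Dev Size"
        sudo blockdev --getsize64 /dev/md0              # raw size of the assembled array, in bytes
        sudo blockdev --getsize64 /dev/xvdf /dev/xvdg   # sizes of the two member volumes (names assumed)

    If the members now show roughly 2GB each but the array still reports roughly 2GB total, the limit is at the RAID layer, not XFS: a RAID0 array built by mdadm records the member size at creation time and, at least with mdadm releases of that era, cannot simply be grown in place, so rebuilding the array (or restoring onto a freshly created, larger one) may be the only way forward.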

    Read the article

  • Why are 32-bit application pools more efficient in IIS? [closed]

    - by mhenry1384
    I've been running load tests with two different ASP.NET web applications in IIS. The tests are run with 5, 10, 25, and 250 user agents, on a box with 8 GB RAM running Windows 7 Ultimate x64. The same box runs both IIS and the load test project. I did many runs, and the data is very consistent. For every load, I see a lower "Avg. Page Time (sec)" and a lower "Avg. Response Time (sec)" if I have "Enable 32-bit Applications" set to True in the application pools. The difference gets more pronounced the higher the load. At very high loads, the web applications start to throw errors (503) if the application pools are 64-bit, but they can keep up if set to 32-bit. Why are 32-bit app pools so much more efficient? Why isn't the default for application pools 32-bit?

    Read the article

  • How do I solve MSSQL 2008 install error, "The MOF compiler could not connect with the WMI server"?

    - by nbolton
    Possibly related to: SQL Server 2008 Install fails error reading etwcls.mof After manually removing MSSQL 2008 from my system (uninstall failed to remove two instances), I receive the following error when trying to re-install: The MOF compiler could not connect with the WMI server. This is either because of a semantic error such as an incompatibility with the existing WMI repository or an actual error such as the failure of the WMI server to start. It seems that mofcomp is failing with one of the .mof files, but I'm not sure which, or why. Digging through the connect article gave some indications, but no solution. I've run winmgmt /salvagerepository, which returns "WMI repository is consistent". Currently, I'm unable to install MSSQL 2008. Please help!

    Read the article

  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data has been sent. For example, I've got a view which looks like this: def debug_big_file(request): return HttpResponse("x" * 500000) But when I access that URL through nginx, I only get 65283 bytes: $ curl https://example.com/debug/big-file | wc … curl: (18) transfer closed with outstanding read data remaining 0 1 65283 Note that everything works as expected when accessing gunicorn directly: $ curl http://localhost:1234/debug/big-file | wc … 0 1 500000 The relevant nginx config: location / { proxy_pass http://localhost:1234/; proxy_redirect off; proxy_headers_hash_bucket_size 96; } nginx is version 1.7.0. Some other facts:
    - The number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283)
    - 110k bytes are sent correctly (i.e., "x" * 110000 returns all 110,000 bytes), but 120k bytes are not
    - tcpdump suggests that nginx is sending a RST packet to gunicorn:
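
    One hedged thing to check: once a proxied response outgrows nginx's in-memory proxy buffers (on the order of 32-64 KB with default settings), nginx spills the rest to proxy_temp_path; if the worker process can't write there, the transfer gets cut off at roughly the size seen here. A quick diagnostic sketch (log and temp paths are assumptions -- match them to what this build reports):

        # permission errors on proxy temp files show up in the error log
        sudo tail -n 100 /var/log/nginx/error.log | grep -Ei 'proxy_temp|permission denied'
        # where this nginx build keeps its temp files
        nginx -V 2>&1 | tr ' ' '\n' | grep -- '-temp-path'
        # the proxy temp directory must be writable by the worker user (often www-data or nginx)
        ls -ld /var/lib/nginx /var/lib/nginx/proxy 2>/dev/null

    If that turns out to be the cause, fixing ownership of the temp directory (or pointing proxy_temp_path somewhere the worker can write) should let the full 500,000 bytes through; setting proxy_buffering off would mask the symptom but gives up the buffering behaviour.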

    Read the article

  • Generate TFTP Content on the fly?

    - by andyortlieb
    I know this isn't the purpose of TFTP, but I'm working in an environment where a lot of different types of devices pull provisioning info from a TFTP server. What I'm developing is a provisioning system that tracks and maintains device configurations, and I would like to have the requested files generated on the fly, much like you could do with any web application. Yes some of these devices can support HTTP for provisioning, but not all of them do, and we want things to be consistent. Are there any TFTP daemons that can provide something analogous to CGI?

    Read the article

  • Oracle on NFS vmdk beats native NFS!?

    - by fletch00
    Hi, my colleagues are pursuing this with NetApp and Oracle -- but I thought I'd post here on the off chance someone else has seen this. We have a Red Hat 5 VM (fully up2date) running Oracle 11i, with data disks mounted via the VM's Linux kernel NFS client using Oracle's recommended mount options, and the performance is very inconsistent (queries that should take < 2 seconds sometimes take 60 seconds). Funny thing is, we can run the same queries perfectly consistently in < 2 seconds on a VMDK residing on the SAME NetApp NFS datastore! Makes me wish Oracle and NetApp collaborated as closely as VMware and NetApp did on the Virtual Storage Console, which we used to set the NFS options perfectly and keep them in compliance... We have tried a few Linux NFS options others have posted and not seen improvement so far. We are now creating VMDKs for the VM to replace the Linux NFS mounts and work around the issue, as our developers need consistent performance ASAP.

    Read the article

  • Microsoft Entourage/Exchange Server problem: all objects disappeared from server - still in some form on the client

    - by splattne
    One of our employees works with Entourage on his MacBook Pro (OS X 10.6), accessing Exchange Server 2007. Last Friday morning, I think while he was working over a VPN, Entourage (I think it was Entourage) deleted all his objects (mail, calendar, contacts) on the server while creating a lot of strange folders (starting with underscores) on the client. The local data seems to be there, but not in a consistent form. Since the user's mailbox is rather big, I suspect there was some kind of "move" operation which did not complete. I tried to export the data, but the export stops because of a corrupted object. Is there a tool or another way to export or retrieve the local data? Edit - FYI: we solved the problem by restoring his data from the previous night's backup.

    Read the article

  • Ubuntu Server Edition (Jaunty) x64 Segmentation faults in PHP mysql package

    - by Deeksy
    I've been running Jaunty with Apache2, PHP & MySQL serving Drupal websites, as well as Python 2.6 and Trac, on the same server. I'm getting quite a few segmentation faults and Suhosin warnings on my Drupal websites, which don't seem to be related to the amount of RAM the server has (3GB), as the Trac site is running happily without issues. The issue seems to be related to PHP accessing MySQL. Has anyone else seen this problem? Any ideas on how to fix it? Funnily enough, it's not a consistent error, as restarts tend to fix the issue temporarily.

    Read the article

  • Ultrium 3 tape drive shoe-shining, 3Mb/s: and it's not the cable

    - by mowsala
    I have an HP 960 Ultrium 3 tape drive. Since I got it (second hand, £90), I've been experiencing shoe-shining. Writing with tar in Linux, I average about 3Mb/s write speed. I've now tried replacing both the SCSI card and the cable, neither of which made any difference. A curious observation I have made is that the write rate is not consistent. Sometimes it will write for over a minute without shoe-shining, but more often only for a few seconds. I've also tried several tapes, different source drives, and even writing from Windows Backup, to no avail.
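
    Shoe-shining at a steady 3Mb/s usually means the data source can't keep the drive streaming (LTO-3 wants tens of MB/s sustained), so before blaming the drive it's worth putting a large memory buffer between tar and the tape. A hedged sketch using mbuffer (device name, block size and buffer size are assumptions):

        # fill a 1 GB RAM buffer and only write to tape in large, steady chunks
        tar -cf - /data | mbuffer -m 1G -s 256k -o /dev/nst0

    If mbuffer reports the buffer constantly running empty, the bottleneck is the source disks or the filesystem walk rather than the SCSI/tape path, which would explain why swapping the card and cable changed nothing.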

    Read the article

  • Can a Shadow Copy of SQL 2000 databases files be used as a restore?

    - by Keith Sirmons
    Howdy, I have a SQL 2000 instance (version 8.00.760) that is on a drive that gets regular shadow copies. Can a shadow copy be used to restore the database? It seems possible to stop the SQL service, restore the Data folder from the shadow copy (which includes msdb, master, model, temp, and the user databases), then restart the service. Would the files be in a crash-consistent state in the worst case? If so, wouldn't the service, on restart, recover as if the power had been pulled from the server? Thank you, Keith

    Read the article

  • Why does iTunes make 2 copies of my music when adding to library?

    - by NoCatharsis
    I set iTunes to "Keep iTunes Media folder organized" and to "Copy files to iTunes Media folder when adding to library" because I prefer to keep my music consolidated, organized, and consistent. However, when I have MP3s that are external to iTunes and try to add them via File > Add Folder to Library, iTunes creates 2 copies of the file in the iTunes folder - one with the original song name and another with the original song name followed by the number 1. Here is what I thought would happen, and I hope is possible: 1) Click File > Add Folder to Library. 2) Select a folder external to iTunes. 3) Click OK. 4) iTunes creates a clean new folder in the iTunes Music directory with exactly 1 of each file. 5) Only 1 of each song is shown within iTunes. Is this too much to ask? I am not an iTunes fan at all after 2 years dealing with the poor programming of this application. I hope someone can help me find the faith...

    Read the article

  • Make Firefox left-click open in current tab, and middle-click open in new tab

    - by endolith
    I'm sick of the inconsistent behavior of clicking on links in Firefox. I want control of where they open up. If I'm done with a page and want to replace it with the link I am clicking, I left-click. If there are things I want to look at in the future, but I'm not done with this page yet, I middle-click. This normally works, but there are exceptions. If the website designer uses target="_blank", my left-click is overridden and the link opens in a new tab/window. If the links are JavaScript, a middle-click rarely works; I get an (Untitled) tab with some JavaScript as the URL. And so on. How do I fix these things and get consistent clicking on links?

    Read the article

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    Hello, when running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page. Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine, but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one to look at. Any possible solutions would be much appreciated. Cheers

    Read the article
