Search Results

Search found 1556 results on 63 pages for 'backups'.


  • ASP.NET hosting: better, faster, cheaper

    - by Fabrice Marguerie
    After seven years with webhost4life, it was time to move on, especially because of all the troubles with webhost4life due to their internal migration to a new hosting environment (the company has been bought out). I've just moved all my websites elsewhere. I'm now using Arvixe and OrcsWeb. I use OrcsWeb for metaSapiens.com. OrcsWeb kindly offers me free ASP.NET hosting because I'm a Microsoft MVP. I'd like to publicly thank OrcsWeb for this, and I invite you to have a look at what they have to offer. I use Arvixe for all my other websites, the major ones being SharpToolbox.com, JavaToolbox.com, AxToolbox.com, Proagora.com, LinqInAction.net, ClairDeBulle.com, and madgeek.com. Moving all my websites wasn't a walk in the park, but it was well worth it. Let's consider what I get with Arvixe:
    - Unlimited disk space
    - Unlimited data transfer
    - Unlimited domains
    - Dedicated application pools
    - Unlimited POP3 and IMAP mailboxes
    - Unlimited SQL Server 2008 databases
    - Unlimited MySQL 5 databases
    - .NET 1.1, 2, 3.5 and 4
    - Full trust
    - IIS 7
    - Daily backups
    - A powerful and easy to use control panel
    - And more!
    All of this for $8 per month. If you don't need all of the above features, you can even get an offer as cheap as $5 per month. You can get even better rates if you use coupon codes, such as TOPHOST (30% discount) or MVCHOSTING (20% discount). All in all, I paid only $134 for two years for a great hosting service! Maybe it's time for you to move too? Disclaimer: the links to OrcsWeb and Arvixe are affiliate links that may bring me some money home if you sign up.

    Read the article

  • Keeping multiple root directories in a single partition

    - by intuited
    I'm working out a partition scheme for a new install. I'd like to keep the root filesystem fairly small and static, so that I can use LVM snapshots to do backups without having to allocate a ton of space for the snapshot. However, I'd also like to keep the number of total partitions small. Even with LVM, there's inevitably some wasted space and it's still annoying and vaguely dangerous to allocate more. So there seem to be a couple of different options:
    1. Have the partition that will contain bulky, variable files, like /srv, /var, and /home, be the root partition, and arrange for the core system state — /etc, /usr, /lib, etc. — to live in a second partition. These files can (I think) be backed up using a different backup scheme, and I don't think LVM snapshots will be necessary for them.
    2. The opposite: put the big variable directories on the second partition, and have the essential system directories live on the root FS.
    Either of these options requires that certain directories be pointers of some variety to subdirectories of a second partition. I'm aware of two different ways to do this: symlinks and bind-mounts. Is one better than the other for this purpose? Is there another option? Do any of the various Ubuntu installation media/strategies support this style of partition layout?
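
    For reference, a minimal sketch of the bind-mount approach, assuming the bulky directories live on a hypothetical second LV named /dev/vg0/bulk (the device, mount point, and directory names are illustrative assumptions, not taken from the question):
      # mount the second partition somewhere out of the way
      sudo mkdir -p /mnt/bulk
      sudo mount /dev/vg0/bulk /mnt/bulk
      # bind the bulky directories into their usual places
      # (the same idea goes in /etc/fstab using the "bind" mount option so it happens at boot)
      sudo mount --bind /mnt/bulk/home /home
      sudo mount --bind /mnt/bulk/srv  /srv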

    Read the article

  • How do I align my partition table properly?

    - by Jorge Castro
    I am in the process of building my first RAID5 array. I've used mdadm to create the following setup:
    root@bondigas:~# mdadm --detail /dev/md1
    /dev/md1:
            Version : 00.90
      Creation Time : Wed Oct 20 20:00:41 2010
         Raid Level : raid5
         Array Size : 5860543488 (5589.05 GiB 6001.20 GB)
      Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 1
        Persistence : Superblock is persistent
        Update Time : Wed Oct 20 20:13:48 2010
              State : clean, degraded, recovering
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 1
             Layout : left-symmetric
         Chunk Size : 64K
     Rebuild Status : 1% complete
               UUID : f6dc829e:aa29b476:edd1ef19:85032322 (local to host bondigas)
             Events : 0.12
        Number   Major   Minor   RaidDevice   State
           0       8       16        0        active sync   /dev/sdb
           1       8       32        1        active sync   /dev/sdc
           2       8       48        2        active sync   /dev/sdd
           4       8       64        3        spare rebuilding   /dev/sde
    While that's going, I decided to format the beast with the following command:
    root@bondigas:~# mkfs.ext4 /dev/md1p1
    mke2fs 1.41.11 (14-Mar-2010)
    /dev/md1p1 alignment is offset by 63488 bytes.
    This may result in very poor performance, (re)-partitioning suggested.
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=16 blocks, Stripe width=48 blocks
    97853440 inodes, 391394047 blocks
    19569702 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=0
    11945 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848
    Writing inode tables: ^C 27/11945
    root@bondigas:~# ^C
    I am unsure what to do about "/dev/md1p1 alignment is offset by 63488 bytes." and how to partition the disks so everything lines up and I can format the array properly.
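
    A minimal sketch of one way to deal with the alignment warning, assuming a single partition on the array; starting the partition at 1 MiB is a common convention and the stride/stripe-width values simply repeat what the mkfs output above already reported (64K chunk, 3 data disks). Skipping the partition table and formatting /dev/md1 directly is another option. Double-check device names before running anything like this:
      # recreate the partition table with an aligned first partition (illustrative, destroys existing layout)
      sudo parted -a optimal /dev/md1 mklabel gpt
      sudo parted -a optimal /dev/md1 mkpart primary ext4 1MiB 100%
      # stride = 64K chunk / 4K block = 16; stripe-width = 16 * 3 data disks = 48
      sudo mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/md1p1
      # alternatively, put the filesystem straight on the array and avoid partition alignment entirely:
      # sudo mkfs.ext4 -E stride=16,stripe-width=48 /dev/md1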

    Read the article

  • Partitioning During Installation, Reinstalling Ubuntu, Backup donwloaded Repositories

    - by user209645
    I am new to Ubuntu and I have just finished installing 13.10 amd64 on my PC. This will replace my Windows 7 as my only OS. I just want to clarify some issues that've been bugging me. I tried reading posts with the same topics but I just can't wrap my head around it yet. I partitioned my 80GB drive into:
    /root : 30GB (sorry for the confusion, I actually meant /)
    /home : 40GB
    /var  : 3GB
    swap  : 4GB (2GB of mem)
    Please correct me if I'm wrong about these:
    1. All of the users' documents are saved in their respective folders in /home. But say I want to clean install (format) Ubuntu: I don't need to make backups of /home and /var as they are on separate partitions. But when re-installing, do I just choose / and format it, and the installer will recognize all the partitions (not making another /home and /var inside /)?
    2. Downloaded packages (from all the repositories) and all their dependencies are saved in /var. So after re-installing on the same PC (assuming I'm offline), will it just use the packages already in /var if I choose to update? And if all the installed apps and their dependencies are all in there, can I re-install them without encountering errors? I have also read that you can back them up using aptoncd and then add the DVD to the sources. So if I download all the high-ranking apps using Synaptic, could I then have an all-in-one DVD installer?
    3. Is 30GB for / excessive, given that the bulk of files will be either in /home (personal, downloads, music, videos) or /var (updates, packages, installed apps)?
    Please excuse me for asking such questions but I really want to explore and learn more.
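
    On the cached-packages point, a rough sketch of one way to keep the downloaded .deb files and the list of installed packages across a re-install; the backup directory is an illustrative assumption, and aptoncd is the GUI alternative mentioned in the question:
      # before re-installing: save the apt cache and the package selection list
      mkdir -p ~/pkg-backup
      cp -a /var/cache/apt/archives/*.deb ~/pkg-backup/
      dpkg --get-selections > ~/pkg-backup/package-list.txt
      # after re-installing: restore the cache and reinstall the same selection
      sudo cp ~/pkg-backup/*.deb /var/cache/apt/archives/
      sudo dpkg --set-selections < ~/pkg-backup/package-list.txt
      sudo apt-get dselect-upgrade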

    Read the article

  • How to recover data from a failing hard drive?

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach. I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many retries that they jeopardize the rescue of other parts of the drive. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that aren't backups, i.e. aren't backed up. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
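
    What the question describes (one fast pass that skips trouble spots, then targeted retries) is essentially what GNU ddrescue is designed for. A minimal sketch, where the source/destination device names and the mapfile path are assumptions to be replaced with the real ones:
      # first pass: copy everything that reads cleanly, don't linger on bad areas
      sudo ddrescue -f -n /dev/sdX /dev/sdY rescue.map
      # second pass: go back and retry only the areas recorded as bad in the mapfile
      sudo ddrescue -f -d -r3 /dev/sdX /dev/sdY rescue.map
      # (cloning to an image file instead of a device works the same way, minus the -f)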

    Read the article

  • Get the MakeUseOf eBook Guide to Hacker Proofing Your PC

    - by ETC
    If you’re interested in checking out a solid overview of PC security best practices and tips, our friends over at MakeUseOf.com have released another free book in their computer-oriented eBook series. The fifty-page ebook HackerProof: Your Guide to PC Security covers a variety of topics including types of malware, operating systems and their inherent vulnerabilities, security best practices, tools for protecting your PC, the importance of security prep and backups, and recovering from malware attacks. It’s a nice and compact text, perfect for brushing up on security best practices for your own machine or sending to friends and relatives who could use a little after-school tutoring on keeping their computer secure and out of trouble. The best tip from the book? The overall message to be cautious and preemptive in your security efforts is a great meta-tip to take away. Up-to-date definition files and a healthy wariness of random links and email attachments go a long, long way towards staying safe. HackerProof: Your Guide to PC Security [Direct Link via MakeUseOf]

    Read the article

  • Database Schema Usage

    - by CrazyHorse
    I have a question regarding the appropriate use of SQL Server database schemas and was hoping that some database gurus might be able to offer some guidance around best practice. Just to give a bit of background, my team has recently shrunk to 2 people and we have just been merged with another 6 person team. My team had set up a SQL Server environment running off a desktop backing up to another desktop (and nightly to the network), whilst the new team has a formal SQL Server environment, running on a dedicated server, with backups and maintenance all handled by a dedicated team. So far it's good news for my team. Now to the query. My team designed all our tables to belong to a 3-letter schema name (e.g. User = USR, General = GEN, Account = ACC) which broadly speaking relate to specific applications, although there is a lot of overlap. My new team has come from an Access background and have implemented their tables within dbo with a 3-letter prefix followed by "_tbl", so the examples above would be dbo.USR_tblTableName, dbo.GEN_tblTableName and dbo.ACC_tblTableName. Further to this, neither my old team nor my new team has gone live with their SQL Servers yet (we're both coincidentally migrating away from Access environments) and the new team have said they're willing to consider adopting our approach if we can explain how this would be beneficial. We are not anticipating handling table updates at schema level, as we will be using application-level logins. Also, with regards to the unwieldiness of the 7-character prefix, I'm not overly concerned myself as we're using LINQ almost exclusively, so the tables can simply be renamed in the DBML (although I know that presents some challenges when we update the DBML). So therefore, given that both teams need to be aligned with one another, can anyone offer any convincing arguments either way?

    Read the article

  • Encrypted home breaks on login

    - by berkes
    My home is encrypted, which breaks the login. Gnome and other services try to find all sorts of .files, write to them, read from them and so on, e.g. .ICEauthority. They are not found (yet) because at that moment the home is still encrypted. I do not have automatic login set, since that has known issues with encrypted home in Ubuntu. When I go through the following steps, there is no problem:
    1. Boot up the system.
    2. [Ctrl][Alt][F1], login.
    3. Run ecryptfs-mount-private.
    4. [Ctrl][Alt][F7], done. Can now login.
    I may have some setting wrong, but have no idea where. I suspect ecryptfs-mount-private should be run earlier in bootstrap, but do not know how to make it so. Some issues that may cause trouble: I have a fingerprint reader, it works for login and PAM. I have three keyrings in seahorse, containing passwords from old machines (backups), not just one. The suggestion was that the PAM settings are wrong, so here are the relevant parts from /etc/pam.d/common-auth:
    # here are the per-package modules (the "Primary" block)
    auth    [success=3 default=ignore]    pam_fprintd.so
    auth    [success=2 default=ignore]    pam_unix.so nullok_secure try_first_pass
    auth    [success=1 default=ignore]    pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
    # here's the fallback if no module succeeds
    auth    requisite                     pam_deny.so
    # prime the stack with a positive return value if there isn't one already;
    # this avoids us returning an error just because nothing sets a success code
    # since the modules above will each just jump around
    auth    required                      pam_permit.so
    # and here are more per-package modules (the "Additional" block)
    auth    optional                      pam_ecryptfs.so unwrap
    # end of pam-auth-update config
    I am not sure how this configuration works, but it seems that maybe the *optional* in "auth optional pam_ecryptfs.so unwrap" is causing the ecryptfs module to be ignored?

    Read the article

  • The Mystery of the Vanishing Disk Space

    - by Oddthinking
    My disk space is dwindling by about 2GB a day! I only have a few more days before I run out of space.
    $ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda4       143G  126G   11G  93% /
    udev            491M  4.0K  491M   1% /dev
    tmpfs           200M  696K  199M   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none            499M  144K  499M   1% /run/shm
    /dev/sda2       1.9G  580M  1.2G  33% /tmp
    /dev/sda1        92M   29M   58M  33% /boot
    I have been searching for the biggest directories/log files, deleting and compressing. But I am still losing the war. Finally, I realised I have a big misunderstanding:
    julian@server1:~$ sudo du -h / | tail -n 1
    16G /
    All of my files in / only add up to 16 GB. That leaves 110 GB unaccounted for! Clearly I have a misunderstanding: I thought the /dev/sda4 line represented all the files visible from /. What should I be reading to understand where the other storage has gone? More details: I have an Ubuntu 11.10 server that was set up by data-center staff. It is running:
    - my own code (which is fairly prolific with log files, but otherwise doesn't store much stuff on the drive)
    - duplicity for backups (which tends to store a lot of signature files)
    - various other standard services, like Apache, nagios, etc. They are very lightly used.
    It has been up for about 4 months without a reboot. I lied about the du output (simplified it for effect). It also complained about not being able to access GVFS and the du process's own resources. I believe they are irrelevant:
    du: cannot access `/home/julian/.gvfs': Permission denied
    du: cannot access `/proc/10841/task/10841/fd/4': No such file or directory
    du: cannot access `/proc/10841/task/10841/fdinfo/4': No such file or directory
    du: cannot access `/proc/10841/fd/4': No such file or directory
    du: cannot access `/proc/10841/fdinfo/4': No such file or directory
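
    For what it's worth, a hedged sketch of the usual ways to chase a df/du mismatch like this: space held by files that were deleted but are still open, or data hidden underneath a mount point. These are standard tools, not commands from the question itself:
      # files that are deleted but still held open by a process (du can't see their space, df can)
      sudo lsof +L1 | head -n 30
      # measure only the root filesystem, one level deep, so other mounts don't muddy the numbers
      sudo du -xh / --max-depth=1 | sort -h
      # check for data hiding under a mount point by bind-mounting / elsewhere and measuring that view
      sudo mkdir -p /mnt/rootonly && sudo mount --bind / /mnt/rootonly && sudo du -sh /mnt/rootonly/*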

    Read the article

  • What can I do about rsync of large files killing my laptop's wifi connection

    - by David Dean
    When I run an rsync to back up my home folder over the network like so:
    rsync -avhz --progress --delete /home/dbdean/ [email protected]:/home/backups/david/
    I seem to have problems with my quite large .VirtualBox/HardDisks/Windows XP.vdi file. Occasionally the wifi will silently fail (the transfer stops, and any other network access is broken). If I reconnect the wifi to my network before the transfer times out, it happily keeps going (and other network access is back), but I can't leave it unattended, as I have to keep an eye on it. I'm guessing this is probably a bug in the wireless card related to a particularly high sustained volume of network usage, but I'm not really sure where to start with diagnosing this problem so that I can provide a good bug report. Or it could be something else, I guess. Any help would be appreciated. My network card is an Atheros Communications Inc. AR9285, as lspci -k shows:
    43:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
            Subsystem: Hewlett-Packard Company Device 3040
            Kernel driver in use: ath9k
            Kernel modules: ath9k
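
    As a stop-gap while the driver issue is being diagnosed, one hedged option is to throttle the transfer so the card never sees a long full-rate burst, and to retry automatically if the link does drop; the 2000 KiB/s cap and 60-second timeout are arbitrary assumptions:
      # cap rsync's bandwidth and keep retrying in a loop until the whole transfer succeeds
      while ! rsync -avhz --progress --delete --bwlimit=2000 --timeout=60 \
            /home/dbdean/ [email protected]:/home/backups/david/; do
          echo "transfer interrupted, retrying in 30s..."
          sleep 30
      done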

    Read the article

  • Free Team Foundation Service a Boon for Play Projects

    - by Ken Cox [MVP]
    My ‘jump in and sink or swim’ learning style leads me to create dozens of unbillable ‘play’ projects in Visual Studio 2012.  For example, I’m currently learning to customize the powerful auction/fixed price/classifieds software called AuctionWorx Enterprise.  I make stupid mistakes as I grasp a new API and configure projects. It’s frustrating to go down a rat hole and discover the VS IDE’s Undo doesn’t reach back to a working build from the previous weekend. Enter Visual Studio’s Free Team Foundation Service.  Put your play projects into the free source control and check in (or shelve) a snapshot before embarking on something risky. (I already use Discount ASP.NET’s Team Foundation Server hosting for client projects so there’s zero TFS/VS learning curve for me and first class GUI support.) TFS is free right now for teams of five or fewer. I’m not naive enough to expect ‘free’ to last forever. So, it’ll be interesting to see how much Microsoft intends to charge in 2013 for TFS. In the meantime, why not grab some free source code storage? BTW, it’s weird to realize that Microsoft is backing up my junky little playtime code on three physically-distinct servers every day - and taking incremental backups every hour.

    Read the article

  • Ubuntu 14.04, only wallpaper and a mouse cursor is showing

    - by Abhishek Verma
    I upgraded to Ubuntu 14.04 weeks ago; it was a fresh install. As usual I installed Unity Tweak Tool, some themes, Chrome, Eclipse, etc. Today, while playing with Unity Tweak Tool, I mistakenly switched Window Spread to off, and bang, no problem yet. After realizing what I did, I switched it back on, and then the title bar of all windows, the status bar (guessing it's called that), and the launcher disappeared. I was left with a mouse cursor and the Unity Tweak Tool window, without its title bar. I thought a restart was needed, but sadly there was no shutdown option to click on; moreover, the keyboard didn't work, nor did the hardware shutdown button. So I pressed the reset button. After a mundane restart, I was welcomed by the wallpaper and the mouse cursor, moving, and that's it. I searched everywhere for a working solution and found some answers on Ask Ubuntu itself, but they were all old and required the terminal to be used; since my keyboard isn't working, I have no access to the terminal. So of course the question isn't a duplicate, at least IMHO. Also, I was working on an Android project and have got no backups, so I won't be going for a clean install, or else I will start crying watching my weeks of work being deleted. Any solutions, please?

    Read the article

  • Story of success: MySQL Enterprise Backup (MEB) was successfully integrated with IBM Tivoli Storage Manager (TSM) via System Backup to Tape (SBT) interface.

    - by user13334359
    Since version 3.6, MEB supports backups to tape through the SBT interface. The officially supported tool for such backups to tape is Oracle Secure Backup (OSB). But there are a lot of other storage managers. MEB allows you to use them through the SBT interface. Since version 3.7 it also has the option --sbt-environment, which allows environment variables not needed by OSB to be passed to third-party managers. At the same time, MEB cannot guarantee it will work with all of them. This month we were contacted by a customer who wanted to use IBM Tivoli Storage Manager (TSM) with MEB. We could only tell them the same thing I wrote in the previous paragraph: this solution is supposed to work, but you have to be pioneers of this technology. And they agreed. They agreed to be the pioneers, and so the story begins. MEB requires the following options to be specified by those who want to connect it to the SBT interface:
    --sbt-database-name: a name which should be handed over to the SBT interface. This can be any name. The default, MySQL, works for most cases, so the user is not required to specify this option.
    --sbt-lib-path: path to the SBT library. For TSM this library comes with "Data Protection for Oracle", which, in its turn, interfaces with Oracle Recovery Manager (RMAN), which uses the SBT interface. So you need to install it even if you don't use Oracle.
    --sbt-environment: environment for the third-party manager. This option is not needed when you use OSB, but is almost always necessary for third-party SBT managers. TSM requires the variable TDPO_OPTFILE to be set and point to the TSM configuration file.
    --backup-image=sbt:<image name>: path to the image. The prefix "sbt:" indicates that the image should be sent through the SBT interface.
    So the full command in our case would look like:
    ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user --password=foobar \
      --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
      --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir backup-to-image
    And this command results in the following output log:
    MySQL Enterprise Backup version 3.7.1 [2012/02/16]
    Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
    INFO: Starting with following command line ...
    ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user
            --password=foobar --backup-image=sbt:my-first-backup
            --sbt-lib-path=/usr/lib/libobk.so
            --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt"
            --backup-dir=/path/to/my/dir backup-to-image
    sbt-environment: 'TDPO_OPTFILE=/path/to/my/tdpo.opt'
    INFO: Got some server configuration information from running server.
    IMPORTANT: Please check that mysqlbackup run completes successfully.
               At the end of a successful 'backup-to-image' run mysqlbackup
               prints "mysqlbackup completed OK!".
    --------------------------------------------------------------------
                           Server Repository Options:
    --------------------------------------------------------------------
      datadir                   = /path/to/data
      innodb_data_home_dir      = /path/to/data
      innodb_data_file_path     = ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
      innodb_log_group_home_dir = /path/to/data
      innodb_log_files_in_group = 2
      innodb_log_file_size      = 268435456
    --------------------------------------------------------------------
                           Backup Config Options:
    --------------------------------------------------------------------
      datadir                   = /path/to/my/dir/datadir
      innodb_data_home_dir      = /path/to/my/dir/datadir
      innodb_data_file_path     = ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
      innodb_log_group_home_dir = /path/to/my/dir/datadir
      innodb_log_files_in_group = 2
      innodb_log_file_size      = 268435456
    Backup Image Path = sbt:my-first-backup
    mysqlbackup: INFO: Unique generated backup id for this is 13297406400663200
    120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS is 'Data Protection for Oracle: version 5.5.1.0'
    120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS version '5.5.1.0'
    mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
    mysqlbackup: INFO: System tablespace file format is Antelope.
    mysqlbackup: INFO: Found checkpoint at lsn 31668381.
    mysqlbackup: INFO: Starting log scan from lsn 31668224.
    120220  8:54:00 mysqlbackup: INFO: Copying log...
    120220  8:54:00 mysqlbackup: INFO: Log copied, lsn 31668381.
              We wait 1 second before starting copying the data files...
    120220  8:54:01 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata1 (Antelope file format).
    mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
    120220  8:55:30 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata2 (Antelope file format).
    mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
    120220  8:57:18 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata3 (Antelope file format).
    mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
    120220 08:57:22 mysqlbackup: INFO: Starting to lock all the tables....
    120220 08:57:22 mysqlbackup: INFO: All tables are locked and flushed to disk
    mysqlbackup: INFO: Opening backup source directory '/path/to/data/'
    120220 08:57:22 mysqlbackup: INFO: Starting to backup all files in subdirectories of '/path/to/data/'
    mysqlbackup: INFO: Backing up the database directory 'mysql'
    mysqlbackup: INFO: Backing up the database directory 'test'
    mysqlbackup: INFO: Copying innodb data and logs during final stage ...
    mysqlbackup: INFO: A copied database page was modified at 31668381.
              (This is the highest lsn found on page)
              Scanned log up to lsn 31670396.
              Was able to parse the log up to lsn 31670396.
              Maximum page number for a log record 328
    120220 08:57:23 mysqlbackup: INFO: All tables unlocked
    mysqlbackup: INFO: All MySQL tables were locked for 0.000 seconds
    120220 08:59:01 mysqlbackup: INFO: meb_sbt_backup_close: blocks: 4162  size: 1048576  bytes: 4363985063
    120220  8:59:01 mysqlbackup: INFO: Full backup completed!
    mysqlbackup: INFO: MySQL binlog position: filename bin_mysql.001453, position 2105
    mysqlbackup: WARNING: backup-image already closed
    mysqlbackup: INFO: Backup image created successfully.:
               Image Path: 'sbt:my-first-backup'
    -------------------------------------------------------------
       Parameters Summary
    -------------------------------------------------------------
       Start LSN                  : 31668224
       End LSN                    : 31670396
    -------------------------------------------------------------
    mysqlbackup completed OK!
    Backup successfully completed. To restore it, you use the same commands as for any other MEB image, but you need to provide the sbt* options as well:
    $ ./mysqlbackup --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
        --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir image-to-backup-dir
    Then apply the log as usual:
    $ ./mysqlbackup --backup-dir=/path/to/my/dir apply-log
    Then stop mysqld and finally copy back:
    $ ./mysqlbackup --defaults-file=path/to/my.cnf --backup-dir=/path/to/my/dir copy-back
    Disclaimer: this is only the story of one success, which may be useful for someone else. MEB is not regularly tested with and not guaranteed to work with IBM TSM or any other third-party storage manager.

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:
    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements
    1. Comprehensive Database Service Catalog
    Before we get deep into implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are "Service Catalogs: Defining Standardized Database Service" and "High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service" [Oracle MAA]. EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:
    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role-based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Set up chargeback plans based on service tiers and database configuration sizes, etc.
    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async along with the required apply lag
    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all supported in EM 12cR4):
      Primary | Standby [1 or more]
      SI      | -
      SI      | SI
      RAC     | -
      RAC     | SI
      RAC     | RAC
      RON     | -
      RON     | RON
    where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.
    2. Additional Storage Options for Snap Clone
    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage-agnostic, self service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions, thus delivering the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources: the blog by Tim Hall, "Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2", and the Oracle OpenWorld presentation by CERN, "Efficient Database Cloning using Direct NFS and CloneDB". The advantages of the new CloneDB integration with EM12c Snap Clone are:
    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduces the dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.
    3. Improved Rapid Start Kits
    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.
    Steps to use the kit:
    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py
    The database_cloud_setup.py script takes two inputs:
    - Cloud boundary xml: this file defines the cloud topology in terms of the zones and pools, along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input xml: this file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.
    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:
    emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml
    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.
    4. Extensible Metering and Chargeback
    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:
    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like the number of backup requests, job executions, support requests, etc.
    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.
    5. Miscellaneous Enhancements
    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:
    - Custom naming of DB Services: self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM.
    - 'Create like' of Service Templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: view the details of a profile, such as datafiles, control files, snapshot ids, and export/import files, prior to its selection in the service template.
    - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool, and as an extension you can also delete successful requests.
    - Improved delete user workflow: allows administrators to reassign cloud resources to another user or delete all of them.
    - Support for multiple tablespaces for schema as a service: in addition to multiple schemas, users can also specify multiple tablespaces per request.
    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!
    References: Cloud Management Page on OTN; Cloud Administration Guide [Documentation]
    -- Adeesh Fulay (@adeeshf)

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether or not a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not.  The fact is they do take place and happen.  And information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit “iffy” on how to protect the information being placed in libraries and lists.  In many cases I have found SharePoint comes first and business continuity becomes an afterthought.  The documentation piece from TechNet states: “You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments.” Even in the event of single-server deployments you will have options to safeguard your data. The process will require that after you have executed the stsadm command above, you then use Windows Server Backup to do a Full Server Backup.  When the restore operation is needed, you will be able to select specifically the section that has the SharePoint technologies backup. The restore process: Hope you find this to be a helpful post.  I have found this to be specially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm and such.   Credits:  Sean McDonough for passing along the information available on TechNet.
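
    A rough command-line sketch of the sequence described above; the backup target drive letter is an illustrative assumption, and the exact Windows Server Backup switches should be checked against your Windows Server version before relying on them:
      REM register the SharePoint VSS writer so Windows Server Backup picks up SharePoint components
      stsadm.exe -o registerwsswriter
      REM then take a full server backup (E: as the target volume is only an example)
      wbadmin start backup -backupTarget:E: -allCritical -quiet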

    Read the article

  • Disk failure is imminent Laptop Hard drive ~5 months old

    - by Drew
    There's another post about this, but I don't have enough 'points' to say anything on that thread. So I'll start my own ... with more details! My computer still boots, but GNOME reports problems with the HDD's SMART status. This has been confirmed in the BIOS, as it makes me press F1 to boot up now. I tried running the HDD disk check in the BIOS, but it fails running the tests. As in, running the tests failed, not that the tests themselves indicated a failed drive. Here is what Disk Utility reports as failing:
    Reallocated Sector Count      FAILING   Normalized: 132   Worst: 132   Threshold: 140   Value: 544
    Current Pending Sector Count  WARNING   Normalized: 200   Worst: 1     Threshold: 0     Value: 2
    Is this related to the insane number of DRDY errors on the drive?
    kernel: [51345.233069] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
    kernel: [51345.233076] ata1.00: BMDMA stat 0x4
    kernel: [51345.233081] ata1.00: failed command: READ DMA
    kernel: [51345.233090] ata1.00: cmd c8/00:00:00:8b:4a/00:00:00:00:00/e0 tag 0 dma 131072 in
    kernel: [51345.233092]          res 51/40:00:a8:8b:4a/10:04:00:00:00/e0 Emask 0x9 (media error)
    kernel: [51345.233097] ata1.00: status: { DRDY ERR }
    kernel: [51345.233103] ata1.00: error: { UNC }
    kernel: [51345.291929] ata1.00: configured for UDMA/100
    kernel: [51345.291944] ata1: EH complete
    kernel: [51347.682748] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
    kernel: [51347.682754] ata1.00: BMDMA stat 0x4
    kernel: [51347.682759] ata1.00: failed command: READ DMA
    kernel: [51347.682768] ata1.00: cmd c8/00:00:00:8b:4a/00:00:00:00:00/e0 tag 0 dma 131072 in
    kernel: [51347.682770]          res 51/40:00:a8:8b:4a/10:04:00:00:00/e0 Emask 0x9 (media error)
    kernel: [51347.682774] ata1.00: status: { DRDY ERR }
    kernel: [51347.682777] ata1.00: error: { UNC }
    Did Ubuntu 10.10 and/or ext4 eat my work laptop? What steps can I take to back up my important information, which is probably the home folder? Please include steps to recover my data onto the new hard drive as well. It does me little good to have backups I can't use.
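
    A hedged sketch of how to pull the raw SMART picture from the command line before deciding on a copy strategy; the device name is an assumption and should be checked first:
      sudo apt-get install smartmontools
      # full SMART report: look at the raw values of Reallocated_Sector_Ct and Current_Pending_Sector
      sudo smartctl -a /dev/sda
      # optionally kick off the drive's own extended self-test and read the result later
      sudo smartctl -t long /dev/sda
      sudo smartctl -l selftest /dev/sda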

    Read the article

  • Too much I/O in the morning ?

    - by steveh99999
    Interesting little improvement on a SQL 2005 system I encountered recently….. Some background - this system had a fairly ‘traditional OLTP’ workload, i.e. heavily used during the day till around 9pm, then a batch window for several hours, then not much activity in the early hours of the day until the normal workload resumed the following morning. Using perfmon, I noticed that every morning we would see a big spike in SQL Server I/O when the application started to be used... As it was 2005, I decided to look at what tables were in cache before and after the overnight batch processing ran (using the DMV equivalent of dbcc memusage that I posted earlier). Here’s what I saw: contents of the data cache were split fairly evenly between my 'important/heavily used' tables. After this, some application batch processing, backups, DBCC checks and reindexes were run - a fairly standard batch, I'd suggest. Cache contents then looked quite different: most of the cache was now being used by a table I’d describe as ‘unimportant’. Why? Well, that table was the last to be reindexed, purely due to luck, as the reindexing stored procedure looped through all application tables in alphabetical order. When the application starts to be used again, all this ‘unimportant’ data has to be replaced in cache by data that is heavily used… So, we changed the overnight reindex scripts: the most heavily accessed tables are now the last to be reindexed. Obvious really, but we did see a significant reduction in early-morning I/O after changing the order of our reindexing.

    Read the article

  • Why is the root partition on my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 by doing a fresh install where there was previously Ubuntu 11.10. My computer warns me now that my disk is nearly full. After having run apt-get purge, run apt-get autoremove and emptied the Trash can, I still have this problem, as shown by a screenshot of GParted: the disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One of my hypotheses is that when installing Ubuntu 12.04, I didn't configure my disks well and the disk /dev/sda6 is not mounted properly as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you everybody):
    - My home directory is not encrypted.
    - The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself and manually.)
    - After I mount /dev/sda6, the command df -h gives:
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda7       244G  221G   12G  96% /
      udev            3,9G  4,0K  3,9G   1% /dev
      tmpfs           1,6G  904K  1,6G   1% /run
      none            5,0M     0  5,0M   0% /run/lock
      none            3,9G  164K  3,9G   1% /run/shm
      /dev/sda6       653G  189G  433G  31% /media/8ec2fa69-039b-4c52-ab1b-034d785132a1
    - Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.
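
    A hedged sketch of how one might verify the mount situation and make /dev/sda6 mount permanently at /home; the UUID below is the one visible in the /media path above but must be confirmed with blkid, and migrating an existing /home should be done from a live session or with no user logged in:
      # what is mounted where, and what UUID does sda6 actually have?
      mount | grep sda
      sudo blkid /dev/sda6
      # add a line like this to /etc/fstab (UUID shown is an example to be replaced):
      # UUID=8ec2fa69-039b-4c52-ab1b-034d785132a1  /home  ext4  defaults  0  2
      # then test it without rebooting
      sudo mount -a
      df -h /home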

    Read the article

  • Oracle Internet Directory 11gR1 11.1.1.6 Certified with Oracle E-Business Suite

    - by B Shashikumar
    We are very pleased to announce that Oracle Internet Directory 11gR1 (11.1.1.6) is now certified with Oracle E-Business Suite Releases 11i, 12.0 and 12.1. With this certification, we are offering several benefits to Oracle E-Business Suite customers:
    · Massive Scale: Oracle Internet Directory (OID) is a proven solution for mission critical deployments. OID can scale to extremely large deployments on less hardware, as demonstrated by its published Two-Billion-User Benchmark. This reduces the footprint required to deploy enterprise directory services in the data-center, resulting in cost savings and a greener enterprise.
    · Enhanced Security: OID is the most secure directory service, providing security at every level from data in transit to storage and backups. In addition to LDAP security, it leverages powerful Oracle database security features like Database Vault and Transparent Data Encryption.
    · Investment Protection: This certification leverages Identity Management’s hot-pluggable capabilities, enabling E-Business Suite customers to store and manage user identities in existing directory servers, thus helping them maximize their investments.
    For a complete matrix of platforms supported by Oracle Internet Directory and its components, refer to the Oracle Identity and Access Management 11gR1 certification matrix. For more information about this certification, check out the Oracle E-Business Suite blog.

    Read the article

  • Biggest mistake you've ever made

    - by Rogue Coder
    Similar to the question I read on Server Fault, what is the biggest mistake you've ever made in an IT related position. Some examples from friends: I needed to do some work on a production site so I decided to copy over the live database to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. OOPS! I had copied the beta database over to the live site! Thank god for backups. And for me, I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database, and found ONLY 1 ENTRY, MINE. Upon investigating, it appears as though I forgot an auto increment key, and because of the server setup there was no way to recover the lost data. I am aware this question is similar to ones on Stack Overflow but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever…

    Read the article

  • git for personal (one-man) projects. Overkill?

    - by Anto
    I know, and use, two version control systems: Subversion and git. Subversion, as of now, gets used for personal projects where I am the only developer and git gets used for open source projects and projects where I believe others will also work on the project. This is mostly because of git's amazing forking and merging capabilities, where everyone may work on their own branch; very handy. Now, I use Subversion for personal projects, as I think git makes little sense there. It seems to be a little bit of overkill. It is OK for me if it is centralized (on my home server, usually) when I am the only developer; I take regular backups anyway. I don't need the ability to make my own branch, the main branch is my branch. Yes, SVN has simple support for branching, but much more powerful support for it makes no sense, I think. Merging can be a pain with it, or at least from my little experience. Is there any good reason for me to use git on personal projects, or is it just simply overkill?

    Read the article

  • Make an object slide around an obstacle

    - by Isaiah
    I have path areas set up in a game I'm making for canvas/HTML5 and have got it working to keep the player within these areas. I have a function isOut(boundary, x, y) that returns true if the point is outside the boundary. What I do is check only the new position x/y separately against the corresponding old position x/y. Then, if either one is out, I assign it the past value from the frame before. The old positions are kept in a variable from a closure I made, like this:
    opos = [x,y]; // old position
    npos = [x,y]; // new position
    if (isOut(bound, npos[0], opos[1])) {
        npos[0] = opos[0]; // assign it the old x position
    }
    if (isOut(bound, opos[0], npos[1])) {
        npos[1] = opos[1]; // assign it the old y position
    }
    It looks nice and works well at certain angles, but if your boundary has diagonal regions it results in jittery motion. What's happening is that the y position exits the area while x doesn't, so x keeps pushing the player to the side; once it has moved the player to the side a bit, the player can move forward, then the y exits again and the whole process repeats. Does anyone know how I may be able to achieve a smoother slide? I have access to the player's velocity vector, the angle, and the speed (when used with the angle). I can move the player with either angle/speed or x/y velocities, as I've built in backups to translate one to the other if either has been altered manually.

    Read the article

  • Which is more important in a web application code promotion hierarchy? production environment to repo equivalence or unidirectional propagation?

    - by ghbarratt
    Let's say you have a code promotion hierarchy consisting of several environments, the polar ends of which are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are controlled/created/altered/deleted in the production environment need to go into the repository. Which of the following two practices/approaches do you find best?
    1. Committing these non-developed file modifications made in the production environment, so that the repository reflects the production environment as closely and as often as possible.
    2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing developments through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they were absolutely necessary in other environments for development.
    So, 1 or 2, and why? PS - I am currently slightly biased toward maintaining production environment to repository equivalence (option 1), but I keep an open mind and would accept an answer supporting either.

    Read the article

  • What You Said: How You Sync and Organize Your Bookmarks

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite techniques for synchronizing and organizing your browser bookmarks. Now we’re back to highlight the most popular techniques, tricks, and services. By far and away, Xmarks was the most frequently mentioned service. For the unfamiliar, Xmarks is a bookmark syncing service that is packed with features. Not only does Xmarks sync bookmarks between browsers and/or computers it also supports iOS, Android, and BlackBerry (mobile integration requires an upgrade to the premium account). In addition to syncing the bookmarks it also integrates with your search results so you can see how other Xmarks users have ranked sites within your search results. Steve-O-Rama highlights one of the many benefits of Xmarks: Xmarks seems to do the job for me. I’ve got a handful of machines, each with three or four browsers; over the years, I’ve accumulated thousands of bookmarks, stretching across many areas of interest. Trying to keep them all straight had been quite a struggle until Xmarks came along. I freaked out when the company was acquired by LastPass, but was subsequently relieved when they continued the free service. Xmarks has a very nice web interface to access, export, search, organize, and do many other things with your bookmarks. In this way, even if I’m on the go, I can access every bookmark I’ve made. Even so, I still make occasional local backups, directly from the browsers to a network folder. Delicious bookmarks, another veteran of the bookmark syncing services, had a fair number of supporters among the HTG readership.

    Read the article

  • LUKS no longer accepts my passphrase

    - by Two Spirit
    I created a 4-drive RAID5 setup using mdadm, upgrading from 2TB drives to the new Hitachi 7200RPM 4TB drives. I can initially open my LUKS partition, but later can no longer access it, even though I have the right passphrases. It was working, and then at some unknown point in time I lose access to LUKS. I've used the same procedure for upgrading from 500GB to 1TB to 1.5TB to 2TB. After the first time this happened a week ago, I thought maybe there was some corruption, so I added a second key as a backup. After the second time, the LUKS partition became inaccessible and none of the keys worked. I put LUKS on it using:
    cryptsetup -c aes -s 256 -y luksFormat /dev/md0
    # cryptsetup luksOpen /dev/md0 md0_crypt
    Enter LUKS passphrase:
    Enter LUKS passphrase:
    Enter LUKS passphrase:
    Command failed: No key available with this passphrase.
    The first time this happened while I was upgrading to 4TB drives, I thought it was a fluke, and ultimately had to recover from backups. I went and used luksAddKey to add a second key as a backup. It happened again and I tried both passphrases, and neither worked. The only thing I'm doing differently this time around is that I've upgraded to 4TB drives, which use GPT instead of fdisk. The last time I had to even reboot the box was over 2 years ago. I'm using ubuntu-8.04-server with kernel 2.6.24-29, and I upgraded to 2.6.24-31, but that didn't fix the problem.
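
    A hedged sketch of commands that can help narrow this down, and guard against it next time: inspect the LUKS header and key slots, retry with debug output, and keep a header backup off the array. The device name comes from the question; the backup file path is an assumption, and luksHeaderBackup requires a reasonably recent cryptsetup:
      # show the LUKS header; damaged magic or key-slot data points at header corruption rather than a bad passphrase
      sudo cryptsetup luksDump /dev/md0
      # retry the open with debug output to see why the passphrase is being rejected
      sudo cryptsetup --debug luksOpen /dev/md0 md0_crypt
      # once it works again, keep a header backup somewhere off this array
      sudo cryptsetup luksHeaderBackup /dev/md0 --header-backup-file /root/md0-luks-header.img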

    Read the article
