Search Results

Search found 8979 results on 360 pages for 'backup sessions'.

Page 81/360

  • SQL: convert backup file from copy format to insert format

    - by takeshin
    I have a PostgreSQL backup made with phpPgAdmin using Export Copy (instead of Copy SQL, which is what I actually need). The file contains entries like this: COPY tablename(id, field) FROM stdin; ... How do I convert this file to INSERT format (INSERT INTO tablename ...)? I want to import it with pgAdmin using the Execute SQL command.
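
    A note on one possible route: rather than converting the COPY-format file, it is usually simpler to re-export the data with pg_dump using --inserts (or --column-inserts). A minimal sketch, assuming the database is called mydb (a placeholder) and pg_dump is available on the machine that holds the data:

        # re-dump just this table's data as INSERT statements, runnable from pgAdmin's SQL window
        pg_dump --data-only --inserts -t tablename mydb > tablename_inserts.sql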

    Read the article

  • Most secure way to generate a random session ID for a cookie?

    - by ensnare
    I'm writing my own sessions controller that issues a unique ID to a user once they log in, and then verifies and authenticates that unique ID on every page load. What is the most secure way to generate such an ID? Should the unique ID be completely random? Is there any downside to including the user ID as part of the unique ID?
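
    For reference, session tokens are normally drawn from a cryptographically secure random source and carry no user data at all; the user ID is then looked up server-side from the token. A quick shell sketch of what such a token looks like (assuming OpenSSL or a /dev/urandom-equipped system):

        # 32 random bytes, hex-encoded: roughly 256 bits of entropy
        openssl rand -hex 32
        # or straight from the kernel CSPRNG
        head -c 32 /dev/urandom | base64

    Embedding the user ID in the token gains nothing and leaks information, so keeping the ID purely random and mapping it to the user on the server is the usual choice.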

    Read the article

  • Update just one field from backup

    - by justSteve
    I'm looking to restore one field from a backup and can't find the syntax for an UPDATE statement that can reference two different catalogs. It seems like it should be something fairly close to: update users set idUserCompany = (select idUserCompany from .myBackup.dbo.users uT) where uT.idUser = idUser
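
    Assuming this is SQL Server (the dbo schema suggests so), a cross-database UPDATE can use the three-part name directly in a FROM/JOIN. A hedged sketch run through sqlcmd, with the server and current database names (myserver, mydb) as placeholders:

        sqlcmd -S myserver -d mydb -Q "UPDATE u SET u.idUserCompany = b.idUserCompany FROM dbo.users AS u JOIN myBackup.dbo.users AS b ON b.idUser = u.idUser;"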

    Read the article

  • Back up MySQL database from the command line

    - by srk
    I am able to back up a MySQL database from the command line by executing the following: "C:\MySQL\MySQL Server 5.0\bin\mysqldump" -uroot -ppassword sample > "D:/admindb/AAR12.sql" But there are no DROP and CREATE DATABASE statements in the generated .sql file. What should I add to the command to get them included?
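
    A likely fix, offered as a sketch rather than a verified answer: mysqldump only writes CREATE DATABASE (and, with the extra flag, DROP DATABASE) statements when it is run in --databases mode; DROP TABLE/CREATE TABLE statements are already emitted by default. With the path shortened and the password prompted instead of inlined:

        mysqldump -uroot -p --databases sample --add-drop-database > "D:/admindb/AAR12.sql"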

    Read the article

  • Git as a backup and Version Control System.

    - by gitnoob
    Hi. I want to use Git to back up my home drive, but I also want to use it as a version control system for projects that will be stored in that home drive. How would I go about doing that? Do I .gitignore all the project root folders and make new repositories for them?
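
    One common layout, sketched under the assumption that projects live in ~/projects (the myapp name is just an example): a repository at ~ that ignores the projects directory, plus an independent repository inside each project.

        cd ~ && git init
        echo 'projects/' >> .gitignore            # keep project repos out of the home-drive repo
        git add -A && git commit -m "initial home backup"
        cd ~/projects/myapp && git init           # each project keeps its own history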

    Read the article

  • Restoring VMs in Hyper-V from a (somewhat) failed VSS backup...

    - by DeliriumTremens
    I attempted to do a backup of our virtualization server when moving from Hyper-V 2008 to R2. I changed the registry settings to register Hyper-V VSS with Windows Server Backup, and sent the backup on its way while I went on to other things. Apparently the VSS portion didn't back up the VMs as I had hoped, and after updating Windows Server and Hyper-V to R2, I was unable to restore the VMs correctly. The backup completed and I have all of the files from before the update backed up, but is there any way to restore the VMs from the backup? The VHDs all seem to have old modified dates, and when bringing the machines up from these VHDs they have very old settings. I have found a bunch of AVHD files (named according to GUID), but I'm not sure if I can create a VM with the old VHD and merge the snapshots into it.

    Read the article

  • Restoring a backup in SQL Server 2005: where is the data stored?

    - by sc_ray
    I have two SQL Server database instances on two different machines across the network. Let's call these servers A and B. Due to some infrastructure issues, I had to make a complete backup of the database on server A and robocopy the A.bak over to a shared drive accessible by both A and B. What I want is to restore the database on B. My first issue is that when restoring the backup on server B, the backup location dialog does not display my shared drive. My next issue is that server B's C: drive has barely any space left; there are some additional partitions with more space that could house my backup file, but I am not sure what happens to the data after I restore the database on B. Would the restored data fill up all the available space on C:? It would be great if somebody could explain how the data is laid out after a restore is initiated on a target database server. Thanks
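
    Two points worth knowing, offered as a sketch rather than a verified answer: the restore dialog does not show mapped drives because the restore runs under the SQL Server service account, so a UNC path the service account can read usually works; and the data files go wherever the MOVE clauses (or the file paths recorded in the backup) point, not necessarily to C:. With placeholder names (serverB, fileserver, MyDb, E:\SQLData, and the logical file names):

        sqlcmd -S serverB -Q "RESTORE FILELISTONLY FROM DISK = N'\\fileserver\share\A.bak'"
        sqlcmd -S serverB -Q "RESTORE DATABASE MyDb FROM DISK = N'\\fileserver\share\A.bak' WITH MOVE 'MyDb_Data' TO 'E:\SQLData\MyDb.mdf', MOVE 'MyDb_Log' TO 'E:\SQLData\MyDb_log.ldf'"

    The first command lists the logical file names recorded in the backup; those are the names to use in the MOVE clauses of the second.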

    Read the article

  • MSDN Live 2010 – Delivered: 24 sessions (4 x 6) on Visual Studio and Team Foundation Server

    - by terje
    We (Mikael Nitell and I) got a whole track on the Norwegian MSDN Live tour this year. We did these as a pair, and covered 4 cities over 4 days, 6 sessions per day, taking 8 hours to get through it. The Icelandic volcano made the travels a bit rough, but we managed 6 flights out of 8. The first one had to go by van instead, a 7-8 hour drive each way together with other MSDN Live presenters – a memorable tour! Oslo was the absolute high point. We had to change to a bigger hall. People were crowding in, and even the big hall was packed! The presentations were mostly based on demos, but we had a few slides as well. They have been uploaded to my SkyDrive. Info to aliens – some of the text may be Norwegian. The sessions were as follows: Overview of news in Visual Studio and Team Foundation Server 2010; Ensuring Quality with VS/TFS 2010; Releasing products with VS/TFS 2010; No More No Repro with VS/TFS 2010; Performance Testing and Parallel Programming with VS/TFS 2010; Migrating to VS/TFS 2010; Tips, tricks, news and some best practices with VS/TFS 2010. In the coming days, I will post examples from the demos too, with explanations of how they are intended to work. These entries will also contain material we had to cut from the actual presentations due to the time constraints. We managed to create recordings of two of the sessions, which will be uploaded to Channel 9 by Microsoft, as far as I know. I will update this blog with information about exact locations when that is done. Also note that we (read: Osiris Data AS) are running both Upgrade and Deep Dive courses on VS/TFS 2010 in May. Please look here for more info. If you want to be informed, follow me on Twitter. All blog entries will be announced on Twitter.

    Read the article

  • The Oracle Architects Training: 40 training sessions for our EMEA partners to build their Oracle Applications and Technical skills

    - by Richard Lefebvre
    There is a lot more to Oracle technology than meets the eye. Sure, you already belong to a small circle of our most experienced and committed partners. But are you making the best use possible of our technology solutions? Put it to the test. Join the "Oracle Partner Architects Training". It is aimed at providing your experts, architects and consultants with in-depth architectural knowledge about Oracle technology. Here is your chance to learn from the best. Seasoned speakers, exclusive content and no product marketing. Oracle technology beyond the obvious. Mark your calendar: the Oracle Partner Architects Training is an online training program. Sign up for the live WebEx sessions (scheduled from January 2013 until April 2013) or watch replays as they become available. Feel free to follow training sessions at your own pace. Also, last year's sessions are still accurate and still available on architects.oraevents.eu. NOTE: Looking to get your consultants Oracle certified? One more reason to join the Oracle Partner Architects Training. It is the fast track to getting their expertise validated with an Oracle certificate.

    Read the article

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/advice/suggestions please: I have a load of kit that I love but which currently operates in a disconnected, sometimes counter-productive way. Because I never really had a master plan I just added these things one after another and connected them up in ad hoc ways. Since I bought my MacBook I've found I spend much less time on the MacPro that was until then my main machine. Perversely, as my job involves writing .Net software, I spend a lot of Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but it's not without its limitations/annoyances. We listen to each other's iTunes libraries, but the music files are all over the place and it would be good to know they were all safely in one location (and fully backed up). I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared home drive that was visible to all machines so all my current files were synced up wherever I was. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process. The kit list thus far: Apple MacPro 8GB/3x250GB RAID0 + 1TB Apple MacBook Pro 13" 8GB/250GB - I spend a lot of my work time on a Windows 7 VM on this. Crappy Acer laptop (for children's use - iPlayer, watching movies/TV files) HP Proliant Server 4GB/80GB+160GB+300GB Sun Ultra 10 2 x 80GB (old, but in top-notch condition) PS3 160GB iPod Classic 2 x 8GB iPod Touch Observations: Part of the problem is our dual use of Windows and OS X - we can't go for a pure NT-style roaming profile. Because the server is also used for hosting test/beta applications and a SQL Server db, it can't be dedicated to file serving. The two Macs really could do with sharing a roaming profile or similar. I'd love to be able to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly asks what function it serves in my study :-( I've got no shortage of 500GB external USB hard drives. iMovie files are very large and ideally would be processed on a RAID system. Apple's Time Machine isn't so great. If anyone could suggest all or part of a setup that would fulfil some of my requirements I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been mooted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

    Read the article

  • innobackupex - after restoring - quit without updating PID file

    - by clarkk
    After restoring a backup the server can't start.. restoring # tar -izxf /var/www/bak/db/2013-11-10-1437_mysql.tar.gz -C /var/www/bak/db_import # innobackupex --use-memory=1G --apply-log /var/www/bak/db_import # service mysql stop # mv /var/lib/mysql /var/lib/mysql-old # mkdir /var/lib/mysql # innobackupex --copy-back /var/www/bak/db_import # chown -R mysql:mysql /var/lib/mysql # service mysql start error log 131110 21:24:20 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 2013-11-10 21:24:21 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). 2013-11-10 21:24:21 6194 [Warning] Using pre 5.5 semantics to load error messages from /opt/mysql/server-5.6/share/english/. 2013-11-10 21:24:21 6194 [Warning] If this is not intended, refer to the documentation for valid usage of --lc-messages-dir and --language parameters. 2013-11-10 21:24:21 6194 [Note] Plugin 'FEDERATED' is disabled. /usr/local/mysql/bin/mysqld: Table 'mysql.plugin' doesn't exist 2013-11-10 21:24:21 6194 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 2013-11-10 21:24:21 6194 [Note] InnoDB: The InnoDB memory heap is disabled 2013-11-10 21:24:21 6194 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2013-11-10 21:24:21 6194 [Note] InnoDB: Compressed tables use zlib 1.2.3 2013-11-10 21:24:21 6194 [Note] InnoDB: Using Linux native AIO 2013-11-10 21:24:21 6194 [Note] InnoDB: Not using CPU crc32 instructions 2013-11-10 21:24:21 6194 [Note] InnoDB: Initializing buffer pool, size = 128.0M 2013-11-10 21:24:21 6194 [Note] InnoDB: Completed initialization of buffer pool 2013-11-10 21:24:21 6194 [Note] InnoDB: Highest supported file format is Barracuda. 2013-11-10 21:24:22 6194 [Note] InnoDB: 128 rollback segment(s) are active. 2013-11-10 21:24:22 6194 [Note] InnoDB: Waiting for purge to start 2013-11-10 21:24:22 6194 [Note] InnoDB: 5.6.12 started; log sequence number 636992658 2013-11-10 21:24:22 6194 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 2013-11-10 21:24:22 6194 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 2013-11-10 21:24:22 6194 [Note] Server socket created on IP: '127.0.0.1'. 2013-11-10 21:24:22 6194 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist 131110 21:24:22 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended mysql_upgrade /opt/mysql/server-5.6/bin/mysql_upgrade -u root -pxxxxx -P 3308 Warning: Using a password on the command line interface can be insecure. Looking for 'mysql' as: /opt/mysql/server-5.6/bin/mysql Looking for 'mysqlcheck' as: /opt/mysql/server-5.6/bin/mysqlcheck FATAL ERROR: Upgrade failed
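
    Those "Table 'mysql.plugin' doesn't exist" / "Table 'mysql.user' doesn't exist" errors usually mean the mysql system schema directory never made it into the restored datadir, or the server is reading a different datadir than the one that was restored into. Two quick checks, assuming the paths from the question:

        # which datadir does the server actually read from its option files?
        my_print_defaults mysqld | grep -i datadir
        # did the mysql system schema directory come back with the copy?
        ls -ld /var/lib/mysql/mysql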

    Read the article

  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of stupid questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the foreseeable future) carrying on an external hard drive a unison-synchronized copy of all of my documents and code, including code which resides in some of my "dotfiles" and other code which resides in ~/bin (things I've made are there because ~/bin is in my $PATH) along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. Despite this, I do use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with stuff under ~/bin) that I want synchronized, and then follow = True is in my unison profile. It happens to be that this collection of odds and ends—plus an automatically-generated text file containing every package installed on my system—is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program to do that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup; it would be simple if duplicity would follow symlinks, but it does not, and the manpage lists no option for enabling any such behavior. Comparing unison's file selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other. For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin. I don't think running duplicity on the external hard disk is such a good idea either; I usually keep that hard disk unmounted and unplugged in case of a power failure or other physical problem with the computer, plus I'm not sure about duplicity's performance given that the hard disk is NTFS-formatted in order to be usable at my Windows-imprisoned place of occupation, and that despite being a USB 3.0 disk, my computer has no USB 3.0 ports so it acts as a USB 2.0 disk. How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?
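
    One workaround, sketched under the assumption that the symlink farm lives at ~/backup-links and that an sftp target is acceptable (the host, paths and key ID are placeholders): dereference the links into a staging directory with rsync, then point duplicity at the staging copy. The obvious cost is a second on-disk copy of the data.

        rsync -a --copy-links --delete ~/backup-links/ ~/.backup-staging/
        duplicity --encrypt-key MYKEYID --sign-key MYKEYID ~/.backup-staging sftp://user@remote.example.com//home/user/backups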

    Read the article

  • Is there a way to replicate very large file shares in real time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a Samba file share. Here's how the folder structure looks: \server\Version.1 \server\Version.2 \server\Version.3 ... \server\Version.24 The contents of each new folder compared to the last one usually don't change very much, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out. It's actually been used for years and is so incredibly simple, anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant! Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" -- after reading the background information on how this works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh! One approach I'm looking at is this; say it's 5 AM and the cron job finishes: (1) the new Version.5 folder arrives on the local server; (2) SSH to the remote server and copy Version.4 to Version.5; (3) run rsync on the local server pushing changes to the remote server; rsync finally knows to do a differential copy between Version.4 and Version.5. Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
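
    On Linux the closest stock tool to this is rsync's --link-dest option, which hard-links unchanged files against the previous hour's copy on the destination, so only the real differences cross the WAN; it replaces the "copy Version.4 to Version.5 over SSH first" step. A sketch with placeholder paths and hostname:

        # push the new hour, reusing identical files from the previous hour already on the remote
        rsync -a --delete --link-dest=../Version.4 /data/Version.5/ remote:/data/Version.5/

    For something closer to real time, lsyncd can watch the tree with inotify and drive the same rsync commands automatically.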

    Read the article

  • How to move partition with Users to a new SATA controller?

    - by Al Kepp
    Our current situation: X79 chipset. Windows 7 installed on a SATA SSD running on a Marvell controller. C:\Users contains just the Public and computer-name folders; all the other users are created in E:\Users. That partition is on two HDDs running on an Intel RAID controller. (The motherboard has two SATA/RAID onboard controllers.) The goal is: without reinstallation, I want to move the SSD to a standard SATA controller. Then I want to get rid of RAID1, move one HDD to the same internal Intel SATA controller, and move the current E:\Users to that new place. I want it to stay as E:\Users, but I need to reinstall the HDD to let it work in SATA mode without RAID. So I face several problems. I am sure all are solvable with free software utilities, but I don't know exactly how to do it. I can see the particular problems: I have got all users at E:\Users. When I turn off that E:\ disk, I won't be able to log in to Windows. I need at least one admin account to be placed back at C:\Users. The current C: runs on a standard 120 GB SSD, but it is connected to a Marvell SATA/RAID controller. I am afraid Windows won't let me move it to the Intel SATA controller due to a hardware/licence check, and I won't be able to use the standard W7 recovery disk either, because there probably isn't a Marvell SATA/RAID driver on it. I haven't tried anything yet, because I am afraid I could end up with a computer that doesn't work at all. (I want to move it to a standard ICH10 Intel SATA controller so we have no problems with it in the future. I think it is not very safe when we use any nonstandard hardware to boot the computer.) I need to somehow back up the current E:\ disk and restore it to a new E:. I hope this will be the easy part (as long as the admin account resides on C:). The E: RAID array is very large, but it is almost empty (less than 100 GB of data), so I can make the partition smaller so it can easily fit on a single SATA HDD.

    Read the article

  • Best Practices for persisting iPod Playlist (MPMediaItemCollection) across sessions

    - by coneybeare
    When using in-app audio in the iPhone SDK, it is possible to allow users to select a list from their iPod library and create an in-app local playlist. If I want to persist this choice, it is easy to serialize the data and write it to a file, then recover it. Doing it just like that, however, leads me to think something is going to go wrong. For example, what if the user syncs and removes sounds? I can loop across them all and query the iPod DB at setup time, but with lists that could be 50,000 items long, this could take some time. How are other people doing this, and what are some gotchas that I haven't thought about?

    Read the article

  • Problem with sessions, subdomains, and Authlogic in Rails.

    - by Alfred Nerstu
    I've got a Rails app with Authlogic authentication and a username.domain.com structure built with subdomain-fu. But my session breaks when going from domain.com to username.domain.com. I've tried adding config.action_controller.session = {:domain => '.localhost:3000'} to my development.rb, but that seems to break Authlogic, disabling sign out/sign in. Any suggestions on what to do? Thanks in advance!

    Read the article
