Search Results

Search found 4740 results on 190 pages for 'split mirror'.


  • Using awk to split text file every 10,000 lines

    - by Sneaky Wombat
     I have a large gzip'd text file. I'd like to do something like:

         zcat BIGFILE.GZ | awk (snag 10,000 lines and redirect to...) | gzip -9 smallerPartFile.gz

     For the awk part up there, I basically want it to take 10,000 lines, send them to gzip, and then repeat until all lines in the original input file are consumed. I found a script that claims to do this, but when I run it on my files and then diff the original against the ones that were split and then merged, lines are missing. So something is wrong with the awk part, and I'm not sure what is broken. Here's the code. Can someone tell me why this doesn't yield a file that can be split, merged, and then diff'd against the original successfully?

         # Generate files part0.dat.gz, part1.dat.gz, etc.
         # restore with: zcat foo* | gzip -9 > restoredFoo.sql.gz (or something like that)
         prefix="foo"
         count=0
         suffix=".sql"
         lines=10000 # Split every 10000 lines.
         zcat /home/foo/foo.sql.gz |
         while true; do
             partname=${prefix}${count}${suffix}
             # Use awk to read the required number of lines from the input stream.
             awk -v lines=${lines} 'NR <= lines {print} NR == lines {exit}' >${partname}
             if [[ -s ${partname} ]]; then
                 # Compress this part file.
                 gzip -9 ${partname}
                 (( ++count ))
             else
                 # Last file generated is empty, delete it.
                 rm -f ${partname}
                 break
             fi
         done
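     The likely culprit: each awk invocation reads the shared pipe through buffered I/O, so when it exits it has already consumed lines beyond its 10,000-line quota, and the next iteration never sees them. A sketch that avoids the loop by letting a single awk process read every line exactly once (output filenames are illustrative):

         # One reader, switching output files every 10,000 lines.
         zcat /home/foo/foo.sql.gz | awk -v lines=10000 '
             (NR - 1) % lines == 0 {
                 if (part) close(part)
                 part = sprintf("foo%d.sql", int((NR - 1) / lines))
             }
             { print > part }'
         gzip -9 foo[0-9]*.sql

     split -l 10000 on the decompressed stream does the same job, at the cost of its default two-letter suffixes.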

    Read the article

  • Best way to replicate / mirror 100s of databases in SQL 2005

    - by mrwayne
     I currently host around 400-500 SQL 2005 databases of varying sizes (1-10 GB each). I am aware of most of the different methods available and the general pros/cons of mirroring, log shipping, replication and clustering, but I am not aware of how well they tend to perform when employed at the scale I have specified (400-500 unique databases). Does anyone have any good advice on what is likely the best method for having the ability to fail over to another server with this sort of setup? Failover does not need to be immediate; I'm just looking for something better than taking backups every day and moving them to storage. Preferably I'm looking for something that would also make it easy to manage the databases in bulk (as opposed to one at a time). Thanks for your input!

    Read the article

  • Dual Monitor difficulties (VirtualBox ubuntu host) - rdesktop sessions mirror

    - by rukus5
     I am running an Ubuntu 9.10 host with a Windows guest, and I need to extend the guest's Windows desktop onto the second monitor (otherwise I will have to convert to a dual-boot setup, because this is a work-furnished computer - please help!). Current situation: the Windows guest is running with VRDP enabled and I connect successfully; Guest Additions are running, VirtualBox is set to 2 monitors, and I see two monitors in the guest's display settings. But connecting via two different rdesktop sessions mirrors the display, even though the guest's display settings are set to extend the desktop. Is there an rdesktop option that signals to VirtualBox that a session is the second display? I need the second connection to be the second display. Any ideas?
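     If I'm reading VirtualBox's VRDP multimonitor documentation correctly, all guest screens are served through the one VRDP port and the client selects a screen by passing @<screen-number> in the RDP domain logon field - with rdesktop that is the -d option. A sketch (host, port and screen numbers are assumptions to verify against the manual for your VirtualBox version):

         rdesktop -d "@1" vboxhost:3389    # attach to guest display 1
         rdesktop -d "@2" vboxhost:3389    # attach to guest display 2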

    Read the article

  • WGet or cURL: Mirror Site from http://site.com And No Internal Access

    - by alharaka
     I have tried wget -m, wget -r, and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html and there are several more referenced, so I am at a loss. curlmirror.pl on the cURL developers' website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something. Long story short, some school allows its students to submit web projects, but they want to know how they can collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites. UPDATE: I think I figured out the issue. I thought the links to the other pages were in the index.html page that downloaded. I was way off. Turns out the footer of the page, which has all the navigation links, is handled by a JavaScript file, Include.js, which reads JLSSiteMap.js and some other JS files to do page navigation and the like. As a result, wget does not pick up any other dependencies, because a lot of this crap is handled outside the web pages themselves. How can I handle such a website? This is one of several problem cases. I assume little can be done if wget cannot parse JavaScript.
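     wget indeed cannot execute JavaScript, so pages reachable only through Include.js will never be discovered by recursion. One workaround, assuming JLSSiteMap.js contains the URLs in plain text: pull them out and feed them to wget as an explicit seed list (the grep pattern is hypothetical - adjust it to the actual file):

         grep -o 'http://site.com[^"]*' JLSSiteMap.js > urls.txt
         wget --mirror --page-requisites --convert-links --html-extension \
              --input-file=urls.txt http://site.com/

     Anything built dynamically with no URL written out anywhere will still be missed; a real browser engine is the only general answer.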

    Read the article

  • Mirror a RAID0 volume

    - by Ghostrider
     I have two SSDs running in RAID 0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups. This is fine and well, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived so that I could restore the array from backup. WHS restoration takes about 5 hours, which basically means I lose an entire day in the process. Is it possible to set up a kind of recovery volume for the RAID array - a single mechanical HDD that is updated with an exact clone of the RAID array on a daily basis? That way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some performance, but still be able to work. The machine in question runs Windows 7. Creating RAID 0+1 is not an option because of the high price of the SSDs, and because it still doesn't protect against failure of the RAID controller. Is there any way this can be set up?

    Read the article

  • setting up a proxy to mirror an SSH SOCKS connection

    - by aresnick
     I have two remote machines, remote1 and remote2. remote2 is only running sshd, and I can't run anything else on it. remote1 is a full-fledged server to which I have complete access. On remote1 I can run a SOCKS proxy via

         ssh -f -N -D *:8080 me@remote2

     which exposes a SOCKS proxy on port 8080 of remote1. I'd like to authenticate this so that the proxy isn't sitting open. How can I do this? It seems like I should be able to use DeleGate, but I can't even seem to get its HTTP proxy functionality working. When I run

         delegated -r -P8081 SERVER=http PERMIT="*:*:*" REMITTABLE="*"

     I can't even get it to work on port 8081. Anyway, I was hoping someone could point me in the right direction to authenticate access to the SOCKS proxy connection. That is, I want to be able to point my browser's proxy at remote1 and browse the internet through the SSH SOCKS proxy/tunnel to remote2. squid doesn't support a SOCKS parent =( Thanks!
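     One way to avoid bolting authentication onto the SOCKS port at all: bind the dynamic forward to loopback on remote1, and let each user authenticate with their own SSH login to remote1. A sketch (usernames and hosts assumed):

         # On remote1 - SOCKS proxy reachable only from remote1 itself:
         ssh -f -N -D 127.0.0.1:8080 me@remote2

         # On each client - authenticated tunnel, then point the browser's
         # SOCKS proxy at localhost:8080:
         ssh -N -L 8080:127.0.0.1:8080 user@remote1

     SSH's own authentication then gates access, and nothing is exposed on remote1's public interface.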

    Read the article

  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
     I'm looking for an elegant way to solve the following problem: my development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa - users granted access on SharePoint who update files in that workspace should be able to save their files and have the changes appear automatically on our client PCs. I've already organized the folders so that in Dropbox there is a SharePoint folder that looks something like this:

         SharePoint
         ----Team
         --------Restricted Access Folders
         ----Organization
         --------Open Access Folders

     The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy - or all I would have to do is find where the SharePoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of two-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server. I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating - and as much as I love rsync, I've had nothing but bad luck with it on Windows, and again, it has to be kicked off manually or through Task Scheduler - and I just have a feeling that if I go down that route, it's only a matter of time before I get conflicts all over the place in either Dropbox, SharePoint, or both. I really want something that's going to watch both folders and, when one item changes, automatically update the other in "real time". It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question. For simplicity's sake, I'll restate the goal: to be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.
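     For what it's worth, robocopy has a built-in change monitor (/MON:n reruns the copy after n changes are seen), so a pair of opposing one-way jobs can approximate two-way sync - with the big caveat that simultaneous edits on both sides can clobber each other. A hedged sketch, paths assumed, with /XO so an older copy never overwrites a newer one:

         REM Dropbox -> SharePoint-synced folder
         robocopy "C:\Dropbox\SharePoint" "C:\SPWorkspace" /E /XO /MON:1
         REM SharePoint-synced folder -> Dropbox
         robocopy "C:\SPWorkspace" "C:\Dropbox\SharePoint" /E /XO /MON:1

     This only helps if the SharePoint library is actually materialized on disk (e.g. a mapped WebDAV drive or a synced workspace), which is itself an assumption worth testing first.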

    Read the article

  • Windows 7 - cancel mirror synchronisation

    - by Chris W
     I've got basic OS-managed disk mirroring set up in Windows 7 for a couple of volumes. After a power failure the mirrors are currently resyncing. These are only small volumes of data, but the sync has not completed after more than 24 hours. Is there any way to stop this, as it's driving me nuts? I need to get the machine back to a usable state to get some work done, but it's a bit of a dog while this sync is going on. I've tried removing the mirrors, but it won't let me do that while the resync is in progress.
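     When the Disk Management GUI refuses, the command-line diskpart will sometimes break a dynamic mirror anyway. A sketch - volume and disk numbers are placeholders, so check them with the list commands first, and note this abandons the redundancy rather than pausing the resync:

         DISKPART> list volume
         DISKPART> select volume 2
         DISKPART> break disk=1 nokeep

     With nokeep, the mirror half on the other disk is deleted outright. Whether diskpart permits this mid-resync I can't promise, but it accepts more than the GUI does.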

    Read the article

  • Robocopy Mirror Backup gone awry

    - by Aznfin
     I have created a simple batch file script for running Robocopy. It is set to make a backup of my user account folder to my external hard drive. Here are the parameters for Robocopy:

         ROBOCOPY "C:\Users\Finnly" "F:\Backups\Finnly (Backup)" /ZB /COPY:DAT /DCOPY:T /MIR /256 /MT:32 ^
             /XF *.log *.log* *.dat *.tmp *.temp *.old "ntuser*" "SyncToy*" "UpgKit.txt" ".recently-used.xbel" ^
             /XD ".gimp-2.6" ".thumbnails" ".VirtualBox" "AppData" "Application Data" "Adobe" ^
                 "Camtasia Studio" "Cookies" "CyberLink" "DivX Movies" "DVD Architect Pro 5.0 Projects" ^
                 "dwhelper" "GTA San Andreas User Files" "Lightroom" "Local Settings" "NetHood" ^
                 "PrintHood" "Scripts" "temp" "Templates" "The KMPlayer" "Tracing" ^
             /R:3 /W:10 /V /TS /FP /ETA /LOG+:F:\Backups\Sync.log /TEE

     For some reason when I run it, it backs up the files and then seems to back them up again. The size of my user account directory is 18.3 GB, but the backup of it occupies over 30 GB. After reading the log it generates, it is obvious that it's copying files more than once. Why is this happening? I'm running Windows 7 Home Premium 64-bit.

    Read the article

  • wget not converting links

    - by acrosman
     I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one, in case we forgot something we need (we'll have about 1,000 pages at launch). The site is run on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to be actually converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. The most recent failed attempt was:

         nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org &

     I've also included --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l; again, not that they should matter. The result is files that still have links like:

         http://www.example.org//ht/d/sp/i/17770
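     Two things worth checking. First, wget performs link conversion only after the entire retrieval has finished - if the nohup'd job is killed (or is still crawling those 20,000 pages), no links are ever rewritten. Second, an unquoted -R *calendar* can be mangled by the shell before wget sees it. A hedged re-run:

         nohup wget --mirror --convert-links --backup-converted --html-extension \
             -l 10 -P afscSnapshot -R '*calendar*' -o wget.log \
             http://www.example.org &

     Look for the conversion messages at the tail of wget.log; if they never appear, the crawl never completed.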

    Read the article

  • Excluding certain file types in wget

    - by Alan Spark
     I have been using wget for a while now to mirror files from an FTP server to a local folder. My wget command is as follows:

         wget -mirror -w 1 -p -nH -P /var/www/ ftp://my-ftp-server

     However, I just noticed that it is copying over a .listing file for every folder that it visits. So even if nothing has changed on the FTP server, a .listing file will always be copied. My understanding is that the .listing file is created when wget opens the FTP session. Is there a way to avoid this? I've tried the -R option (e.g. -R .listing), but this didn't help. See: http://www.gnu.org/software/wget/manual/wget.html#Recursive-Accept_002fReject-Options Thanks, Alan
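     Since -R evidently doesn't catch it - .listing is generated by wget itself during FTP directory traversal rather than fetched as an ordinary file - one blunt workaround is to sweep the files up after each run:

         wget -mirror -w 1 -p -nH -P /var/www/ ftp://my-ftp-server \
             && find /var/www/ -name '.listing' -type f -delete

     (find -delete is the GNU find spelling; -exec rm {} + works elsewhere.)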

    Read the article

  • How can I set up a dual-site Storage Daemon in Bacula (mirror the backup)

    - by Andy
     On Site A, I have successfully set up a Bacula Director on one host, several File Daemons on the hosts I want to back up, and one Storage Daemon where the backup is actually stored. In case disaster strikes the building at Site A, I want a second Storage Daemon at another site, Site B. The filesets, Director, etc. would stay the same, except that jobs would be stored on the other Storage Daemon as well. Are there any best practices for this?
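     A single job can't write to two Storage Daemons at once, but Bacula (3.x and later) has Copy jobs for exactly this: back up to the Site A SD as usual, then copy the finished jobs to the Site B SD. A hedged bacula-dir.conf sketch - names, address, password and schedule are all placeholders:

         Storage {
           Name       = siteB-sd
           Address    = sd.siteb.example.com
           SDPort     = 9103
           Password   = "sd-secret"
           Device     = FileStorageB     # must match the Site B bacula-sd.conf
           Media Type = FileB
         }

         # The Site A pool names Site B's pool as its Next Pool,
         # which is where Copy jobs send their data.
         Pool {
           Name      = SiteA-Pool
           Pool Type = Backup
           Storage   = siteA-sd
           Next Pool = SiteB-Pool
         }
         Pool {
           Name      = SiteB-Pool
           Pool Type = Backup
           Storage   = siteB-sd
         }

         Job {
           Name           = "CopyToSiteB"
           Type           = Copy
           Selection Type = PoolUncopiedJobs
           Pool           = SiteA-Pool
           Client         = siteA-fd        # required by the parser, unused here
           FileSet        = "Full Set"
           Messages       = Standard
           Schedule       = "NightlyAfterBackups"
         }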

    Read the article

  • Storage replication/mirror over WAN

    - by galitz
     We are looking at storage replication between two data centers (600 km apart) to support an active-passive cluster design for disaster recovery. The OS layer will be mostly Windows Server 2003/2008, with some OpenSUSE Linux used for performance monitoring, on VMware or possibly XenServer. The primary application service to replicate is Nvision. Datacenter 1 will have two storage systems for local active-passive (or perhaps active-active) replication, with Datacenter 2 used as a last-resort disaster recovery site. We have a handle on most aspects, but I am looking for specific recommendations on storage platforms that can handle remote replication cleanly. Thanks.

    Read the article

  • How do I mirror a MySQL database?

    - by user45745
    I'm running two load balanced servers for one website, and I'd like the databases to be synchronized. Queries may be run on either of the two servers because they are both production sites, so the replication can't just work one way. It doesn't have to be in real-time, just fairly accurate so people don't notice a difference when they get switched to a different server.
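     What's described here is master-master (circular) replication: each server is a slave of the other, and staggered auto-increment settings keep the two from handing out colliding key values. A hedged sketch - hostnames, credentials and log coordinates are placeholders:

         # my.cnf on server A (server B mirrors this with
         # server-id = 2 and auto_increment_offset = 2):
         server-id                = 1
         log-bin                  = mysql-bin
         auto_increment_increment = 2
         auto_increment_offset    = 1

         -- Then on each server, point replication at the other
         -- (file/position taken from SHOW MASTER STATUS on the peer):
         CHANGE MASTER TO
             MASTER_HOST='other-server', MASTER_USER='repl',
             MASTER_PASSWORD='secret',
             MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
         START SLAVE;

     Asynchronous replication gives a short lag window, which fits the "doesn't have to be real-time" requirement; note that conflicting writes to the same rows can stall replication, so the load balancer should ideally keep each session pinned to one server.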

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
     I have about 200 sites, each of which has two servers running MSSQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for ... a lot ... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copy them to the secondary server, restore the backup and log with NORECOVERY, create the endpoints and set the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage sql 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love a script that ensures there are matching endpoints (same ports) on both servers, backs up the database, backs up the log, copies the backups to the second server, restores database and log with NORECOVERY, sets the partners on both servers, and somehow confirms that the databases are linked and synchronized. Well, thanks for reading :)
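     For reference, the T-SQL for one database is short enough that the batch file mostly just has to loop it over sys.databases and feed it to SQLCMD. A hedged per-database sketch - names, share and ports are placeholders, and the mirroring endpoints are assumed to already exist on both servers:

         -- On the principal:
         BACKUP DATABASE [MyDb] TO DISK = N'\\mirrorsrv\stage\MyDb.bak' WITH INIT;
         BACKUP LOG      [MyDb] TO DISK = N'\\mirrorsrv\stage\MyDb.trn' WITH INIT;

         -- On the mirror:
         RESTORE DATABASE [MyDb] FROM DISK = N'\\mirrorsrv\stage\MyDb.bak' WITH NORECOVERY;
         RESTORE LOG      [MyDb] FROM DISK = N'\\mirrorsrv\stage\MyDb.trn' WITH NORECOVERY;
         ALTER DATABASE   [MyDb] SET PARTNER = 'TCP://principal.example.com:5022';

         -- Back on the principal:
         ALTER DATABASE   [MyDb] SET PARTNER = 'TCP://mirrorsrv.example.com:5022';

         -- Confirm the pair is linked and synchronized:
         SELECT DB_NAME(database_id) AS db, mirroring_state_desc
         FROM sys.database_mirroring
         WHERE mirroring_guid IS NOT NULL;

     Run each chunk with sqlcmd -S <server> -i <script.sql> from the batch file; checking mirroring_state_desc for SYNCHRONIZED is the "confirm they're linked" step.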

    Read the article

  • Mirror network packets from WiFi to Ethernet in an ASUS Router RT N53

    - by fazineroso
     I have an ASUS RT-N53 router running the default firmware (Linux 2.6.22 with BusyBox and uClibc). I need to capture data packets from some Wi-Fi devices connected to that router (an iPad and some smartphones), but the router is not forwarding any packets coming from the Wi-Fi devices to the Ethernet ports. Any idea how I can proceed? The tools available on the router are iptables (no TEE target, though), ebtables, brctl... Currently the Ethernet and Wi-Fi devices form a bridge:

         # brctl show
         bridge name     bridge id               STP enabled     interfaces
         br0             8000.50465dc06be2       no              vlan0
                                                                 eth1

     There are no ebtables rules:

         # ebtables -L
         Bridge table: filter
         Bridge chain: INPUT, entries: 0, policy: ACCEPT
         Bridge chain: FORWARD, entries: 0, policy: ACCEPT
         Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
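     Since eth1 (wireless) and vlan0 sit on the same bridge, the Wi-Fi clients' traffic already crosses br0 on the router itself, so capturing there sidesteps port mirroring entirely. BusyBox doesn't ship tcpdump, but if a statically linked build for the router's CPU can be dropped into /tmp, something like this should work (the client IP is a placeholder):

         # On the router: capture one Wi-Fi client's traffic off the bridge
         ./tcpdump -i br0 -s 0 -w /tmp/ipad.pcap host 192.168.1.23

     Pull the pcap off afterwards and open it in Wireshark; this assumes enough free space in /tmp for the capture.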

    Read the article

  • Setting Boot and Mirror Disks correctly at the Solaris OBP

    - by Shaun Dewberry
     I am recovering a domain that was lost due to a power outage on a Sun Fire E25K server. I know how to set the appropriate parameters at the OpenBoot prompt using nvalias/devalias, boot, etc. However, I do not understand how one gets from the output of show-disks:

         {1a0} ok show-disks
         a) /pci@1dd,600000/SUNW,qlc@1/fp@0,0/disk
         b) /pci@1dd,700000/SUNW,qlc@1/fp@0,0/disk
         c) /pci@1dc,700000/pci@1/pci@1/scsi@2,1/disk
         d) /pci@1dc,700000/pci@1/pci@1/scsi@2/disk
         e) /pci@1bd,600000/SUNW,qlc@1/fp@0,0/disk
         f) /pci@1bd,700000/SUNW,qlc@1/fp@0,0/disk
         g) /pci@1bc,700000/pci@1/pci@1/scsi@2,1/disk
         h) /pci@1bc,700000/pci@1/pci@1/scsi@2/disk
         q) NO SELECTION
         Enter Selection, q to quit:

     to the correct full disk path. I know it is basically one of the pci/scsi paths listed above, but in all the instructions and examples a string of additional characters is appended to the path to specify targets and units, and the explanation of how the path is constructed is never given. Could someone please explain how to construct this disk path correctly?
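     The appended string is the device unit address: show-disks stops at the driver node, and the path is completed with @target,lun plus an optional :slice letter. On the parallel-SCSI paths (c/d/g/h) that is the SCSI target ID and LUN, e.g. disk@0,0:a for target 0, LUN 0, slice a; on the FC paths (a/b/e/f, the qlc/fp nodes) the target is the disk's WWN, e.g. disk@w21000004cf8a3b2d,0:a (the WWN here is a placeholder). So, assuming the boot disk is target 0 and its mirror is target 1 on path (d), a sketch:

         ok nvalias bootdisk   /pci@1dc,700000/pci@1/pci@1/scsi@2/disk@0,0:a
         ok nvalias mirrordisk /pci@1dc,700000/pci@1/pci@1/scsi@2/disk@1,0:a
         ok setenv boot-device bootdisk mirrordisk
         ok boot bootdisk

     probe-scsi-all at the ok prompt lists the actual target IDs and WWNs to plug in.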

    Read the article

  • Clone/mirror a textbox in a PDF form

    - by Tim Pietzcker
     I've got a PDF form in Acrobat X Pro where users can enter their name in a text box on the first page. I would like the contents of that box to be cloned/mirrored in another box on the second page of the same form. However, in the Properties dialog of the second text box, I can't find a way to reference the first one. I do have options to calculate numerical values and perform validation, etc., but I can't seem to simply have it display the contents of another text box. Is this not possible in PDF forms, or am I overlooking something obvious?
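     Two approaches that should work here. The zero-code one: PDF form fields with the same fully qualified name automatically share a single value, so copy-pasting the page-1 field onto page 2 (keeping its name) mirrors it with no scripting. Alternatively, a one-line custom calculation script on the page-2 field does the same (the field name is assumed):

         // Custom calculation script on the page-2 text box:
         event.value = this.getField("NameOnPage1").value;

     The calculation route is only needed if the two boxes must keep different names or formatting.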

    Read the article

  • How to: mirror a staging server from a production server

    - by Zombies
     We want to mirror our current production app server (Oracle Application Server) onto our staging server. As it stands right now, various things are out of sync, and what works in testing/QA can easily fail in production because of settings/patch/etc. inconsistencies. I was thinking the best approach would be to clone the entire disk daily and push it onto the staging server... would this be the best method?

    Read the article

  • The method split(String) is undefined for the type String

    - by pi
     I am using Pulse, the plugin manager for Eclipse, and have the Eclipse 3.5 for Mobile Development (Pulsar) profile installed along with a couple of other profiles. I noticed that the split() method, called on a string in code such as the following:

         String data = "one, two, three, four";
         data.split(",");

     generates the error "The method split(String) is undefined for the type String". I am aware that the split() method did not exist before JRE 1.4, and perhaps that could be the cause of the problem. The trouble is I don't think I have multiple JRE/SDK versions installed. Perhaps there's one built into the Pulsar profile that needs editing - but I couldn't tell which settings (and where) need tweaking. I have checked Window > Preferences > Java > Installed JREs and it's set to jre1.4. Please help - thanks.
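     A likely explanation, given the Pulsar profile: Java ME projects compile against CLDC/MIDP class libraries, and CLDC's java.lang.String has no split() at all (it arrived in J2SE 1.4, but Java ME never got it), so the installed desktop JRE is irrelevant - check which library sits on the project's build path. If the target really is CLDC, a hand-rolled splitter is the usual workaround; a minimal sketch using only CLDC 1.0 APIs:

         import java.util.Vector;

         public class SplitUtil {
             // Splits s on sep using only indexOf/substring,
             // since CLDC's String has no split().
             public static Vector split(String s, String sep) {
                 Vector parts = new Vector();
                 int start = 0, idx;
                 while ((idx = s.indexOf(sep, start)) != -1) {
                     parts.addElement(s.substring(start, idx));
                     start = idx + sep.length();
                 }
                 parts.addElement(s.substring(start));
                 return parts;
             }
         }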

    Read the article

  • How to split a string of words and add to an array - Objective C

    - by user1412469
     Let's say I have this:

         NSString *str = @"This is a sample string";

     How can I split the string so that each word is added to an NSMutableArray? In VB.NET you can do this:

         Dim str As String
         Dim strArr() As String
         Dim count As Integer
         str = "vb.net split test"
         strArr = str.Split(" ")
         For count = 0 To strArr.Length - 1
             MsgBox(strArr(count))
         Next

     So how do I do this in Objective-C? Thanks
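     Foundation's componentsSeparatedByString: is the direct equivalent; it returns an immutable NSArray, so take a mutableCopy if an NSMutableArray is really needed:

         NSString *str = @"This is a sample string";
         NSMutableArray *words =
             [[str componentsSeparatedByString:@" "] mutableCopy];
         for (NSString *word in words) {
             NSLog(@"%@", word);
         }

     For splitting on any whitespace rather than a single space, componentsSeparatedByCharactersInSet: with [NSCharacterSet whitespaceCharacterSet] is the usual choice.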

    Read the article
