Search Results

Search found 892 results on 36 pages for 'mirror'.

  • Got a question about a linked list in Java code.

    - by glacier89
    Linked-List: Mirror. Consider the following private class for a node of a singly-linked list of integers: private class Node { public int value; public Node next; } A wrapper class called ListImpl contains a pointer, called start, to the first node of a linked list of Node. Write an instance method for ListImpl with the signature public void mirror(); that makes a reversed copy of the linked list pointed to by start and appends that copy to the end of the list. So, for example, the list start -> 1 -> 2 -> 3 becomes, after a call to mirror, start -> 1 -> 2 -> 3 -> 3 -> 2 -> 1. Note: in your answer you do not need to define the rest of the ListImpl class, just the mirror method.

    Read the article
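
    A minimal sketch of one way the mirror method asked about above could be written, assuming the Node and ListImpl declarations quoted in the question; it builds the reversed copy while walking the original list once, then splices the copy onto the tail:

        public void mirror() {
            // Walk the original list, building a reversed copy by prepending
            // each copied node, and remember the last original node (the tail).
            Node reversedHead = null;
            Node tail = null;
            for (Node cur = start; cur != null; cur = cur.next) {
                Node copy = new Node();
                copy.value = cur.value;
                copy.next = reversedHead;  // prepend, so the copy ends up reversed
                reversedHead = copy;
                tail = cur;
            }
            // Append the reversed copy to the end of the original list
            // (a no-op for an empty list).
            if (tail != null) {
                tail.next = reversedHead;
            }
        }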

  • Mirror a website with httrack while executing javascript

    - by Martin
    I want to save a mirror of www.youtube.com/tv. I obviously do not want to save the videos. I want the code running the website in a local copy; everything else can stay remote. The code I want is mainly contained in 2 files: live.js and app-prod.js. I tried using httrack, but I have an issue parsing the JavaScript to load anything past the first file, live.js. The %P parameter does not help. httrack www.youtube.com/tv +* -r6 --mirror -%P -j It doesn't go further than live.js because some JavaScript needs to be executed to load the next file. I know I can do this manually with any browser. I want to automate the process. Is httrack able to do this by itself? If yes, how?

    Read the article

  • 301 redirects mirrored domain

    - by Dave
    I'm redesigning a site for a friend on my localhost. His old site is an .asp-based site and we're replacing it with a WordPress site on LAMP hosting. The old site sits on domain A and also has another domain, domain B, parked on top of it, mirroring it. Google has picked up domain B for most of his search engine results, and Yahoo and Bing etc. have picked up domain A. The plan is to 301-redirect the old pages of his site on domain A to the new WordPress versions and park domain B on top of it like before. My question is: will this work? If not, what would be a better way to approach it? We'd prefer not to lose any of the search engine listings in the redesign, and the search engines don't appear to have penalized him for duplicate content. Thanks very much in advance!

    Read the article
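
    A hedged sketch of what the redirect rules described above might look like on the new LAMP host, using made-up page names (the real mapping depends on the old .asp URLs); explicit Redirect lines for known pages, plus a catch-all rewrite for anything else ending in .asp:

        # .htaccess (or vhost config) on domain A, ahead of the WordPress rules
        RewriteEngine On

        # Known pages mapped one-to-one to their new WordPress permalinks
        Redirect 301 /about.asp   /about/
        Redirect 301 /contact.asp /contact/

        # Fallback: send any other old .asp URL to a matching slug
        RewriteRule ^([A-Za-z0-9_-]+)\.asp$ /$1/ [R=301,L]

    A common variation on the plan is to 301-redirect domain B to domain A as well, rather than parking it as a mirror, so the engines consolidate signals onto a single canonical domain.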

  • Another website is mirroring and ranks above my site in search results

    - by Marlboro Goodluck
    There is a site of ill repute known as thedirty which has completely mirrored my site and now has links appearing on Google at the #1 spot using my content. I checked my log files and noticed that this site has been crawling mine for some time, and it also has 10,000 links from their site to mine. I have blocked user access referred from this site and reported them to Google as web spam already. I also disavowed the domain. How are they getting top links in Google (even overtaking mine) with such nefarious tactics? What are the steps to completely eliminating an issue such as this?

    Read the article

  • Best way to replicate / mirror 100s of databases in SQL 2005

    - by mrwayne
    I currently host around 400-500 SQL 2005 databases of varying sizes (1-10 GB each). I am aware of most of the different methods available and the general pros/cons of mirroring, log shipping, replication and clustering, but I am not aware of how well they tend to perform at the scale I have specified (400-500 unique databases). Does anyone have any good advice on what is likely the best method for being able to fail over to another server with this sort of setup? Failover does not need to be immediate; I'm just looking for something better than taking backups every day and moving them to storage. I'm preferably looking for something that would also make it easy to manage the databases in bulk (as opposed to one at a time). Thanks for your input!

    Read the article

  • Dual Monitor difficulties (VirtualBox ubuntu host) - rdesktop sessions mirror

    - by rukus5
    I am running an Ubuntu 9.10 host with a Windows guest and need to extend the guest's Windows desktop onto the second monitor (otherwise I will have to convert to a dual-boot setup, because this is a work-furnished computer, so please help!). Current situation: the Windows guest is running with VRDP enabled and connecting successfully. Guest Additions are running, VirtualBox is set to 2 monitors, and I see two monitors in the display settings. Connecting via 2 different rdesktop sessions mirrors the display, even though the guest Windows display settings are set to extend the desktop. Is there an rdesktop option to signify to VirtualBox that it is the second display? I need the second connection to be the second display. Any ideas?

    Read the article

  • WGet or cURL: Mirror Site from http://site.com And No Internal Access

    - by alharaka
    I have tried wget -m, wget -r, and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html and there are several more referenced, so I am at a loss. curlmirror.pl on the cURL developers' website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something. Long story short, some school allows its students to submit web projects, but they want to know how they can collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites. UPDATE: I think I figured out the issue. I thought the links to the other pages were in the index.html page that downloaded. I was way off. It turns out the footer of the page, which has all the navigation links, is handled by a JavaScript file, Include.js, which reads JLSSiteMap.js and some other JS files to do page navigation and the like. As a result, wget does not pick up any other dependencies, because a lot of this is not handled in the web pages themselves. How can I handle such a website? This is one of several problem cases. I assume little can be done if wget cannot parse JavaScript.

    Read the article
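
    Since wget cannot execute JavaScript, one hedged workaround for the case described above is to pull the page URLs out of the sitemap script and feed them back to wget as a seed list; the host and file names below are taken from the question or invented for illustration:

        # Fetch the scripts that hold the navigation data
        wget -q http://site.com/Include.js http://site.com/JLSSiteMap.js

        # Extract anything that looks like a page path and turn it into a URL list
        grep -ohE '"[^"]*\.(html?|php|aspx?)"' Include.js JLSSiteMap.js \
            | tr -d '"' | sed 's|^/*|http://site.com/|' | sort -u > urls.txt

        # Mirror those pages plus their requisites (images, CSS, scripts)
        wget --page-requisites --convert-links -nH -P mirror -i urls.txt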

  • Mirror a RAID0 volume

    - by Ghostrider
    I have two SSDs running in RAID 0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups. This is fine and well, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived so that I could restore the array from backup. WHS restoration takes about 5 hours, which basically means that I lose an entire day in the process. Is it possible to set up a kind of recovery volume for the RAID array? Use a single mechanical HDD that would be updated with an exact clone of the RAID array on a daily basis. This way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some performance, but still be able to work. The machine in question runs Windows 7. Creating RAID 0+1 is not an option because of the high price of the SSDs and the fact that it still doesn't protect against failure of the RAID controller. Is there any way this can be set up?

    Read the article

  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
    I'm looking for an elegant way to solve the following problem: My development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa - users granted access on SharePoint that update files in that workspace should be able to save their files and the changes should appear automatically on our client PCs. I've already organized the folders so that in Dropbox there exists a SharePoint folder that looks something like this: SharePoint ----Team --------Restricted Access Folders ----Organization --------Open Access Folders The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy - or all I would have to do is find where the SharePoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of two-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server. I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating - and as much as I love rsync, I've had nothing but bad luck with it on Windows; again, it has to be kicked off manually or through Task Scheduler, and I just have a feeling that if I go down that route, it's only a matter of time before I get conflicts all over the place in either Dropbox, SharePoint, or both. I really want something that's going to watch both folders, and when one item changes, the other automatically updates in "real time". It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question. For simplicity's sake, I'll restate the goal: to be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.

    Read the article

  • setting up a proxy to mirror an SSH SOCKS connection

    - by aresnick
    I have two remote machines, remote1 and remote2. remote2 is only running sshd, and I can't run anything else on it. remote1 is a full-fledged server to which I have complete access. I can run a SOCKS proxy on remote2 via ssh -f -N -D *:8080 me@remote2 which lets me expose a SOCKS proxy on port 8080 on remote1. I'd like to authenticate this so that the proxy isn't sitting open. How can I do this? It seems like I should be able to use DeleGate, but I can't even seem to get its HTTP proxy functionality working. When I run delegated -r -P8081 SERVER=http PERMIT="*:*:*" REMITTABLE="*" I can't even get it to work on port 8081. Anyway, I was hoping someone could point me in the right direction on how to authenticate access to the SOCKS proxy connection. That is, I want to be able to point my browser's proxy at remote1 and browse the internet through the SSH SOCKS proxy/tunnel to remote2. Squid doesn't support a SOCKS parent =( Thanks!

    Read the article
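
    One hedged way to get the authentication asked about above without any extra software: bind the SOCKS proxy on remote1 to the loopback interface only, and have each user reach it through their own SSH login to remote1, so the SSH accounts on remote1 act as the access control:

        # On remote1: expose the SOCKS tunnel to remote2 on localhost only
        ssh -f -N -D 127.0.0.1:8080 me@remote2

        # On each client: authenticate to remote1 and forward a local port to that proxy
        ssh -f -N -L 8080:127.0.0.1:8080 me@remote1

        # The browser's SOCKS proxy is then pointed at localhost:8080 on the client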

  • Windows 7 - cancel mirror synchronisation

    - by Chris W
    I've got basic OS-managed disk mirroring set up in Windows 7 for a couple of volumes. After a power failure the mirrors are currently resyncing. These are only small volumes of data, but the sync has not completed after more than 24 hours. Is there any way to stop this, as it's driving me nuts? I need to get the machine back to a usable state to get some work done, but it's a bit of a dog whilst this sync is going on. I've tried removing the mirrors, but it won't let me do that whilst the re-sync is in progress.

    Read the article
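
    One hedged thing to try for the stuck resync above, assuming these are dynamic-disk mirrored volumes: diskpart's break command splits a mirror into two simple volumes, which stops the synchronisation (whether Windows allows it mid-resync is not guaranteed). A small diskpart script, run with diskpart /s break-mirror.txt:

        rem break-mirror.txt - volume number must be taken from "list volume" first
        select volume 2
        rem nokeep discards the stale half; omit it to keep that half as a separate volume
        break disk=1 nokeep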

  • Robocopy Mirror Backup gone awry

    - by Aznfin
    I have created a simple batch file script for running Robocopy. It is set to make a backup of my user account folder to my external hard drive. Here are the parameters for Robocopy: ROBOCOPY "C:\Users\Finnly" "F:\Backups\Finnly (Backup)" /ZB /COPY:DAT /DCOPY:T /MIR /256 /MT:32 /XF *.log *.log* *.dat *.tmp *.temp *.old "ntuser*" "SyncToy*" "UpgKit.txt" ".recently-used.xbel" /XD ".gimp-2.6" ".thumbnails" ".VirtualBox" "AppData" "Application Data" "Adobe" "Camtasia Studio" "Cookies" "CyberLink" "DivX Movies" "DVD Architect Pro 5.0 Projects" "dwhelper" "GTA San Andreas User Files" "Lightroom" "Local Settings" "NetHood" "PrintHood" "Scripts" "temp" "Templates" "The KMPlayer" "Tracing" /R:3 /W:10 /V /TS /FP /ETA /LOG+:F:\Backups\Sync.log /TEE For some reason when I run it, it backs up the files and then seems to back them up again. The size of my user account directory is 18.3 GB, but the backup of it occupies over 30 GB. After reading the generated log, it is obvious that it's copying files more than once. Why is this happening? I'm running Windows 7 Home Premium 64-bit.

    Read the article
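
    One hedged guess at the repeated copying described above: if the F: drive is FAT32 or exFAT, its 2-second file-time resolution makes /MIR see almost every file as changed on each run. Robocopy has /FFT (assume FAT file times) and /DST (ignore one-hour DST differences) for exactly that case; a trimmed run keeping the same source and destination:

        ROBOCOPY "C:\Users\Finnly" "F:\Backups\Finnly (Backup)" /MIR /COPY:DAT /DCOPY:T /FFT /DST /R:3 /W:10 /LOG+:F:\Backups\Sync.log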

  • wget not converting links

    - by acrosman
    I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site is run on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to be actually converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. The most recent failed attempt was: nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org & I've also included --backup-converted, and --convert-links instead of -k (not that it has mattered). I've done it with and without -P and -l; again, not that they should matter. The result is files that still have links like: http://www.example.org//ht/d/sp/i/17770

    Read the article
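
    Two hedged observations on the run above rather than a definitive fix: wget performs link conversion only after the entire retrieval finishes, so an interrupted 20,000-page crawl never converts anything, and links to pages that were not downloaded are deliberately rewritten to absolute URLs, which can look as though no conversion happened. It is also safer to quote the reject pattern so the shell cannot expand it:

        nohup wget --mirror --convert-links --backup-converted --html-extension \
            --page-requisites -l 10 -R '*calendar*' -P afscSnapshot \
            -o wget.log http://www.example.org &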

  • Excluding certain file types in wget

    - by Alan Spark
    I have been using wget for a while now to mirror files from an FTP server to a local folder. My wget command is as follows: wget -mirror -w 1 -p -nH -P /var/www/ ftp://my-ftp-server However, I just noticed that it is copying over a .listing file for every folder that it visits. So, even if nothing has changed on the FTP server, a .listing file will always be copied. My understanding is that the .listing file is created when wget opens the FTP session. Is there a way to avoid this? I've tried the -R option (e.g. -R .listing) but this didn't help. See: http://www.gnu.org/software/wget/manual/wget.html#Recursive-Accept_002fReject-Options Thanks, Alan

    Read the article
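
    A hedged two-step workaround for the question above, since -R does not seem to apply to the listing files wget generates for FTP directories: run the mirror as before (note the double dash on --mirror), then delete the leftover .listing files afterwards; the paths are the ones from the question:

        wget --mirror -w 1 -p -nH -P /var/www/ ftp://my-ftp-server
        find /var/www/ -type f -name '.listing' -delete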

  • Storage replication/mirror over WAN

    - by galitz
    We are looking at storage replication between two data centers (600 km apart) to support an active-passive cluster design for disaster recovery. The OS layer will be mostly Windows Server 2003/2008, with some OpenSuSE Linux used for performance monitoring, on VMware or possibly XenServer. The primary application service to replicate is Nvision. Datacenter 1 will have two storage systems for local active-passive (or perhaps active-active) replication, with Datacenter 2 used as a last-resort disaster recovery site. We have a handle on most aspects, but I am looking for specific recommendations on storage platforms that can handle remote replication cleanly. Thanks.

    Read the article

  • How can I set up a dual-site Storage Daemon in Bacula (mirror the backup)

    - by Andy
    On Site A, I have successfully set up a Bacula Director on one host, several File Daemons on the hosts I want to back up, and finally one Storage Daemon where the backup is actually stored. In case disaster strikes the building at Site A, I want a second Storage Daemon on another site, Site B. The FileSets, Director, etc. would be the same, except that the jobs would be stored on the other Storage Daemon as well. Are there any best practices for this?

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    I have about 200 sites, each of which has 2 servers running MS SQL (2005 at some sites, 2008 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for ... a lot ... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copy them to the secondary server, restore the backup and log with NORECOVERY, create the endpoints and set the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage SQL 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love to be able to use a script to ensure there are matching endpoints (same ports) on both servers, back up the database, back up the log, copy the backups to the second server, restore the database and log with NORECOVERY, set the partners on both servers, and somehow confirm that the databases are linked and synchronized. Well, thanks for reading :)

    Read the article
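
    For a single database, the sequence described above boils down to a handful of T-SQL statements that a batch script can push through SQLCMD for each database in turn; a hedged sketch with made-up server, share and database names (the endpoint is created once per server, not per database):

        -- On the principal: full and log backups to a share the mirror server can read
        BACKUP DATABASE [AppDb] TO DISK = N'\\mirrorsrv\staging\AppDb.bak' WITH FORMAT, INIT;
        BACKUP LOG      [AppDb] TO DISK = N'\\mirrorsrv\staging\AppDb.trn' WITH INIT;

        -- On the mirror: restore both WITH NORECOVERY so the database stays in a restoring state
        RESTORE DATABASE [AppDb] FROM DISK = N'\\mirrorsrv\staging\AppDb.bak' WITH NORECOVERY;
        RESTORE LOG      [AppDb] FROM DISK = N'\\mirrorsrv\staging\AppDb.trn' WITH NORECOVERY;

        -- Once per server: a mirroring endpoint on a fixed port
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- Join the partners: run on the mirror first, then on the principal
        ALTER DATABASE [AppDb] SET PARTNER = N'TCP://principal.example.com:5022';  -- on the mirror
        ALTER DATABASE [AppDb] SET PARTNER = N'TCP://mirrorsrv.example.com:5022';  -- on the principal

        -- Confirm the session is up and synchronised
        SELECT DB_NAME(database_id) AS db, mirroring_state_desc
        FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL;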

  • How do I mirror a MySQL database?

    - by user45745
    I'm running two load-balanced servers for one website, and I'd like the databases to be synchronized. Queries may be run on either of the two servers because they are both production sites, so the replication can't just work one way. It doesn't have to be in real time, just accurate enough that people don't notice a difference when they get switched to a different server.

    Read the article
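
    What is described above is usually set up as MySQL master-master (circular) replication; a hedged sketch of the my.cnf settings involved, with the auto-increment offsets keeping generated keys from colliding when both servers take writes (server names and credentials are placeholders):

        # my.cnf on server A
        [mysqld]
        server-id                = 1
        log_bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 1

        # Server B is identical except: server-id = 2, auto_increment_offset = 2.
        # Then each server is pointed at the other and replication started, e.g. on A:
        #   CHANGE MASTER TO MASTER_HOST='serverB', MASTER_USER='repl', MASTER_PASSWORD='replpass',
        #       MASTER_LOG_FILE='<file>', MASTER_LOG_POS=<pos>;  -- values from SHOW MASTER STATUS on B
        #   START SLAVE;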

  • Mirror network packets from WiFi to Ethernet in an ASUS Router RT N53

    - by fazineroso
    I have an ASUS RT-N53 router running the default firmware (Linux 2.6.22 with BusyBox and uClibc). I need to capture data packets from some Wi-Fi devices connected to that router (an iPad and some smartphones), but the router is not forwarding any packets coming from the Wi-Fi devices to the Ethernet ports. Any idea how I can proceed? The tools available on the router are iptables (no tee option, though), ebtables, brctl... Currently the Ethernet and Wi-Fi interfaces form a bridge: # brctl show bridge name bridge id STP enabled interfaces br0 8000.50465dc06be2 no vlan0 eth1 There are no ebtables rules: # ebtables -L Bridge table: filter Bridge chain: INPUT, entries: 0, policy: ACCEPT Bridge chain: FORWARD, entries: 0, policy: ACCEPT Bridge chain: OUTPUT, entries: 0, policy: ACCEPT

    Read the article
