Search Results

Search found 892 results on 36 pages for 'mirror'.


  • Tools to manage SQL 2008 database mirroring?

    - by lemkepf
    We are going to be moving about 20 databases that live on a single instance of SQL 2000 to a SQL 2008 R2 environment with database mirroring. What I'm looking for is a tool or scripts that will help me manage the conversion and ongoing management of those 20 DBs in the new mirrored environment. There are many steps in setting each DB up and I want to automate as much as possible. Edit: Here are the steps I've been doing manually:
    1. Create the same usernames/passwords from the old SQL 2000 server on the new SQL 2008 server, then sync those logins onto the other SQL 2008 server with the same SIDs, so that they match up when we do the database backup and restore.
    2. Take a backup of each SQL 2000 database and copy it to server A.
    3. Restore the backup on server A.
    4. Back up from server A, copy to server B, and restore there.
    5. Run the mirroring "Configure Security" wizard.
    6. Start mirroring.
    I'd love to be able to script this out or have a tool that does it for me. Thanks! Paul
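
    For the login/SID step, Microsoft's sp_help_revlogin script (KB 918992) generates CREATE LOGIN statements that preserve SIDs and password hashes. The rest of the loop is scriptable in plain T-SQL instead of the wizard; a minimal sketch per database (server names, share paths, and the port are placeholders, not anything from the question):

        -- once per server: create a mirroring endpoint
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- on server A: restore the SQL 2000 backup, then back up for the mirror
        RESTORE DATABASE [MyDb] FROM DISK = N'\\share\MyDb.bak' WITH RECOVERY;
        BACKUP DATABASE [MyDb] TO DISK = N'\\share\MyDb_mirror.bak';

        -- on server B: restore WITH NORECOVERY, then point at the principal
        -- (at least one log backup must also be restored WITH NORECOVERY
        --  before the partners will pair up)
        RESTORE DATABASE [MyDb] FROM DISK = N'\\share\MyDb_mirror.bak' WITH NORECOVERY;
        ALTER DATABASE [MyDb] SET PARTNER = 'TCP://serverA.example.local:5022';

        -- back on server A: complete the pairing
        ALTER DATABASE [MyDb] SET PARTNER = 'TCP://serverB.example.local:5022';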

    Read the article

  • nginx - Rewrite URL with Trailing Slash

    - by Bryan
    I have a specialized set of rewrite rules to accommodate a multi-site CMS setup. I am trying to have nginx force a trailing slash on the request URL, so that a request for domain.com/some-random-article is redirected to domain.com/some-random-article/. I know there are semantic considerations with this, but I would like to do it for SEO purposes. Here is my current server config:

        server {
            listen 80;
            server_name domain.com mirror.domain.com;
            root /rails_apps/master/public;
            passenger_enabled on;

            # Redirect from non-www to www
            if ($host = 'domain.com') {
                rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
            }

            location /assets/ {
                expires 1y;
                rewrite ^/assets/(.*)$ /assets/$http_host/$1 break;
            }

            # / -> index.html
            if (-f $document_root/cache/$host$uri/index.html) {
                rewrite (.*) /cache/$host$1/index.html break;
            }

            # /about -> /about.html
            if (-f $document_root/cache/$host$uri.html) {
                rewrite (.*) /cache/$host$1.html break;
            }

            # other files
            if (-f $document_root/cache/$host$uri) {
                rewrite (.*) /cache/$host$1 break;
            }
        }

    How would I modify this to add the trailing slash? I assume there has to be a check for the slash already being present, so that you don't end up with domain.com/some-random-article//.
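
    A minimal sketch of one way to do this (untested against this exact config): redirect any extensionless URI that does not already end in a slash, placed before the cache checks.

        # inside the server block, above the cache rewrites
        rewrite ^([^.]*[^/])$ $1/ permanent;

    The [^.]* part means "no dot anywhere in the URI", so file requests like /assets/foo.css pass through untouched, and a URI that already ends in / no longer matches, which avoids the double-slash case.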

    Read the article

  • Looking for a small, portable, port-mirroring Ethernet switch

    - by user37244
    I recently had a Mac go haywire, taking half a minute or more to load www.google.com. Getting its owner to give up the machine for repair was like pulling teeth: they insisted it must be something to do with the network, since so much had changed with the local configuration at about the same time their box went haywire. I eventually set up a port mirror to a box that I could remote into, so I could show that the Mac was only irregularly getting packets onto the network. Demonstrating this faced an additional challenge: the latency of the remote desktop software I was using meant that I had to point to timestamps, rather than the moment a packet flashed up on the screen, as my evidence. This particular user was the reason it was so challenging this time around, but I would like to have a box that I can cart from desk to desk to run Wireshark on my laptop at any station where I need it. 3Com, Cisco, Netgear, etc. (ad nauseam) all make switches that can be configured for port mirroring, but in my case, the smaller, the better. For the sake of my sanity, I'll probably end up running it off a battery anyway. If my laptop had two Ethernet ports, this would be easy. So, what do you recommend for a device that requires zero configuration at each power-up (though I'm fine with poking at it for a while to set it up initially), and is small, light, and cheap enough to get past purchasing? Thanks!

    Read the article

  • Mac updated just now, postgres now broken

    - by user52224
    I run Postgres 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my Thin server (mirroring Heroku Cedar) as normal and then browse to my Rails app, I get:

        PG::Error
        could not connect to server: Permission denied
        Is the server running locally and accepting
        connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:
    - Running psql from the command line gives the same error.
    - I can run pgAdmin 3, connect via it, and run SQL with no problems.
    - Running which psql points at /usr/bin/psql.
    - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well.
    - I know it's rubbish, but apart from a note on passwords I don't have any extra info on how I configured Postgres, though the obvious implication is that I did not use the _postgres user.
    Anyone have suggestions or information on what might have changed, or what I can try to debug and fix? Thanks. Edit: Playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit
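
    One known pattern that fits that exact socket path: OS X point updates have been known to put Apple's own psql build back in /usr/bin, and that build looks for the server on /var/pgsql_socket, while the one-click PostgreSQL 9.1 installer's server listens elsewhere. Some quick checks, offered as a hypothesis to rule out rather than a diagnosis:

        # list every psql on the PATH, not just the first hit
        which -a psql
        # talk to the server over TCP, bypassing the Unix socket entirely
        /Library/PostgreSQL/9.1/bin/psql -h localhost -U postgres
        # if the 9.1 client works, putting its bin directory first is one fix
        export PATH=/Library/PostgreSQL/9.1/bin:$PATH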

    Read the article

  • How can I recover files when a folder shows empty but the files are not deleted?

    - by Borror0
    Yesterday, my laptop caught a virus which caused massive damage. Since then, I have been trying to recover important files before reformatting my computer, a task the virus has not made easy. Restore points predating the attack have been deleted. Most of my folders show empty. My Start menu is essentially empty, with the exception of Trillian and Mirror's Edge. The same goes for my Desktop, which only has programs that were installed after the attack. Searching for files through my computer is pretty much useless, as it only rarely brings up anything. I suspect most of my files have not been deleted. While my folders show empty, uTorrent still displays them and I can open them from there. Unfortunately, when I select Open Containing Folder, the folder still shows as completely empty, even if I'm currently watching a video from that very folder. Further adding evidence to the not-deleted, just-missing theory, the data recovery software I'm using (Restoration) can only find a handful of the missing files. If they were deleted, I could do a forensic recovery to get them back, but since they're probably still somewhere on my computer, just out of my reach, I can't find them. Under those circumstances, is there a way I can recover those files?
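
    This pattern (folders look empty, but running programs can still open the files) matches a family of fake "disk failure" malware that marks everything hidden and system rather than deleting it. A quick check from a command prompt, assuming that is what happened here (the path is a placeholder):

        rem do the "missing" files show up when hidden items are listed?
        dir /a "C:\Users\YourName\Documents"
        rem if they appear, clear the hidden/system attributes recursively
        attrib -h -s "C:\Users\YourName\Documents\*" /s /d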

    Read the article

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but I'm not sure it answers my question; if it does, I'm curious why, since the answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS. I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I seem to understand most everything I need, but I'm still a bit confused about how pools work and how to configure them for my scenario. I will have 2x 2TB WD Red drives, which I plan on using in a mirrored setup (RAID1). This would be for backup, and I'd also like to do offsite backup to my CrashPlan account from this array. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but I'm looking for options. The FreeNAS OS will be running from a 16GB USB flash drive. My questions:
    - Would it be wise to use the 1.5TB as a backup of the backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots.
    - Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use, since the 2TB array is plenty for me.
    - Does a dataset basically mimic the idea of a partition or a network share? In other words, would I map \\SERVER\Share to X: on my laptop?
    - Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures. Would it have to be in its own pool?
    - If I use jail apps, should they go on the backup RAID1, or somewhere else?
    Thank you!
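
    For orientation, a sketch of how that layout might look from the shell (device names are assumptions, and on FreeNAS you would normally do this through the GUI so the middleware tracks the pools): the mirrored pair becomes its own pool, the odd-sized disks go in a separate pool whose loss you can tolerate, and datasets behave more like quota-able, shareable folders than partitions.

        # the 2x 2TB WD Reds as a mirrored pool
        zpool create tank mirror ada0 ada1
        # the leftover disks as a second, non-redundant pool
        zpool create scratch ada2 ada3 ada4
        # datasets carve up a pool logically; each can be shared and quota'd
        zfs create tank/backup
        zfs set quota=1500G tank/backup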

    Read the article

  • Creating a partitioned RAID1 array for booting a Debian Squeeze system

    - by gucki
    I'd like to have the following RAID1 (mirror) setup: /dev/md0 consists of /dev/sda and /dev/sdb. I created this RAID1 device using:

        mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sda /dev/sdb

    This gave a warning about the metadata being 1.2 and my system possibly not booting. I cannot use 0.9 because it restricts the size of the RAID to 2TB, and I assume the GRUB shipped with the latest Debian (Squeeze) should be able to handle metadata 1.2. So then I created the needed partitions like this:

        # create a new label (partition table)
        parted -s /dev/md0 mklabel 'msdos'
        # create partitions
        sfdisk -uM /dev/md0 << EOF
        0,4096
        ,1024,S
        ;
        EOF
        # make the root filesystem
        mkfs -t ext4 -L boot -m 0 /dev/md0p1
        # make the swap filesystem
        mkswap /dev/md0p2
        # make the data filesystem
        mkfs -t ext4 -L data /dev/md0p3

    Then I mounted the root partition, copied a minimal Debian install inside, and temporarily mounted /dev, /proc and /sys. After this I chrooted to the new root folder and executed:

        grub-install --no-floppy --recheck /dev/md0

    However this fails badly with:

        /usr/sbin/grub-probe: error: unknown filesystem.
        Auto-detection of a filesystem of /dev/md0p1 failed.
        Please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to

    I don't think it's a bug in GRUB (so I didn't report it yet) but a fault of mine, so I really wonder how to properly set up my RAID1; everything I tried so far has failed.
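
    A likely culprit, offered as a hypothesis: the GRUB 1.98 shipped with Squeeze predates the mdraid1x module that understands 1.2-format superblocks, let alone a partition table nested inside the array. The commonly recommended layout is the other way around: partition the raw disks first and build one mirror per partition pair, so the boot loader sees ordinary partitions. A sketch, with device names and roles assumed:

        # partition /dev/sda as desired, then clone the table to sdb
        sfdisk -d /dev/sda | sfdisk /dev/sdb
        # mirror partition pairs instead of whole disks;
        # metadata 1.0 keeps the superblock at the end, out of GRUB's way
        mdadm --create /dev/md0 -l1 -n2 --metadata=1.0 /dev/sda1 /dev/sdb1   # /
        mdadm --create /dev/md1 -l1 -n2 /dev/sda2 /dev/sdb2                  # swap
        mdadm --create /dev/md2 -l1 -n2 /dev/sda3 /dev/sdb3                  # data
        # install the boot loader on both underlying disks, not on the md device
        grub-install /dev/sda && grub-install /dev/sdb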

    Read the article

  • Disable internal display on Macbook Pro without closed lid mode?

    - by jslaker
    I have an early 2007 MacBook Pro running 10.5 that I've recently set up on a KVM with my primary desktop system. The problem I've run into is that I have a 20" 1680x1050 LCD, and OS X only provides options to mirror at the resolution of the built-in display or to span. Since the built-in display runs at 1440x900, this means running my LCD at a non-native resolution with a fuzzy picture. There isn't any option that I can find to simply disable the built-in display entirely and run the external LCD at its native resolution. I am aware of closed-lid mode, but the MBP was disassembled while in storage for about 6 months (I took it apart to pull the HDD) and the cable to the touchpad, which controls the sleep sensor, was damaged, meaning closed-lid mode won't work. I've looked into replacing the cable, but the cheapest I've been able to find it is $75-100, and I'm trying not to invest any more money into this computer, as it also has a completely dead battery and a few other minor problems. I've found the app SwitchResX, which appears to allow you to do what I need, but it has a lot of functionality I don't need and a ~$20 registration charge attached to it. An odd set of circumstances, I'm aware, but I was hoping somebody might know of an OS hack that would let me just disable the internal display and be done with it. :)

    Read the article

  • Force request to miss cache but still store the response

    - by Tom Marthenal
    I have a slow web app that I've placed Varnish in front of. All of the pages are static (they don't vary for different users), but they need to be updated every 5 minutes so they contain recent data. I have a simple script (wget --mirror) that crawls the entire website every 15 minutes. Each crawl takes about 5 minutes. The point of the crawl is to update every page in the Varnish cache so that a user never has to wait for the page to generate (since all pages have been generated recently thanks to the spider). The timeline looks like this:
    - 00:00:00: Cache flushed
    - 00:00:00: Spider starts crawling to update cache with new pages
    - 00:05:00: Spider finishes crawling, all pages are updated until 1:15
    A request that comes in between 00:00:00 and 00:05:00 might hit a page that hasn't been updated yet and will be forced to wait a few seconds for a response. This isn't acceptable. What I'd like to do, perhaps using some VCL magic, is always forward requests from the spider to the backend, but still store the response in the cache. This way, a user will never have to wait for a page to generate, since there is no 5-minute window in which parts of the cache are empty (except perhaps at server startup). How can I do this?
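
    A minimal VCL sketch of one approach (hash_always_miss exists from Varnish 2.1 on; matching on Wget's User-Agent is an assumption about how the spider identifies itself): force the spider's requests to miss so they always hit the backend, and let the fetched object replace the cached one, while ordinary users keep being served from cache.

        sub vcl_recv {
            # the wget --mirror spider always goes to the backend,
            # but the response it triggers is still inserted into the cache
            if (req.http.User-Agent ~ "Wget") {
                set req.hash_always_miss = true;
            }
        }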

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small user base there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say, one on each US coast, one in Europe, one in East Asia). Filesystem synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider, though we do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for HTTPS, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • Sync, share and backup policy using NAS

    - by Cue
    Trying to come up with a way to keep things in sync while sharing and keeping a backup of my music, photos and movies. Currently I have an iMac in Greece and a MBP with me in the UK. As a result I've ended up with two iPhoto and two iTunes libraries, not to mention documents scattered here and there, user settings, etc. I also like to have a backup in case of a drive failure or the need for a clean install. It seems that iPhoto and iTunes don't really work well with networked libraries. The way I think about it is to have a NAS where I keep my iTunes and iPhoto libraries, but also rsync daily to my MBP to have a local copy. That way my files are shared across the network and also act as a backup. In addition, I get to have my files wherever I take my MBP, but also the ability to do a clean install. The tricky part is keeping the iMac, which is miles away, in sync. Again I'm considering a mirror setup (NAS, rsync to the iMac) as well as an rsync between the two NAS boxes. It pretty much resembles the way Dropbox works, sans the requirement to go through their servers, but I'm no "superuser" and don't really know if such a setup is even feasible. It looks like there are so many things that can go wrong.
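
    The local half of that idea is just a couple of scheduled rsync invocations; a sketch (the NAS hostname and share paths are placeholders, and note that --delete makes the laptop copy an exact mirror, so a deletion on the NAS propagates rather than being caught):

        # nightly pull from the NAS to the MBP, e.g. from launchd or cron
        rsync -av --delete nas.local:/volume1/Media/iTunes/ ~/Music/iTunes/
        rsync -av --delete nas.local:/volume1/Media/Photos/ ~/Pictures/Photos/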

    Read the article

  • 750 GB hard drive shows full with only 315 GB used

    - by Chris Kelly
    I have a Win7 laptop with a 750 GB C: drive. It came from the manufacturer partitioned with 714 GB usable. I installed programs, music files, etc., up to 285 GB, and as of a few weeks ago it showed 285 GB used. Two weeks of house guests later, it shows the drive as full. I deleted some files, but Windows still shows 652 GB used on this drive while there are only 285 GB of files on it. Relevant details:
    - I am an administrator on the laptop and have fair knowledge of what I am doing.
    - I did not restore from backup, restore from a mirror, upgrade HDs, or do anything else that would have touched the partition structure. Just daily use as an imaging machine and for the web.
    - I have checked the partitions in Disk Management: no change, still partitioned with 714 GB usable.
    - I have looked through the C: drive by hand with hidden files and folders shown: no change.
    - I have used JDiskReport to double-check: it shows only 285 GB on the C: drive.
    - I triple-checked with TreeSize run as administrator; it also shows 285 GB on the C: drive, yet Windows 7 still shows the drive as almost full.
    - I used the Windows 7 utilities to check for disk errors and defragged the drive. No errors were shown and there was no change after the defrag.
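
    One common culprit for exactly this gap: System Restore / Volume Shadow Copy storage, which file-level tools like TreeSize and JDiskReport cannot see even when run as administrator. A quick check from an elevated command prompt (the 20 GB cap is just an example value):

        rem how much space are shadow copies currently using?
        vssadmin list shadowstorage
        rem if shadow storage has ballooned, cap it
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=20GB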

    Read the article

  • Dovecot starting and running, but not listening on any port

    - by Dženis Macanovic
    Among other things, I'm in charge of a Debian GNU/Linux (Wheezy) DomU for the mail services of the company I work for. Yesterday one of the HDDs used for this particular server died. After reinstalling Debian, Dovecot no longer listens on any ports (checked with netstat -l). Other services (like Postfix and MySQL) work without problems. Output of dovecot -n:

        # 2.1.7: /etc/dovecot/dovecot.conf
        # OS: Linux 3.2.0-3-amd64 x86_64 Debian wheezy/sid ext3
        auth_mechanisms = plain login
        disable_plaintext_auth = no
        first_valid_uid = 150
        last_valid_uid = 150
        mail_gid = mail
        mail_location = maildir:/var/vmail/%d/%n
        mail_uid = vmail
        namespace inbox {
          inbox = yes
          location =
          prefix =
        }
        passdb {
          args = /etc/dovecot/dovecot-sql.conf.ext
          driver = sql
        }
        plugin {
          sieve = ~/.dovecot.sieve
          sieve_dir = ~/sieve
        }
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            group = postfix
            mode = 0660
            user = postfix
          }
          unix_listener auth-userdb {
            group = mail
            mode = 0666
            user = vmail
          }
        }
        service imap-login {
          inet_listener imaps {
            port = 993
            ssl = yes
          }
        }
        service pop3-login {
          inet_listener pop3s {
            port = 995
            ssl = yes
          }
        }
        ssl_cert = </etc/ssl/private/mail.crt
        ssl_key = </etc/ssl/private/mail.key
        userdb {
          args = /etc/dovecot/dovecot-sql.conf.ext
          driver = sql
        }
        protocol imap {
          mail_max_userip_connections = 25
        }

    UID 150 is vmail (I double-checked file permissions). I didn't install Dovecot from source, but via apt from the official Debian US mirror. There are no messages concerning Dovecot in /var/log/syslog except for:

        Oct 21 06:36:29 server dovecot: master: Dovecot v2.1.7 starting up (core dumps disabled)

    Any ideas?
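
    One thing that output is consistent with, offered as a guess: on Debian, the IMAP and POP3 listeners only exist if the protocol packages are installed, and a reinstall that pulled in dovecot-core alone starts a master process that listens on nothing. A quick check with standard Debian tooling:

        # were the protocol packages reinstalled along with the core?
        dpkg -l 'dovecot*' | grep '^ii'
        # an empty value here means no listeners will be started
        doveconf protocols
        # if dovecot-imapd / dovecot-pop3d are missing:
        apt-get install dovecot-imapd dovecot-pop3d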

    Read the article

  • Why does my simple RAID 1 backup storage sometimes perform really slowly?

    - by randomguy
    I bought 2x Samsung F3 EcoGreen 2TB hard disks to make backup storage. I put them in RAID 1 (mirror) mode, made a single partition, and formatted it to NTFS, running Windows 7. For some reason, accessing the drive's contents (simply by navigating folders) is sometimes really slow. Opening D:/photos/ can sometimes take several seconds before it starts showing any of the folder's contents, and the same applies to other folders. What could be causing this, and what could I do to improve the performance? I remember there was an option somewhere inside Windows to choose fast access at the cost of less reliable read/write operations; it was a tick box inside some dialog. At the time, it felt like a good idea to take the tick away and get more reliable persistence but slower access, but now I'm regretting it and I'm unable to find the dialog again... I've looked hard, and I don't even know if it would make any difference. Oh, and I've run a disk scan and a defrag on the drive: no errors, and the speed isn't improved.

    Read the article

  • How can I duplicate HBCD's XP boot loader with my MBR?

    - by Warpstone
    I'm stumped. I'm migrating a Win XP Lenovo T500 to an SSD:
    - I copied the XP partition to the SSD using EaseUS.
    - I aligned the partition using GParted.
    - The MBR needs to be rebuilt (fair enough).
    - However, all attempts to use the Windows Recovery Console hang (both via a boot CD and even when the console was installed as a boot option).
    I've tried a bunch of tools to rebuild/replace the MBR, but no dice: they all say the MBR has been fixed, but I cannot load Windows from the SSD. HBCD's boot-from-Windows option works just fine, however. I'm confused as to what HBCD can do that my drive can't. How can I get that functionality onto my SSD? Is it an MBR fix I can mirror? The SSD is extremely fast when I do use HBCD to boot up... but it would be nice to not need token-based access to the machine! :) Note: I know Windows 7 may be worth a fresh install, but I'm trying to avoid the cost and hassle if possible.
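
    One avenue that may be worth trying, since it's the XP Recovery Console itself that hangs: bootsect.exe from a Windows 7 install or repair disc can write XP-compatible boot code from outside the installed OS. A sketch, assuming the cloned system partition shows up as C: in the recovery environment:

        rem from a Windows 7 recovery command prompt
        bootsect /nt52 C: /mbr
        rem /nt52 writes NTLDR-style boot code (what XP needs);
        rem /mbr also rewrites the master boot record of the disk holding C: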

    Read the article

  • Server not resolving after restart

    - by DomainSoil
    I restarted our server today, and now cannot for the life of me get anything to resolve... I suspect it has something to do with our routes. I've tried numerous Google results to no avail. Here is as far as I've gotten:

        [root@www ~]# route -n
        Destination     Gateway         Genmask         Flags Metric Ref Use Iface
        192.168.1.101   0.0.0.0         255.255.255.255 UH    0      0   0   eth1
        0.0.0.0         192.168.1.101   0.0.0.0         UG    0      0   0   eth1

    Things you need to know: our server (CentOS 6.3) runs two virtual machines, one live and one development. They mirror each other as much as possible, but I can't find where I've gone wrong with the live server. The dev server works fine.

        [root@www ~]# ifconfig
        eth1      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
                  inet6 addr: xxxx:xxx:xxxx:xxxx:xxxx/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:118206 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:165 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7825749 (7.4 MiB)  TX bytes:7146 (69.2 KiB)
                  Interrupt:28

        [root@www ~]# /etc/init.d/network status
        Configured devices:
        lo Auto eth0
        Currently active devices:
        lo eth1

    If there is any other information you need, please don't hesitate to ask!
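
    Two details in that output stand out: eth1 has an IPv6 link-local address but no IPv4 address, and the network service only knows about eth0 while eth1 is the active device. That points at an interface that came up unconfigured after the restart (for example, udev renaming the NIC) rather than at DNS itself. Some checks to try, assuming stock CentOS 6 paths:

        # is there an ifcfg file for the NIC that is actually up?
        ls /etc/sysconfig/network-scripts/ifcfg-*
        cat /etc/sysconfig/network-scripts/ifcfg-eth0   # does HWADDR match eth1's MAC?
        # udev may have renamed the interface on reboot
        cat /etc/udev/rules.d/70-persistent-net.rules
        # and confirm the resolvers survived the restart
        cat /etc/resolv.conf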

    Read the article

  • How to continue an HTTrack mirroring session from the command line?

    - by isme
    I want to drive my mirroring project from the Command Prompt instead of the WinHTTrack interface, so that I can script and schedule the mirroring session more easily. The output of httrack --help gives a simple command for continuing an interrupted mirroring session:

        example: httrack --continue
        continues a mirror in the current folder

    When I try httrack --continue in my HTTrack project folder, all I get is output like this:

        Example: -%F "<!-- Mirrored from %s by HTTrack Website Copier/3.x [XR&CO'2010], %s -->"
        * Option %F needs to be followed by a blank space, and a footer string

    With each parameter on a new line for readability, the first line of my doit.log file looks like this:

        -qiC1%P0s0b0u1j0%s%u0N0%I0p1DaK0c1T30H0%kf2E1800A25000%c0.1%f#f
        -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        -%F ""
        -%l "en, en, *"
        http://saa.gov.uk/search.php?SEARCHED=1&SEARCH_TABLE=council_tax&SEARCH_TERM=City+of+Edinburgh&DISPLAY_COUNT=100
        -O1 "C:\\Users\\Iain\\Projects\\Council Tax Analysis\\Code\\HTTrack\\Council Tax Valuation List"
        -* \
        +*search.php?SEARCHED=1*
        -*DISPLAY_MODE=FULL*

    The parameter %F "" should tell HTTrack to use an empty footer. I used the WinHTTrack interface to create the project and start the mirroring session, and I can interrupt and continue the session using the interface. The HTML files saved by WinHTTrack have no footer.
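
    Judging by that error, httrack --continue re-parses doit.log and trips over the empty argument to -%F. One workaround to try (an assumption about the parser, not documented behaviour) is to hand %F a footer that is non-empty but invisible in the saved HTML before continuing:

        rem edit doit.log, changing:
        rem     -%F ""
        rem to:
        rem     -%F "<!-- -->"
        rem then, from the project folder:
        httrack --continue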

    Read the article

  • RAIDs with a lot of spindles - how to safely put to use the "wasted" space

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O, and tons of unused disk space. I guess it's a universal issue, since the smallest drives vendors offer are 300 GB, but random I/O performance hasn't really grown much since the time when the smallest drives were 36 GB. One example is a database that has 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive in terms of response time. The redo logs get their own 300 GB mirror, but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so the sysadmin team would be reminded about the risk of hindering the performance of the main DB/app? Or, even better, be protected from that risk? The typical situation that comes to mind is "oh, I have this very large zip file, where do I uncompress it? Umm, let's see df -h and we'll figure something out in no time..." I don't put emphasis on strictness of security (sysadmins are trusted to act in good faith), but on the overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level: is this possible?
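
    On the Linux question at the end: the blkio cgroup controller can do roughly that, capping the byte rate of anything launched inside the group rather than of the filesystem itself. A minimal cgroup-v1 sketch (the 8:16 device numbers and the 5 MB/s cap are made-up examples):

        # look up major:minor of the device backing the "spare" filesystem
        ls -l /dev/sdb   # e.g. "brw-rw---- 1 root disk 8, 16 ..."
        mkdir /sys/fs/cgroup/blkio/spare
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/spare/blkio.throttle.read_bps_device
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/spare/blkio.throttle.write_bps_device
        # put the current shell in the group; the unzip inherits the cap
        echo $$ > /sys/fs/cgroup/blkio/spare/tasks
        unzip /spare/verylarge.zip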

    Read the article

  • Setting up a test and live environment - how?

    - by Sean
    I am a bit new to servers and such, so I had a question. I have my development team working on my website. They are in different countries, and currently they put all the work live on the test site, but the test site is open to anyone who knows the URL. It is behind a directory, but this affects my QA process because I cannot use the accurate URL structures to prevent the general public from seeing it. So what I want to do is: have my site live on the net, but only for me and my team, like an internal network. I will also need to mirror this to my live site when I put it live. So I guess this is something like setting up a staging and a live environment. How do I do it? Are both environments on the same physical server, or do I need to buy two servers? And if I set up a staging environment, how will my team and I access it, since we are all spread out? I assume we need to log into something to access it. What about the URL: do I need a different URL for the test site, or can I use the same live URL for the test site? I plan to get a dedicated server + CDN for my site.
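
    For the "live on the net but only for my team" part, one common approach that works for a distributed team is HTTP basic authentication on a staging virtual host, which can live on the same physical server as the live site. A minimal Apache sketch (the host name and paths are placeholders):

        <VirtualHost *:80>
            ServerName staging.example.com
            DocumentRoot /var/www/staging
            <Directory /var/www/staging>
                AuthType Basic
                AuthName "Staging"
                AuthUserFile /etc/httpd/staging.htpasswd
                Require valid-user
            </Directory>
        </VirtualHost>

    Team accounts are then created with htpasswd -c /etc/httpd/staging.htpasswd username (drop the -c after the first account), and the staging site gets its own subdomain rather than the live URL.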

    Read the article

  • Apache on CentOS 5.9 VM serves my optimized images corrupted (but my Mac doesn't)

    - by Robert K
    I'm using a Vagrant VM to mirror the client's environment as closely as I can. As part of our build process we do no optimization of assets early on; that comes as we're ready to take a site live. Needless to say, this issue is beginning to worry me, as we need to take the site live very soon. I use ImageOptim to automate optimization of image assets, which runs a whole series of tools (Zopfli, PNGOUT, OptiPNG, AdvPNG, PNGCrush). I always set the optimizations to their maximum setting. After optimization, my PNGs render corrupted. What's weird is, if I serve the same file through my Mac's copy of Apache, not through Vagrant, the image loads fine. In fact, the only time it's ever corrupt like this is when the image is served from the Vagrant VM and its install of Drupal. All optimized JPEGs display only the first ~20% of the image, and PNGs, depending on the image, may show either a portion of the image or a "progressive"-style corruption. The browser itself makes no difference: the same browser will serve an uncorrupted image from my Mac's Apache instance and a corrupt image from the VM. When I disable all PNG optimizations except PNGCrush, and the removal of the PNG metadata, the image is still served corrupted. I'm optimizing JPEG images with JPEGmini. The server is running CentOS 5.9, Apache 2.2.3-85, PHP 5.3.3, and Drupal 7. As best as I can tell, the error lies somewhere within the VM, either with Apache or (perhaps) with the network stack. It seems like the tools that optimize the compression of the PNGs and JPEGs are what trigger this error. I've already determined that the .htaccess file isn't interfering with how the images load. What should I try to troubleshoot this?
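
    Since the corruption only appears when files are served from the VM, one suspect worth ruling out is sendfile: Apache's in-kernel file transfer is known to serve stale or truncated static files from VirtualBox/Vagrant shared folders. Disabling it is a one-line change (EnableSendfile is the stock Apache directive; exactly where it belongs depends on the client's config):

        # in httpd.conf or the vhost serving the Drupal docroot
        EnableSendfile Off
        # (the nginx equivalent would be "sendfile off;")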

    Read the article

  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running a web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone goes to a site *.example-host.com, they are sent to one of the two servers, and both must be capable of loading the forums from a database. The database must also have write access, for when new members register or post topics, etc. The main requirement is speed, but uptime is also important (if a server goes down, the site should still work). I have a few options, but I'm inexperienced and not sure which to go with:
    1. [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a requested forum, it can request it from the other server by remote MySQL and load it. This idea sounded okay, until I realised that 50% of the time, users would be waiting significantly longer for pages to load. I also realised that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled.
    2. [MySQL] Dual-master replication. This would mirror the two databases and sounds perfect, but I've heard that it can be very problematic. I don't know how fast this is.
    3. [MySQL] Use standard replication, distribute read-only queries across both nodes, and send read/write queries to the master. This sounds like a good option, but again, I'm not sure about speed. I also don't know what would happen if the master server went down.
    If you have any other suggestions, please post them :)
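
    For reference, the usual guard rail for option 2 is the pair of auto-increment settings, so the two masters never hand out the same primary key even if they accept writes simultaneously. A minimal my.cnf sketch (conventional example values, not anything from the question):

        # server 1
        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 1

        # server 2 is identical except:
        #   server-id             = 2
        #   auto_increment_offset = 2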

    Read the article

  • apt unable to install particular version of package (but gives no error)

    - by Arc2009
    I'd like to install a particular version of libstdc++6 with the following command:

        # apt-get install libstdc++6=4.9.0-8 -V
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libstdc++6 (4.8.2-16)
        0 upgraded, 0 newly installed, 0 to remove and 216 not upgraded.

    It gives no error, but apt keeps the version that was already installed, and it also refers to this package as "extra". There are no apt preferences set in /etc/apt/preferences.d, and the desired version is definitely available through our local mirror. (If I run apt-get download libstdc++6=4.9.0-8, it downloads exactly the desired version.) System info:

        # cat /etc/issue.net
        Debian GNU/Linux jessie/sid
        # uname -a
        Linux www27 3.13-1-amd64 #1 SMP Debian 3.13.7-1 (2014-03-25) x86_64 GNU/Linux
        # dpkg -l | egrep -i "apt|dpkg"
        ii apt   0.9.16.1  amd64  commandline package manager
        ii dpkg  1.17.6    amd64  Debian package management system

    Any suggestions?
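
    A first diagnostic step with stock apt tooling is to ask which versions apt actually considers candidates and at what pin priority; a version that is published on the mirror but held back by priority or a NotAutomatic flag shows up immediately here:

        # which versions does apt know about, and what is the candidate?
        apt-cache policy libstdc++6
        # one line per available version and source
        apt-cache madison libstdc++6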

    Read the article

  • How can I tell System Restore in the Windows 7 Recovery Console to use my recovered backup drive's restore point data?

    - by Rich Shealer
    My Windows 7 desktop PC failed to boot. It would get to a grayish screen with a mouse pointer and would respond only to the power button. After much examination I found that the problem was not a failed drive, as running CHKDSK from the Recovery Console on my main drives passed without any errors. I had been installing various Java versions in the days before the failure, so I decided to use a restore point to roll back. I have an external SATA drive controller with two 2 TB drives mirrored using the Windows mirroring function, and my system has been backing up to this drive regularly. The problem is I accidentally broke the mirror when testing whether this drive system might have been causing my boot issue. Connecting it to another machine showed two dynamic drives that were invalid. In the end I reformatted one as a basic NTFS disk and used recovery software on the other to copy all of the files to the reformatted drive. I had to copy the restore points into the new drive's System Volume Information folder by granting rights to that user. I moved the drive back to the original machine and rebooted. I can see my new drive; it even uses the same drive letter as it did in normal mode. Running System Restore, it lists a new automatic restore point created while sitting at the Recovery Console, along with all of my backups. Selecting the backup I want (or any other), I get a dialog:

        The backup drive could not be found. System Restore is looking for restore points on your backup. Make sure the backup drive is on and connected to this computer and then click OK.

    What do I need to do to allow System Restore to see the restore points?

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking at building a largish ZFS pool (150TB+), and I'd like to hear people's experiences with data loss scenarios due to failed hardware, in particular distinguishing between instances where just some data is lost vs. the whole filesystem (or if there even is such a distinction in ZFS). For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read, the pool should go into a faulted mode, but if the vdev is returned, should the pool recover? Or not? If the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: We're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files; by my count, about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so), and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.
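
    On the ZIL question specifically: with old pool versions (before version 19), losing an unmirrored separate log device could leave the pool unimportable, which is why the standing advice is to mirror log devices; on newer pools you lose only the few seconds of synchronous writes that were in flight. A sketch of the usual hedge (device names invented):

        # data vdevs as striped mirrors
        zpool create tank mirror da0 da1 mirror da2 da3
        # mirrored log pair: a single SSD failure costs performance, not the pool
        zpool add tank log mirror ada0 ada1
        zpool status tank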

    Read the article
