Search Results

Search found 23890 results on 956 pages for 'issue'.


  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12 GB of RAM. I often have multiple windows and a ton of tabs open, and I use the Session Buddy extension to restore all my windows and tabs once memory usage gets too high. My 12 GB of RAM will get up to around 93% used because of Chrome; if I close Chrome and restore the same windows and tabs, it only uses about 25% of memory, and then over several hours it climbs back up to the 90% zone. It seems that when I close tabs the memory is not freed, so usage just keeps adding up as new tabs are opened and closed. This sounds like a huge bug in Chrome.

    Just as an example: I re-booted my system with only 1 window and 4 tabs open, and Task Manager shows 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab, and it made 27 chrome.exe processes.

    Is this an issue that others have? More importantly, is there a fix?

    UPDATE: I just read that each plugin and extension creates its own chrome.exe process. I counted 24 extensions, so that explains a good portion of the process count. Still not sure about the memory not being freed up, though!

  • bind9 - forwarders are not working

    - by Sarp Kaya
    I am experiencing an issue with bind. If I want to resolve any domain name that is in the zone file, it works fine. However, when I try to resolve anything that does not belong to the zone file, it fails. I know that the actual DNS servers being forwarded to are working fine, but somehow bind9 fails to use them. The content of /etc/bind/named.conf.options is:

        options {
            directory "/var/cache/bind";
            forwarders {
                131.181.127.32;
                131.181.59.48;
            };
            dnssec-validation auto;
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    I have also tried to use only one IP address and it still did not work. The content of /etc/bind/named.conf is:

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";

    So there is no problem with including the options file. Any recommendations for fixing this problem?
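
    A sketch of the usual first things to try, assuming the forwarders are reachable from this server; the "forward only" directive, the temporary dnssec setting, and the dig tests below are illustrative, not a confirmed fix:

        options {
            directory "/var/cache/bind";
            forward only;               # never fall back to iterating from the root servers
            forwarders {
                131.181.127.32;
                131.181.59.48;
            };
            dnssec-validation no;       # temporarily rule out DNSSEC validation failures
            auth-nxdomain no;
            listen-on-v6 { any; };
        };

        # reload and test against the local resolver directly
        sudo rndc reconfig
        dig @127.0.0.1 www.example.com
        # and confirm the forwarders answer recursive queries from this host at all
        dig @131.181.127.32 www.example.com

    If the last query times out, the forwarders are refusing recursion for this host and the problem is upstream rather than in named.conf.options.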

  • How can I use HAproxy with SSL and get X-Forwarded-For headers AND tell PHP that SSL is in use?

    - by Josh
    I have the following setup:

        (internet) ---> [ pfSense box     ]  /--> [ Apache / PHP server ]
                        [ running HAProxy ] --+--> [ Apache / PHP server ]
                                              +--> [ Apache / PHP server ]
                                               \-> [ Apache / PHP server ]

    For HTTP requests this works great; requests are distributed to my Apache servers just fine. For SSL requests, I had HAProxy distributing the requests using TCP load balancing, and it worked. However, since HAProxy wasn't acting as an HTTP proxy, it didn't add the X-Forwarded-For HTTP header, and the Apache / PHP servers didn't know the client's real IP address.

    So I added stunnel in front of HAProxy, having read that stunnel could add the X-Forwarded-For HTTP header. However, the package which I could install into pfSense does not add this header. It also apparently kills my ability to use keep-alive requests, which I would really like to keep. But the biggest issue that killed that idea was that stunnel converted the HTTPS requests into plain HTTP requests, so PHP didn't know that SSL was enabled and tried to redirect to the SSL site.

    How can I use HAProxy to load balance across a number of SSL servers, allowing those servers to both know the client's IP address and know that SSL is in use? And if possible, how can I do it on my pfSense server? Or should I drop all this and just use nginx?
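
    For reference, a minimal sketch of terminating SSL in HAProxy itself (available from 1.5-dev12 onward when built with OpenSSL); the certificate path, backend addresses, and names are placeholders:

        frontend https_in
            bind *:443 ssl crt /etc/haproxy/site.pem
            mode http
            option forwardfor                       # adds X-Forwarded-For with the client IP
            reqadd X-Forwarded-Proto:\ https        # tell the backends this arrived over SSL
            default_backend apache_pool

        backend apache_pool
            mode http
            balance roundrobin
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check

    On the PHP side the application then has to trust that header (for example, treat the request as HTTPS when $_SERVER['HTTP_X_FORWARDED_PROTO'] equals 'https'), since Apache itself still only sees plain HTTP.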

  • MS Word custom dictionary making spellcheck slow - ideas?

    - by ezuk
    I have a user who edits technical materials. She uses MS Word's custom dictionary all the time for spelling; it has grown very large and is now making spell check very slow. All of the advice I've read online says to disable the custom dictionary. That is an easy solution, but it is not workable for this user, because she actually needs the dictionary. So, is there any way to optimize the custom dictionary and/or Word itself, so that a large dictionary file doesn't slow things down quite so badly? Many thanks.

    Update after suggestions: I ran contig on the file and it reports just 1 fragment, so I don't think fragmentation is the issue. The file is 9.95 KB: 1,117 lines, each consisting of just a single word. I viewed the file in Notepad and none of the lines seems corrupted, strange, or overly long (no line seems to be over 10 characters or so). Both of your suggestions were helpful so I will upvote both; any further tips would be most welcome.

  • HAProxy crashes on all requests in 1.5-dev12

    - by Daniel Hough
    I'm having an issue where HAProxy crashes with no explanation when I switch from 1.4.12 to 1.5-dev12. The reason I'm switching is the SSL offloading. My config file doesn't have any errors; it's quite simple and it works well with 1.4. But for some reason, when I run it with 1.5-dev12 I see the logs noting that my two backends have been set up, and then when I hit one of the frontends I get an HTTP 400 in the browser, and suddenly HAProxy isn't running anymore when I check.

    I understand that a common workaround for the lack of SSL support in HAProxy is to use stud, and I may go with that since I need an SSL solution for my service, but before I delve into that world I thought I might see if anybody has experienced the same problem and might know how to fix it. The server is Ubuntu 10.04 and I followed the make instructions on the Exceliance blog here.

    EDIT: On the advice of Kyle Brandt, I did a bit more investigation. I attached gdb to the haproxy process, and when the crash occurred this is what I got:

        Program received signal SIGSEGV, Segmentation fault.
        0x0804e5c2 in dequeue_all_listeners (list=0x9e1a418) at src/protocols.c:184
        184         list_for_each_entry_safe(listener, l_back, list, wait_queue) {

    P.S. HAProxy is awesome, so thank you Exceliance for providing us with something so useful :)
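
    If it helps anyone reproduce this, a sketch of how to capture a full backtrace for a bug report (paths are examples; -d keeps HAProxy in the foreground in debug mode):

        ulimit -c unlimited                       # allow core dumps in this shell
        haproxy -d -f /etc/haproxy/haproxy.cfg    # run in the foreground; a crash will drop a core file
        gdb /usr/local/sbin/haproxy core          # after the crash
        (gdb) bt full                             # full backtrace with local variables

    A backtrace like this, together with the config file, is usually what the HAProxy mailing list asks for when chasing a segfault in a development release.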

  • Reset rc.d so software starts at boot again

    - by natli
    I ran the following 2 commands on my VPS box and now it boots without starting any software at all. According to rcconf it's still supposed to start my chosen software (ssh etc.), but it doesn't.

        update-rc.d vz defaults
        update-rc.d vzeventd defaults

    I already tried removing them again with

        update-rc.d -f vz remove
        update-rc.d -f vzeventd remove

    but that didn't change anything. /etc/rc.local also still correctly lists some scripts I want to run at start-up, but they don't seem to be called either. I expect the top 2 commands to be responsible, but here's everything I did:

        mkdir /var/openvz-dl
        cd /var/openvz-dl
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-devel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-core-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-lib-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzquota/3.1/vzquota-3.1-1.x86_64.rpm
        apt-get install fakeroot alien
        fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm
        dpkg -i vz*.deb ploop*.deb --force-overwrite
        update-rc.d vz defaults
        update-rc.d vzeventd defaults
        reboot

    A huge part of that failed because I was running it on an OpenVZ VPS, which has a shared kernel that can't be altered, so I also had to fix dpkg like so (it was complaining about wanting to install vzkernel with a package not being found):

        rm /var/lib/dpkg/info/vzkernel*
        dpkg-reconfigure vzkernel --force
        dpkg --purge --force-all vzkernel

    But that didn't fix the boot issue either. How do I make my software start at boot again?
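
    One way to check and rebuild the boot links by hand, assuming a Debian/Ubuntu-style sysv-rc layout; the service names below are examples, to be repeated for each service rcconf lists:

        # what does the default runlevel actually contain?
        ls -l /etc/rc2.d/
        # take the two hand-added entries out again, along with any broken links they left
        update-rc.d -f vz remove
        update-rc.d -f vzeventd remove
        # re-register a service whose S* link is missing
        update-rc.d -f ssh remove
        update-rc.d ssh defaults
        # confirm the init script itself still works
        /etc/init.d/ssh start

    If /etc/rc2.d/ turns out to be empty or full of dangling links, the alien-converted packages most likely clobbered files under /etc/init.d/ or /etc/rc*.d/, and re-adding each service with update-rc.d as above should restore them.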

  • Multiple Settings for One Monitor

    - by Lynda
    I have my monitor set up the way I like it: it shows colors accurately and is not overly bright (white doesn't blind me). However, I edit photos for the web, and while these settings are great for my screen, I have noticed that on other screens my photos will at times look washed out and poorly presented. If I manually change my monitor settings to be brighter (and a few other tweaks), I can replicate the look of other monitors closely enough to help prevent washed-out/blown-out photos.

    Here is the issue: I need to be able to change settings between the two fairly easily. Manually setting my monitor for one and then back to the other is time consuming and a pain to get right every time. I would like to be able to preset certain settings (in software) that I can quickly switch between by pressing hotkeys or clicking a shortcut on my desktop. I have an AMD video card and have attempted to use the AMD VISION Engine Control Center to add presets and then tweak the settings for each; however, when I switch between the presets the settings are the same. How do I build presets using the VISION Engine Control Center that actually work (maybe I misunderstand what it is saving?), or is there a way to do this in Windows 7 or with free software that works decently for this purpose?

  • Diagnose remote desktop freezes in Windows 7 when no BSOD?

    - by Paul Smith
    Okay, I'm getting no joy from Asus or Microsoft on this, so I'm hoping for some clues on how to narrow down the cause. I get very frequent OS freezes, always and only when running the Remote Desktop client (mstsc) in Windows 7 x64. I never get a bluescreen, and there is never a minidump. The display and input just freeze: no keyboard, no mouse, and sound will just continue the last wavelength, if any.

    So far, I can't find a way to trap the hang given that there's no bluescreen; the advanced startup and recovery settings for system failure are "Write an event" checked, "Automatically restart" checked, and "Kernel memory dump". I've updated to the latest BIOS and tried a few different graphics drivers, both generic and ATI. I've also tried disabling Aero and everything about the remote desktop experience (incrementally unchecked every box in the mstsc - Options - Experience tab), and even disabled/unplugged the external monitor to make sure it wasn't a dual-monitor issue.

    My specs are:

        Asus G73jh notebook
        8 GB RAM
        ATI Mobility Radeon HD 5800 Series graphics (recently tried driver versions 8.791.0.0, 8.801.0.0)
        American Megatrends G73jh.211 BIOS (7/27/2010)
        Windows 7 Home Premium x64

    Windows Memory Diagnostic passed all of the following at least 3 times with no errors: MATS+, INVC, LRAND, Stride6, WMATS+, WINVC.

    This notebook is better than most at removing heat (laudable vent design), so I'm not inclined to suspect thermal causes, especially since running 1080p video for hours has never caused a freeze, but mstsc does, reliably, within 5 minutes to an hour. This did seem to start happening after a Windows Update, but I've since reverted every patch applied since a week before the first occurrence, with no joy. (And I'd only had the PC for a couple of weeks before that, so it could have been chance plus less actual time spent remoting at the beginning.) I'm at my wits' end, and I bought this laptop primarily as a remote terminal client (go figure, right?). Any ideas on how to identify the cause of this? Thanks!
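
    One sketch for catching a hang that never bluescreens on its own: enable the keyboard-initiated crash so a kernel dump can be forced during the freeze. This only works if the kernel is still servicing keyboard interrupts; the i8042prt path below is for PS/2-style keyboards and kbdhid is the equivalent for USB keyboards:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f

    After a reboot, holding the right Ctrl key and pressing Scroll Lock twice during a freeze should bugcheck the machine and write the kernel memory dump already configured above, which can then be opened in WinDbg to see what the graphics or RDP driver was doing at the time.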

  • Computer hangs at BIOS screen. Cannot enter setup.

    - by d2jxp
    I have an HP Pavilion a6500f (it's a year out of warranty) and it hangs on the blue HP BIOS screen. If I mash F10 while it's starting up, it will say "Entering Setup..." but nothing further happens; it just hangs there. If I instead wait until I can see the screen and then hit F10, there's no response at all and the computer sits at the BIOS menu. I've dusted and cleaned it out, reseated the memory, switched the RAM slots, and reset the CMOS using the reset jumper. I'm out of ideas. I'm pretty sure it's not a hard drive issue, since the problem is at the BIOS. After this post, I'll disconnect the hard drive and try to boot without it. Anyone have any other ideas?

    Edit: Okay, so I tried disconnecting the hard drive and now I can get back into the BIOS. I reconnected it and I'm locked out again, so the problem is my hard drive. I guess I should delete this post unless someone has any ideas as to what's wrong with the drive?

  • Why is my cron daemon being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (anywhere between 1 and 30 minutes before the process is killed again).

    Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04, so resources shouldn't be the issue. With cron running, free -m reports:

                     total       used       free     shared    buffers     cached
        Mem:          1024        211        812          0          0          0
        -/+ buffers/cache:        211        812
        Swap:            0          0          0

    I also tried removing and reinstalling the cron package using apt-get.

    Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
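
    Two sketches for tracking down the sender of the SIGKILL, with the caveat that inside an OpenVZ container the host node (its OOM killer or bean-counter limits) can kill processes in a way the guest never logs:

        # 1) check the OpenVZ resource counters for failures (non-zero failcnt in the last column)
        cat /proc/user_beancounters

        # 2) if auditd works in the container, log every kill() syscall that delivers signal 9
        apt-get install auditd
        auditctl -a exit,always -F arch=b64 -S kill -F a1=9 -k whokilledcron
        # after cron disappears again:
        ausearch -k whokilledcron

    If ausearch never records anything even though cron keeps dying, the kill is almost certainly coming from outside the container, and the hosting provider's node logs would be the next place to look.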

  • How to fix GMail time stamps in Outlook?

    - by SWB
    One of my email accounts is hosted at an ISP with unreliable IMAP support, and I can't change it. Fortunately, I have my personal email set up on Google Apps for Domains, so I created another GMail account there and turned on GMail's features that allow me to send and receive mail through the ISP account using GMail ("Send mail as" and "Get mail from other accounts" in GMail settings on the Accounts tab). I'm now using Outlook to retrieve mail from the GMail account through IMAP, which in turn is retrieving mail from the ISP account through POP3.

    This basically works great, except for one very significant issue. Prior to setting this up, I already had several months of mail in the ISP account that I had been accessing via IMAP. GMail grabbed all of this mail via POP3 at, let's say, noon on April 5. In GMail's web interface (and on my iPod touch, and in Mozilla Thunderbird), all is well: the messages are all shown with their original time stamps. But when Outlook downloads these messages from GMail via IMAP, the time stamps are all set to noon on April 5 (the time GMail downloaded them from the ISP via POP3). That's not good, especially since we're talking about hundreds of messages here over a time span of several months. How can I fix this and get Outlook to display the original time stamps?

  • OS X won't boot up unless I hold down option key

    - by Gazzer
    I have a strange issue on an early 2008 Mac Pro running OS X 10.6:

        - if I restart the computer, it restarts normally
        - if I shut down and boot, it stops at the grey screen just before the boot process
        - if I shut down and boot but hold down the Option key, I can select the boot disk and all is good

    I've just cloned the disk, and the same thing happens. The disk is a Samsung HD154UI, and it is partitioned (the second partition holds a clone of the Snow Leopard install disc). One weird thing on the original disk was that one of the partitions said 'EFI Boot' in a non-aliased font, rather than the name of the disk, when the disks are listed after holding down Option.

    Solution: it seems that there was a problem with the disk. Part of the difficulty in finding the solution was that you need to remove the disk from the computer completely. For example, a good disk in Bay 3 wouldn't boot up if the bad disk was in Bay 2, so for ages I thought the problem was hardware related to Bay 3. So if you think you have a dodgy disk, remove it completely if you are testing the hardware with a 'clean' disk. Resetting the PRAM helped to get the new disk to work too.
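
    For anyone hitting the grey screen because the firmware has simply lost its startup disk setting (rather than because of a failing drive), a sketch of re-blessing the boot volume from Terminal; the volume name is an example:

        sudo bless --mount "/Volumes/Macintosh HD" --setBoot   # point the firmware at this volume
        bless --info --getBoot                                  # confirm which device the firmware will boot

    This only helps when the disk itself is healthy; in the case above the drive was at fault, so replacing it (and removing it completely while testing) was the real fix.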

  • ping incorrectly pinging 127.0.0.1

    - by AlexW
    I've got an odd DNS issue. I'm running a dual IPv4/IPv6 environment on Linux. Pinging some sites results in ping pinging 127.0.0.1, e.g.

        #> ping authserver.mojang.com
        PING authserver.mojang.com (127.0.0.1) 56(84) bytes of data.
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.045 ms
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.043 ms
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=3 ttl=64 time=0.058 ms

        --- authserver.mojang.com ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2000ms
        rtt min/avg/max/mdev = 0.043/0.048/0.058/0.010 ms

    dig, however, correctly returns the following:

        # dig authserver.mojang.com

        ; <<>> DiG 9.9.3-P2 <<>> authserver.mojang.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15800
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

        ;; OPT PSEUDOSECTION:
        ; EDNS: version: 0, flags:; udp: 512
        ;; QUESTION SECTION:
        ;authserver.mojang.com.        IN      A

        ;; ANSWER SECTION:
        authserver.mojang.com.  5      IN      A       54.235.119.47

        ;; Query time: 14 msec
        ;; SERVER: 2001:4860:4860::8888#53(2001:4860:4860::8888)
        ;; WHEN: Sat Nov 09 15:34:40 GMT 2013
        ;; MSG SIZE  rcvd: 66

    I'm confused! My web browser returns the correct website, and the same computer booted into Windows also works correctly.
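
    A sketch of the checks that usually explain this split: ping resolves names through the libc resolver (nsswitch, so /etc/hosts is consulted first), while dig talks to the DNS server directly and bypasses /etc/hosts entirely:

        getent hosts authserver.mojang.com     # what the libc resolver (and therefore ping) sees
        grep -i mojang /etc/hosts              # look for a stray 127.0.0.1 entry
        grep '^hosts' /etc/nsswitch.conf       # confirm the lookup order, e.g. "hosts: files dns"

    If getent returns 127.0.0.1 while dig returns the real address, the culprit is almost always an /etc/hosts entry (sometimes added by blocking or game-related host lists) rather than DNS itself.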

  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my domain controllers and the time service. On my backup DC, I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a time server? My PDC should be the time server, and I have gone through setting up the PDC as the time server. I was not around for the original setup of the time service with the old PDC and backup DC, but I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old PDC. Is it possible that the backup DC was set up as the time server and it still thinks it's supposed to be giving out time to everyone?

    The registry Type value on the PDC is NTP; on the backup DC it is NT5DS.

    Results of w32tm /monitor:

        Getting AD DC list for default domain...
        Analyzing:
        Warning: Reverse name resolution is best effort. It may not be correct since
        RefID field in time packets differs across NTP implementations and may not be
        using IP addresses.

        DC2.local..com [192.168.1.8:123]:
            ICMP: 1ms delay
            NTP:  -0.6349491s offset from DC1.local..com
                  RefID: DC1.local..com [192.168.1.9]
                  Stratum: 4
        DC1.local..com *** PDC *** [192.168.1.9:123]:
            ICMP: 0ms delay
            NTP:  +0.0000000s offset from DC1.local..com
                  RefID: wwwco1test12.microsoft.com [65.55.21.20]
                  Stratum: 3
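
    For reference, a sketch of the usual way to straighten this out, assuming DC1 really does hold the PDC emulator role; the NTP pool names are examples and any reliable external source works:

        Rem on the PDC emulator: sync from an external source and advertise as reliable
        w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

        Rem on the backup DC: follow the domain hierarchy instead of acting as its own source
        w32tm /config /syncfromflags:domhier /update

        Rem on both: restart the service, force a resync, then re-check where time is coming from
        net stop w32time && net start w32time
        w32tm /resync /rediscover
        w32tm /query /source

    Once the backup DC is back on /syncfromflags:domhier it should stop trying to advertise itself and simply chain from the PDC.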

  • Spotlight has stopped indexing/returning anything in /Applications

    - by pra
    After a recent kernel panic and restart, Spotlight no longer seems to know anything about the files under my /Applications folder. I used to launch Safari.app, Opera.app, Textedit.app, etc. via Spotlight as a matter of routine. Now I get "No results found" for all of them (except Textedit.app, which launches a demo text editor from a Qt installation). The programs are still there and still launch directly from Finder. I've already run Disk Utility and verified the disk: no issues. I repaired disk permissions, which made several changes, but to no effect. Is there anything else I can do, short of re-installing Mac OS?

    Update: I already verified that "Applications" was still checked in my Spotlight preferences. It was still returning applications located elsewhere (the Qt textedit sample app), so that shouldn't have been the issue. A few hours later it resolved itself; I guess there's a background process running on some interval.
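
    For cases where it does not fix itself, a sketch of forcing Spotlight to rebuild the index from Terminal (the trailing / means the boot volume):

        sudo mdutil -i on /          # make sure indexing is enabled for the volume
        sudo mdutil -E /             # erase the existing index and rebuild it
        mdfind -name "Safari.app"    # once re-indexing finishes, check that apps are found again

    Rebuilding can take a while on a large disk, and Spotlight results will be incomplete until it finishes.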

  • Snow Leopard takes a long time to connect to Windows/Samba server

    - by hood
    We run a very heterogeneous network here: there are XP, Vista, 7, Leopard, and Snow Leopard clients, and Windows 2003 (one remaining legacy app), 2008, and Linux servers. The main file server runs Ubuntu Linux, has been joined to the Windows domain, and has been used for many years; SBS 2008 is the PDC (the 2003 and 2008 servers are on the domain also). In Leopard there were no problems at all authenticating to the file servers.

    We've upgraded one of the Leopard iMacs to Snow Leopard. The same problem occurs on a new MBP which came with the newer OS, as well as on a clean install on another iMac, and it does not matter whether the machine is connected through wired or wireless. In the Finder, when clicking on the server (whether on first boot or after it has been connected), it will display "Connecting..." for up to a few minutes before either generally working (if the username/password is in the keychain) or displaying "Connection Failed"; at that point clicking "Connect As" and typing in the username/password takes some more time and eventually works. Sometimes it will display "Connecting..." indefinitely (I've left it as long as 15 minutes before trying something else).

    Accessing shares on the 2003 and SBS servers has the same problem (so I don't think it's a Samba server issue); the Server 2008 Standard box is connecting instantly at the moment. Accessing the share through an alias/stack doesn't have this problem. Leopard and Windows clients still have no problem. I've searched Google but it hasn't yielded any working results. How do I get rid of this delay?
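
    A sketch of narrowing down where the time goes, on the assumption that long "Connecting..." pauses are usually name-resolution or authentication timeouts rather than SMB transfer problems; server, domain, and user names are examples:

        dscacheutil -q host -a name fileserver.example.local   # is the forward DNS lookup instant?
        smbutil lookup fileserver                               # NetBIOS name lookup from the Mac
        time smbutil view //DOMAIN\;user@fileserver             # list shares and time the handshake

    If the smbutil commands respond quickly while the Finder still stalls, the delay is in Finder/automount behaviour; if smbutil itself hangs, it points at DNS, WINS, or authentication negotiation against the domain.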

  • The suggested way to handle pip(easy_install) with homebrew?

    - by Drake
    I know there are brew-gem and brew-pip, but it is still really easy to get confused. Let's say my Mac OS X is 10.7.2. There are at least, as far as I know, 3 locations for Python modules (assume 2.7):

        1. /System/Library/Frameworks/Python.framework/Versions/2.7/
        2. /Library/Python/2.7/site-packages
        3. /usr/local/lib/python2.7/site-packages/ (controlled by Homebrew)

    For some Python modules, pip installs them into 2, the so-called local/customized Python module location, and everything looks and works great. For example, readline installed by easy_install (ipython suggested I install readline with easy_install instead of pip).

    For some, pip tries to install miscellaneous files (man pages, docs, ...) into a system-wide location, which requires sudo! For example, ipython insisted on installing man and doc files into /System/Library/Frameworks/Python.framework/Versions/2.7/share/, which runs into a permission issue, and all I can do is use sudo.

    For some Python modules installed by brew, they are symlinked into /usr/local/lib/python2.7/site-packages/. Everything seems great except that you have to remember to add this location to PYTHONPATH.

    I am wondering if there is any suggested and uniform way to handle this mess, or any explanation that makes this stuff crystal clear.
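
    One common way to make this more uniform, sketched under the assumption that you standardise on a single interpreter and a single site-packages location; the package names are examples only:

        # Option A: use Homebrew's Python so /usr/local owns everything pip installs
        brew install python
        which python                    # should resolve to /usr/local/bin/python (adjust PATH if not)
        pip install ipython             # should land under /usr/local, no sudo needed
                                        # (install pip via easy_install first if the formula doesn't provide it)

        # Option B: keep the system Python but do per-user installs, which never need sudo
        pip install --user ipython      # lands under ~/Library/Python/2.7/lib/python/site-packages
        python -c "import site; print(site.getusersitepackages())"   # confirm the per-user path

    Either way, the point is to pick one site-packages location and make sure it is the one on sys.path (or PYTHONPATH) for the interpreter you actually run.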

  • Why won't Media Monkey add one particular folder of mp3's?

    - by ChrisF
    I'm using the latest and greatest version of MediaMonkey (free version) and it won't find the MP3s in one particular folder in my music tree. It can see all the other files in the tree, and the folder shows up when I click Add/Rescan files to the library. I have full control over the folder and all the files in it. The files play in Windows Media Player, and they play in MediaMonkey if I right-click and play from the context menu. All the tracks are at least 2 minutes long and over 5 MB, and MediaMonkey is set to ignore files shorter than 20 KB and to include all files regardless of length.

    There was an issue in that the genre of the tracks was set to "Classical", and the option that allows you to browse classical music independently of the other music isn't enabled in the free version; it's a Gold-only option. I hadn't spotted that my other classical music was also missing from the library (I have rather a large library). Once I retagged the music with a different genre and tried to add the files again, it reported that it added the tracks, but they still didn't show up in the library.

  • Ubuntu 8.04 won't reboot from script

    - by Littlejon
    I have a script that is run to back up a server via rsync, and after that script runs I want the server to reboot. The script is run as root from the crontab at 3am in the morning.

        #!/bin/bash
        HOST="email"
        RSYNC_OPTS="-a -v -v --progress --stats --delete"
        RSYNC_DEST="10.0.0.10::$HOST"
        BACKUP_LIST="/etc /home /root"
        TIMESTAMP="/timestamp-bkup-start.chk"
        TIMESTAMP2="/timestamp-bkup-stop.chk"

        touch $TIMESTAMP
        rsync $RSYNC_OPTS $TIMESTAMP $RSYNC_DEST

        for BACKUP_ITEM in $BACKUP_LIST; do
            rsync $RSYNC_OPTS $BACKUP_ITEM $RSYNC_DEST
        done

        /etc/init.d/zimbra stop
        sleep 60s
        rsync $RSYNC_OPTS /opt $RSYNC_DEST

        touch $TIMESTAMP2
        rsync $RSYNC_OPTS $TIMESTAMP2 $RSYNC_DEST

        echo `date +%Y%m%d%H%M` >> /var/log/reset
        reboot

        # $# shows number of args passed
        # $1 to access first variable
        #if [ $# -eq 1 ]; then
        #    if [ $1 = "withreboot" ]; then
        #        echo "rebooting...";
        #        echo `date +%Y%m%d%H%M` >> /var/log/reset
        #        /sbin/reboot
        #    fi
        #fi

    I have tried using init 6 rather than reboot, and I have tried /sbin/reboot. I also have another basic script that just echoes to the reset log and runs reboot without issue; it is only with the script above that the server won't restart. If anyone has any theories, that would be great, as I have run out of ideas. Thanks, Jon
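
    A debugging sketch rather than a fix: capture everything the cron run does and call the reboot by full path, so the log shows exactly how far the script gets before the reboot line (the log path is an example):

        #!/bin/bash
        exec >> /var/log/backup-trace 2>&1    # send all output from this cron job to one log
        set -x                                 # print each command as it runs

        # ... existing rsync/zimbra steps here ...

        sync
        /sbin/shutdown -r now "post-backup reboot"

    If the trace shows the script never reaching the final lines, the culprit is one of the earlier steps (an rsync hanging or prompting, or the zimbra stop taking far longer than 60 seconds) rather than the reboot command itself.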

  • Can Octopussy use messages other than syslog style?

    - by Lee Lowder
    I am currently exploring different options for a centralized log server. We use both Linux (Ubuntu 10.04 / 12.04, LTS for both) and Windows, though for this specific issue only Linux is relevant. I like Octopussy's interface and feature list, but I am hesitant because of a few things. One of the biggest concerns I have is that it seems to be syslog-only. The end goal is to have a centralized place for our devs and admins to search through the logs generated by Apache, Tomcat, and 70+ web apps spread out across a cluster, for both our prod and test environments.

    While I did see that Octopussy has support for plugins, I haven't been able to find any sort of plugin repository or in-depth guides as to what can be done with them. Does anyone know if plugins can be used to let Octopussy accept non-syslog messages, specifically log4j-style log messages that may include multi-line stack traces and such? Also, is there a user community for this software, such as a mailing list or forum? I've been unable to locate any so far. Thank you.
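
    Not an answer about Octopussy plugins, but a sketch of the common workaround: have log4j emit syslog itself, so the central server only ever receives syslog. A log4j 1.x properties fragment (the host name and facility are examples):

        log4j.rootLogger=INFO, SYSLOG
        log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
        log4j.appender.SYSLOG.syslogHost=loghost.example.com
        log4j.appender.SYSLOG.facility=LOCAL1
        log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
        log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c: %m%n

    The caveat is exactly the one raised in the question: SyslogAppender sends each line as its own message, so multi-line stack traces arrive as separate syslog entries unless the collector is configured to stitch them back together.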

  • ext3: maximum recommended partition size / handling large partitions

    - by Hansi
    Hi! I would like to do an encrypted install of Ubuntu on a 2-terabyte drive (i.e., using LUKS/dm-crypt). In order not to have to type in passwords too often, the partitioning scheme will be 50 GB for / and about 1 TB for /home (and the rest for Windows 7), just for clarity. Even though LVM is by now regarded as stable, I don't want to create more room for error by introducing unnecessary layers of complexity. For both Ubuntu partitions I want encrypted ext3 with the default ext3 block size (4k?).

    Thoughts: when I look at most partition schemes here on this site or elsewhere, I usually see at most about 400 or 500 GB partitions (maybe I didn't see enough). There may be different reasons for this, but is reliability an issue here? Are larger ext3 partitions, say about 1 TB, harder to handle for the OS or the filesystem driver or at some other level? If I make the partition too large, will it be harder to repair in case of corruption? Are there some default settings for ext3 that I should change for 1 TB partitions?

    Question: what maximum partition size for ext3 do you recommend, and why? Thanks!
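
    For what it's worth, a sketch of creating and tuning a roughly 1 TB ext3 filesystem on top of the LUKS mapping; the device name is an example and the tuning values are conventional rather than required:

        mkfs.ext3 -b 4096 /dev/mapper/home_crypt       # 4k blocks, the usual default at this size
        tune2fs -m 1 /dev/mapper/home_crypt            # reserve 1% for root instead of 5% (roughly 10 GB back on 1 TB)
        tune2fs -c 30 -i 180d /dev/mapper/home_crypt   # fsck every 30 mounts or 180 days, whichever comes first
        tune2fs -l /dev/mapper/home_crypt | less       # review the resulting superblock settings

    The main practical cost of a large ext3 partition is fsck time after a crash or when the mount count expires, which is worth planning for rather than a reason to avoid the size.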

  • Plesk FTP not working but SFTP and Shell is working

    - by shamittomar
    I am facing a strange problem. FTP on my Plesk VPS is not working. Whenever I try to connect, the FileZilla FTP client says:

        Status: Resolving address of xxxxxxxxxxxxx.com
        Status: Connecting to xxx.xxx.xxx.xxx:21...
        Status: Connection established, waiting for welcome message...
        Error:  Could not connect to server

    So it's not even getting to the step of asking for a username/password; it's something else. SFTP on port 22 is working fine, and I can successfully get shell access and run commands. But I NEED FTP access too, on port 21. I have searched everywhere but can not find any setting to enable it.

    This is the Plesk version info:

        Parallels Plesk Panel version 9.5.2
        Operating system: Linux 2.6.26.8-57.fc8
        CPU: GenuineIntel, Intel(R) Pentium(R) 4 CPU 3.00GHz

    Any help is appreciated.

    [EDIT]: The firewall is not blocking it. I have checked on the server and there are absolutely no blocking rules; the firewall states "All incoming/outgoing connections are accepted" for FTP. And on the client side (my PC), I can connect to other FTP servers, so this is not an issue with my PC's firewall. Moreover, I can not even connect to the FTP from online FTP clients like net2ftp.
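
    Given that the TCP connection opens but no welcome banner ever arrives, a sketch of checking whether the FTP daemon behind port 21 actually starts (Plesk normally runs ProFTPD out of xinetd; the file names below are the usual Plesk ones but may differ per install):

        netstat -tlnp | grep ':21 '                    # confirm what is listening on 21 (often xinetd itself)
        cat /etc/xinetd.d/ftp_psa                      # Plesk's xinetd service definition; check "disable = no"
        proftpd -t                                     # test the ProFTPD configuration for syntax errors
        tail -n 50 /var/log/secure /var/log/messages   # xinetd/proftpd startup errors usually land here
        /etc/init.d/xinetd restart

    A banner-less connect that then drops typically means xinetd accepted the connection but ProFTPD exited immediately (a bad config, a broken reverse DNS lookup, or a missing passive-port range), so the config test and the logs are the places to look.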

  • QR Codes Printing, but Not Printing Correctly - Any Ideas?

    - by SDS
    I am mail merging some QR codes, via file paths stored in Excel, into a label template in MS Word 2013. I have the whole process with Ctrl+F9 working properly, but I am stumped on this: on the 30-label sheet I am trying to print, I have 6 labels that repeat information, stored in duplicated rows in Excel for this print job. All of the labels have 2 images on them: one is a logo and the other is a unique QR code for that person.

    For the first set of 6 labels that print out, everything works perfectly. However, from the 2nd time the information is printed onward, all of the merged fields and the logo look correct, but the QR codes print strangely. Basically it's the QR code as I want it, but with a copy of itself covering the top-left 25% of the QR code. Print preview doesn't show this happening; it only comes out like this once it's printed.

    I've been trying everything I can think of to fix this and don't know what to do. So far I've recreated the document several times, and tried duplicating the images in the source folder and giving the links in the Excel document new file paths, in case the mail merge feeding from the same .jpg was the issue (even though it's not a problem with the logo).

    Any help or insight is greatly appreciated, because this is a test run for a larger batch that I need to get done soon :( Thank you!

  • Why is Windows not able to create a system partition?

    - by hughes
    I'm reinstalling Windows 7 64-bit, and I encountered an issue I've never seen before. I have a legit copy of Windows 7 Professional x64, and I've installed it probably a half dozen times on this machine in the past without a problem. Googling the error only brings up issues with people who are upgrading to Windows 7. The drive itself seems fine: I can mount it on other systems, I can create an NTFS partition on it on other machines, and I can install Ubuntu on it without any issues. Additionally, if I try using my alternate backup hard drive, the installer gives the same error.

    I have run diskpart from the setup prompt, and clean seems to report that all is well. However, I cannot get past the screen below, which says "Setup was unable to create a new system partition or locate an existing system partition." This happens regardless of whether or not the disk space is already allocated. What is causing this? How do I solve or get past this?
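
    A sketch of the diskpart sequence people usually try from the installer (Shift+F10 opens a command prompt there); "disk 0" is an assumption, and clean erases everything on the selected disk, so double-check the disk number first. Unplugging every other drive and USB stick except the install media before retrying is the other common fix, since Setup sometimes tries to put the system partition on a different disk than the one selected:

        diskpart
        list disk
        select disk 0
        clean
        create partition primary
        active
        format fs=ntfs quick
        exit

    After that, going back one screen in Setup and refreshing the drive list lets it create its own partitions on the now-empty disk.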

  • Linux: CIFS/Samba mount hangs for several minutes

    - by Pistos
    I have a small local network with a Gentoo box and a Windows box. I mount a share originating on the Windows box onto the Gentoo box with a command like:

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos //192.168.0.103/Users /mnt/windowsbox

    Most of the time everything Just Works, and I can read and write without problems. However, every few weeks or so, the connection or the mount point seems to go dead or hang, such that any process that tries to access the mount point gets stuck in D state (disk, or I/O wait). These processes become impervious to TERM and KILL signals. Disconnecting and reconnecting the Windows box from the network does not help. The frozen state lasts for 5+ minutes. It's really frustrating and gets in the way of normal work, because it freezes Save As dialogues, ls commands, etc.

    If I issue a umount on the mount point, it either hangs also, or reports that the mount point is in use. Eventually the dead state resolves itself, and the mount point gets unmounted, or it becomes possible to umount with no delay. My guess is that this happens when the connection/mount has gone idle, or when the Windows machine has been idle. I am not really sure. Why is this happening, and what can I do to prevent it? Or how can I successfully kill these D-state processes at will?

    Possibly related: CIFS mounts hang on read
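
    Two sketches that are often suggested for this: a "soft" mount so stuck I/O eventually returns an error instead of pinning processes in D state, and CIFS debug logging to see what the client was waiting on when the next hang happens (soft and noserverino are standard cifs mount options; the values are examples):

        # workaround: soft mount, so operations time out with an I/O error rather than blocking forever
        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos,soft,noserverino \
            //192.168.0.103/Users /mnt/windowsbox

        # diagnosis: turn on verbose CIFS client logging before the next hang, then read the kernel log
        echo 7 > /proc/fs/cifs/cifsFYI
        dmesg | tail -n 50

    Soft mounts trade the hangs for possible I/O errors on a flaky connection, so they suit interactive use better than anything that must never see a failed write.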
