Search Results

Search found 8042 results on 322 pages for 'days'.


  • swapping or trashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using Kubuntu Jaunty (i386, 32-bit), kernel 2.6.28-13-generic. I have 4 GB of RAM, of which only 3317 MB are seen by the system (I guess because of the 32-bit kernel). The pagecache utilization grows continually, to the point that the system becomes unusable after a few days. This happens even when I don't do anything (all user applications closed and the bare minimum of services enabled). If swap is enabled, the system starts to use it (using it all in the end). Even with swap disabled, disk activity becomes continuous and the system unresponsive. For example, right now the system is working (albeit a tad slow) with only Firefox and Wing IDE running, and I have 2 GB cached with only 45 MB mapped:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3247328      99060          0       8416    2117980
        -/+ buffers/cache:     1120932    2225456
        Swap:      2144668     519448    1625220

        $ cat /proc/meminfo
        MemTotal:        3346388 kB
        MemFree:           97128 kB
        Buffers:            7872 kB
        Cached:          2120224 kB
        SwapCached:       413860 kB
        Active:          2304596 kB
        Inactive:         865984 kB
        Active(anon):    2279168 kB
        Inactive(anon):   830236 kB
        Active(file):      25428 kB
        Inactive(file):    35748 kB
        Unevictable:          32 kB
        Mlocked:              32 kB
        HighTotal:       2492940 kB
        HighFree:           5456 kB
        LowTotal:         853448 kB
        LowFree:           91672 kB
        SwapTotal:       2144668 kB
        SwapFree:        1625244 kB
        Dirty:                84 kB
        Writeback:             0 kB
        AnonPages:        629304 kB
        Mapped:            45768 kB
        Slab:              45600 kB
        SReclaimable:      21756 kB
        SUnreclaim:        23844 kB
        PageTables:         4468 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     3817860 kB
        Committed_AS:    3735020 kB
        VmallocTotal:     122880 kB
        VmallocUsed:        9352 kB
        VmallocChunk:      66600 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       4096 kB
        DirectMap4k:       16376 kB
        DirectMap4M:      888832 kB

    If I try to drop the caches, little happens:

        # sync ; echo 3 > /proc/sys/vm/drop_caches ; free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3220580     125808          0       3020    2100600
        -/+ buffers/cache:     1116960    2229428
        Swap:      2144668     519356    1625312

    Right now I have vm.swappiness = 5, but I've also tried 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off. What is happening? How can I avoid this?
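
    A quick way to tell whether the memory is really reclaimable pagecache or anonymous memory pinned by a process is to watch the relevant /proc/meminfo counters and list the largest resident processes. A minimal diagnostic sketch (standard procps tools assumed; nothing here is specific to this machine):

        # largest resident-memory consumers
        ps -eo pid,rss,comm --sort=-rss | head -n 15
        # track pagecache vs. anonymous memory once a minute
        watch -n 60 'grep -E "^(Cached|AnonPages|Mapped|SwapCached):" /proc/meminfo'

    Comparing Cached with Mapped and AnonPages over time helps separate plain file cache (which the kernel can always reclaim) from anonymous or tmpfs memory that only swapping can move.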

    Read the article

  • Problem: Munin Graph

    - by Pablo
    I've been trying to install Munin for 15 days: I looked for information, analyzed logs, and even deleted and reinstalled Munin using YUM. I'm hosted at Media Temple on a VPS with CentOS. The problem is still there and it's driving me nuts. The graphs are rendered as shown here: http://imageshack.us/photo/my-images/833/capturadepantalla201106u.png/

    This is the configuration of my munin.conf file:

        dbdir /var/lib/munin
        htmldir /var/www/munin
        logdir /var/log/munin
        rundir /var/run/munin
        [localhost]
        address **.**.***.*** #IP VPS

    This is the configuration of my munin-node.conf file:

        log_level 4
        log_file /var/log/munin/munin-node.log
        port 4949
        pid_file /var/run/munin/munin-node.pid
        background 1
        setseid 1
        # Which port to bind to;
        host *
        user root
        group root
        setsid yes
        # Regexps for files to ignore
        ignore_file ~$
        ignore_file \.bak$
        ignore_file %$
        ignore_file \.dpkg-(tmp|new|old|dist)$
        ignore_file \.rpm(save|new)$
        allow ^127\.0\.0\.1$

    Thanks so much, I appreciate all the answers.

    UPDATE: munin-graph.log

        Jun 22 16:30:02 - Starting munin-graph
        Jun 22 16:30:02 - Processing domain: localhost
        Jun 22 16:30:02 - Graphed service : open_inodes (0.14 sec * 4)
        Jun 22 16:30:02 - Graphed service : sendmail_mailtraffic (0.10 sec * 4)
        Jun 22 16:30:02 - Graphed service : apache_processes (0.12 sec * 4)
        Jun 22 16:30:02 - Graphed service : entropy (0.10 sec * 4)
        Jun 22 16:30:02 - Graphed service : sendmail_mailstats (0.14 sec * 4)
        Jun 22 16:30:02 - Graphed service : processes (0.14 sec * 4)
        Jun 22 16:30:03 - Graphed service : apache_accesses (0.27 sec * 4)
        Jun 22 16:30:03 - Graphed service : apache_volume (0.15 sec * 4)
        Jun 22 16:30:03 - Graphed service : df (0.21 sec * 4)
        Jun 22 16:30:03 - Graphed service : netstat (0.19 sec * 4)
        Jun 22 16:30:03 - Graphed service : interrupts (0.14 sec * 4)
        Jun 22 16:30:03 - Graphed service : swap (0.14 sec * 4)
        Jun 22 16:30:04 - Graphed service : load (0.11 sec * 4)
        Jun 22 16:30:04 - Graphed service : sendmail_mailqueue (0.13 sec * 4)
        Jun 22 16:30:04 - Graphed service : cpu (0.21 sec * 4)
        Jun 22 16:30:04 - Graphed service : df_inode (0.16 sec * 4)
        Jun 22 16:30:04 - Graphed service : open_files (0.16 sec * 4)
        Jun 22 16:30:04 - Graphed service : forks (0.13 sec * 4)
        Jun 22 16:30:05 - Graphed service : memory (0.26 sec * 4)
        Jun 22 16:30:05 - Graphed service : nfs_client (0.36 sec * 4)
        Jun 22 16:30:05 - Graphed service : vmstat (0.10 sec * 4)
        Jun 22 16:30:05 - Processed node: localhost (3.45 sec)
        Jun 22 16:30:05 - Processed domain: localhost (3.45 sec)
        Jun 22 16:30:05 - Munin-graph finished (3.46 sec)
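
    To rule the node side in or out, it can help to query munin-node directly and run the collection cycle by hand; a rough sketch (the munin-cron path and the dbdir are assumptions based on a standard RPM layout):

        # talk to the node directly: type "list", then "fetch cpu", then "quit"
        telnet localhost 4949
        # run the collection/graphing cycle manually as the munin user and watch for errors
        sudo -u munin /usr/bin/munin-cron
        # confirm the RRD files under dbdir are actually being updated
        ls -l /var/lib/munin/localhost/

    If the RRD files update but the rendered PNGs stay broken, the problem is on the graphing/rendering side rather than in data collection.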

    Read the article

  • Server 2008 R2 DNS Lockup

    - by Richard Maynard
    Hi, we've deployed our first 2008 R2 server on a client site, replacing their existing 2003 DC. This server provides DNS resolution to all client machines on that site for general internet usage. Since moving to the 2008 R2 DNS service we have noticed that every couple of days the DNS server starts timing out when requests to certain sites are made (Google is the only example I can provide at this time, although it seems to affect larger sites rather than small ones - a CDN compatibility issue?). When you restart the DNS Server service, resolution returns to normal... but only for a day or so. Is anybody aware of any significant changes to the DNS server architecture or out-of-the-box configuration in R2 that may explain this intermittent behaviour? I have already tried the fix listed here to no avail: http://weblogs.asp.net/owscott/archive/2009/09/15/windows-server-2008-r2-dns-issues.aspx The following PowerShell session illustrates the issue:

        PS C:\Users\Administrator.UK> nslookup
        Default Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4

        > www.google.com
        Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4
        Non-authoritative answer:
        Name:    www.l.google.com
        Addresses:  66.102.9.99 66.102.9.104 66.102.9.105 66.102.9.103 66.102.9.147
        Aliases:  www.google.com

        > www.google.co.uk
        Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4
        * s8209001.uk.kingdomfaith.com can't find www.google.co.uk: Server failed

    Thanks in advance. Regards,
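
    When the timeouts start, it is worth narrowing down whether recursion on the server or the upstream path is failing before bouncing the service; a hedged sketch (dnscmd is installed with the DNS role, and the forwarder address is a placeholder to replace with your own):

        PS> nslookup www.google.co.uk                      # resolve through the local DNS server
        PS> nslookup www.google.co.uk <your-forwarder-ip>  # query the upstream resolver directly
        PS> dnscmd /clearcache                             # flush the server-side cache without restarting the service

    If the direct upstream query succeeds while local recursion fails, the problem lies in the server's recursion/EDNS handling rather than in connectivity.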

    Read the article

  • Web based KVM management for Ubuntu

    - by Tim
    Hey all, We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pickup KVM machines from virsh / virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10.. (And I'd really rather not waste another few days on testing stuff that might not work in the end.) Can anyone recommend any good web based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like apache and postgresql besides hosting virtual machines, so preferably fairly lightweight & no dedicated OS installs. We don't need any professional clustering / migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page. Best regards, Tim

    Read the article

  • Side-By-Side Configuration Error VC90.CRT

    - by Swiss
    I keep receiving the following error when trying to run MiKTeX 2.8 or Visual Studio 2008 on 64-bit Windows Vista. It's particularly odd because both programs were working problem-free until a few days ago.

        The application has failed to start because its side-by-side configuration is incorrect.
        Please see the application event log for more detail.

    Opening the Application log provides the following information:

        Activation context generation failed for "C:\Program Files (x86)\MiKTeX 2.8\miktex\bin\texworks.exe".
        Error in manifest or policy file "C:\Program Files (x86)\MiKTeX 2.8\miktex\bin\Microsoft.VC90.CRT.MANIFEST" on line 4.
        Component identity found in manifest does not match the identity of the component requested.
        Reference is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.4148".
        Definition is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.1".
        Please use sxstrace.exe for detailed diagnosis.

    It looks like the problem is with Microsoft.VC90.CRT.MANIFEST, but I am not sure why, or how to fix it. I have tried uninstalling/reinstalling Visual Studio and MiKTeX, as well as uninstalling/reinstalling Microsoft's C++ Redistributable, but nothing seems to fix the problem.
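
    sxstrace, which the event text suggests, will show exactly which manifest and which installed assembly version are colliding; a short sketch from an elevated command prompt (sxstrace ships with Vista and later):

        sxstrace Trace -logfile:SxsTrace.etl
        rem reproduce the error, then press Enter to stop the trace
        sxstrace Parse -logfile:SxsTrace.etl -outfile:SxsTrace.txt
        notepad SxsTrace.txt

    Given that the manifest asks for 9.0.30729.4148 while only 9.0.30729.1 is registered, reinstalling the Visual C++ 2008 SP1 (x86) redistributable that carries the newer ATL security update is a plausible fix, though that is an assumption based purely on the version numbers above.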

    Read the article

  • FFMPEG Install on EC2 - Amazon Linux

    - by Oliver Holmberg
    Hello Serverfault friends, I am about two days into attempting to install FFmpeg with dependencies on an AWS EC2 instance running the Amazon Linux AMI. I've installed FFmpeg on Ubuntu and Fedora systems with no problems in the past, and have read reportedly successful instructions for installing on Red Hat/Fedora. I have followed a number of tutorials and forum articles, but have had no luck yet. As far as I can tell, the main problems are as follows:
    1. The Amazon Linux (most similar to Red Hat/CentOS) yum repositories don't have FFmpeg available. I have found instructions for adding repositories that include the required packages, but adding those repositories causes yum to fail when updating packages. (Also, I've read some cautionary tales about adding Red Hat/CentOS repositories to Amazon Linux that lead me to believe it may be a bad idea: https://forums.aws.amazon.com/thread.jspa?messageID=229166)
    2. I have tried the more involved route of downloading the source tarball, compiling, and installing, but this always fails due to missing dependencies and other errors.
    On to my question: Has anyone successfully installed FFmpeg on Amazon Linux? Is there a fundamental incompatibility? If anyone could share specific instructions on installing FFmpeg on Amazon Linux I would be greatly appreciative. Any other insights/experiences would also be appreciated. Thanks in advance, Oliver
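
    For reference, a minimal source build of the kind that works on Red Hat-style systems is sketched below; the package names, snapshot URL and extracted directory name are assumptions and may need adjusting for Amazon Linux:

        sudo yum install -y gcc make
        # yasm is needed for recent FFmpeg builds and may not be in the default repos
        wget http://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
        tar xjf ffmpeg-snapshot.tar.bz2
        cd ffmpeg            # directory name can differ between snapshots
        ./configure --prefix=/usr/local --enable-gpl
        make && sudo make install

    Building without the optional encoders first, then adding external libraries one at a time, keeps the missing-dependency errors manageable.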

    Read the article

  • AWStats consumes too much resource, how to disable temporarily

    - by trante
    For some days now AWStats has been taking 10%-20% of my CPU and 400-550 MB of RAM, and it runs for hours. Maybe my site's traffic has grown so processing takes longer than before, or maybe a bug in the program causes this. Either way, I want to disable AWStats temporarily; I may want to activate it again in the future. I found an answer for this, but it gives commands that remove AWStats - I only want to disable it temporarily. My system is CentOS 6.3, Plesk 11.5.30 Update #19. I tried to disable the cron jobs. I ran:

        # killall awstats.pl

    and I edited /etc/cron.daily/awstats to read:

        #!/bin/sh
        #/usr/share/awstats/awstats_updateall.pl now -awstatsprog=/var/www/cgi-bin/awstats/awstats.pl -configdir=/etc/awstats >/dev/null 2>&1
        exit 0

    After some time I still see that AWStats is running. What more should I do to keep AWStats from running again, without removing my files? After changing /etc/cron.daily/awstats, AWStats no longer starts during the day, but every night at 03:15 it starts again. Because Plesk auto-updates run at that time, I changed the Plesk setting so it does not auto-update automatically, but it seems AWStats still started at 03:15 last night. Is there any way to stop AWStats temporarily other than the solution below? That solution deletes the AWStats configs permanently and I don't know how to revert it later.

    Turn off all AWStats for Plesk 11+ domains:

        #!/bin/bash
        for i in /var/www/vhosts/*; do
            echo "Turning off and deleting Stats for"
            echo `basename $i`
            /usr/local/psa/admin/bin/webstatmng --unset-configs --stat-prog=awstats --domain-name=`basename $i`
            /usr/local/psa/admin/bin/webstatmng --clean --stat-prog=awstats --domain-name=`basename $i`
        done
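
    A less invasive way to pause AWStats is to make the cron entry non-executable and then find whatever re-launches it at 03:15; a hedged sketch (Plesk paths differ between versions):

        chmod -x /etc/cron.daily/awstats    # run-parts only executes files with the execute bit set
        # find every cron entry that mentions awstats or the Plesk statistics run
        grep -rl awstats /etc/cron.d /etc/cron.daily /etc/crontab 2>/dev/null
        grep -rl statistics /etc/cron.daily /etc/cron.d 2>/dev/null

    On Plesk it is usually the nightly maintenance task, not the awstats cron file itself, that triggers the per-domain statistics run, so that is the entry to comment out temporarily.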

    Read the article

  • vsftpd: No available certificate or key corresponds to the SSL cipher suites which are enabled?

    - by Arcturus
    Hello. I'm trying to setup vsftpd on Fedora 12. I need to require use of FTPS, and for now need to use a self-signed SSL certificate. I managed to get the vsftpd service running and to connect as my user. I can list the home directory, but as soon as I try to list another directory, download or upload a file, I get this error:

        No available certificate or key corresponds to the SSL cipher suites which are enabled.

    And the xfer log is empty. I've been Googling it for a while now, but still can't understand the problem. Here's how I installed vsftpd:

        su
        yum install vsftpd
        chkconfig vsftpd on
        service vsftpd start

    I tried to generate the certificate in two ways. Here's the first one:

        cd /etc/vsftpd
        openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout vsftpd.pem -out vsftpd.pem

    Here's the second way:

        cd /etc/pki/tls/certs
        make vsftpd.pem

    Here's my vsftpd configuration:

        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=022
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/vsftpd.log
        xferlog_std_format=YES
        nopriv_user=ftpsecure
        chroot_local_user=YES
        chroot_list_enable=YES
        chroot_list_file=/etc/vsftpd/chroot_list
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        # SSL settings
        ssl_enable=YES
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        rsa_cert_file=/etc/pki/tls/certs/vsftpd.pem
        allow_anon_ssl=NO
        ssl_tlsv1=YES
        ssl_sslv2=NO
        ssl_sslv3=NO

    Does anyone know what the problem is and how to solve it?
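
    One thing worth verifying is that the PEM file vsftpd actually points at contains both the private key and the certificate, and that the enabled cipher list matches that key; a hedged sketch (the ssl_ciphers directive exists in recent vsftpd releases, but check your version's man page):

        # expect two BEGIN blocks: the private key and the certificate
        grep -c "BEGIN" /etc/pki/tls/certs/vsftpd.pem
        openssl x509 -in /etc/pki/tls/certs/vsftpd.pem -noout -subject -dates
        # if the key lives in a separate file, point vsftpd at it explicitly in vsftpd.conf:
        #   rsa_private_key_file=/etc/pki/tls/private/vsftpd.key
        # and/or widen the cipher list vsftpd will offer:
        #   ssl_ciphers=HIGH

    Also note that rsa_cert_file references /etc/pki/tls/certs/vsftpd.pem, so the certificate generated in /etc/vsftpd is never used; make sure the file you intend to serve is the one that directive names.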

    Read the article

  • RAID md device is not removed from memory; how can I overcome this problem?

    - by santhosha
    I created a RAID 10 array (md11) and removed two member devices from it, one by one. After that I went to edit the contents that were mounted (the mount stopped responding), and when I then tried to remove the remaining members it reported "device or resource busy" (the array is not removed from memory). I tried to terminate the processes involved, but that does not work either. For 4 days the resync has sat at 8.0% and does not change.

        cat /proc/mdstat
        Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10]
        md11 : active raid10 sde1[3] sdj1[4]
              286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0]
              [=...................]  resync =  8.0% (23210368/286743936) finish=289392.6min speed=15K/sec

        mdadm -D /dev/md11
        /dev/md11:
                Version : 00.90.03
          Creation Time : Sun Jan 16 16:20:01 2011
             Raid Level : raid10
             Array Size : 286743936 (273.46 GiB 293.63 GB)
            Device Size : 143371968 (136.73 GiB 146.81 GB)
           Raid Devices : 4
          Total Devices : 2
        Preferred Minor : 11
            Persistence : Superblock is persistent
            Update Time : Sun Jan 16 16:56:07 2011
                  State : active, degraded, resyncing
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                 Layout : near=2, far=1
             Chunk Size : 64K
         Rebuild Status : 8% complete
                   UUID : 5e124ea4:79a01181:dc4110d3:a48576ea
                 Events : 0.23
            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       0        0        1        removed
               4       8      145        2        faulty spare rebuilding   /dev/sdj1
               3       8       65        3        active sync   /dev/sde1

        umount /dev/md11
        umount: /dev/md11: not mounted

        mdadm -S /dev/md11
        mdadm: fail to stop array /dev/md11: Device or resource busy

        lsof /dev/md11
        COMMAND     PID USER   FD   TYPE DEVICE SIZE NODE NAME
        mount      2128 root    3r   BLK   9,11      4058 /dev/md11
        mount      5018 root    3r   BLK   9,11      4058 /dev/md11
        mdadm     27605 root    3r   BLK   9,11      4058 /dev/md11
        mount     30562 root    3r   BLK   9,11      4058 /dev/md11
        badblocks 30591 root    3r   BLK   9,11      4058 /dev/md11

        kill -9 2128
        kill -9 5018
        kill -9 27605
        kill -9 30562
        kill -3 30591

        mdadm -S /dev/md11
        mdadm: fail to stop array /dev/md11: Device or resource busy

        lsof /dev/md11
        COMMAND     PID USER   FD   TYPE DEVICE SIZE NODE NAME
        mount      2128 root    3r   BLK   9,11      4058 /dev/md11
        mount      5018 root    3r   BLK   9,11      4058 /dev/md11
        mdadm     27605 root    3r   BLK   9,11      4058 /dev/md11
        mount     30562 root    3r   BLK   9,11      4058 /dev/md11
        badblocks 30591 root    3r   BLK   9,11      4058 /dev/md11

        cat /proc/mdstat
        Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10]
        md11 : active raid10 sde1[3] sdj1[4]
              286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0]
              [=...................]  resync =  8.0% (23210368/286743936) finish=289392.6min speed=15K/sec
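
    For what it's worth, the usual sequence for a stuck array looks like the sketch below (device names follow the question). kill -9 cannot remove a process that is stuck in uninterruptible I/O (state D in ps), which is common when a resync has hung, and in that case only a reboot releases the device:

        fuser -vm /dev/md11                                   # who still holds the device open
        ps -o pid,stat,cmd -p 2128,5018,27605,30562,30591     # STAT "D" = uninterruptible sleep, immune to signals
        mdadm /dev/md11 --fail /dev/sdj1 --remove /dev/sdj1   # detach the faulty member if possible
        mdadm --stop /dev/md11                                # only succeeds once every opener is gone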

    Read the article

  • Pros and cons of server management GUI tools for managing Linux web servers

    - by ajsie
    I have stumbled upon several GUI tools that could help you manage your Linux server through a web interface: eBox, Webmin, ISPConfig, Zivios, ispCP, Plesk, cPanel, etc. I wonder what the pros and cons of these solutions are. A lot of people say they are not as good as managing your server purely from the command line (SSH), but I think that's yet another "Linux is for advanced users" argument. I agree that a lot of things can only be done at the command line by editing the configuration files directly, but I don't really want to do that every time and for everything, especially for the basic configuration these tools could manage. It's like not having phpMyAdmin for managing MySQL - it would be a pain, right? So if one wants to throw up a web server serving a self-developed PHP site and wants all the usual stuff up and running (MySQL, phpMyAdmin, SVN, WebDAV, etc.), are these tools the right way to go, with the terminal reserved for more advanced features as in the old days? Is this a smart way of managing a Linux server? Which one would you choose? Have you used any of these, and could you share your thoughts about them?

    Read the article

  • Issue configuring Oracle database for SSL

    - by Santhosha Kaldambe
    Hello, I want to set up Oracle for SSL communication. I am not using SSL authentication for the database user. As a first requirement I generated a self-signed certificate using OpenSSL and added the certificate to a wallet; the wallet location is specified in the server configuration. I created the listener and it starts, however it does not provide any service. The default (non-SSL) listener is working fine. When I execute LSNRCTL.EXE status SSLLISTENER it gives the output below:

        STATUS of the LISTENER
        Alias                     SSLLISTENER
        Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
        Start Date                14-NOV-2009 01:47:08
        Uptime                    16 days 22 hr. 14 min. 3 sec
        Trace Level               off
        Security                  ON: Local OS Authentication
        SNMP                      OFF
        Listener Parameter File   C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora
        Listener Log File         c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml
        Listening Endpoints Summary...
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=2484)))
        The listener supports no services
        The command completed successfully

    Here is the exact content of the various files after configuration.

    1) File Name: tnsnames.ora

        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    2) File Name: sqlnet.ora

        SSL_VERSION = 0
        NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
        sqlnet.authentication_services= (NONE)
        tcp.validnode_checking = no
        tcp.invited_nodes=(PS0803.oraebs.com,PS2948,PS5098)
        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )

    3) File Name: listener.ora

        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
            )
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
            )
          )
        SSLLISTENER =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484))
          )

    Thanks, Santhosh
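
    "The listener supports no services" usually just means the database never registered with the SSL listener. A hedged sketch of static registration in listener.ora (the ORACLE_HOME below is inferred from the Listener Parameter File path shown above; adjust if it differs):

        SID_LIST_SSLLISTENER =
          (SID_LIST =
            (SID_DESC =
              (GLOBAL_DBNAME = orcl)
              (ORACLE_HOME = C:\app\Administrator\product\11.1.0\db_1)
              (SID_NAME = orcl)
            )
          )

    After adding it, run lsnrctl reload SSLLISTENER and re-check lsnrctl status SSLLISTENER; the alternative is dynamic registration, by setting the database's LOCAL_LISTENER parameter to point at the TCPS endpoint.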

    Read the article

  • Exchange 2010: Replication Service Still Trying to Replicate Deleted Mailbox Store

    - by ThaKidd
    In advance, thank you for your opinions! I just migrated from Server/Exchange 2003 to Server 2008 SR2 running Exchange 2010. I had an extra mailbox that appeared with some system mailboxes in it. I used the EMS to move those mailboxes over and then deleted the store out of the EMC. Since then every so often I get an Error in Event Viewer. Source: MSExchangeRepl ID: 4098 Error: The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'EXCHSERVER'. Error: (nothing after this) I can confirm that the above GUID is the mailbox store of that I deleted. No other Exchange errors occur. How can I tell Exchange Replication to ignore this store? Setup, one Exchange server 2003 transitioned over to 2010. No other Exchange servers. Is there a way to fix this? Do I need to change a setting to stop replication? I plan to add a second Exchange server in the next few days so stopping replication would be a bad thing. Thanks again in advance. Jason

    Read the article

  • File transfer problems through VPN when Cisco IPS is enabled

    - by Richard West
    We have a Cisco ASA 5510 firewall with the IPS module installed. We have a customer we must connect to via VPN in order to exchange files over FTP. We use the Cisco VPN client (version 5.0.01.0600) on our local workstations, which are behind the firewall and subject to the IPS. The VPN client connects to the remote site successfully. However, when we start the FTP file transfer we are able to upload only 150K to 200K of data, then everything stops; a minute later the VPN session is dropped. I think I have isolated this to an IPS issue by temporarily disabling the Service Policy on the ASA for the IPS with the following command:

        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 inactive

    After this command was issued I established the VPN to the remote site and successfully transferred the entire file. While still connected to the VPN and FTP session I issued the command to re-enable the IPS:

        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0

    The file transfer was tried again and was once again successful, so I closed the FTP session and reopened it while keeping the same VPN session open. This file transfer was also successful. That told me nothing in the FTP programs was being filtered or causing the problem. Furthermore, we use FTP to exchange files with many sites every day without issue. I then disconnected the original VPN session, which had been established while the access-list was inactive, and reconnected the VPN session, now with the access-list active. After starting the FTP transfer the file stopped after 150K. To me this looks like the IPS is blocking, or somehow interfering with, the initial VPN setup to the remote site. This only started happening last week after the latest IPS signature updates were applied (sig version 407.0). Our previous sig version was 95 days old because the system was not auto-updating itself. Any ideas on what could be causing this problem?

    Read the article

  • RAID1 Broken Mirroring

    - by Sanoj
    I have a small server running Windows Small Business Server 2003. I'm using RAID1 via a HighPoint RocketRAID 1640 RAID card with two hard drives. This week the server raised an alarm, and during reboot I got the error message Broken Mirroring (User Manual page 30). I had a few alternatives (see the manual); first I tried Continue, but the server restarted during boot. The next time I chose Power Off, replaced the oldest hard drive with a new one, and on boot selected Rebuild, choosing the new hard drive as the replacement. The rebuild procedure started and a progress bar at 0% showed up, but after a few seconds I got the message Copy Failed!, then the server booted and Windows Server started. Now it works fine, but I guess I'm running on just one hard drive and it's not mirrored. I haven't touched the server since then (two days ago). What should I do now? I have no experience with this situation. Does anyone have some guidance?

    Read the article

  • Easiest way to allow direct HTTPS connection in Intercept mode?

    - by Nick Lin
    I know the SSL issue has been beaten to death, but here goes. I'm using a DNS redirect to force my clients to use my intercept proxy. As we all know, intercepting an HTTPS connection is not possible unless I provide a fake certificate. What I want to achieve is to let all HTTPS requests connect directly to the origin server, thus bypassing Squid:
    - HTTP connection: proxied by Squid
    - HTTPS connection: bypasses Squid and connects directly
    I spent the past few days googling and trying different methods, but none have worked so far. I read about SSL tunneling using the CONNECT method but couldn't find any more information on it. I tried something similar using RINETD to forward all traffic arriving on port 443 of my Squid box back to the original IP of www.pandora.com. Unfortunately, I did not realize that all other HTTPS requests would also be forwarded to the IP of www.pandora.com; for example, https://www.gmail.com also takes me to https://www.pandora.com. Since I'm running intercept mode, the forwarding needs to be dynamic and match each HTTPS domain name to its proper original IP. Can this be done in Squid or iptables? Lastly, I'm directing traffic to my Squid server using a DNS zone redirect: a client requests www.google.com, my DNS server points that name at my Squid IP, and my transparent Squid proxies the request. Will this setup affect what I'm trying to achieve? I've tried many methods but couldn't get it to work. Any ideas on how to do this?
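
    For what it's worth, dynamic pass-through of HTTPS only works when the proxy box sits in the routing path, because then the packets still carry the real destination IP; with DNS-based redirection that information is already gone by the time the traffic reaches Squid. A rough sketch of the gateway-style setup where only port 80 is intercepted (interface names and the Squid port are assumptions):

        # on the gateway running Squid: intercept plain HTTP only
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
        # let everything else, including 443, be routed/NATed untouched
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward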

    Read the article

  • Open Source PDF reader for windows as an alternative to Adobe reader

    - by Tom Feiner
    With the latest JavaScript vulnerabilities in Adobe Reader and the bloat it has acquired over the years, I've been thinking of moving the network I'm in charge of to a different product for PDF reading on Windows. The ideal PDF reader should be something that is:
    - Small in size (Adobe Reader is more than 200 MB these days after installation).
    - As secure by default as possible (for example, JavaScript disabled by default).
    - Nice looking, with an easy-to-use interface.
    - Not bloated with features (I just want to read PDFs, that's it).
    - Does not install any toolbars/unwanted add-ons/spyware.
    - Does not display any ads while viewing PDFs.
    - Preferably open source (this pretty much ensures no ads).
    - Full Unicode support.
    Ideally, something like Evince from GNOME would be the best option, but unfortunately that's not available on Windows. Foxit is an option, as it is small and has a nice interface, but it still has JavaScript enabled by default, which might lead to vulnerabilities - and it installs a toolbar and displays ads while reading PDFs, which is distracting. There is a site dedicated to open source PDF readers, pdfreaders.org; however, the Windows PDF readers there each have their problems, mostly interfaces that are not as convenient (as Evince, Adobe or Foxit). Here's a list of all PDF software from Wikipedia; there's a "Viewers" section for each OS. What Windows PDF reader would you recommend?

    Read the article

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6 node failover cluster of blades using Hyper V. We have an intermittent issue (every few days at different times - not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine and the underlying blade has normal connectivity. To resolve the problem we either have to re-start the VM or, more usually, we do a live migration to another blade which fires up connectivity and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade however it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem as the event logs provide no help? Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me - a failover or restart of the VM resolves the issue. Whilst I need to work out the underlying issue that is causing the NICs to hang I'm also concerned that the VM didn't failover to another node which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand the cluster is assuming that the VM is running happily as I presume Hyper V says everything is great even though there is a problem.

    Read the article

  • OpenVPN via DD-WRT

    - by user140491
    I am using DD-WRT with my Buffalo G300NH. I notice this in my log files:

        Wed Oct 10 01:08:25 2012 us=343000 Cannot open /tmp/openvpn/dh.pem for DH parameters: error:02001003:system library:fopen:No such process: error:2006D080:BIO routines:BIO_new_file:no such file

    I have looked at other answers regarding this error and tried them, to no avail. The permissions on /tmp/openvpn are 755. At this point I cannot connect from outside my LAN via OpenVPN. My server config looks like this:

        #mode server
        #tls-server
        push "route 192.168.11.1 255.255.255.0"
        push "dhcp-option DNS 10.8.0.1"
        server 10.8.0.0 255.255.255.0
        port 1194
        proto udp
        dev tun0
        ifconfig 10.8.0.1 10.8.0.2
        #secret /tmp/static.key
        ca /tmp/openvpn/ca.crt
        cert /tmp/openvpn/cert.pem
        key /tmp/openvpn/key.pem
        dh /tmp/openvpn/dh.pem
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        verb 5
        management localhost 5001

    Can someone knowledgeable about this error kindly help? I have been at it for several days trying to sort it out. I like all-nighters, though!!
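
    The log is only complaining that the dh.pem named in the config does not exist; it can be generated with openssl on a PC (generation is slow on router hardware) and then placed where the config points. A minimal sketch (key size is a choice, not taken from the question; note that /tmp on DD-WRT is cleared at reboot, so the file normally has to be provisioned through the web GUI or a startup script):

        openssl dhparam -out dh.pem 1024
        # copy it to the router, then:
        cp dh.pem /tmp/openvpn/dh.pem
        chmod 600 /tmp/openvpn/dh.pem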

    Read the article

  • Exchange 2007 two node cluster setup on Windows 2008 Enterprise, install error

    - by Shadow00Caster
    I am installing Microsoft Exchange 2007 x64 in a two node environment using Microsoft Windows 2008 Enterprise x64. The Failover Cluster is all setup properly and following best practices for setting up the windows clustering for use with Exchange 2007. All the validation tests pass on the cluster and all of that portion is working fine. The problem is when I go to install the first Exchange node as an Active mailbox in configuration for a two node CCR. It gets all the way through the first 3 steps (Copy Exchange Files, Management Tools, Mailbox Role) and then fails on the 4th step 'Clustered Mailbox Server' with the following error: Error: The clustered mailbox server's group 'XXXX' was not found, and should already exist. Firewalls are all disabled, DNS is all setup properly, the environment has 3 domain controllers all 2k8 ent x64, all replication works. The name I pick for the CCR cluster (XXX) does not exist in AD or in DNS. I have attempted this install from both of the two Exchange nodes and multiple times .. tried with different names. I have been banging my head against the wall for days working on this and would appreciate any feedback on the issue.

    Read the article

  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says, it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes:
    1. Running OS X 10.6 with a normal, local account
    2. Work binds my account to Active Directory
    3. Much time passes with no issues
    4. Set up rvm to manage Ruby installs (this becomes important later)
    5. Upgraded to OS X 10.7 a few days ago
    6. After a successful install, attempted to log in, was presented with a "Must reset password" dialog that never allowed a password to be reset. It would simply shake the box after the new password was entered.
    7. Much googling was done. Much more googling was done. Swearing was had.
    8. Logged in as root, created a new account, set it as admin, deleted /Users/[new account], renamed /Users/[old account] to /Users/[new account]
    9. Logged out of root, logged into the new account with no issues
    After OS X asked for my account password a few times to update Keychain and other system-level stuff it was back to business as usual. Opened Terminal, cd'd to the project folder, tried "rails server" and was presented with:

        /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError)
            from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
            from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
            from /usr/local/bin/rails:18:in'

    Ran through a few exercises, decided to rm -rf ~/.rvm and reinstall. Running a --trace on the rvm installer shows it dies on this line:

        mkdir: /Users/[old account]: Permission denied

    Scrolling back through the --trace log I see many more mentions of /Users/[old account]. When I inspect the install script, the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile to no luck. Can anyone guess why /Users/[old account] would still be kicking around?
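
    On 10.7 the home directory that bash (and therefore rvm) sees ultimately comes from Directory Services, not from whatever the folder in /Users happens to be called, so it is worth checking what the local node records for the account; a hedged sketch (newaccount/oldaccount are placeholders for the real short names):

        # what does Directory Services think the home directory is?
        dscl . -read /Users/newaccount NFSHomeDirectory
        # if it still points at the old path, change it:
        sudo dscl . -change /Users/newaccount NFSHomeDirectory /Users/oldaccount /Users/newaccount
        # log out and back in, then confirm
        echo $HOME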

    Read the article

  • Can't install Visual Studio 2010 SP1 from an .ISO file I downloaded. Error inside

    - by Sergio
    This is the error: [Window Title] C:\Users\Sergio\Desktop\Things\Setup.exe [Content] The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher. [OK] I'm running Windows 7 (64bit) Ultimate and have installed this service pack before (2 days ago) on another machine with similar specs and the same exact OS software. I've tried mounting the .ISO file to a virtual drive and installing from there and I get that error. I've tried mounting the .ISO and copy pasting the files to a local folder on my drive and then running the setup.exe application, and I get that error. I don't know how to proceed but can provide any additional information you require from me. What can I do to fix this? Edit If I right click Setup.exe and Run As Administrator, I get the following error: [Window Title] C:\Users\Sergio\Desktop\Things\Setup.exe [Content] Windows cannot find 'C:\Users\Sergio\Desktop\Things\Setup.exe'. Make sure you typed the name correctly, and then try again. [OK] I've already tried re downloading the ISO from the site, but a quick check of the bytes of the file assures me that the ISO on my drive is 100% correctly downloaded. I get the same amount of bytes in size from the downloading ISO (as Opera reports).

    Read the article

  • red5 Install on Ubuntu 10.04? Problems with libslf4j

    - by mrgordon
    I've been trying for many days to get Red5 to install on Ubuntu 10.04. I finally managed to get red5.sh to stop hanging a few seconds in, but now I'm getting the following error:

        Setting default logging context: default
        Exception in thread "main" java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.red5.server.Bootstrap.bootStrap(Bootstrap.java:135)
            at org.red5.server.Bootstrap.main(Bootstrap.java:50)
        Caused by: java.lang.NoSuchMethodError: org.slf4j.impl.StaticLoggerBinder.getContextSelector()Lch/qos/logback/classic/selector/ContextSelector;
            at org.red5.logging.Red5LoggerFactory.getLogger(Red5LoggerFactory.java:121)
            at org.red5.logging.Red5LoggerFactory.getLogger(Red5LoggerFactory.java:108)
            at org.red5.server.Launcher.launch(Launcher.java:51)
            ... 6 more

    I suspected that this had to do with slf4j not being installed or not being on my classpath. I installed logback and libslf4j-java from aptitude and I see related files in my red5 lib directories, for example:

        /usr/share/red5/lib/slf4j-api-1.6.1.jar
        /usr/share/red5/lib/log4j-over-slf4j-1.6.1.jar
        /usr/share/red5/lib/logback-classic-0.9.26.jar
        /usr/share/red5/lib/logback-core-0.9.26.jar
        /usr/share/red5/lib/jcl-over-slf4j-1.6.1.jar
        /usr/share/red5/lib/jul-to-slf4j-1.6.1.jar

    And I set my classpath to /usr/share/red5/lib/. Any ideas on where to proceed from here? There seem to be a lot of people having trouble getting 10.04 and Red5 0.9 working together. I've tried red5-0.9.1.tar.gz and red5_0.9.0-RC1_all.deb. The libraries above should be all that are needed according to Red5's documentation, and I got the latest version of each.
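
    That NoSuchMethodError is the classic symptom of two different slf4j/logback bindings being loaded at once, for example the distro's copies in /usr/share/java shadowing the ones shipped in /usr/share/red5/lib; a hedged way to check:

        # any slf4j/logback jars outside the red5 tree?
        ls /usr/share/java | grep -iE 'slf4j|logback'
        # does the startup script or your environment pull them onto the classpath?
        grep -n "usr/share/java" /usr/share/red5/red5.sh 2>/dev/null
        echo $CLASSPATH
        # if the system-wide copies conflict, removing the package added earlier may help
        sudo aptitude remove libslf4j-java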

    Read the article

  • Windows 7 migration led to crashdump and hibernate problems

    - by MartyMacGyver
    Note: I'm using a Samsung 830 SSD (migrated OS from defunct PC) and other than these two (interrelated?) problems it's working fine. Surprisingly well actually. Motherboard is a ASUS P8Z77-V Deluxe. Problem 1: Crashdumps are not working. volmgr throws an event 45 "The system could not sucessfully load the crash dump driver." whenever you modify crashdump settings, or if a crashdump occurs. diskpart says that "Crashdump disk = no" which is peculiar. Problem 2: Hibernation isn't working. Again, volmgr throws the same event 45 if you try to hibernate. The screen blanks, then you're at the password prompt. No sleepage occurs. (Yes, I know I should avoid hibernation on SSDs but it's enabled and the hibernation file is definitely there so I'd like to know why it's failing). Diskpart claims "Hibernation file = no" which is again peculiar... it's plainly there and getting created by the system. The common factor appears to be volmgr and/or the crashdump "service" (if that's what it is). I'd much rather get this working than spend days reinstalling and reconfiguring the entire system, especially when it's working perfectly otherwise. Sleep works as well (as long as it's not hybrid sleep). So, what defines the flags "Crashdump disk" and "Hibernation file disk" in diskpart's output? And what might be going wrong that's breaking crashdumps in particular?
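
    Both features depend on the dump/hibernation plumbing being tied to the boot volume and its storage driver, which can be lost when an installation is moved to a different disk or controller; a hedged set of checks from an elevated prompt:

        rem recreate hiberfil.sys on the new volume
        powercfg -h off
        powercfg -h on
        rem confirm the crash-dump setting and that a pagefile exists on C:
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled
        wmic pagefile list full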

    Read the article

  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP: Apache 2.2.11, MySQL 5.1.36 (InnoDB engine), PHP 5.3.0. I observe that my WAMP server crashes in the following scenarios:
    - If I use a low-end PC (low processor speed and low RAM).
    - After making some changes to the httpd.conf file, e.g. changing the Allow from IP address. But here it crashes only once and then starts to work fine.
    - Random crashes, with this crash log:

        szAppName : httpd.exe   szAppVer : 2.2.11.0   szModName : php5ts.dll
        szModVer : 5.3.0.0   offset : 0000c309
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt

    My questions: can any of the following cause the HTTP server to crash?
    - High CPU utilization / low RAM?
    - Excessive file reading, as in every 10 seconds?
    - Unlimited script execution time? I have set the maximum execution time in the PHP script to 0, as my script sometimes has to run for 2-3 days. Is there any way to avoid this?
    - Database access? Should we take a lock before reading and writing?
    Can these be the reasons for random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun

    Read the article

  • Still about SSD potentials...write and read speed

    - by Macroideal
    I have been working with SSDs (solid state disks) for several months, and problems and questions keep hitting me unexpectedly, since I am new to SSDs. Lately I have been testing the read and write speed of the SSD, which I have always cared about; however, the results turned out worse than I expected. Three kinds of read/write were implemented in my test:
    1. Read and write directly from and to the SSD, opening the SSD as a whole device - in Windows: _open("\\:g", ***). This can be very tricky and hairy: you have to write data whose size is a multiple of 512 bytes, at a disk position that is a multiple of 512 bytes. So if you want to write just one byte or 4 bytes, you have to write at least a whole sector at a time.
    2. Read and write data from and to files located on the SSD.
    3. Read and write data from and to files on a mechanical disk.
    I compared the practices above and found the SSD disappointing - it performs worse than the mechanical disk. So I am wondering how I can get at the potential performance of the SSD, since SSDs are said to be the future substitute for mechanical disks. Nevertheless, when I test the SSD with professional hard-disk tools, it is about twice as fast as the mechanical disk. So, why?

    Read the article
