Search Results

Search found 14187 results on 568 pages for 'dell mini 12'.

  • Why does traceroute take much longer than ping?

    - by PHP
    How to explain this?

        C:\Documents and Settings\Administrator>tracert google.com

        Tracing route to google.com [64.233.189.104]
        over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  192.168.0.1
          2     7 ms    <1 ms    <1 ms  reserve.cableplus.com.cn [218.242.223.209]
          3   108 ms   135 ms   163 ms  211.154.70.10
          4     *        *        *     Request timed out.
          5     2 ms     *        1 ms  211.154.64.114
          6     1 ms     1 ms     1 ms  211.154.72.185
          7     1 ms     1 ms     1 ms  202.96.222.77
          8     2 ms     1 ms     2 ms  61.152.81.145
          9     1 ms     2 ms     1 ms  61.152.86.54
         10     1 ms     1 ms     1 ms  202.97.33.238
         11     2 ms     2 ms     2 ms  202.97.33.54
         12     2 ms     1 ms     2 ms  202.97.33.5
         13    33 ms    33 ms    33 ms  202.97.61.50
         14    34 ms    34 ms    34 ms  202.97.62.214
         15    34 ms   186 ms    37 ms  209.85.241.56
         16    35 ms    35 ms    44 ms  66.249.94.34
         17    34 ms    34 ms    34 ms  hkg01s01-in-f104.1e100.net [64.233.189.104]

        Trace complete.

    So the total time should be 1+7+108+2+1+1+2+1+1+2+2+33+34+34+35+34+34+35+34 ms, which is a lot bigger than ping:

        C:\Documents and Settings\Administrator>ping google.com

        Pinging google.com [64.233.189.104] with 32 bytes of data:

        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241

        Ping statistics for 64.233.189.104:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 34ms, Maximum = 34ms, Average = 34ms

  • Time machine disk icon on boot disk

    - by Ben Lings
    The icon for Macintosh HD (my boot disk) shows as a Time Machine disk. There is a file .com.apple.timemachine.supported in the root of the disk. If I delete the file and restart the computer, the icon goes back to a normal HD icon. However, the .com.apple.timemachine.supported file is recreated at some point on boot, because when I log in again the file has been recreated. If I then reboot again, the icon goes back to being a Time Machine one. Any ideas about what is creating this file and why? More importantly - how can I get it to stop? It looks like something thinks the boot disk should be a Time Machine volume, but what? Console.app shows the following messages at approximately hourly intervals:

        19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Starting standard backup
        19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Cookie file is not readable or does not exist at path: /.<12 hex digits of MAC address for en0>
        19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Volume at path / does not appear to be the correct backup volume for this computer. (Cookies do not match)
        19/01/2010 19:23:59 /System/Library/CoreServices/backupd[7459] Backup failed with error: 18

    Other possibly relevant information: the boot HD isn't the original - the original failed, so this is a SuperDuper'd clone of the original drive. I used to use the same disk for a SuperDuper clone as for Time Machine. These are the same symptoms as this and this.

  • How to import certificate for Apache + LDAPS?

    - by user101956
    I am trying to get LDAPS to work through Apache 2.2.17 (Windows Server 2008). If I use ldap (plain text) my configuration works great.

        LDAPTrustedGlobalCert CA_DER C:/wamp/certs/Trusted_Root_Certificate.cer
        LDAPVerifyServerCert Off

        <Location />
            AuthLDAPBindDN "CN=corpsvcatlas,OU=Service Accounts,OU=u00958,OU=00958,DC=hca,DC=corpad,DC=net"
            AuthLDAPBindPassword ..removed..
            AuthLDAPURL "ldaps://gc-hca.corpad.net:3269/dc=hca,dc=corpad,dc=net?sAMAccountName?sub"
            AuthType Basic
            AuthName "USE YOUR WINDOWS ACCOUNT"
            AuthBasicProvider ldap
            AuthUserFile /dev/null
            require valid-user
        </Location>

    I also tried the other encryption choices besides CA_DER just to be safe, with no luck. Finally, I also needed this with Apache Tomcat. For Tomcat I used the Tomcat JRE and ran a line like this:

        keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias mycert -file Trusted_Root_Certificate.cer

    After doing the above, LDAPS worked great via Tomcat. This lets me know that my certificate is a-ok. Update: both LDAP modules are turned on, since using ldap instead of ldaps works fine. When I run a git clone this is the error returned:

        C:\Temp>git clone http://eqb9718@localhost/git/Liferay.git
        Cloning into Liferay...
        Password:
        error: The requested URL returned error: 500 while accessing http://eqb9718@localhost/git/Liferay.git/info/refs
        fatal: HTTP request failed

    access.log has this:

        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:12 -0600] "GET /git/Liferay.git/info/refs service=git-upload-pack HTTP/1.1" 500 535
        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:33 -0600] "GET /git/Liferay.git/info/refs HTTP/1.1" 500 535

    apache_error.log has nothing. Is there any more verbose logging I can turn on or better tests to do?
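
    If the DER parse is the sticking point, one low-risk experiment - a sketch, assuming the .cer file itself is valid - is to convert the root certificate to PEM and point mod_ldap at it with the CA_BASE64 type instead:

        openssl x509 -inform DER -in Trusted_Root_Certificate.cer -out Trusted_Root_Certificate.pem

    and then in httpd.conf:

        LDAPTrustedGlobalCert CA_BASE64 C:/wamp/certs/Trusted_Root_Certificate.pem

    As for more verbose logging, setting LogLevel debug in httpd.conf should surface mod_ldap's SSL handshake messages in the error log.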

  • Gitlab and Nginx not loading gitlab

    - by paperids
    I have just installed gitlab and nginx on Ubuntu LTS 12.04 using this guide: http://blog.compunet.co.za/gitlab-installation-on-ubuntu-server-12-04/ I installed this on another server last night and had absolutely no problems with it (sort of a test run to see how long it would take to get going). I am not getting any errors when restarting gitlab or nginx with /etc/init.d and my error logs are empty. The only thing I know of to go on is the vhost config:

        upstream gitlab {
            server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.sock$
        }

        server {
            listen localhost:80;
            server_name gitlab.bluringdev.com;
            root /home/gitlab/gitlab/public;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from defined root folder;.
                # @gitlab is a named location for the upstream fallback$
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

            # if a file, which is not found in the root folder is r$
            # then the proxy pass the request to the upsteam (gitla$
            location @gitlab {
                proxy_redirect off;
                # you need to change this to "https", if you set "ssl" $
                proxy_set_header X-FORWARDED_PROTO http;
                proxy_set_header Host gitlab.bluringdev.com:80;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://gitlab;
            }
        }

    If there's any other information that would be helpful, just let me know and I'll get it up asap.
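
    Since the logs are empty, some quick checks that usually narrow this kind of thing down - a sketch, nothing here is specific to that guide:

        sudo nginx -t                                    # config syntax and paths OK?
        ls -la /home/gitlab/gitlab/tmp/sockets/          # does gitlab.sock actually exist?
        sudo tail -f /var/log/nginx/gitlab_error.log     # watch while requesting the site

    If the socket file is missing, the gitlab application server (unicorn, in installs of this era) isn't really up, whatever the init script reports.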

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window even at a resolution of 800x600 pixels. The connection speeds are 10MBit up at home and about 1MBit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel, along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a Qemu SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without (reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo with the software in question being:

        workstation:
            X.Org X Server 1.10.4
            OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e
            QEMU emulator version 0.15.1 (qemu-kvm-0.15.1)
        laptop:
            X.Org X Server 1.12.2
            OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
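
    For reference, the client-side invocation implied by the settings above would look something like this (a sketch; note OpenSSH documents CompressionLevel as applying to protocol version 1 only, so it may have no effect on a protocol 2 session):

        ssh -X -C -c blowfish-cbc -o CompressionLevel=9 user@workstation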

  • Finding cause of TCP retransmission within a LAN

    - by Surreal
    Hello denizens of Server Fault. I have an irritating problem with a LAN of about 100 computers, 2 Windows domain servers, and 12 VoIP phones. Since their installation around a year ago, every week or so we notice a VoIP phone resetting itself - occasionally in the middle of a call. Simultaneously there are often signs of temporary loss of connection on computers: freezes in explorer while accessing network shares, errors in our administration software due to loss of connection to the database server.

    I have been doing some Wireshark monitoring on the connection between the VoIP PBX and the rest of the network. Wireshark picks up a clump of retransmitted TCP packets at the times when we record phone restarts. The Wireshark log shows about 2 clusters of retransmissions a day, ranging from 5 packets to hundreds. Those in each cluster are mainly between the PBX and some set of the VoIP phones, but not always the same set. Often retransmissions at the same time are to phones connected to the same switch, but sometimes retransmissions occur together to phones at opposite ends of the network. There are usually some coincident retransmissions in passing TCP traffic, for example between client machines and the file servers.

    The spikes in retransmissions and phone resets do not correlate well with when the network is heavily loaded. They seem to occur slightly more during the day, but most in the evening, when traffic should be decreasing. They occur reasonably often late at night when most computers are turned off and traffic should be lowest.

    Do you have any ideas that might help diagnose the cause of problems like this? One thing I have not yet tried, but should have, is updating the firmware of all the switches.
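
    For anyone working through a similar capture, retransmissions can be pulled out of a saved trace on the command line - a sketch, assuming tshark is installed and capture.pcap is a hypothetical capture file (older tshark uses -R for the filter, newer releases use -Y):

        tshark -r capture.pcap -R "tcp.analysis.retransmission"

    Grouping the output by source/destination pair makes it easier to see whether the clusters share a switch or a cable path.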

  • Ipsec config problem // openswan

    - by user90696
    I am trying to configure IPsec on a server with Openswan as the client, but I receive an error - possibly an authentication error. What did I write wrong in the config? Thank you for answers.

        #1: STATE_MAIN_I2: sent MI2, expecting MR2
        003 "f-net" #1: received Vendor ID payload [Cisco-Unity]
        003 "f-net" #1: received Vendor ID payload [Dead Peer Detection]
        003 "f-net" #1: ignoring unknown Vendor ID payload [ca917959574c7d5aed4222a9df367018]
        003 "f-net" #1: received Vendor ID payload [XAUTH]
        108 "f-net" #1: STATE_MAIN_I3: sent MI3, expecting MR3
        003 "f-net" #1: discarding duplicate packet; already STATE_MAIN_I3
        010 "f-net" #1: STATE_MAIN_I3: retransmission; will wait 20s for response
        003 "f-net" #1: discarding duplicate packet; already STATE_MAIN_I3
        003 "f-net" #1: discarding duplicate packet; already STATE_MAIN_I3
        003 "f-net" #1: discarding duplicate packet; already STATE_MAIN_I3
        010 "f-net" #1: STATE_MAIN_I3: retransmission; will wait 40s for response
        031 "f-net" #1: max number of retransmissions (2) reached STATE_MAIN_I3. Possible authentication failure: no acceptable response to our first encrypted message
        000 "f-net" #1: starting keying attempt 2 of at most 3, but releasing whack

    The other side is a Cisco ASA. Parameters for my connection on our Linux server:

        VPN Gateway: 8.*.*.* (Cisco)

        Phase 1
            Exchange Type: Main Mode
            Identification Type: IP Address
            Local ID: 4.*.*.* (our Linux server IP)
            Remote ID: 8.*.*.* (VPN server IP)
            Authentication: PSK Pre Shared Key
            Diffie-Hellman Key Group: DH 5 (1536 bit) or DH 2 (1024 bit)
            Encryption Algorithm: AES 256
            HMAC Function: SHA-1
            Lifetime: 86.400 seconds / no volume limit

        Phase 2
            Security Protocol: ESP
            Connection Mode: Tunnel
            Encryption Algorithm: AES 256
            HMAC Function: SHA-1
            Lifetime: 3600 seconds / 4.608.000 kilobytes
            DPD / IKE Keepalive: 15 seconds
            PFS: off
            Remote Network: 192.168.100.0/24
            Local Network 1: 10.0.0.0/16
            ...............
            Local Network 5

    Current openswan config:

        config setup
            klipsdebug=all
            plutodebug="control parsing"
            protostack=netkey
            nat_traversal=no
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            nhelpers=0

        conn f-net
            type=tunnel
            keyexchange=ike
            authby=secret
            auth=esp
            esp=aes256-sha1
            keyingtries=3
            pfs=no
            aggrmode=no
            keylife=3600s
            ike=aes256-sha1-modp1024
            #
            left=4.*.*.*
            leftsubnet=10.0.0.0/16
            leftid=4.*.*.*
            leftnexthop=%defaultroute
            right=8.*.*.*
            rightsubnet=192.168.100.0/24
            rightid=8.*.*.*
            rightnexthop=%defaultroute
            auto=add
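
    For whoever picks this up: "no acceptable response to our first encrypted message" at STATE_MAIN_I3 usually means either a pre-shared key mismatch or a Phase 1 proposal the peer silently drops, so it may be worth re-checking the PSK and trying the DH group 5 variant the ASA sheet also offers (an assumption based on the parameters above, not a confirmed fix):

        ike=aes256-sha1-modp1536    # DH group 5 (1536 bit), matching the "DH 5" option

    Openswan's own sanity checks can help too:

        ipsec verify          # checks kernel/NAT-T prerequisites
        ipsec auto --status   # shows the proposals actually loaded for "f-net"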

  • gzip specific files

    - by byTheDrop
    For some reason these files are not gzipping on my Apache server; the Chrome network tab shows this. Is there a specific directive I can add to .htaccess to cache these files? Compressing the following resources with gzip could reduce their transfer size by about two thirds (~680.45KB):

        adae8bc4c3cb52cbe22358aaced87a72.css could save ~607B
        css_f91fa8d73b5e7661d6dcf9e58395e533.css could save ~59.54KB
        jquery.min.js could save ~37.27KB
        drupal.js could save ~6.15KB
        auto_image_handling.js could save ~6.72KB
        lightbox.js could save ~29.38KB
        superfish.js could save ~2.42KB
        jquery.bgiframe.min.js could save ~1011B
        jquery.hoverIntent.minified.js could save ~1.05KB
        nice_menus.js could save ~581B
        panels.js could save ~531B
        jquery.pngFix.js could save ~2.98KB
        jquery.cycle.all.min.js could save ~20.20KB
        views_slideshow.js could save ~8.76KB
        views_slideshow.js could save ~9.02KB
        wanderlust_custom_videos.js could save ~598B
        wl_helper.js could save ~777B
        extlink.js could save ~2.88KB
        cufon-yui.js could save ~11.89KB
        googleanalytics.js could save ~1.48KB
        swfobject.js could save ~6.65KB
        jquery.jcarousel.min.js could save ~10.19KB
        jcarousel.js could save ~6.01KB
        Akzidenz_Grotesk_BE_Super_800.font.js could save ~14.27KB
        Akzidenz_Grotesk_BE_Bold_700.font.js could save ~12.96KB
        Akzidenz_Grotesk_BE_Cn_400.font.js could save ~13.39KB
        SuperCondensed_500.font.js could save ~24.40KB
        FuturaBold_700.font.js could save ~26.19KB
        Futura_500.font.js could save ~57.70KB
        SuperGroteskB_500.font.js could save ~23.86KB
        jquery.cookie.js could save ~1.25KB
        wanderlust.js could save ~1.69KB
        sliderbottom.js could save ~442B
        jcarousellite_1.0.1.min.js could save ~4.60KB
        jcarousellite_control.js could save ~224B
        sitesdropdown.js could save ~1.09KB
        widgets.js could save ~50.13KB
        cufon-drupal.js could save ~599B
        swfobject_api.js could save ~348B
        ga.js could save ~24.02KB
        all.js could save ~124.67KB
        tweet_button.1347008535.html could save ~38.43KB
        xd_arbiter.php could save ~16.80KB
        xd_arbiter.php could save ~16.80KB
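
    Compression (not caching) is what Chrome is flagging here; with mod_deflate available, a minimal .htaccess sketch would be:

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript application/x-javascript
        </IfModule>

    Note that the last few entries (ga.js, all.js, the tweet button, xd_arbiter.php) are served by third parties, so no local Apache directive will affect them.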

  • How do I hide "Extra File" and "100%" lines from robocopy output?

    - by Danny Tuppeny
    I have a robocopy script to back up our Kiln repositories that runs nightly, which looks something like this:

        robocopy "$liveRepoLocation" "$cloneRepoLocation" /MIR /MT /W:3 /R:100 /LOG:"$backupLogLocation\BackupKiln.txt" /NFL /NDL /NP

    In the output, there are a ton of lines that contain "Extra file"s, like this:

        *EXTRA File 153 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fdt
        *EXTRA File 12 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fdx
        *EXTRA File 128 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.fnm
        *EXTRA File 363 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.frq
        *EXTRA File 13 E:\Kiln Backup\elasticsearch\data\elasticsearch-kiln\nodes\0\indices\kiln-2\0\index\_yxe.nrm

    Additionally, there are then hundreds of lines at the bottom that contain nothing but "100%"s, like this:

        100%
        100%
        100%
        100%
        100%
        100%
        100%

    In addition to making the log files enormous (there are a lot of folders/files in Kiln repos), it makes it annoying to scan through the logs now and then to see if everything was working ok. How do I stop "Extra Files" appearing in the log? How do I stop these silly "100%" lines appearing in the log? I've tried every combination of switch I can think of (the current switches are listed above in the command), but neither seem to hide these!
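
    A hedged suggestion rather than a confirmed fix: /NJH and /NJS suppress the job header and job summary, and /XX stops extra files being listed - though /XX also excludes them from processing, which under /MIR means they would no longer be purged, so test that trade-off first. For the stray percentages, one reported culprit is the combination of /MT with logging, so a run without /MT may be worth comparing:

        robocopy "$liveRepoLocation" "$cloneRepoLocation" /MIR /W:3 /R:100 /LOG:"$backupLogLocation\BackupKiln.txt" /NFL /NDL /NP /NJH /NJS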

  • SATA drives or chipset throwing DRDY ERR and ICRC ABRT

    - by Matt
    I have an SD-VIA-1A2S PCI card with 2 SATA ports (and one ATA-133 that isn't used). Two new Western Digital Caviar Green drives (WD10EARS 1TB) throw repeated errors in kern.log (removed date/time/host info for brevity):

        [    7.376475] ata2.00: exception Emask 0x12 SAct 0x0 SErr 0x1000500 action 0x6
        [    7.376480] ata2.00: BMDMA stat 0x5
        [    7.376483] ata2: SError: { UnrecovData Proto TrStaTrns }
        [    7.376489] ata2.00: cmd c8/00:40:20:00:00/00:00:00:00:00/e0 tag 0 dma 32768 in
        [    7.376490]          res 51/84:2f:20:00:00/00:00:00:00:00/e0 Emask 0x12 (ATA bus error)
        [    7.376493] ata2.00: status: { DRDY ERR }
        [    7.376495] ata2.00: error: { ICRC ABRT }
        [    7.376504] ata2: hard resetting link

    I'm using Ubuntu 9.04 - 2.6.28-18-generic, though I have tried live CDs of Ubuntu 9.10, Fedora 12 and OpenSUSE 11.2 - all running various 2.6.31 kernels - and all received the same error. Based on testing these drives and this card in two other machines and combos of connecting the drives directly to the motherboard or the add-in card, I'm relatively convinced that it's the VIA chipset that is the problem. Another computer that also has an onboard VIA SATA chipset (like the add-in card) produces the same errors when the drives are directly on that motherboard. I have been able to verify that the drives are perfectly good, and I tried everything I can think of in terms of swapping cables, the PSU isn't overloaded, etc. The error happens on boot once or twice, after using fdisk on the drive once or twice, and constantly when attempting to sync a new mdadm RAID 1 array created on the two drives. Any thoughts on where to go from here - driver/kernel wise? I'm completely open to buying a new PCI add-in card if someone can recommend one with 2 internal SATA ports that works well in Debian/Ubuntu. Thanks!
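
    Since ICRC is a link-level CRC failure, it can be worth capturing the drives' own view of the link as well - a sketch, assuming smartmontools is installed:

        smartctl -a /dev/sdc | grep -i -e crc -e pending    # CRC and pending-sector counters
        smartctl -t short /dev/sdc                          # queue a short self-test

    A climbing UDMA_CRC_Error_Count with otherwise healthy media counters points at the cable/controller side rather than the platters, which would fit the VIA-chipset theory.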

  • How can we configure the Bitnami Joomla stack to open a socket on startup?

    - by bobo
    I have deployed the Bitnami Ubuntu Joomla! 3.1.5-2 (64-bit) stack on Amazon Cloud: http://bitnami.com/stack/joomla/cloud/amazon By default, the stack is configured to run PHP using PHP-FPM. I have no problem getting Joomla and phpMyAdmin running as virtual hosts on Apache. But now I would like to add another virtual host. The problem I am having is that I have no idea how to get the system to create a socket on startup in the following folder:

        bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ ls -al
        total 12
        drwxr-xr-x 2 root root 4096 Nov  3 20:43 .
        drwxr-xr-x 4 root root 4096 Oct  9 15:39 ..
        srw-rw-rw- 1 root root    0 Nov  3 20:43 joomla.sock
        -rw-r--r-- 1 root root    4 Nov  3 20:43 php5-fpm.pid
        srw-rw-rw- 1 root root    0 Nov  3 20:43 phpmyadmin.sock
        srw-rw-rw- 1 root root    0 Nov  3 20:43 www.sock

    I have the following /opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf file:

        [mywebsite]
        listen=/opt/bitnami/php/var/run/mywebsite.sock
        include=/opt/bitnami/php/etc/common-dynamic.conf
        include=/opt/bitnami/apps/mywebsite/conf/php-fpm/php-settings.conf
        pm=dynamic

    As can be seen, listen points to mywebsite.sock, which does not currently exist. I did an experiment by removing the .sock files in the /opt/bitnami/php/var/run folder, and they came back on reboot. So how can we configure it to open a socket for mywebsite on startup?
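
    PHP-FPM creates one socket per pool it knows about at startup, so the new pool file has to be referenced from the master config before mywebsite.sock will appear. A sketch, on the assumption that the stack's php-fpm.conf pulls in the existing apps the same way (worth comparing against the joomla entry):

        # in /opt/bitnami/php/etc/php-fpm.conf, alongside the existing app includes:
        include=/opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf

    then restart the service:

        sudo /opt/bitnami/ctlscript.sh restart php-fpm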

  • SSL Certificate Request s2003 DC CA DNS Name not Available.

    - by Beuy
    I am trying to submit a request for an SSL certificate on a Domain Controller in order to enable LDAP SSL, and having no end of problems. I am following the information provided at http://support.microsoft.com/default.aspx?scid=kb;en-us;321051 & http://adldap.sourceforge.net/wiki/doku.php?id=ldap_over_ssl Steps taken so far: 1. Create Servername.inf with the following information:

        ;----------------- request.inf -----------------
        [Version]
        Signature="$Windows NT$"

        [NewRequest]
        Subject = "CN=servername.domain.loc" ; replace with the FQDN of the DC
        KeySpec = 1
        KeyLength = 1024
        ; Can be 1024, 2048, 4096, 8192, or 16384.
        ; Larger key sizes are more secure, but have
        ; a greater impact on performance.
        Exportable = TRUE
        MachineKeySet = TRUE
        SMIME = False
        PrivateKeyArchive = FALSE
        UserProtected = FALSE
        UseExistingKeySet = FALSE
        ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
        ProviderType = 12
        RequestType = PKCS10
        KeyUsage = 0xa0

        [EnhancedKeyUsageExtension]
        OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
        ;-----------------------------------------------

    2. Create the certificate request by running:

        certreq -new Servername.inf Servername.req

    3. Attempt to submit the certificate request to the CA by running:

        certreq -submit -attrib "CertificateTemplate: DomainController" request.req

    At which point I get the following error:

        The DNS name is unavailable and cannot be added to the Subject Alternate Name. 0x8009480f (-2146875377)

    Troubleshooting steps I have taken so far: 1. Modified the Domain Controller template to supply the Subject Name in the request, restarted the Certificate Service, included the SAN in the request - same error. 2. Re-installed Certificate Services / IIS / restarted the machine countless times. Any help resolving the issue would be greatly appreciated.
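
    One more thing worth checking, offered as a hedge rather than a confirmed fix: a CA only honours SAN attributes supplied with a request if its EDITF_ATTRIBUTESUBJECTALTNAME2 policy flag is set. On the CA:

        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc && net start certsvc

    After that, the SAN can be passed alongside the template attribute, e.g.:

        certreq -submit -attrib "CertificateTemplate: DomainController\nSAN:dns=servername.domain.loc" request.req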

  • Cannot install jdk 1.5 on Ubuntu 12.04

    - by u123
    I have installed Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic x86_64). Some info about the machine:

        $ grep --color "model name" /proc/cpuinfo
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz

    I need to install JDK 5 to support an old application. I have tried:

        ~$ sudo apt-get install openjdk-5-jdk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package openjdk-5-jdk

    I have also tried:

        ~$ sudo apt-get install sun-java5-jdk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package sun-java5-jdk

    So it's not available in the repos. I have tried to follow this guide (adding the jaunty repos): http://leonardo-pinho.blogspot.dk/2010/11/java-15-no-ubuntu-1010.html but got the same result. Then I tried to download jdk-1_5_0_22-linux-i586.bin from here: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase5-419410.html#jdk-1.5.0_22-oth-JPR and do:

        ~$ chmod a+x jdk-1_5_0_22-linux-i586.bin
        ~$ sudo ./jdk-1_5_0_22-linux-i586.bin
        Sun Microsystems, Inc. Binary Code License Agreement
        yes
        Unpacking...
        Checksumming...
        0
        0
        Extracting...
        ./jdk-1_5_0_22-linux-i586.bin: 424: ./jdk-1_5_0_22-linux-i586.bin: ./install.sfx.19556: not found
        ./jdk-1_5_0_22-linux-i586.bin: 1: cd: can't cd to jdk1.5.0_22

    Any suggestions?
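
    "./install.sfx.NNNNN: not found" when extracting an i586 .bin on an x86_64 install is the classic symptom of missing 32-bit runtime support, so two guesses worth trying: install the 32-bit compatibility libraries, or fetch the amd64 build of the same JDK from the archive page instead.

        sudo apt-get install ia32-libs    # available on 12.04; pulls in the 32-bit loader and libraries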

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device.

    Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool.

    Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server.

        http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390
        http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824

    In the future, I'll probably take the advice given in one of the bugzilla workarounds: "Workaround: Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled."

    Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.
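
    For anyone sizing this risk in advance: the dedup table that makes these destroys so slow can be inspected before anything is deleted - a sketch, assuming the pool is named vol1 as above:

        zdb -DD vol1    # prints DDT histograms, from which the RAM the destroy will need can be estimated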

  • vsftpd: No available certificate or key corresponds to the SSL cipher suites which are enabled?

    - by Arcturus
    Hello. I'm trying to set up vsftpd on Fedora 12. I need to require use of FTPS, and for now need to use a self-signed SSL certificate. I managed to get the vsftpd service running and to connect as my user. I can list the home directory, but as soon as I try to list another directory, download or upload a file, I get this error:

        No available certificate or key corresponds to the SSL cipher suites which are enabled.

    And the xfer log is empty. I've been Googling it for a while now, but still can't understand the problem. Here's how I installed vsftpd:

        su
        yum install vsftpd
        chkconfig vsftpd on
        service vsftpd start

    I tried to generate the certificate in two ways. Here's the first one:

        cd /etc/vsftpd
        openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout vsftpd.pem -out vsftpd.pem

    Here's the second way:

        cd /etc/pki/tls/certs
        make vsftpd.pem

    Here's my vsftpd configuration:

        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=022
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/vsftpd.log
        xferlog_std_format=YES
        nopriv_user=ftpsecure
        chroot_local_user=YES
        chroot_list_enable=YES
        chroot_list_file=/etc/vsftpd/chroot_list
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        # SSL settings
        ssl_enable=YES
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        rsa_cert_file=/etc/pki/tls/certs/vsftpd.pem
        allow_anon_ssl=NO
        ssl_tlsv1=YES
        ssl_sslv2=NO
        ssl_sslv3=NO

    Does anyone know what the problem is and how to solve it?
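
    Two things stand out enough to be worth verifying (offered as checks, not a confirmed diagnosis): the first generation method wrote its .pem into /etc/vsftpd while rsa_cert_file points at /etc/pki/tls/certs, and the file the config does point at must contain both the certificate and the private key for the enabled cipher suites to be usable. Both are quick to confirm:

        openssl x509 -noout -subject -in /etc/pki/tls/certs/vsftpd.pem   # certificate present?
        openssl rsa  -noout -check   -in /etc/pki/tls/certs/vsftpd.pem   # private key present and valid?

    If the key lives in a separate file, vsftpd's rsa_private_key_file directive can point to it.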

  • How to configure Apache and Tomcat with vhosts?

    - by Umar Farooq Khawaja
    I have a server with a static, public IP address. I also have a registered domain name. For the sake of illustration, let's suppose they are:

        IP Address: 12.34.56.78
        Domain Name: example.com

    I have a single machine on which I am running the following:

        - A website (over IIS7) available locally at localhost:80
        - A JetBrains TeamCity instance (over Tomcat) available locally at localhost:1234
        - A VisualSVN Server instance (over Apache) available locally at localhost:5678/svn

    I have set up an A record for example.com and the following CNAME records:

        www.example.com
        builds.example.com
        sources.example.com

    I would like to configure Tomcat and Apache such that if I point my browser at builds.example.com, I end up at the JetBrains TeamCity instance, and if I point my browser at sources.example.com, I end up at the VisualSVN Server instance. I thought I could configure Apache to vhost example.com:5678/svn to point to sources.example.com, and added the following lines to the Apache httpd.conf file:

        Listen 5678
        NameVirtualHost *:5678
        <VistualHost *:5678>
            ServerName sources.example.com
            DocumentRoot /svn
        </virtualHost>

    That broke the VisualSVN instance, so I had to revert that to:

        Listen 5678

    Help!
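
    A common pattern for this kind of setup - a sketch, not a verified fix for the breakage above - is to let one front-end server own port 80 and proxy by hostname. If Apache were fronting (which here would mean moving the IIS site off :80 or using IIS as the proxy instead, and assumes mod_proxy and mod_proxy_http are loaded):

        <VirtualHost *:80>
            ServerName builds.example.com
            ProxyPass / http://localhost:1234/
            ProxyPassReverse / http://localhost:1234/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName sources.example.com
            ProxyPass / http://localhost:5678/
            ProxyPassReverse / http://localhost:5678/
        </VirtualHost>

    Incidentally, the quoted block spells the opening tag <VistualHost>, which by itself would make Apache reject the config.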

  • Access Control Lists in Debian Lenny

    - by arbales
    So, for my clients who have sites hosted on my server, I create user accounts, with standard home folders inside /home. I set up an SSH jail for all the collective users, because I really am against using a separate FTP server. Then I installed ACL and added acl to my /etc/fstab - all good. I cd into /home and chmod 700 ./*. At this point users cannot see into other users' home directories (yay), but Apache can't see them either (boo). I ran setfacl u:www-data:rx ./*. I also tried individual directories. Now Apache can see the sites again, but so can all the users. ACL changed the permissions of the home folders to 750. How do I set up ACLs so that (1) Apache can see the sites hosted in users' home folders AND (2) users can't see outside their home and into others' files? Edit - more details. Output after chmod -R 700 ./*:

        sh-3.2# chmod 700 ./*
        sh-3.2# ls -l
        total 72
        drwx------+ 24 austin austin     4096 Jul 31 06:13 austin
        drwx------+  8 jeremy collective 4096 Aug  3 03:22 jeremy
        drwx------+ 12 josh   collective 4096 Jul 26 02:40 josh
        drwx------+  8 joyce  collective 4096 Jun 30 06:32 joyce

    (Not accessible to other users OR Apache.)

        setfacl -m u:www-data:rx jeremy

    (Now accessible to members apache and collective - why collective, too?)

        sh-3.2# getfacl jeremy
        # file: jeremy
        # owner: jeremy
        # group: collective
        user::rwx
        user:www-data:r-x
        group::r-x
        mask::r-x
        other::---

    Solution. Ultimately what I did was:

        chmod 755 *
        setfacl -R -m g::--- *
        setfacl -R -m u:www-data:rx *
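
    One footnote to the solution above, offered as a sketch rather than a certainty: plain setfacl entries only cover files that already exist, so if clients keep adding content it may be worth also setting default ACLs, which newly created files and directories inherit automatically:

        setfacl -R -m d:u:www-data:rx /home/*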

  • rpm build from src file

    - by danielrutledge
    Hi all, I'm trying to build from a *.src.rpm file on FC 12 in such a way that the files are distributed across my system as they would be with a normal binary build (in this case, *.h files end up in /usr/include). When I ran rpmbuild, the headers weren't present. Here's my rpmbuild command:

        [root@localhost sphirewalld]# rpm -ivv /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        ============== /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        Expected size: 489395 = lead(96)+sigs(180)+pad(4)+data(489115)
        Actual size: 489395
        loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
        couldn't find any keys in /var/lib/rpm/pubkeys/*.key
        loading keyring from rpmdb
        opening db environment /var/lib/rpm/Packages cdb:mpool:joinenv
        opening db index /var/lib/rpm/Packages rdonly mode=0x0
        locked db index /var/lib/rpm/Packages
        opening db index /var/lib/rpm/Name rdonly mode=0x0
        read h# 931 Header sanity check: OK
        added key gpg-pubkey-57bbccba-4a6f97af to keyring
        read h# 1327 Header sanity check: OK
        added key gpg-pubkey-7fac5991-4615767f to keyring
        read h# 1420 Header sanity check: OK
        added key gpg-pubkey-16ca1a56-4a100959 to keyring
        read h# 1896 Header sanity check: OK
        added key gpg-pubkey-a3a882c1-4a1009ef to keyring
        Using legacy gpg-pubkey(s) from rpmdb
        /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm: Header SHA1 digest: OK (3e98ed9b1631395d417e00f35c83ebe588ea9d3b)
        added source package [0]
        found 1 source and 0 binary packages
        Expected size: 489395 = lead(96)+sigs(180)+pad(4)+data(489115)
        Actual size: 489395
        InstallSourcePackage at: psm.c:232: Header SHA1 digest: OK (3e98ed9b1631395d417e00f35c83ebe588ea9d3b)
        gtest-1.3.0-2.20090601svn257.fc12
        ========== Directories not explicitly included in package:
        0 /root/rpmbuild/SOURCES/
        1 /root/rpmbuild/SPECS/
        ==========
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100664 1 ( 0, 0) 478034 /root/rpmbuild/SOURCES/gtest-1.3.0.tar.bz2;4ba93ce1 unknown
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100644 1 ( 0, 0) 30505 /root/rpmbuild/SOURCES/gtest-svnr257.patch;4ba93ce1 unknown
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100644 1 ( 0, 0) 2732 /root/rpmbuild/SPECS/gtest.spec;4ba93ce1 unknown
        GZDIO: 63 reads, 511788 total bytes in 0.005930 secs
        closed db index /var/lib/rpm/Name
        closed db index /var/lib/rpm/Packages
        closed db environment /var/lib/rpm/Packages

    Thanks for your help.
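
    Note the command shown is rpm -ivv, which only unpacks the spec and sources into /root/rpmbuild - it never compiles anything, which is why no headers land in /usr/include. The usual sequence (a sketch, assuming the spec builds cleanly on FC 12) is to rebuild a binary rpm and install that:

        rpmbuild --rebuild /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        rpm -ivh /root/rpmbuild/RPMS/*/gtest-*.rpm    # the -devel subpackage, if the spec produces one, carries the headers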

  • high load average, high wait, dmesg raid error messages (debian nfs server)

    - by John Stumbles
    Debian 6 on an HP ProLiant (2 CPU) with RAID (2*1.5T RAID1 + 2*2T RAID1 joined as RAID0 to make 3.5T), running mainly nfs & imapd (plus samba for a windows share & local www for previewing web pages); with a local ubuntu desktop client mounting $HOME, laptops accessing imap & odd files (e.g. videos) via nfs/smb; boxes connected 100baseT or wifi via home router/switch.

        uname -a
        Linux prole 2.6.32-5-686 #1 SMP Wed Jan 11 12:29:30 UTC 2012 i686 GNU/Linux

    Setup has been working for months but prone to intermittently going very slow (user experience on desktop mounting $HOME from server, or laptop playing videos) and now consistently so bad I've had to delve into it to try to find what's wrong(!) Server seems OK at low load, e.g. (laptop) client (with $HOME on local disk) connecting to server's imapd and nfs-mounting the RAID to access 1 file: top shows load ~ 0.1 or less, 0 wait. But when the (desktop) client mounts $HOME and starts a user KDE session (all accessing the server) then top shows e.g.

        top - 13:41:17 up 3:43, 3 users, load average: 9.29, 9.55, 8.27
        Tasks: 158 total, 1 running, 157 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.4%us, 0.4%sy, 0.0%ni, 49.0%id, 49.7%wa, 0.0%hi, 0.5%si, 0.0%st
        Mem: 903856k total, 851784k used, 52072k free, 171152k buffers
        Swap: 0k total, 0k used, 0k free, 476896k cached

          PID USER  PR  NI VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
         3935 root  20   0 2456 1088  784 R    2  0.1 0:00.02 top
            1 root  20   0 2028  680  584 S    0  0.1 0:01.14 init
            2 root  20   0    0    0    0 S    0  0.0 0:00.00 kthreadd
            3 root  RT   0    0    0    0 S    0  0.0 0:00.00 migration/0
            4 root  20   0    0    0    0 S    0  0.0 0:00.12 ksoftirqd/0
            5 root  RT   0    0    0    0 S    0  0.0 0:00.00 watchdog/0
            6 root  RT   0    0    0    0 S    0  0.0 0:00.00 migration/1
            7 root  20   0    0    0    0 S    0  0.0 0:00.16 ksoftirqd/1
            8 root  RT   0    0    0    0 S    0  0.0 0:00.00 watchdog/1
            9 root  20   0    0    0    0 S    0  0.0 0:00.42 events/0
           10 root  20   0    0    0    0 S    0  0.0 0:02.26 events/1
           11 root  20   0    0    0    0 S    0  0.0 0:00.00 cpuset
           12 root  20   0    0    0    0 S    0  0.0 0:00.00 khelper
           13 root  20   0    0    0    0 S    0  0.0 0:00.00 netns
           14 root  20   0    0    0    0 S    0  0.0 0:00.00 async/mgr
           15 root  20   0    0    0    0 S    0  0.0 0:00.00 pm
           16 root  20   0    0    0    0 S    0  0.0 0:00.02 sync_supers
           17 root  20   0    0    0    0 S    0  0.0 0:00.02 bdi-default
           18 root  20   0    0    0    0 S    0  0.0 0:00.00 kintegrityd/0
           19 root  20   0    0    0    0 S    0  0.0 0:00.00 kintegrityd/1
           20 root  20   0    0    0    0 S    0  0.0 0:00.02 kblockd/0
           21 root  20   0    0    0    0 S    0  0.0 0:00.08 kblockd/1
           22 root  20   0    0    0    0 S    0  0.0 0:00.00 kacpid
           23 root  20   0    0    0    0 S    0  0.0 0:00.00 kacpi_notify
           24 root  20   0    0    0    0 S    0  0.0 0:00.00 kacpi_hotplug
           25 root  20   0    0    0    0 S    0  0.0 0:00.00 kseriod
           28 root  20   0    0    0    0 S    0  0.0 0:04.19 kondemand/0
           29 root  20   0    0    0    0 S    0  0.0 0:02.93 kondemand/1
           30 root  20   0    0    0    0 S    0  0.0 0:00.00 khungtaskd
           31 root  20   0    0    0    0 S    0  0.0 0:00.18 kswapd0
           32 root  25   5    0    0    0 S    0  0.0 0:00.00 ksmd
           33 root  20   0    0    0    0 S    0  0.0 0:00.00 aio/0
           34 root  20   0    0    0    0 S    0  0.0 0:00.00 aio/1
           35 root  20   0    0    0    0 S    0  0.0 0:00.00 crypto/0
           36 root  20   0    0    0    0 S    0  0.0 0:00.00 crypto/1
          203 root  20   0    0    0    0 S    0  0.0 0:00.00 ksuspend_usbd
          204 root  20   0    0    0    0 S    0  0.0 0:00.00 khubd
          205 root  20   0    0    0    0 S    0  0.0 0:00.00 ata/0
          206 root  20   0    0    0    0 S    0  0.0 0:00.00 ata/1
          207 root  20   0    0    0    0 S    0  0.0 0:00.14 ata_aux
          208 root  20   0    0    0    0 S    0  0.0 0:00.01 scsi_eh_0

    dmesg suggests there's a disk problem:

        .............. (previous episode)
        [13276.966004] raid1:md0: read error corrected (8 sectors at 489900360 on sdc7)
        [13276.966043] raid1: sdb7: redirecting sector 489898312 to another mirror
        [13279.569186] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        [13279.569211] ata4.00: irq_stat 0x40000008
        [13279.569230] ata4.00: failed command: READ FPDMA QUEUED
        [13279.569257] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in
        [13279.569262]          res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F>
        [13279.569306] ata4.00: status: { DRDY ERR }
        [13279.569321] ata4.00: error: { UNC }
        [13279.575362] ata4.00: configured for UDMA/133
        [13279.575388] ata4: EH complete
        [13283.169224] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        [13283.169246] ata4.00: irq_stat 0x40000008
        [13283.169263] ata4.00: failed command: READ FPDMA QUEUED
        [13283.169289] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in
        [13283.169294]          res 41/40:00:07:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F>
        [13283.169331] ata4.00: status: { DRDY ERR }
        [13283.169345] ata4.00: error: { UNC }
        [13283.176071] ata4.00: configured for UDMA/133
        [13283.176104] ata4: EH complete
        [13286.224814] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        [13286.224837] ata4.00: irq_stat 0x40000008
        [13286.224853] ata4.00: failed command: READ FPDMA QUEUED
        [13286.224879] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in
        [13286.224884]          res 41/40:00:06:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F>
        [13286.224922] ata4.00: status: { DRDY ERR }
        [13286.224935] ata4.00: error: { UNC }
        [13286.231277] ata4.00: configured for UDMA/133
        [13286.231303] ata4: EH complete
        [13288.802623] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        [13288.802646] ata4.00: irq_stat 0x40000008
        [13288.802662] ata4.00: failed command: READ FPDMA QUEUED
        [13288.802688] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in
        [13288.802693]          res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F>
        [13288.802731] ata4.00: status: { DRDY ERR }
        [13288.802745] ata4.00: error: { UNC }
        [13288.808901] ata4.00: configured for UDMA/133
        [13288.808927] ata4: EH complete
        [13291.380430] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        [13291.380453] ata4.00: irq_stat 0x40000008
        [13291.380470] ata4.00: failed command: READ FPDMA QUEUED
        [13291.380496] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in
        [13291.380501]          res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F>
        [13291.380577] ata4.00: status: { DRDY ERR }
        [13291.380594] ata4.00: error: { UNC }
        [13291.386517] ata4.00: configured for UDMA/133
        [13291.386543] ata4: EH complete
        [13294.347147] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        [13294.347169] ata4.00: irq_stat 0x40000008
        [13294.347186] ata4.00: failed command: READ FPDMA QUEUED
        [13294.347211] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in
        [13294.347217]          res 41/40:00:06:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F>
        [13294.347254] ata4.00: status: { DRDY ERR }
        [13294.347268] ata4.00: error: { UNC }
        [13294.353556] ata4.00: configured for UDMA/133
        [13294.353583] sd 3:0:0:0: [sdc] Unhandled sense code
        [13294.353590] sd 3:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [13294.353599] sd 3:0:0:0: [sdc] Sense Key : Medium Error [current] [descriptor]
        [13294.353610] Descriptor sense data with sense descriptors (in hex):
        [13294.353616] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
        [13294.353635] 23 05 6a 06
        [13294.353644] sd 3:0:0:0: [sdc] Add. Sense: Unrecovered read error - auto reallocate failed
        [13294.353657] sd 3:0:0:0: [sdc] CDB: Read(10): 28 00 23 05 6a 00 00 00 08 00
        [13294.353675] end_request: I/O error, dev sdc, sector 587557382
        [13294.353726] ata4: EH complete
        [13294.366953] raid1:md0: read error corrected (8 sectors at 489900544 on sdc7)
        [13294.366992] raid1: sdc7: redirecting sector 489898496 to another mirror

    and they're happening quite frequently, which I guess is liable to account for the performance problem(?)

        # dmesg | grep mirror
        [12433.561822] raid1: sdc7: redirecting sector 489900464 to another mirror
        [12449.428933] raid1: sdb7: redirecting sector 489900504 to another mirror
        [12464.807016] raid1: sdb7: redirecting sector 489900512 to another mirror
        [12480.196222] raid1: sdb7: redirecting sector 489900520 to another mirror
        [12495.585413] raid1: sdb7: redirecting sector 489900528 to another mirror
        [12510.974424] raid1: sdb7: redirecting sector 489900536 to another mirror
        [12526.374933] raid1: sdb7: redirecting sector 489900544 to another mirror
        [12542.619938] raid1: sdc7: redirecting sector 489900608 to another mirror
        [12559.431328] raid1: sdc7: redirecting sector 489900616 to another mirror
        [12576.553866] raid1: sdc7: redirecting sector 489900624 to another mirror
        [12592.065265] raid1: sdc7: redirecting sector 489900632 to another mirror
        [12607.621121] raid1: sdc7: redirecting sector 489900640 to another mirror
        [12623.165856] raid1: sdc7: redirecting sector 489900648 to another mirror
        [12638.699474] raid1: sdc7: redirecting sector 489900656 to another mirror
        [12655.610881] raid1: sdc7: redirecting sector 489900664 to another mirror
        [12672.255617] raid1: sdc7: redirecting sector 489900672 to another mirror
        [12672.288746] raid1: sdc7: redirecting sector 489900680 to another mirror
        [12672.332376] raid1: sdc7: redirecting sector 489900688 to another mirror
        [12672.362935] raid1: sdc7: redirecting sector 489900696 to another mirror
        [12674.201177] raid1: sdc7: redirecting sector 489900704 to another mirror
        [12698.045050] raid1: sdc7: redirecting sector 489900712 to another mirror
        [12698.089309] raid1: sdc7: redirecting sector 489900720 to another mirror
        [12698.111999] raid1: sdc7: redirecting sector 489900728 to another mirror
        [12698.134006] raid1: sdc7: redirecting sector 489900736 to another mirror
        [12719.034376] raid1: sdc7: redirecting sector 489900744 to another mirror
        [12734.545775] raid1: sdc7: redirecting sector 489900752 to another mirror
        [12734.590014] raid1: sdc7: redirecting sector 489900760 to another mirror
        [12734.624050] raid1: sdc7: redirecting sector 489900768 to another mirror
        [12734.647308] raid1: sdc7: redirecting sector 489900776 to another mirror
        [12734.664657] raid1: sdc7: redirecting sector 489900784 to another mirror
        [12734.710642] raid1: sdc7: redirecting sector 489900792 to another mirror
        [12734.721919] raid1: sdc7: redirecting sector 489900800 to another mirror
        [12734.744732] raid1: sdc7: redirecting sector 489900808 to another mirror
        [12734.779330] raid1: sdc7: redirecting sector 489900816 to another mirror
        [12782.604564] raid1: sdb7: redirecting sector 1242934216 to another mirror
        [12798.264153] raid1: sdc7: redirecting sector 1242935080 to another mirror
        [13245.832193] raid1: sdb7: redirecting sector 489898296 to another mirror
        [13261.376929] raid1: sdb7: redirecting sector 489898304 to another mirror
        [13276.966043] raid1: sdb7: redirecting sector 489898312 to another mirror
        [13294.366992] raid1: sdc7: redirecting sector 489898496 to another mirror

    although the arrays are still running on all disks - they haven't given up on any yet:

        # cat /proc/mdstat
        Personalities : [raid1] [raid0]
        md10 : active raid0 md0[0] md1[1]
              3368770048 blocks super 1.2 512k chunks

        md1 : active raid1 sde2[2] sdd2[1]
              1464087824 blocks super 1.2 [2/2] [UU]

        md0 : active raid1 sdb7[0] sdc7[2]
              1904684920 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    So I think I have some idea what the problem is, but I am not a linux sysadmin expert by the remotest stretch of the imagination and would really appreciate some clue checking here with my diagnosis and what I need to do:

        - obviously I need to source another drive for sdc (I'm guessing I could buy a larger drive if the price is right: I'm thinking that one day I'll need to grow the size of the array and that would be one less drive to replace with a larger one)
        - then use mdadm to fail out the existing sdc, remove it and fit the new drive
        - fdisk the new drive with the same size partition for the array as the old one had
        - use mdadm to add the new drive into the array

    Does that sound OK?
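
    The plan above matches standard md practice; as a sketch (device names assumed from the mdstat output, so double-check before running):

        mdadm /dev/md0 --fail /dev/sdc7 --remove /dev/sdc7
        # power down, swap in the new drive, partition it at least as large as sdb7, then:
        mdadm /dev/md0 --add /dev/sdc7
        cat /proc/mdstat    # watch the resync progress

    If the new disk is the same size and uses MBR partitioning, the layout can be cloned with: sfdisk -d /dev/sdb | sfdisk /dev/sdc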

  • fsck on LVM snapshots

    - by Alpha01
    I'm trying to do some file system checks using LVM snapshots of our logical volumes to see if any of them have dirty file systems. The problem that I have is that our LVM only has one volume group with no available space. I was able to do fsck's on some of the logical volumes using a loopback file system. However, my question is: is it possible to create a 200GB loopback file system and save it on the same partition/logical volume that I'll be taking a snapshot of? Is LVM smart enough to not take a snapshot copy of the actual snapshot?

        [root@server z]# vgdisplay
        --- Volume group ---
        VG Name               Web2-Vol
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  29
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                6
        Open LV               6
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               544.73 GB
        PE Size               4.00 MB
        Total PE              139450
        Alloc PE / Size       139450 / 544.73 GB
        Free PE / Size        0 / 0
        VG UUID               BrVwNz-h1IO-ZETA-MeIf-1yq7-fHpn-fwMTcV

        [root@server z]# df -h
        Filesystem                             Size  Used Avail Use% Mounted on
        /dev/sda2                              9.7G  3.6G  5.6G  40% /
        /dev/sda1                              251M   29M  210M  12% /boot
        /dev/mapper/Web2--Vol-var               12G  1.1G   11G  10% /var
        /dev/mapper/Web2--Vol-var--spool        12G  184M   12G   2% /var/spool
        /dev/mapper/Web2--Vol-var--lib--mysql   30G   15G   14G  52% /var/lib/mysql
        /dev/mapper/Web2--Vol-usr               13G  3.3G  8.9G  27% /usr
        /dev/mapper/Web2--Vol-z                468G  197G  267G  43% /z
        /dev/mapper/Web2--Vol-tmp              3.0G   76M  2.8G   3% /tmp
        tmpfs                                  7.9G   92K  7.9G   1% /dev/shm

    The logical volume in question is /dev/mapper/Web2--Vol-z. I'm afraid that if I create the loopback file system in /dev/mapper/Web2--Vol-z and take a snapshot of it, the disk usage will be tripled, thus running out of available disk space.
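
    A snapshot's copy-on-write space has to come from free extents in the same VG, and a loop file sitting on the LV being snapshotted would indeed be circular: every write to the snapshot would dirty the origin again. One workaround sketch, assuming some filesystem outside Web2-Vol (here /, a hypothetical choice) has room for a backing file:

        dd if=/dev/zero of=/backing.img bs=1M count=4096   # 4GB backing file; size to taste
        losetup /dev/loop0 /backing.img
        pvcreate /dev/loop0
        vgextend Web2-Vol /dev/loop0
        lvcreate -s -L 3.9G -n z-snap /dev/Web2-Vol/z
        fsck -n /dev/Web2-Vol/z-snap
        lvremove /dev/Web2-Vol/z-snap
        vgreduce Web2-Vol /dev/loop0
        losetup -d /dev/loop0

    The snapshot only needs to hold the changes written during the fsck, not a full copy, so it can be much smaller than the 468G origin.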

  • How to scope access to a service to set of users, using OpenLDAP, and only OUs

    - by JDS
    Okay, here goes. Solving this will solve several problems for me (as I can reapply this knowledge to several extant, similar problems), but luckily I have a very specific, concise problem to describe. Enough preamble. Our hosting partner is setting up VPN access for us and is connecting it to our LDAP server. They are using Cisco VPN; the docs on setting this up are here: http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808c3c45.shtml#maintask1 Specifically, note the screenshot in (5), under "ASDM". Now, I do NOT want to provide access to all of our users. I only want to provide access to our IT group. But I do not see a configuration option for LDAP groups on that web reference for the Cisco VPN. We are using:

        - OpenLDAP 2.4
        - Static groups (i.e. "Group has the following members...")
        - Single user OU, "ou=users,dc=mycompany,dc=com"

    Is it possible to provide an alias of some kind in OpenLDAP that creates another OU, "itusers", say, and lets me alias the members of that OU somehow? Something like:

        "cn=Jeff Silverman,ou=itusers,dc=mycompany,dc=com"

    is an alias for

        "cn=Jeff Silverman,ou=users,dc=mycompany,dc=com"

    and is NOT a separate, unique user account. Alternatively, should I just create a separate OU and manage it separately? It is a pain, but only 12-15 users would have to be managed that way, with two separate user accounts. But I hate this option - messy, unmanageable, unscalable. You know what I mean. I am open to any options. I've searched and read all over but I can't quite find a directly analogous example. I can't possibly be the only one who's had this problem! Thanks!
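
    One avenue worth investigating before resorting to duplicate accounts - a sketch, not verified against this particular ASA setup: OpenLDAP 2.4 ships a memberof overlay that maintains a memberOf attribute on each user entry as group membership changes, and VPN gear can then match or map on that attribute instead of needing a separate OU. In slapd.conf form (exact placement depends on whether slapd uses slapd.conf or cn=config):

        moduleload memberof.la
        overlay memberof

    With that in place, an ordinary static group (e.g. cn=itusers,... - name hypothetical) stays the single source of truth for who gets VPN access.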

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home; it had been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought that this was my ISP acting up, then I thought it might be one of my 3 switches or some of my cabling. In due order I've tested everything above and found them all to be working as they should. The only factor remaining is my IPCop server. Facts:

        - I've got a 15/15 Mbit line (fiber) and I get ~15 Mbit upload, but only 0.5 Mbit download with the IPCop box as router (ISP router set in bridge mode).
        - If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download.
        - The load on the IPCop box appears to be light, and it used to handle this traffic just fine 2 weeks ago.
        - The memory usage is ~60%; I tried to restart it and test again, and the memory fell to ~50% (5 months of uptime).

    I'm thinking that one of my NICs is busted, but I'm sort of perplexed that this could be the outcome: slow download but full speed upload. Anybody ever seen that happening before? Could it just be one of the NICs that needs to be replaced? Will try that as soon as I can get my hands on a couple of new ones.
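
    Asymmetry like this (one direction fine, the other crawling) is also the classic signature of a duplex mismatch rather than outright NIC death, so before buying hardware it may be worth checking what each interface actually negotiated - a sketch, interface names assumed:

        ethtool eth0     # look for 10Mb/s or Half duplex on the red/green interfaces
        ifconfig eth0    # check the errors/dropped/collisions counters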

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by jao
    On a Windows 2003 server I can nslookup www.google.com, which returns:

        Server: localhost
        Address: 127.0.0.1

        Non-authoritative answer:
        Name: www.l.google.com
        Addresses: 74.125.79.104, 74.125.79.147, 74.125.79.99
        Aliases: www.google.com

    I can then ping 74.125.79.104:

        Pinging 74.125.79.104 with 32 bytes of data:

        Reply from 74.125.79.104: bytes=32 time=16ms TTL=54
        Reply from 74.125.79.104: bytes=32 time=32ms TTL=54
        Reply from 74.125.79.104: bytes=32 time=15ms TTL=54
        Reply from 74.125.79.104: bytes=32 time=15ms TTL=54

        Ping statistics for 74.125.79.104:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 15ms, Maximum = 32ms, Average = 19ms

    But I cannot ping www.google.com:

        Ping request could not find host www.google.com. Please check the name and try again.

    (This one is different from the other question in that this one has a TLD; it is not a local domain.) Update: I am running a DNS server at localhost (127.0.0.1). Even when I change it to use for example OpenDNS, it can still nslookup the hostname and ping the IP address, but not ping the hostname. So what is wrong? Update 2: here is the ipconfig /all result:

        Windows IP Configuration
            Host Name . . . . . . . . . . . . : SERVER
            Primary Dns Suffix  . . . . . . . : NETWORK.local
            Node Type . . . . . . . . . . . . : Unknown
            IP Routing Enabled. . . . . . . . : No
            WINS Proxy Enabled. . . . . . . . : No
            DNS Suffix Search List. . . . . . : NETWORK.local

        Ethernet adapter Local Area Connection:
            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2
            Physical Address. . . . . . . . . : 00-0F-1F-56-3B-AA
            DHCP Enabled. . . . . . . . . . . : No
            IP Address. . . . . . . . . . . . : 192.168.7.2
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . : 192.168.7.1
            DNS Servers . . . . . . . . . . . : 127.0.0.1

    Update 3: Thanks everyone for their help and suggestions. I appreciate that. Ipconfig /flushdns returns:

        Successfully flushed the DNS resolver cache

    Ipconfig /displaydns returns:

        2.7.168.192.in-addr.arpa
        ----------------------------------------
        Record Name . . . . . : 2.7.168.192.in-addr.arpa.
        Record Type . . . . . : 12
        Time To Live  . . . . : 0
        Data Length . . . . . : 4
        Section . . . . . . . : Answer
        PTR Record  . . . . . : webserver.mydomainname.com

        1.0.0.127.in-addr.arpa
        ----------------------------------------
        Record Name . . . . . : 1.0.0.127.in-addr.arpa.
        Record Type . . . . . : 12
        Time To Live  . . . . : 0
        Data Length . . . . . : 4
        Section . . . . . . . : Answer
        PTR Record  . . . . . : localhost

    Update 4: Wireshark shows the following:

        3  11.540542  208.67.220.220  192.168.7.2    DNS   Standard query response A 74.125.79.99 A 74.125.79.104 A 74.125.79.147
        6  42.056794  192.168.7.2     192.168.7.255  NBNS  Name query NB WWW.GOOGLE.COM<00>

    which is weird: when I ping, it sends a packet to 192.168.7.255 instead of asking the DNS server for an address.
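
    The NBNS broadcast in update 4 means ping is falling back to NetBIOS name resolution, i.e. the DNS client path is being skipped or failing before it is tried. Two quick things to rule out, offered as a sketch: a stray hosts-file entry and a stopped DNS Client service:

        type %SystemRoot%\system32\drivers\etc\hosts
        sc query dnscache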

  • s3cmd fails too many times

    - by alfish
    It used to be my favorite backup transport agent, but now I frequently get this result from s3cmd on the very same Ubuntu server/network:

        root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        36864 of 2711541519 0% in 1s 20.95 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.00)
        WARNING: Waiting 3 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        36864 of 2711541519 0% in 1s 23.96 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.01)
        WARNING: Waiting 6 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        28672 of 2711541519 0% in 1s 18.71 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.05)
        WARNING: Waiting 9 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        28672 of 2711541519 0% in 1s 18.86 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.25)
        WARNING: Waiting 12 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        28672 of 2711541519 0% in 1s 15.79 kB/s failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=1.25)
        WARNING: Waiting 15 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
        12288 of 2711541519 0% in 2s 4.78 kB/s failed
        ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file.

    This happens even for files as small as 100MB, so I suppose it's not a size issue. It also happens when I use put with the --acl-private flag (s3cmd version 1.0.1). I'd appreciate it if you could suggest a solution or a lightweight alternative to s3cmd. Thanks.
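
    As a stopgap while the root cause is unknown - a sketch: s3cmd 1.0.1 predates multipart upload support, so one workaround is to split the archive and upload the pieces, reassembling with cat on restore:

        split -b 512m bkup.tgz bkup.tgz.part-
        for f in bkup.tgz.part-*; do s3cmd put "$f" s3://mybucket/; done
        # restore later with: cat bkup.tgz.part-* > bkup.tgz

    Upgrading to an s3cmd release with multipart support (the 1.1.x betas and later) is the longer-term fix, since each part then gets its own retry.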

  • Outlook errors and blank address book?

    - by Chasester
    I have a user with a fresh install of Windows 7 64-bit. Office 2010 was also installed. Everything worked great until we installed Adobe Pro 9 and WordPerfect 12 (for Outlook we are not using Exchange). Then Outlook popped an error saying it could not start Outlook. I then set it to run in compatibility mode and Outlook starts. Then I took compatibility mode off and Outlook continues to start without difficulty. However, her address book went blank (the contacts are all there). In the properties of the contact folder it is checked (and grayed out) to include this folder in the address book, and the Outlook Address Book is listed when I look at the account properties. I tried creating a new profile, to no avail. I tried creating a new profile and creating a new PST file - to no avail. I tried uninstalling Office, removing the folders inside Roaming and Local, and reinstalled Office; got the same "could not start Outlook" error - did the compatibility mode bit and got it to start - but the address book continues to be empty. Has anyone run across this before? I'm thinking that there must be some other preferences folder other than those in Roaming and Local, since her signature remained when I reinstalled Office.
