Search Results

Search found 33182 results on 1328 pages for 'linux port'.


  • LVM Extend... not sure the filesystem

    - by Dan
    I would like to extend my LVM partition. First I ran lvextend -L +100G /dev/server/home. Now I still have to extend the filesystem. The tutorials tell me to use resize2fs, but that only works for ext2 and ext3, and I'm not even sure what filesystem I have... fdisk /dev/server/home/ doesn't work... how do I know what kind of filesystem I have on my LVM partition?
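
    One quick way to check (a sketch; it assumes the volume is mounted at /home): ask the mounted filesystem or read the signature straight off the logical volume.

        # filesystem type of the mounted volume
        df -T /home

        # or read the superblock signature directly from the LV
        blkid /dev/server/home

    blkid prints a TYPE= field regardless of mount state; fdisk only looks at partition tables, which is why it has nothing useful to say about a logical volume.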

    Read the article

  • Failing SSHFS connection drags down the system

    - by skerit
    From time to time my sshfs mount fails. All programs using the mount freeze when it happens; I can't even ls anything or use Nautilus. Is there a way to find out what the cause is and how to handle it? I've noticed regular SSH sessions to the server get their fair share of Write failed: broken pipe disconnects, too. If I wait long enough (and I'm talking about 20-ish minutes here) it will auto-reconnect and things start working again.
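
    A sketch of the mount options commonly used to make the session drop and reconnect quickly instead of hanging (the paths are placeholders; the options are standard sshfs/ssh ones):

        sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
              user@server:/remote/path /mnt/remote

    ServerAliveInterval and ServerAliveCountMax make the underlying ssh detect a dead link within about a minute instead of waiting out long TCP timeouts, and reconnect tells sshfs to re-establish the session rather than leaving every process on the mount wedged.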

    Read the article

  • Debian or CentOS?

    - by Tres
    I am looking at using either Debian or CentOS for a production server and I've heard mixed reviews of each one. I've heard CentOS performs better under load, however I am aware that Debian has a much larger package repository. Personally, I am partial to Debian since I am less familiar with Red Hat distros, but wanted to reach out on Server Fault to see which I really should be using. Any ideas? Thanks!

    Read the article

  • mod_rewrite filename from mod_pagespeed back to normal files

    - by British Sea Turtle
    I am hoping someone can help me with this problem. I am moving to a new server and not using mod_pagespeed any more. However we have lots of external links to images on our site using the strange mod_pagespeed filenames. This is not an issue, but we do not want to have lots of 404 errors. So I have lots of links like the following:

        http://www.domain.com/images/150x150xlink.png.pagespeed.ic.pPXw45HSQm.png
        http://www.domain.com/images/paris_01.gif.pagespeed.ce.vfrkuKUaj0.gif
        http://www.doamin.com/images/1st2.gif.pagespeed.ce.OUg38q6VbZ.gif

    How can I redirect them to:

        http://www.domain.com/images/150x150xlink.png
        http://www.domain.com/images/paris_01.gif
        http://www.doamin.com/images/1st2.gif

    There are thousands of files like this, so I am hoping for a simple solution with mod_rewrite. I tried this but it does not work, so any help would be appreciated:

        RewriteCond %{REQUEST_URI} \.gif\.pagespeed\. [NC]
        RewriteRule ^(.*?\.gif)\..*\.gif$ $1 [NC,L]
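
    A sketch of a single more general rule, assuming every pagespeed name follows the original-name.ext.pagespeed.xx.hash.ext pattern shown above (add R=301 if an external redirect is preferred over an internal rewrite, and a leading slash on the target if the rule lives in the vhost config rather than an .htaccess file):

        RewriteRule ^(.+)\.pagespeed\.[a-z]{2}\.[^.]+\.[a-z]+$ $1 [L]

    The capture group already ends in the original extension (150x150xlink.png, paris_01.gif, ...), so one rule covers .png, .gif and anything else mod_pagespeed rewrote.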

    Read the article

  • RPM removal does not remove delivered dirs and leaves garbage

    - by Jim
    I deliver an application via an RPM. This application delivers various directories and files; e.g. a file structure is copied under /opt/internal/com. I was expecting that rpm -e would remove the whole file structure delivered under /opt/internal/com, but it does not. There are directories in the file structure that are non-empty. Is this the reason? But these (non-empty) directories were created by the RPM installation, so I would expect them to be "owned" by RPM and removed automatically. Is this wrong? Am I supposed to remove them manually?
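
    Two things usually explain this, and a sketch of the spec-file side covers both (paths taken from the question): rpm only removes paths listed in %files, and it will not remove a directory that still contains files it does not own, such as logs or anything created at runtime after installation.

        %files
        %dir /opt/internal/com
        /opt/internal/com/*

        %postun
        # $1 is 0 on the final erase; sweep runtime leftovers the package never owned
        if [ "$1" -eq 0 ]; then
            rm -rf /opt/internal/com
        fi

    Listing the directories (or the whole tree) in %files makes rpm own and remove them; the %postun scriptlet is the usual fallback for content the application generated on its own.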

    Read the article

  • Creating a fallback error page for nginx when root directory does not exist

    - by Ruirize
    I have set up an any-domain config on my nginx server to reduce the amount of work needed when I open a new site/domain. This config allows me to simply create a folder in /usr/share/nginx/sites/ with the name of the domain/subdomain and then it just works.™

        server {
            # Catch all domains starting with only "www." and boot them to non "www." domain.
            listen 80;
            server_name ~^www\.(.*)$;
            return 301 $scheme://$1$request_uri;
        }

        server {
            # Catch all domains that do not start with "www."
            listen 80;
            server_name ~^(?!www\.).+;
            client_max_body_size 20M;

            # Send all requests to the appropriate host
            root /usr/share/nginx/sites/$host;
            index index.html index.htm index.php;

            location / {
                try_files $uri $uri/ =404;
            }

            recursive_error_pages on;
            error_page 400 /errorpages/error.php?e=400&u=$uri&h=$host&s=$scheme;
            error_page 401 /errorpages/error.php?e=401&u=$uri&h=$host&s=$scheme;
            error_page 403 /errorpages/error.php?e=403&u=$uri&h=$host&s=$scheme;
            error_page 404 /errorpages/error.php?e=404&u=$uri&h=$host&s=$scheme;
            error_page 418 /errorpages/error.php?e=418&u=$uri&h=$host&s=$scheme;
            error_page 500 /errorpages/error.php?e=500&u=$uri&h=$host&s=$scheme;
            error_page 501 /errorpages/error.php?e=501&u=$uri&h=$host&s=$scheme;
            error_page 503 /errorpages/error.php?e=503&u=$uri&h=$host&s=$scheme;
            error_page 504 /errorpages/error.php?e=504&u=$uri&h=$host&s=$scheme;

            location ~ \.(php|html) {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_intercept_errors on;
            }
        }

    However, there is one issue I'd like to resolve: when a request comes in for a domain that doesn't have a folder in the sites directory, nginx throws its own internal 500 error page, because the /errorpages/error.php it wants to redirect to doesn't exist either. How can I create a fallback error page that will catch these failed requests?
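
    One approach to sketch (the fallback directory and file name are hypothetical): test for the per-host folder up front and short-circuit to a static page that lives at a fixed path, outside the $host-dependent root, so it still resolves when the folder is missing.

        # inside the catch-all server block
        if (!-d /usr/share/nginx/sites/$host) {
            rewrite ^ /no_such_site.html last;
        }

        location = /no_such_site.html {
            root /usr/share/nginx/fallback;   # fixed directory, independent of $host
        }

    The exact-match location takes precedence over the ~ \.(php|html) regex, so the page is served as a plain file rather than being handed to FastCGI, and unknown hosts end on a readable page instead of nginx's bare 500.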

    Read the article

  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD related. So, my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However it does not seem to be working, which leaves me to think that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };

        zone "." in {
            type hint;
            file "db.cache";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };

        // forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
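
    One small change worth trying (a sketch; the zone name and address come from the config above): make the forward zone exclusive, so that when the forward attempt fails BIND does not fall back to iterating from the root servers, where .local names can never resolve.

        zone "mydomain.local" {
            type forward;
            forward only;
            forwarders { 192.168.1.21; };
        };

    After an rndc reload, something like dig @192.168.0.14 dc1.mydomain.local shows quickly whether the query is now being handed to the domain controller.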

    Read the article

  • Table 'mysql.host' doesn't exist

    - by eriktm
        100913 10:21:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        /usr/local/mysql/libexec/mysqld: Table 'mysql.plugin' doesn't exist
        100913 10:21:29 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        100913 10:21:29 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist
        100913 10:21:29 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    This is the output in the log file for mysqld that I get when I try to start mysqld with the mysqld_safe command. I tried to run mysql_upgrade to correct the first error, but that command seems to require the server to be running, which is my original problem. Next, it says that the table mysql.host does not exist. I was unable to figure out what causes this.
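
    Missing privilege tables usually mean the data directory was never initialized (or the server is pointed at the wrong one). A sketch of the usual recovery for a 5.x install under /usr/local/mysql; the exact location of mysql_install_db (bin/ or scripts/) depends on how it was built:

        # create the mysql.* system tables in the configured datadir
        /usr/local/mysql/bin/mysql_install_db --user=mysql --datadir=/var/lib/mysql
        chown -R mysql:mysql /var/lib/mysql

        # then start again and re-run mysql_upgrade once the server is up
        /usr/local/mysql/bin/mysqld_safe --datadir=/var/lib/mysql &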

    Read the article

  • monitor power and lock screen (Ubuntu Lucid)

    - by xsznix
    Hi, I'm trying to get my screen to turn off whenever I lock my screen. I know that in Power Management there's an option to turn off the screen after a set amount of time, and I know about xset dpms force off, but the former doesn't let me turn off the screen from the logout menu, and the latter only turns the screen off for a short time (a minute or so; the screen just turns back on by itself). Is there a script I can modify to change what happens when "Lock screen" is selected from the logout menu, or is there a script I can add to the panel to lock the screen and then turn the monitor off (turning it back on when I shake the mouse or something)? Thanks.
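
    A sketch of the kind of wrapper that can be added to the panel as a custom launcher instead of the stock "Lock screen" item (the command names are the standard GNOME 2 / Lucid ones):

        #!/bin/sh
        # lock the session, give the locker a moment to take over,
        # then cut power to the monitor; mouse or key activity wakes it
        gnome-screensaver-command --lock
        sleep 1
        xset dpms force off

    The short sleep matters: if xset runs before the screensaver has grabbed the display, the lock action itself can wake the monitor right back up.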

    Read the article

  • Verify that a cron job has completed

    - by skylarking
    Is there a command that can be run to verify that a user's cron job has run successfully? The platform is Ubuntu 8.04 LTS. I have scripts in /home/useraccount/bin/. Running crontab -l while logged in as that user results in:

        # m h dom mon dow command
        @hourly /home/useraccount/bin/script_1
        @hourly /home/locateruser/bin/script_2

    I realize the scripts could send email or write to a log with a timestamp, but I'm wondering if there is just a way to verify from the command line that they ran.
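
    On a stock Ubuntu 8.04 install cron already logs every invocation through syslog, so a check needs no changes to the scripts; a sketch:

        # recent cron activity for this user's jobs
        grep CRON /var/log/syslog | grep useraccount | tail

    That confirms the job was started at the expected time; whether it finished successfully still depends on the script's own exit status, which cron only surfaces via mail or whatever logging the script does itself.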

    Read the article

  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP Simplesave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it. The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't suppose any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition. My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size — the first 18GB was compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem — but it was very slow on my admittedly somewhat derelict Pentium M processor. The time to do that first 18GB was about 30 minutes. So I've resorted to dumping the disk state and partition data separately. I've dumped the partition state with

        sfdisk -d /dev/sdb > sfdisk.-d.out

    I've also created a compressed image of the NTFS partition (the only one on the disk) with

        ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz

    Is there anything else I should do to ensure that I can restore the precise original state of the drive?
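
    A sketch of the matching restore path (same device names; worth rehearsing on a scratch disk before relying on it):

        # put the partition table back
        sfdisk /dev/sdb < sfdisk.-d.out

        # restore the NTFS contents from the compressed image (the trailing - reads from stdin)
        gunzip -c ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -

    If there is any chance the vendor relies on the MBR boot code as well as the partition table, saving the first sector alongside the other two files covers it: dd if=/dev/sdb of=mbr.bin bs=512 count=1.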

    Read the article

  • What causes high CPU usage on the server during file upload

    - by bosiang
    When I try to upload a huge file (approx. 2GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction. Here is the result of the "top" command while uploading 4 files (18MB, 38MB, 60MB, 33MB):

        1904 apache    20   0 33504 5740 1952 R 28.3  0.2  0:02.19 httpd
        1905 apache    20   0 33504 5740 1952 R 28.3  0.2  0:01.99 httpd
        1903 apache    20   0 33232 6968 3060 R 28.0  0.2  0:01.98 httpd
        1910 apache    20   0 33240 6020 2248 S 11.5  0.2  0:02.85 httpd
        2133 root      20   0  2656 1124  896 R  1.6  0.0  0:00.71 top
           1 root      20   0  2864 1404 1188 S  0.0  0.0  0:03.99 init

    Here is the code for chunking; even though I don't use this code (just a simple file upload), CPU usage is still that high:

        function sendRequest() {
            // clean the screen
            // bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 10MB chunk sizes.
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    if (blob.webkitSlice) { // for Google Chrome
                        var chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) { // for Mozilla Firefox
                        var chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

    Read the article

  • How to configure a static wildcard subdomain with dnsmasq.

    - by Prody
    I have a network behind a NAT with a few machines:

        router - NAT, dnsmasq, forwarding - directly connected to the internet
        server - runs ssh, www and some other stuff
        clients - do stuff on server

    I also have mydomain.com. server.mydomain.com points to my connection's IP (single IP), which is the router, which forwards ports to server. Server has an httpd running, which serves different sites based on vhosts, so I have site1.server.mydomain.com, site2... The problem is that all the traffic is going through the router, and when I check logs I always see the router's IP for everything (so it's hard to see who is running the script with the while(1)). I would just ServerAlias site1.server.local, but most of the sites have a root URL saved somewhere on top of which other URLs are built, so I can't do that. The solution for me would be telling dnsmasq somehow to answer *.mydomain.com with server's IP. Is this possible somehow?
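
    dnsmasq can do this with a single line; a sketch for /etc/dnsmasq.conf on the router (192.168.1.10 is a placeholder for the server's LAN address):

        # answer mydomain.com and every name under it with the internal server address
        address=/mydomain.com/192.168.1.10

    address=/domain/ip matches the domain and all of its subdomains, so site1.server.mydomain.com, site2.server.mydomain.com and so on resolve straight to the server for LAN clients, the vhost names keep working, and the traffic stops hairpinning through the router (which is also why the router's IP shows up in every log line today).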

    Read the article

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:

        rsync -ax \
            --partial --delete --delete-excluded --inplace \
            --exclude-from=/tmp/temp_excludes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/2012-06-25

    This works very well as long as I start the process from my normal user account. But as soon as I start the process using sudo it behaves erratically, meaning that rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. Well, that didn't work either. So, the question is, how can I run rsync using sudo? Whereas the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.
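
    One way to see why the files are being treated as changed under sudo (a sketch reusing the same arguments; -n makes it a dry run, -i prints the per-file reason):

        sudo rsync -axni \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/test-run | head -n 40

    If the itemized output points at ownership or permission differences, that is the usual explanation: -a running as root preserves owners and groups that the earlier per-user runs could not, and rsync will not hard-link against a --link-dest file whose attributes differ from what it is about to write.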

    Read the article

  • Unified inbox shows twice on Thunderbird

    - by That Help Vampire Guy
    I'm using Thunderbird 24. If I show folders in Unified mode, my inbox folder shows up twice. If I choose the "All" folders mode, I see only one inbox. The issue started when I was using Ubuntu 12.04, but now I'm on Fedora 19 (I migrated the folders in /home). I do remember it not being duplicated, but then it started while still on Ubuntu. I noticed it when using the Conversations add-on, but I had previously used the add-on without this happening; I have disabled it and the problem persists. What I have tried: if I close Thunderbird and rename the .thunderbird folder in my /home to something else, a new config profile is created, I have to set up everything again, and then it works as expected (screenshots: before resetting, Unified vs. All Folders; after resetting, Unified vs. All Folders). I'm trying to avoid resetting the profile and creating a fresh new one, because the server -- MS Exchange -- doesn't support IMAP labels, so I'd lose all the tags on my messages, and I have organized everything based on tags instead of folders.

    Read the article

  • Backup all plesk MySQL Databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restoring pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, so I get the password from /etc/psa/.psa.shadow. Here is the code that I use to back up all my databases to a single file:

        mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2

    I found some scripts on the web that back up each database to an individual file, but I don't know how to make them work for my situation. Here is an example script:

        for db in $(mysql -e 'show databases' -s --skip-column-names); do
            mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz";
        done

    Can someone help me make the script above work for my situation? Requirements: back up each database to an individual file, using the Plesk password location.
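
    A sketch combining the two snippets above (untested; the /root/backups directory is a placeholder that has to exist):

        #!/bin/bash
        # read the Plesk admin password once, then dump each database to its own file
        PASS=$(cat /etc/psa/.psa.shadow)

        for db in $(mysql -uadmin -p"$PASS" -e 'show databases' -s --skip-column-names); do
            mysqldump -uadmin -p"$PASS" "$db" | \
                bzip2 -c > "/root/backups/$db-$(date +%d.%m.%Y).sql.bz2"
        done

    information_schema shows up in that list and does not need dumping; a simple [ "$db" = "information_schema" ] && continue at the top of the loop skips it.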

    Read the article

  • Enter response once prompt returns?

    - by mjb
    It's neither a secure idea nor one I'd recommend elsewhere, but I have a situation where it occasionally takes a while for my Ansible ad-hoc command to respond. I'd love to pipe, pass an argument, or do whatever is needed to push the required text into the prompt, so I can walk away and know it will finish. For example:

        $ ansible all -m shell -a "reboot" --ask-pass
        Password:

        blah blah blah it worked

    I'd love to send an argument or << or something to get the password in. Is that possible?
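
    With the insecurity already conceded, the usual trick is to hand Ansible the password as a variable instead of answering the prompt; a sketch (the password value is obviously a placeholder):

        ansible all -m shell -a "reboot" -e "ansible_ssh_pass=SuperSecret"

    ansible_ssh_pass is the same value --ask-pass would have prompted for, so the run proceeds unattended; it can also live in the inventory or a group_vars file, with the same caveat that it then sits on disk in plain text.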

    Read the article

  • rsync not writing files

    - by Cyrcle
    I'm trying to set up rsync to back up a remote directory to my local drive. I cd to the directory that I want to pull the files to, then I enter:

        rsync -vrtW [email protected]:~/public_html

    I enter the password and it starts running. I get all the files listed, but none of them actually transfer. What am I missing? Thanks
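
    Worth checking first (a sketch of the corrected command): with only a source argument rsync just lists the remote files rather than copying them, which matches the symptom exactly. Adding the destination, here the current directory, makes it transfer:

        rsync -vrtW [email protected]:~/public_html/ .

    The trailing slash copies the contents of public_html into the current directory; leaving it off would create a public_html subdirectory here instead.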

    Read the article

  • Creating a link to name changing directory

    - by groove1534
    I have Ubuntu 12.04 installed using Wubi alongside Windows 7. I'm trying to create a link to the "My Documents" directory, which is located on my C drive: C:\Users\Myuser\My Documents\. Since Ubuntu is installed on D:\, which is the "host", my C drive is accessible via /media/some_changing_hex. This hex changes each time I restart my machine. So I need, somehow, to create a link that uses a regex, OR a link that somehow gets the first (in this case, only) subdirectory in /media (something like all_subdirectories[0]). How do I do that?
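
    A more robust route than chasing the changing name is to mount the C: partition at a fixed location; a sketch (the UUID is a placeholder to be filled in from blkid output, and the mount point is arbitrary):

        # find the Windows partition's UUID
        sudo blkid

        # /etc/fstab line giving it a permanent mount point
        UUID=XXXX-XXXX  /mnt/windows-c  ntfs-3g  defaults,windows_names  0  0

        # the link then never moves
        ln -s "/mnt/windows-c/Users/Myuser/My Documents" ~/my-documents

    For the "first subdirectory of /media" idea, a link can also be refreshed at each login with ln -sfn /media/*/Users/Myuser/"My Documents" ~/my-documents, which works as long as that wildcard only ever matches the one mount.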

    Read the article

  • Access Derby for CDP Server

    - by Skudd
    I am working on a project that requires accessing the Derby database behind a CDP Backup Server. From what limited research I've been able to complete, I have found that it is possible to access Derby databases over TCP, but I'm at a complete loss for this. I'm looking to connect via PHP eventually, but first I need to know if this is at all possible with an out-of-the-box CDP server. Answers are, as always, appreciated. Thanks!

    Read the article

  • Can't ping IP over bridge

    - by tmn29a
    I'm unable to ping another host over a bridge I created, and I can't see the error. It's a remote machine running Debian stable with some backports, on which I want to set up DHCP on the new subnet 172.30.xxx.xxx to be used for KVM guests.

    ifconfig:

        bond0     Link encap:Ethernet  HWaddr e4:11:5b:d4:94:30
                  inet addr:10.54.2.84  Bcast:10.54.2.127  Mask:255.255.255.192
                  inet6 addr: fe80::e611:5bff:fed4:9430/64 Scope:Link
                  UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
                  RX packets:34277 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:18379 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2638709 (2.5 MiB)  TX bytes:2887894 (2.7 MiB)

        br0       Link encap:Ethernet  HWaddr f2:fc:4d:7f:15:f0
                  inet addr:172.30.254.66  Bcast:172.30.254.127  Mask:255.255.255.192
                  inet6 addr: fe80::f0fc:4dff:fe7f:15f0/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:252 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:10800 (10.5 KiB)

    Pings:

        ping -I br0 172.30.xxx.65
        PING 172.30.xxx.65 (172.30.xxx.65) from 172.30.xxx.66 br0: 56(84) bytes of data.
        --- 172.30.xxx.65 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2017ms

        ping -I bond0 172.30.254.65
        PING 172.30.xxx.65 (172.30.xxx.65) from 10.54.2.84 bond0: 56(84) bytes of data.
        64 bytes from 172.30.x.65: icmp_req=1 ttl=64 time=0.599 ms
        64 bytes from 172.30.x.65: icmp_req=2 ttl=64 time=0.575 ms
        64 bytes from 172.30.x.65: icmp_req=3 ttl=64 time=0.565 ms
        --- 172.30.x.65 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 1999ms
        rtt min/avg/max/mdev = 0.565/0.579/0.599/0.031 ms

    Route:

        Destination     Gateway         Genmask          Flags Metric Ref Use Iface
        172.30.x.64     *               255.255.255.192  U     0      0   0   br0
        10.54.x.64      *               255.255.255.192  U     0      0   0   bond0
        default         10.54.x.65      0.0.0.0          UG    0      0   0   bond0
        default         172.30.x.65     0.0.0.0          UG    0      0   0   br0

    The interfaces, from cat /etc/network/interfaces:

        auto lo br0
        iface lo inet loopback

        # Bonding Interface
        auto bond0
        iface bond0 inet static
            address 10.54.x.84
            netmask 255.255.255.192
            network 10.54.x.64
            gateway 10.54.x.65
            slaves eth0 eth1
            bond_mode active-backup
            bond_miimon 100
            bond_downdelay 200
            bond_updelay 200

        iface br0 inet static
            bridge_ports bond0
            address 172.30.x.66
            broadcast 172.30.x.127
            netmask 255.255.x.192
            gateway 172.30.x.65
            bridge_maxwait 0

    If you need more info please ask. Thanks for your help!

    Read the article

  • Is there an email client optimized for screen readers and accessibility?

    - by Adolfo Fitoria
    Hi. I'm currently working on a project to help visually impaired people. We're planning to use the Orca screen reader for GNOME. Everything is going great, but there is a problem with email web clients: the most popular ones (Gmail, Yahoo, Hotmail) are not optimized for screen readers. Is there some kind of simple email client optimized for this? It needs to be very simple and straightforward, and it should support multiple users too.

    Read the article

  • Problem installing CanonMF5880dn

    - by Paul
    Just got a Canon MF5880dn and cannot print to it from SUSE 11.1.

        MacBook prints w/o issue
        ping 192.168.1.103 - no problem
        CUPS sees it as Canon MF5880/MF5840 PCL at URI socket://192.168.1.103:9100
        CUPS test print appears to submit and complete the job but no action from the printer
        YaST also seems to install the printer correctly
        CQue2 also seems to install the printer correctly
        all attempts to print yield the same result: SUSE indicates the job processed correctly and completely but no printing happens
        firewall is off
        http://192.168.1.103 in Firefox gives me the printer config menus correctly

    What have I failed to do?

    Read the article

  • lilo.conf questions

    - by Jack
    I use LILO and have two different kernels. One is newer, and I use KMS with it. What I would like to do is set vga=xxx for only one of the kernels. Is this possible? I would also like to be able to put into lilo.conf the options that I currently pass on the command line, but I'm unsure how to do this.
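
    Both of those live in the per-image stanzas of lilo.conf; a sketch (the paths, labels and mode number are placeholders):

        image=/boot/vmlinuz-old
            label=oldkernel
            vga=791                  # framebuffer mode for this entry only
            append="quiet"
            read-only

        image=/boot/vmlinuz-kms
            label=kmskernel
            append="i915.modeset=1"  # the KMS kernel gets no vga= line
            read-only

    vga= and append= inside an image section apply only to that entry, and append holds whatever would otherwise be typed on the kernel command line; /sbin/lilo has to be rerun after every edit for the changes to take effect.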

    Read the article

  • How to setup IP alias on bridged interface in Ubuntu

    - by Anonymouslemming
    How do I set up an IP alias on a bridge (br0) device on Ubuntu? If I wait for br0 to come up and then run

        /sbin/ifconfig br0:0 192.168.10.1 netmask 255.255.255.0

    then it works fine. If however I add the following to my /etc/network/interfaces file, it does not work and the network fails to start:

        auto br0:0
        iface br0:0 inet static
            address 192.168.10.1
            netmask 255.255.255.0

    At the moment I have a script in /etc/network/if-up.d/bridge_alias that does this as follows:

        #!/bin/bash
        if [ "${LOGICAL}" == "br0" ] && [ "${PHASE}" = "post-up" ]; then
            echo -n "Starting br0:0 ... "
            /sbin/ifconfig br0:0 192.168.10.2 netmask 255.255.255.0
            echo "Done!"
        fi

    What is the right way of doing this, though, using just the OS network config files?
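
    One way that stays entirely inside /etc/network/interfaces (a sketch; the address is the one from the question) is to hang the alias off the existing br0 stanza rather than declaring br0:0 as a separate interface, so ifupdown never tries to bring up a second bridge:

        iface br0 inet static
            # ... existing address / bridge_ports lines stay as they are ...
            post-up ip addr add 192.168.10.1/24 dev br0 label br0:0
            pre-down ip addr del 192.168.10.1/24 dev br0 label br0:0

    The label keeps the address visible as br0:0 in ifconfig output, and because it is added in post-up it only appears once the bridge is actually up, the same ordering the manual ifconfig command relies on.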

    Read the article
