Search Results

Search found 33182 results on 1328 pages for 'linux port'.

  • What does directory permission 'S' mean? (uppercase 'S', not lowercase 's')

    - by Howard Guo
    I downloaded Eclipse, uncompressed it, did a few other things, and all of a sudden I noticed this interesting behaviour:

        ^_^ ~/Downloads > sudo chmod 0000 eclipse/
        ^_^ ~/Downloads > stat eclipse/
          File: 'eclipse/'
          Size: 4096    Blocks: 8    IO Block: 4096    directory
          Device: 801h/2049d    Inode: 529725    Links: 9
          Access: (2000/d-----S---)  Uid: ( 0/ root)  Gid: ( 0/ root)
          Access: 2012-11-22 19:54:57.752017352 +1100
          Modify: 2012-09-20 18:16:26.000000000 +1000
          Change: 2012-11-22 20:07:49.354016510 +1100
          Birth: -
        ^_^ ~/Downloads > sudo chmod 0755 eclipse/
        ^_^ ~/Downloads > stat eclipse/
          File: 'eclipse/'
          Size: 4096    Blocks: 8    IO Block: 4096    directory
          Device: 801h/2049d    Inode: 529725    Links: 9
          Access: (2755/drwxr-sr-x)  Uid: ( 0/ root)  Gid: ( 0/ root)
          Access: 2012-11-22 19:54:57.752017352 +1100
          Modify: 2012-09-20 18:16:26.000000000 +1000
          Change: 2012-11-22 20:08:19.042016478 +1100
          Birth: -

    What does the 'S' permission mean on a directory? And why doesn't chmod let me get rid of it? Thanks.
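
    For reference, a short sketch of what seems to be going on (standard chmod behaviour, not from the thread):

        # Uppercase 'S' = setgid bit set while group execute is unset,
        # which is exactly what mode 2000 (d-----S---) shows.
        # GNU chmod deliberately leaves a directory's setgid bit alone for
        # numeric modes of four or fewer digits, so 'chmod 0755' keeps it.

        # Clear it symbolically:
        sudo chmod g-s eclipse/

        # ...or numerically with an explicit fifth digit:
        sudo chmod 00755 eclipse/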

  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP Simplesave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it.

    The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't assume any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition.

    My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size (the first 18GB compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem), but it was very slow on my admittedly somewhat derelict Pentium M processor: about 30 minutes for that first 18GB.

    So I've resorted to dumping the disk state and partition data separately. I've dumped the partition table with

        sfdisk -d /dev/sdb > sfdisk.-d.out

    I've also created a compressed image of the NTFS partition (the only one on the disk) with

        ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz

    Is there anything else I should do to ensure that I can restore the precise original state of the drive?
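
    For completeness, a minimal sketch of the corresponding restore, assuming the device names stay the same (they may differ on another machine), plus one extra item: sfdisk -d does not capture the MBR boot code, so the first sector may be worth saving too:

        # Restore the partition table saved by sfdisk
        sfdisk /dev/sdb < sfdisk.-d.out

        # Stream the NTFS image back onto the recreated partition
        gunzip -c ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -

        # sfdisk -d does not save the MBR boot code; keep the first sector too
        dd if=/dev/sdb of=mbr.bin bs=512 count=1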

  • In Ubuntu I make changes to php.ini but nothing happens

    - by MrAn3
    Hi, Apache with PHP works well, but none of the changes I make in php.ini take effect. I've even deleted all the contents of the file, then restarted Apache and run phpinfo(), and surprisingly everything continues working well. The file I'm editing is the one that appears in phpinfo() as "Loaded Configuration File" (/etc/php5/apache2/php.ini). P.S. I'm running Ubuntu 9.04 and PHP 5.2. Thanks in advance.

    More details: I'm restarting with sudo /etc/init.d/apache2 restart; I've also tried sudo /etc/init.d/apache2 stop and then start. On restarting I get:

        Restarting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        ... waiting
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [ OK ]

    "which php" did not produce any results. My installation of PHP was done using Synaptic Package Manager, choosing "Mark Packages by task" and then LAMP server. I don't have any clue of what to do...
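
    A few checks that might narrow this down (a diagnostic sketch, not from the original post; paths are the Ubuntu 9.04 defaults):

        # Which ini does the CLI binary report, if one is installed at all?
        # Note the CLI reads /etc/php5/cli/php.ini, not the apache2 one
        php -i | grep 'Loaded Configuration'

        # Make sure only one Apache instance is answering requests
        ps aux | grep -E 'apache2|httpd'

        # Then make one unmistakable change and restart
        echo 'error_reporting = E_ALL' | sudo tee -a /etc/php5/apache2/php.ini
        sudo /etc/init.d/apache2 restart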

  • Is it reasonable to make a RAID-1 array with a ram disk and a physical disk to maximize read performance and protect data?

    - by Petr Pudlák
    In one of the answers on SO (I forgot which one) I've seen a suggestion to make a RAID-1 array composed of a RAM disk and a physical partition. By adding the physical partition with --write-mostly and enabling --write-behind the system should read everything instantly from the RAM disk but still save all data to the physical partition so that the data are preserved and the RAID array can be assembled again after reboot. Is such a setup reasonable? Will it perform any better in some scenario than having just the physical partition and perhaps tweaking the kernel to favor disk cache (swappiness and vfs_cache_pressure)?
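
    For concreteness, a minimal sketch of the described array, assuming hypothetical devices /dev/ram0 (the RAM disk) and /dev/sda3 (the backing partition); note that --write-behind requires a write-intent bitmap:

        # RAM disk listed first; the physical partition is marked write-mostly
        # so reads come from RAM, and write-behind lets its writes lag
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            --bitmap=internal --write-behind=256 \
            /dev/ram0 --write-mostly /dev/sda3

        # After a reboot the RAM half is gone; re-add it to resync from disk
        sudo mdadm /dev/md0 --add /dev/ram0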

  • Do you lose everything when you have a hard disk failure in a multi-hard disk LVM that does NOT use RAID?

    - by user72630
    I'm debating using LVM for a media/file server because I would like to combine multiple physical hard disks into one volume. I do not wish to use any RAID in my LVM, so my question is: if one of the multiple hard disks in my volume were to go down, would I lose all my data, or only the data that was stored on that individual disk? Also, if I were to lose just the data on the individual disk, would it be as simple as replacing that disk and restoring what was on it from a backup? Thanks everyone.
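
    For reference, a minimal sketch of the kind of linear, non-RAID LVM setup in question, with hypothetical device names:

        # Turn two whole disks into physical volumes
        sudo pvcreate /dev/sdb /dev/sdc

        # Combine them into one volume group, then one big logical volume
        sudo vgcreate media /dev/sdb /dev/sdc
        sudo lvcreate -l 100%FREE -n files media
        sudo mkfs.ext4 /dev/media/files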

  • Package temperature above threshold, cpu clock throttled

    - by drN
    I am running 64-bit Ubuntu 11.10 on an i7 with 8 GB of RAM. I thought of putting this on askubuntu.com but decided that maybe the question has a much broader appeal. I have the following error messages popping up when I run math simulations:

        CPUn: Core temperature above threshold, cpu clock throttled (total events = xxxxxxx)
        CPUn: Package temperature above threshold, cpu clock throttled (total events = xxxxxxx)

    I realize that this is a hardware warning message (a machine check exception, correct me if I am wrong). How do I turn these messages off? Since it doesn't seem to have a detrimental effect on my calculations or (presumably) my computer, I'd rather not have hundreds of these messages cluttering up my virtual console screen.
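
    One way to keep warnings like these off the console while still letting them reach the logs (a sketch; the temperature thresholds themselves are a hardware matter):

        # Only emergency-level kernel messages go to the console from now on
        sudo dmesg -n 1

        # Or make it persistent: the first field of kernel.printk is the
        # console loglevel
        echo 'kernel.printk = 1 4 1 7' | sudo tee -a /etc/sysctl.conf
        sudo sysctl -p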

  • Verify that a cron job has completed

    - by skylarking
    Is there a command that can be run to verify that a user's cron job has run successfully? Platform is Ubuntu 8.04 LTS. I have scripts in /home/useraccount/bin/. Running crontab -l while logged in as that user results in:

        # m h dom mon dow command
        @hourly /home/useraccount/bin/script_1
        @hourly /home/locateruser/bin/script_2

    I realize the scripts could send email or write to a log with a timestamp, but I'm wondering if there is just a way to verify from the command line that they ran.
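
    On Ubuntu, cron reports each invocation to syslog, so something along these lines may be enough (a sketch; the exact format depends on the syslog configuration):

        # Show recent cron activity, including the command line that was run
        grep CRON /var/log/syslog | tail

        # Narrow it down to one user's jobs
        grep CRON /var/log/syslog | grep useraccount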

  • Explanation of a command to uppercase the first letter of a file name

    - by hazielquake
    Hi, I'm trying to learn to rename files with the command line, and after browsing around a lot of pages I finally found a command that uppercases the first letter of a file name, but the problem is that I want to understand the meaning of each part. The command is:

        for i in *; do new=`echo "$i" | sed -e 's/^./\U&/'`; mv "$i" "$new"; done

    I understand the 'for', kinda... but not the 'echo' or the backticks '`', and especially not the sed command. If someone has a little patience to explain the meaning of each thing, that'd be awesome! Thanks!
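
    Not part of the original question, but here is the same one-liner spread out with comments (note that \U is a GNU sed extension):

        for i in *; do                         # loop over every name in the current dir
            new=`echo "$i" | sed 's/^./\U&/'`  # backticks capture the command's output;
                                               # sed: ^. matches the first character,
                                               # & recalls the match, \U uppercases it
            mv "$i" "$new"                     # rename the file to the new name
        done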

  • Munin "Available entropy" when using address space layout randomization

    - by clawspoon
    Having just configured Munin for statistics logging on my Gentoo server (hardened profile), I am noticing that my "available entropy" is consistently in the 200-300 range. This seems way too low, so I checked it manually:

        $ cat /proc/sys/kernel/random/entropy_avail
        3544

    Odd. Consistently very low values in Munin, yet practically filled up when checking manually. After thinking about the problem for a while, I came to the conclusion that the problem is probably that I'm using address space layout randomization, which consumes entropy when running commands/programs. Since Munin runs a whole slew of programs, all the entropy is used up, and Munin then measures how much is left, resulting in the low values. Does anyone have any experience with this? How can this be avoided?
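
    A quick way to confirm the theory (a diagnostic sketch, not from the thread): watch the pool while munin-node's five-minute run fires and see whether the dips line up:

        # Sample the entropy pool every second and watch for dips
        # when the munin-node plugins run
        watch -n 1 cat /proc/sys/kernel/random/entropy_avail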

  • LAMP Stack Version Help -- Is there a website or version tracker source to help suggest the right versions of each part of a platform stack?

    - by Chris Adragna
    Taken singly, it's easy to research versions and compatibility. Version information is readily available for each single part of a platform stack, such as MySQL. You can find out the latest version, the stable version, and sometimes even the percentage of people adopting it by version (personally, I like seeing numbers on adoption rates). However, when trying to find the best possible mix of versions, I have a harder time. For example: "if you're using MySQL 5.5, you'll need PHP version XX or higher." It gets even more difficult when you throw higher-level platforms into the mix, such as Drupal, Joomla, etc. I do consider "wizard"-like installers to be beneficial, such as the Bitnami installers. However, I always wonder if those solutions cater to the least common denominator, trying to be all things to too many people, and as such I think I'd do better to install things on my own. Such solutions also seem slower than necessary to adopt new versions, I suspect. Is there a website or tool that consolidates versioning data in order to help a webmaster choose which versions to deploy, or which upgrades to install, in consideration of all the other parts of the stack?

  • Sound plays on headphones and speakers with Lenovo ThinkPad L512 + Ubuntu 10

    - by Oscar Godson
    The only thing really missing from this install is this issue with the sound. I've searched all over the forums, and I found one thing where you get the model and codecs and write them to a config file, but I can't seem to find what my "model" is, because none of the postings mention Lenovo laptops. Here is the command they all asked for:

        $ cat /proc/asound/card0/codec#* | grep Codec
        Codec: Realtek ALC269
        Codec: Intel G45 DEVIBX

    With that info, how do I get the model, and how do I get my speakers to stop playing when headphones are plugged in? Also, I don't have any software like PulseAudio installed, so it's not that. Thanks so much to whoever can answer this... the Ubuntu forums are nearly useless; I've never gotten a correct answer back on that site.
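
    The config file those postings refer to is usually an ALSA module option. A sketch of the approach, where the model value is only a hypothetical guess to try; the valid names for the ALC269 are listed in the kernel's HD-Audio-Models.txt documentation, and yours may differ:

        # /etc/modprobe.d/alsa-base.conf
        # Pick a quirk table entry for the ALC269 codec (the value below is a
        # guess; consult HD-Audio-Models.txt for the real options)
        options snd-hda-intel model=thinkpad

        # Reload ALSA so the option takes effect (or reboot)
        sudo alsa force-reload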

  • What causes high CPU usage on the server during file upload?

    - by bosiang
    When I try to upload a huge file (approx 2GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction.

    Here is the result of the "top" command while uploading 4 files (18MB, 38MB, 60MB, 33MB):

        PID   USER    PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+    COMMAND
        1904  apache  20   0  33504  5740  1952  R  28.3   0.2  0:02.19  httpd
        1905  apache  20   0  33504  5740  1952  R  28.3   0.2  0:01.99  httpd
        1903  apache  20   0  33232  6968  3060  R  28.0   0.2  0:01.98  httpd
        1910  apache  20   0  33240  6020  2248  S  11.5   0.2  0:02.85  httpd
        2133  root    20   0   2656  1124   896  R   1.6   0.0  0:00.71  top
        1     root    20   0   2864  1404  1188  S   0.0   0.0  0:03.99  init

    Here is the chunking code; even though I don't use it (just a simple file upload), it still causes the same high CPU usage:

        function sendRequest() {
            // clean the screen
            // bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100MB chunks
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                var totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    var chunk;
                    if (blob.webkitSlice) {        // for Google Chrome
                        chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) {    // for Mozilla Firefox
                        chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

  • Configuring sendmail to forward mail for a specific domain to a specific mail server without using MX records

    - by aHunter
    I am new to sendmail and would like to configure it to forward all mail for a specific email address to another internal mail server. I need it to ignore the MX records and only send to the server I specify, but I am not sure which files to edit or how to configure sendmail. Is it sufficient to add the server to the /etc/hosts and /etc/mail/local-host-names files? Thanks in advance.
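
    One standard sendmail mechanism for exactly this is the mailertable feature; a sketch with a hypothetical domain and internal host (the square brackets around the address tell sendmail to skip the MX lookup):

        # /etc/mail/sendmail.mc -- enable the mailertable feature
        FEATURE(`mailertable', `hash -o /etc/mail/mailertable')dnl

        # /etc/mail/mailertable -- route one domain to a fixed internal host;
        # the brackets suppress the MX lookup
        example.com    smtp:[192.168.1.10]

        # Rebuild the map and the config (on many distros 'make -C /etc/mail'
        # does both), then restart sendmail
        makemap hash /etc/mail/mailertable < /etc/mail/mailertable
        make -C /etc/mail
        /etc/init.d/sendmail restart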

  • What are the differences between the "generic" and "server" kernel images provided by Ubuntu?

    - by dcrosta
    In particular, I'm wondering if there are any patches or config adjustments made to the disk cache size in the server edition. I'm running on a small system (256M RAM), and would like to experiment with keeping the disk cache size smaller so that there's more memory available for applications. I've found this page at Ubuntu's website, which neither answers my questions nor is about the 9.04 release.
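
    Since Ubuntu ships each kernel's build configuration in /boot, the two flavours can be diffed directly, assuming both packages are installed (a sketch; the version strings below are 9.04-era guesses and will vary):

        # See which configs are available
        ls /boot/config-*

        # Compare the two flavours side by side
        diff /boot/config-2.6.28-11-generic /boot/config-2.6.28-11-server | less

        # e.g. timer frequency and preemption are classic differences
        grep -E 'CONFIG_HZ=|CONFIG_PREEMPT' /boot/config-2.6.28-11-server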

  • Uploads fail with shorewall enabled

    - by JamesArmes
    I have an Ubuntu 8.04 server with shorewall 4.0.6 installed. When I try to upload files using FTP, SCP, or cURL, the file upload stalls almost immediately and eventually times out. If I turn off shorewall then the uploads work fine. I don't have any rules that specifically allow FTP and I'm not too concerned with it, but I do need to be able to upload via 22 (SCP) and 80 & 443 (cURL). This is what my rules look like:

        COMMENT Allow Server to respond to any web (80) and SSL (443) requests
        ACCEPT net $FW tcp 80
        ACCEPT $FW net tcp 80
        ACCEPT net $FW tcp 443
        ACCEPT $FW net tcp 443
        COMMENT Allow Server to respond to SNMPD (161) requests
        ACCEPT net $FW udp 161
        COMMENT Allow Server to respond to MySQL (3306) requests (for MySQL graphing)
        ACCEPT net $FW tcp 3306
        COMMENT Allow Server to respond to any SSH connection attempts, and to SSH out.
        SSH/ACCEPT net $FW
        SSH/ACCEPT $FW net
        COMMENT Allow Server to make DNS requests out.
        DNS/ACCEPT $FW net
        COMMENT Default "close" anything else.
        Ping/REJECT net $FW
        ACCEPT $FW net icmp
        #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    I expected the top four ACCEPT lines to allow inbound and outbound traffic over 80 and 443, and the two SSH/ACCEPT lines to allow inbound and outbound traffic over 22, including SCP. Any help is greatly appreciated.

    /etc/shorewall/policy contains the following (all lines above these are commented out):

        #
        # Allow all connection requests from the firewall to the internet
        #
        $FW net ACCEPT
        #
        # Policies for traffic originating from the Internet zone (net)
        # Drop (ignore) all connection requests from the Internet to the firewall
        #
        net all DROP info
        # THE FOLLOWING POLICY MUST BE LAST
        # Reject all other connection requests
        all all REJECT info
        #LAST LINE -- ADD YOUR ENTRIES ABOVE THIS LINE -- DO NOT REMOVE

  • What is the best vfat driver for FUSE? (Lightweight, not mountlo)

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. Both are old: FatFuse is read-only, and FuseFat doesn't build and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid tricks, apart from fusermount itself), but it looks too big for such a task. Is there a good vfat FUSE driver?

  • How do I host multiple domains on Ubuntu Server (Hardy Heron)?

    - by markle976
    I am trying to figure out the best way to host multiple domains on my Ubuntu server. I have tried multiple options, but I can't get everything to work the way I want it to. I want to be able to add domains without having to restart Apache each time. I tried using mod_vhost_alias (see below), but that maps www.domain.com and domain.com to different folders. I also need to be able to use mod_rewrite to map requests for domain.com/app/* to domain.com/somescript.php. My current httpd.conf:

        UseCanonicalName Off
        VirtualDocumentRoot /var/www/%0

    Any thoughts?

  • How to mount an iSCSI/SAN storage drive to a stable device name (one that can't change on re-connect)?

    - by jcalfee314
    We need stable device paths for our TwinStrata SAN drives. Many guides for setting up iSCSI initiators simply say to use a device path like /dev/sda or /dev/sdb. This is far from robust; I doubt any setup would be happy to have its device name suddenly change (from /dev/sda to /dev/sdb, for example). The fix I found was to install multipath and start multipathd on boot, which then provides a stable mapping from the storage's WWID to a device path like /dev/mapper/firebird_database. This is the method described in the CentOS/RedHat docs here: http://www.centos.org/docs/5/html/5.1/DM_Multipath/setup_procedure.html. This seems a little complicated, though. We noticed that it is common to see UUIDs in fstab on new installs. So the question is: why do we need an external program (multipathd) running to provide a stable device mount? Is there a way to provide the WWID directly in /etc/fstab?
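
    For what it's worth, udev already creates WWID-based symlinks under /dev/disk/by-id/ that can go straight into fstab without multipathd, at least when multipath failover itself isn't needed (a sketch; the WWN and mount point are hypothetical):

        # List the stable names udev assigned to the SAN LUN
        ls -l /dev/disk/by-id/

        # /etc/fstab entry using the WWN-based symlink instead of /dev/sdX;
        # _netdev delays mounting until the network (and iSCSI) is up
        /dev/disk/by-id/wwn-0x600a0b80005ad0cf-part1  /var/lib/firebird  ext3  _netdev  0 0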

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver. Our situation: we have a web server accessed through sftp by different users from within our company, and we have the general public accessing Apache, sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan.

    All people who need to upload files will have separate users. But all of those users will belong to two groups: uploaders, and webserver. Apache will belong to the group webserver.

    Directories
        Permission: 771
        Owner: user:uploaders
        Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w.

    Files within the web root
        Permission: 664
        Owner: user:uploaders
        Explanation: they will be uploaded and changed by different users, so both owner and group need w+r permissions. The webserver needs only to read files, so r permission only.

    Upload directories
        Permission: 771
        Owner: user:webserver
        Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.

    Uploaded files
        Permission: 664
        Owner: user:webserver
        Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So no need to make the file itself any more accessible than r for the group.

    Being no expert on file permissions, my question is whether or not this is the best possible policy for our situation? Any suggestions welcome.
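
    As a sketch of how this policy might be applied on disk, assuming /var/www/site as the web root, Debian/Ubuntu's www-data as the Apache user (adjust for the distro), and a hypothetical uploader 'alice':

        # Groups from the plan; each human uploader gets both
        sudo groupadd uploaders
        sudo groupadd webserver
        sudo usermod -aG uploaders,webserver alice
        sudo usermod -aG webserver www-data

        # Ordinary content: dirs 771 user:uploaders, files 664
        sudo chown -R alice:uploaders /var/www/site
        sudo find /var/www/site -type d -exec chmod 771 {} \;
        sudo find /var/www/site -type f -exec chmod 664 {} \;

        # Upload area: same modes, but group webserver so Apache can write
        sudo chown -R alice:webserver /var/www/site/uploads
        sudo chmod 771 /var/www/site/uploads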

  • GRUB2 not detecting OS on RAID partitions

    - by sleeves
    I have recently added a drive to a system and have successfully RAIDed (RAID-1) the partitions, with the exception of the boot partition. I have it ready and mirrored, but can't get GRUB2 (update-grub) to find it. System: Ubuntu 11.04. RAID metadata: 1.2.

    If I run update-grub, it finds the kernel images on the /dev/sda2 partition (the present root) but not the images on /dev/md127. /dev/md127 is composed of "missing" and /dev/sdb2. fdisk on /dev/sdb confirms that sdb2 is of type fd (raid autodetect) and is also flagged bootable. There are two things I want to do:

    1. Make the grub.cfg on /dev/sdb2 have a menu option whose root is /dev/md127.
    2. Install GRUB onto /dev/md127 so the actual grub.cfg from there is being used.

    Thanks!

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:

        rsync -ax \
            --partial --delete --delete-excluded --inplace \
            --exclude-from=/tmp/temp_excludes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/2012-06-25

    This works very well as long as I start the process from my normal user account. But as soon as I start the process using sudo, it behaves erratically: rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. That didn't work either. So, the question is: how can I run rsync using sudo? Whereas the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.
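
    One way to see why rsync stops hard-linking under sudo (a diagnostic sketch): --link-dest only links a file when every attribute rsync preserves matches, and running as root makes -a actually preserve owner and group, which it silently skipped as a normal user. An itemized dry run shows which attribute breaks the match:

        # Dry run with itemized output: the change flags reveal which
        # attribute (o = owner, g = group, t = time, ...) differs
        sudo rsync -axn --itemize-changes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/test | head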

  • Back up all Plesk MySQL databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restores pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, and I have to get the password from /etc/psa/.psa.shadow. Here is the code that I use to back up all my databases to a single file:

        mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2

    I found some scripts on the web that back up each database to an individual file, but I don't know how to adapt them to my situation. Here is an example script:

        for db in $(mysql -e 'show databases' -s --skip-column-names); do
            mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz";
        done

    Can someone help me make the script above work for my situation? Requirements: back up each database to an individual file, using the Plesk password location.
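
    A sketch combining the example script with the Plesk password location (untested against a live Plesk box; /root/backups is a hypothetical target directory):

        #!/bin/sh
        # Per-database dumps using the Plesk admin credentials
        PASS=`cat /etc/psa/.psa.shadow`
        mkdir -p /root/backups

        for db in $(mysql -uadmin -p"$PASS" -e 'show databases' -s --skip-column-names); do
            case "$db" in
                information_schema|performance_schema) continue ;;  # skip internals
            esac
            mysqldump -uadmin -p"$PASS" "$db" | bzip2 -c \
                > "/root/backups/$db-$(date +%Y-%m-%d).sql.bz2"
        done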

  • Creating a fallback error page for nginx when the root directory does not exist

    - by Ruirize
    I have set up an any-domain config on my nginx server, to reduce the amount of work needed when I open a new site/domain. This config allows me to simply create a folder in /usr/share/nginx/sites/ with the name of the domain/subdomain, and then it just works.™

        server {
            # Catch all domains starting with "www." and boot them to the non-"www." domain.
            listen 80;
            server_name ~^www\.(.*)$;
            return 301 $scheme://$1$request_uri;
        }

        server {
            # Catch all domains that do not start with "www."
            listen 80;
            server_name ~^(?!www\.).+;
            client_max_body_size 20M;

            # Send all requests to the appropriate host
            root /usr/share/nginx/sites/$host;
            index index.html index.htm index.php;

            location / {
                try_files $uri $uri/ =404;
            }

            recursive_error_pages on;
            error_page 400 /errorpages/error.php?e=400&u=$uri&h=$host&s=$scheme;
            error_page 401 /errorpages/error.php?e=401&u=$uri&h=$host&s=$scheme;
            error_page 403 /errorpages/error.php?e=403&u=$uri&h=$host&s=$scheme;
            error_page 404 /errorpages/error.php?e=404&u=$uri&h=$host&s=$scheme;
            error_page 418 /errorpages/error.php?e=418&u=$uri&h=$host&s=$scheme;
            error_page 500 /errorpages/error.php?e=500&u=$uri&h=$host&s=$scheme;
            error_page 501 /errorpages/error.php?e=501&u=$uri&h=$host&s=$scheme;
            error_page 503 /errorpages/error.php?e=503&u=$uri&h=$host&s=$scheme;
            error_page 504 /errorpages/error.php?e=504&u=$uri&h=$host&s=$scheme;

            location ~ \.(php|html) {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_intercept_errors on;
            }
        }

    However, there is one issue that I'd like to resolve: when a request comes in for a domain that doesn't have a folder in the sites directory, nginx throws its internal 500 error page, because it cannot redirect to /errorpages/error.php (which doesn't exist under that root). How can I create a fallback error page that will catch these failed requests?
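
    One possible approach (a sketch, untested): test for the per-host directory with a server-level if, and map a reserved status code to a static page served from a directory that always exists:

        server {
            listen 80;
            server_name ~^(?!www\.).+;
            root /usr/share/nginx/sites/$host;

            # No folder for this host: return a code reserved for "no such site"
            if (!-d /usr/share/nginx/sites/$host) {
                return 410;
            }
            error_page 410 /nosite.html;
            location = /nosite.html {
                internal;
                root /usr/share/nginx/fallback;  # hypothetical always-present dir
            }

            # ... rest of the existing config ...
        }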

  • Forbidden access on addon domains

    - by ehmad11
    I have one domain hosted on the server, domain.com, and there are about 20 subdomains set up as addon domains. For no good reason, someone chgrp-ed all files in the domain.com directory to the domain.com user, and now all the websites are showing a 403 Forbidden error. What should I do now to bring the websites back? I have tried changing the PHP handler, but no luck yet :/ The PHP5 handler is suPHP and Apache suEXEC is on.
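
    Since suPHP and suEXEC refuse to serve files whose owner and group don't match the account, the fix is likely restoring per-account ownership. A sketch with hypothetical account names and cPanel-style paths (an assumption, since the panel isn't stated):

        # For each addon domain, restore ownership to that domain's account
        # ('addonuser' and the path are placeholders for the real values)
        chown -R addonuser:addonuser /home/addonuser/public_html

        # On cPanel, public_html itself typically carries the web group and 750
        chown addonuser:nobody /home/addonuser/public_html
        chmod 750 /home/addonuser/public_html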
