
  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu Server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

        start on started mysql
        stop on stopping mysql
        chdir /home/myuser/mydir/project
        exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
        respawn
        respawn limit 10 90

    As myuser I can start, stop, and reload the job fine; it works perfectly:

        $ start sensors
        sensors start/running, process 1332
        $ stop sensors
        sensors stop/waiting

    The problem is that the job does not start automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin at bootup, restarting mysql with sudo does start my job. The following commands are run as myuser from a fresh startup:

        $ status sensors
        sensors stop/waiting
        $ sudo restart mysql
        mysql start/running, process 1209
        $ status sensors
        sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically at system startup? I know I can just throw something in rc.local to start it, or I could move my sensors.conf to /etc/init, but I'm curious whether there is a way to do it using just Upstart.
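
    One fallback, as the asker notes, is moving the job to /etc/init as a system job that drops privileges itself, since system jobs do see boot-time events. A minimal sketch, untested, assuming su is acceptable here:

        # /etc/init/sensors.conf -- hedged sketch: system job that runs as myuser
        start on started mysql
        stop on stopping mysql
        respawn
        respawn limit 10 90
        exec su -s /bin/sh -c 'cd /home/myuser/mydir/project && exec /home/myuser/mydir/env/bin/python manage.py sensors' myuser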


  • Postfix Relay to Office365

    - by woodsbw
    I am trying to set up a Postfix server on a Linux box to relay all mail to our Office365 (Exchange, hosted by Microsoft) mail server, but I keep getting an error regarding the sending address:

        BB338140DC1: to= relay=pod51010.outlook.com[157.56.234.118]:587, delay=7.6,
        delays=0.01/0/2.5/5.1, dsn=5.7.1, status=bounced (host
        pod51010.outlook.com[157.56.234.118] said: 550 5.7.1 Client does not have
        permissions to send as this sender (in reply to end of DATA command))

    Office 365 requires that the sending address in the MAIL FROM and the From: header be the same as the address used to authenticate. I have tried everything I can think of in the config to get this working. My postconf -n:

        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        debug_peer_list = 127.0.0.1
        inet_interfaces = loopback-only
        inet_protocols = all
        mailbox_size_limit = 0
        mydestination = xxxxx, localhost.localdomain, localhost
        myhostname = localhost
        mynetworks = 127.0.0.0/8
        recipient_delimiter = +
        relay_domains = our.domain
        relayhost = [pod51010.outlook.com]:587
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_always_send_ehlo = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_mechanism_filter = login
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtp_tls_loglevel = 1
        smtp_tls_security_level = may
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    sender_canonical:

        www-data [email protected]
        root [email protected]
        www-data@localhost [email protected]
        root@localhost [email protected]

    Also, sasl_passwd is set to the correct credentials (tested them using swaks multiple times). Authentication works, and the message sends when the From headers are correct (also tested using swaks). The emails are coming from PHP, so I have also tried altering the sendmail path in php.ini to pass the correct from address via -f. So, for some reason, mail coming from www-data and root is not having the from fields rewritten to Office 365's satisfaction, and it won't send the message. Any Postfix gurus out there who can help me set up this relay?
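
    One direction worth checking, offered as a hedged sketch rather than a verified fix: sender_canonical rewrites only the envelope sender, while Office 365 also inspects the From: header, which smtp_header_checks can rewrite on the way out. The mailbox below is a placeholder for the authenticating account:

        # /etc/postfix/main.cf
        smtp_header_checks = regexp:/etc/postfix/header_checks

        # /etc/postfix/header_checks -- force the visible From: to the SASL account
        /^From:.*$/ REPLACE From: [email protected]

    Regexp tables need no postmap; a postfix reload picks the rule up.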


  • Accidentally mounted a ReiserFS drive as MBR on my Windows box - how do I recover?

    - by Ryan
    I had a WD NetCenter with a 160GB drive that kept dropping off the network. I opened up the enclosure and removed the hard drive, then connected it to a Windows box without knowing the drive used ReiserFS. When mounting on the Windows box, I chose "MBR" as the filesystem. Now 70GB of data is corrupted; 90% of it is Word documents, Excel spreadsheets, and JPGs - all mission critical. I attempted recovery on a Linux box (Ubuntu) using TestDisk: I could see the container, but couldn't get anything out - according to TestDisk, this was because I chose "none" as the filesystem. I attempted recovery using Nucleus Kernel Recovery for Windows: 98% of what was recovered is incomplete and/or unusable. I need to know if a way exists to recover or rebuild the original ReiserFS MBR, or what tools/techniques might give me the best results in recovering the data. I found a Windows version of TestDisk and ran it yesterday - here are the results:

        TestDisk 6.14-WIP, Data Recovery Utility, May 2012
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sda - 160 GB / 149 GiB - CHS 19457 255 63
        The harddisk (160 GB / 149 GiB) seems too small! (< 519 GB / 483 GiB)
        Check the harddisk size: HD jumpers settings, BIOS detection...

        The following partitions can't be recovered:
             Partition              Start        End      Size in sectors
        >  ReiserFS 3.6         62 241  8   19458  0 18   311581568
           ReiserFS 3.6         62 248 55   19458  8  2   311581568
           ReiserFS 3.6         62 254 37   19458 13 47   311581568
           ReiserFS 3.6         63   6 28   19458 20 38   311581568
           ReiserFS 3.6         63  13 11   19458 27 21   311581568
           ReiserFS 3.6         63  21 43   19458 35 53   311581568
           ReiserFS 3.6         63  27 41   19458 41 51   311581568
           ReiserFS 3.6         63  37 35   19458 51 45   311581568
           ReiserFS 3.6         63  54 20   19458 68 30   311581568
           ReiserFS 3.6         63  76 26   19458 90 36   311581568
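
    Whatever route is taken, work on a copy first. A hedged sketch of one recovery path - the reiserfsprogs tools are real, but the loop-device partition name is an assumption:

        # image the failing disk so every attempt is repeatable
        sudo ddrescue /dev/sda /mnt/big/netcenter.img /mnt/big/netcenter.log

        # expose the image's partitions and try to rebuild the ReiserFS metadata
        sudo losetup -fP --show /mnt/big/netcenter.img   # prints e.g. /dev/loop0
        sudo reiserfsck --rebuild-sb /dev/loop0p1        # rebuild the superblock
        sudo reiserfsck --rebuild-tree /dev/loop0p1      # destructive, but only to the copy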


  • Updating modules on VPS hosted under OpenVZ

    - by tertle
    I've been trying to install OpenVPN on a VPS, but have run into a few problems when trying to start the OpenVPN server:

        Service deferred error: IPTablesServiceBase: failed to run iptables-restore
        [status=1]: ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep:
        No such file or directory', 'FATAL: Could not load
        /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
        'iptables-restore: line 46 failed']:
        internet/base:1175,internet/base:752,internet/process:45,internet/process:306,
        internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,
        svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,
        internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['iptables_openvpn'])

    After a bit of playing around and some advice, I found that the Linux kernel and modules don't match on my server: uname -r returns 2.6.18-028stab070.14, while ls /lib/modules returns 2.6.18-028stab070.7. The server is running OpenVZ and my container uses Ubuntu 9.10. So my question is: is it possible for me to update the modules on a VPS, and if so, how would I do this? Or is this something I'll need to ask my host to do? Thanks in advance.
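
    For context, an OpenVZ container shares the host node's kernel, so modules can only be installed or loaded on the host - this is almost certainly one for the provider. A hedged sketch of what the host-side fix might look like (the container ID is hypothetical):

        # on the hardware node, not inside the container
        modprobe ip_tables
        modprobe iptable_filter
        modprobe iptable_nat
        vzctl set 101 --iptables "ip_tables,iptable_filter,iptable_nat" --save
        vzctl restart 101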


  • [SOLVED] Single Sign On for intranet with Apache and Linux MIT Kerberos

    - by Beerdude26
    EDIT: SOLVED! See my answer below.

    Greetings, I am looking for a way to do single sign-on to an intranet in the following manner:

    1. A Linux user logs on via a graphical frontend (for example, GNOME).
    2. He automatically requests a TGT for his username from the MIT Kerberos KDC.
    3. In some way or another, the Apache server (which we'll assume is on the same server as the KDC) is informed that this user has logged in.
    4. When the user accesses the intranet, he is automatically granted access to his web applications.

    I don't think I've seen this kind of functionality while searching the net. I know the following possibilities exist:

    - Using an authentication module such as mod_auth_kerb, a user is presented with a login prompt to enter his username and password, which are then authenticated against the MIT Kerberos server. (I would like this to be automatic.)
    - IIS supports integrated Windows logon via ASP.Net when the user is part of an Active Directory. (I'm looking for the Linux / Apache equivalent.)

    Any suggestions, criticism and ideas are highly appreciated. This is for a school project to show a proof-of-concept, so every handy piece of information is more than welcome. :)
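
    The standard Apache-side building block for step 4 is mod_auth_kerb in Negotiate (SPNEGO) mode, where the browser presents a service ticket derived from the existing TGT instead of prompting. A hedged sketch - hostname and keytab path are assumptions, and the browser must have Negotiate enabled for the site:

        <Location /intranet>
            AuthType Kerberos
            AuthName "Intranet SSO"
            KrbMethodNegotiate On
            KrbMethodK5Passwd Off        # never fall back to a password prompt
            KrbServiceName HTTP/intranet.example.com
            Krb5Keytab /etc/apache2/http.keytab
            Require valid-user
        </Location>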


  • OpenLDAP Authentication UID vs CN issues

    - by user145457
    I'm having trouble getting services to authenticate using uid, which I thought was the standard attribute for authenticating the user. My users are added in LDAP like this:

        # jsmith, Users, example.com
        dn: uid=jsmith,ou=Users,dc=example,dc=com
        uidNumber: 10003
        loginShell: /bin/bash
        sn: Smith
        mail: [email protected]
        homeDirectory: /home/jsmith
        displayName: John Smith
        givenName: John
        uid: jsmith
        gecos: John Smith
        gidNumber: 10000
        cn: John Smith
        title: System Administrator

    But when I try to authenticate with a typical webapp or service as jsmith plus a password, I get:

        $ ldapsearch -x -h ldap.example.com -D "cn=jsmith,ou=Users,dc=example,dc=com" -W -b "dc=example,dc=com"
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    But if I use:

        $ ldapsearch -x -h ldap.example.com -D "uid=jsmith,ou=Users,dc=example,dc=com" -W -b "dc=example,dc=com"

    it works. HOWEVER, most webapps and authentication methods seem to use the other form: on a webapp I'm using, nothing works unless I specify the user as uid=smith,ou=users,dc=example,dc=com, and in the webapp I just need users to put jsmith in the user field. Keep in mind my LDAP is using the "new" cn=config method of storing settings, so if there's an obvious ldif I'm missing, please provide it. Let me know if you need further info. This is OpenLDAP on Ubuntu 12.04. Thanks, Dave
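
    A hedged note rather than a definitive fix: a bind always takes a full DN, and the cn= form fails here simply because these entries' RDN is uid=. Apps that accept a bare jsmith normally do "search, then bind" - bind as a service account, search for the uid, then rebind as the DN found - so such a webapp usually wants a base DN and a filter like (uid=%u) rather than a DN template. The service account below is hypothetical:

        # step 1: resolve the login name to a DN
        ldapsearch -x -h ldap.example.com \
            -D "cn=readonly,dc=example,dc=com" -W \
            -b "ou=Users,dc=example,dc=com" "(uid=jsmith)" dn

        # step 2: bind as the DN that came back, using the user's own password
        ldapsearch -x -h ldap.example.com \
            -D "uid=jsmith,ou=Users,dc=example,dc=com" -W \
            -b "dc=example,dc=com"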


  • How to split a text file into multiple text files

    - by Andrew
    I have a text file called entry.txt that contains the following:

        [ entry1 ]
        1239 1240 1242 1391 1392 1394 1486 1487 1489 1600 1601 1603 1657 1658 1660
        2075 2076 2078 2322 2323 2325 2740 2741 2743 3082 3083 3085 3291 3292 3294
        3481 3482 3484 3633 3634 3636 3690 3691 3693 3766 3767 3769 4526 4527 4529
        4583 4584 4586 4773 4774 4776 5153 5154 5156 5628 5629 5631
        [ entry2 ]
        1239 1240 1242 1391 1392 1394 1486 1487 1489 1600 1601 1603 1657 1658 1660
        2075 2076 2078 2322 2323 2325 2740 2741 2743 3082 3083 3085 3291 3292 3294
        3481 3482 3484 3690 3691 3693 3766 3767 3769 4526 4527 4529 4583 4584 4586
        4773 4774 4776 5153 5154 5156 5628 5629 5631
        [ entry3 ]
        1239 1240 1242 1391 1392 1394 1486 1487 1489 1600 1601 1603 1657 1658 1660
        2075 2076 2078 2322 2323 2325 2740 2741 2743 3082 3083 3085 3291 3292 3294
        3481 3482 3484 3690 3691 3693 3766 3767 3769 4241 4242 4244 4526 4527 4529
        4583 4584 4586 4773 4774 4776 5153 5154 5156 5495 5496 5498 5628 5629 5631

    I would like to split it into three text files - entry1.txt, entry2.txt, and entry3.txt - where entry1.txt holds the "[ entry1 ]" header plus its numbers, entry2.txt holds the "[ entry2 ]" block, and entry3.txt holds the "[ entry3 ]" block. In other words, the [ character indicates that a new file should begin. Is there any way I can accomplish automatic text file splitting? My eventual, actual input entry.txt actually contains 200,001 entries. Doing the text split in either Windows or Linux would be great. I do not have access to a Mac machine. Thanks!
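
    One way to do it, as a minimal awk sketch (gawk on Linux, or a gawk port on Windows); it assumes the headers look exactly like "[ entry1 ]", so the name is the second field:

        # start a new output file at every line beginning with '[',
        # naming it after the second field; all other lines go to the current file
        awk '/^\[/ { if (out) close(out); out = $2 ".txt" }
             out   { print > out }' entry.txt

    The close() call matters at this scale: with 200,001 entries, keeping every output file open would exhaust the process's file-descriptor limit.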


  • Is it possible to install ffmpeg and x264 on a Synology Diskstation 209?

    - by Kieran Benton
    Hi, complete Linux novice here! :) I'm trying to get my brilliant DS209 NAS box to do some transcoding for me of a few AVI videos to a format suitable for my Apple iPod touch - yes, I could do it with another machine and Handbrake, but it would be really useful to offload some of this to the NAS to do overnight. I've managed to install ipkg onto my DS209 and have played around with installing some packages (binutils, mono, bash etc). I've even managed to install ffmpeg from ipkg and put together the correct command-line profile to do the encoding as a .sh file:

        time ffmpeg -y -i $1 -f mp4 -title $2 -vcodec libx264 -level 21 -s 426x320 \
            -b 512k -bt 512k -bufsize 4M -maxrate 4M -g 250 -coder 0 -threads 0 \
            -acodec libfaac -ac 2 -ab 64k $3

    However, running this I get a missing dependency on libx264. I've tried building it from the latest source in git, but I get errors during the make process that I just don't understand (way out of my depth):

        encoder/set.c: In function 'x264_sei_version_write':
        encoder/set.c:491: error: 'X264_VERSION' undeclared (first use in this function)
        encoder/set.c:491: error: (Each undeclared identifier is reported only once
        encoder/set.c:491: error: for each function it appears in.)
        make: *** [encoder/set.o] Error 1

    Can anyone else try building it, or give me a pointer as to what I can do to get this going? It's been a good learning experience so far! Thanks.
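
    On that specific compile error, a hedged pointer: X264_VERSION is defined in a header that x264's ./configure step generates (its version script reads the git metadata), so building from an incomplete tree or skipping configure produces exactly this failure. A sketch of the usual sequence - the ARM toolchain triplet for the DS209 is an assumption:

        git clone git://git.videolan.org/x264.git && cd x264
        ./configure --enable-shared --disable-asm \
            --host=arm-none-linux-gnueabi --cross-prefix=arm-none-linux-gnueabi-
        make && sudo make install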


  • Have an Input/output error when connecting to a server via ssh

    - by Shehzad009
    Hello, I seem to be having a problem when connecting to an Ubuntu server via ssh. When I log in, I get this error:

        Could not chdir to home directory /home/username: Input/output error

    It seems like my home folder is corrupt or something. I cannot ls in the home folder directory, and I can't cd into my username directory. As root I cannot ls in the home directory either, nor in any directory under it. I also notice that when I save or quit in vim, I get this error at the bottom of the page:

        E138: Cannot write viminfo file /home/root/.viminfo!

    Any ideas?

    EDIT: this is what happens when I type in these commands:

        $ mount
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        fusectl on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        /dev/mapper/RAID1-lvvar on /var type xfs (rw)
        /dev/mapper/RAID5-lvsrv on /srv type xfs (rw)
        /dev/mapper/RAID5-lvhome on /home type xfs (rw)
        /dev/mapper/RAID1-lvtmp on /tmp type reiserfs (rw)

        $ dmesg | tail
        [1213273.364040] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213274.084081] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213309.364038] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213310.084041] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213345.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213346.084042] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213381.365036] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213382.084047] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213417.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213418.084063] Filesystem "dm-4": xfs_log_force: error 5 returned.

        $ fdisk -l /dev/sda
        Cannot open /dev/sda
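
    A hedged reading of those logs: xfs_log_force: error 5 is XFS reporting an I/O error (EIO) from the device underneath it, and "Cannot open /dev/sda" suggests a disk may have dropped out entirely - so the RAID layer deserves the first look, before any filesystem repair. A sketch, with the md device names as assumptions:

        # has an array lost a member?
        cat /proc/mdstat
        sudo mdadm --detail /dev/md0

        # only once the array is healthy: unmount and check the XFS volumes
        sudo umount /home
        sudo xfs_repair /dev/mapper/RAID5-lvhome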


  • How do I make XTerm not use bold?

    - by mike
    I like using XTerm, I like its default "fixed" font, and I like using terminal colors rather than having a monochromatic terminal. However, XTerm seems to insist on using a bold version of the font whenever it's displaying a bright color. I hate hate hate the bold version of the font, but I like the brightness. The man page seems to suggest that adding the following to my ~/.Xresources would disable this "feature":

        XTerm.VT100.boldMode: false

    but it doesn't seem to have any effect. I've had it in there for months, so it's not a rebooting issue. How can I force XTerm to always use the standard, non-bold version of the fixed font, even when it's displaying bright text?

    Edit: Some have suggested putting "XTerm*boldMode: false" in my ~/.Xresources. That didn't help either. I've confirmed with xrdb that the changes have taken effect, though:

        $ xrdb -query | grep boldMode
        XTerm*boldMode: false

    And if I run xprop and click an xterm, I get 'WM_CLASS(STRING) = "xterm", "XTerm"', so I'm definitely running real xterms. BTW, this is just a plain-vanilla Ubuntu Intrepid box. If anyone else here is running the same, can you try running:

        echo -e '#\e[1m#'

    ...and let me know whether the # on the right has a black pixel in the middle like the one on the left does?
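
    One more hedged thing to try in ~/.Xresources: pinning the bold font to the regular one, which leaves bright colors bright but makes "bold" render with the ordinary glyphs (allowBoldFonts only exists on newer xterm builds, so treat that line as an assumption):

        ! use the same bitmap font for normal and bold text
        XTerm*font:     fixed
        XTerm*boldFont: fixed
        ! belt and braces on newer xterms
        XTerm*allowBoldFonts: false

    Reload with xrdb -merge ~/.Xresources and start a fresh xterm to test.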


  • Postfix 554 <[email protected]>: Relay access denied

    - by Matt
    So I am trying to set Postfix up and I am running into some problems. Here are my files. vim /etc/postfix/main.cf:

        relayhost = [smtp.gmail.com]:587
        smtp_connection_cache_destinations = smtp.gmail.com
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_tls_security_options = noanonymous
        tls_random_source = dev:/dev/urandom
        smtp_tls_CAfile = /etc/pki/CA/cacert.pem
        smtp_tls_security_level = may
        smtp_tls_scert_verifydepth = 9
        append_dot_mydomain = no
        readme_directory = no
        myhostname = maggie.deliverypath.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = maggie.deliverypath.com, localhost.deliverypath.com, , localhost
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all

    I also have the Gmail password info in vim /etc/postfix/sasl_passwd:

        gmail-smtp.l.google.com [email protected]:somepass
        smtp.gmail.com [email protected]:somepass

    Then I try to follow this article and get this output:

        $ telnet mail.demoslice.com 25
        Trying 67.207.128.80...
        Connected to www.slicehost.com.
        Escape character is '^]'.
        220 www.slicehost.com ESMTP Postfix (Ubuntu)
        HELO test.demoslice.com
        250 www.slicehost.com
        MAIL FROM:<[email protected]>
        250 Ok
        RCPT TO:<[email protected]>
        554 <[email protected]>: Relay access denied

    Postfix is started:

        $ service postfix start
        * Starting Postfix Mail Transport Agent postfix    ...done.

    Then the screen gets frozen and I can't do anything. Any ideas?
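
    A hedged sketch of what usually governs that 554: the receiving smtpd refuses to relay for an outside recipient via smtpd_recipient_restrictions - the relayhost settings above only affect outbound mail. The stock safe baseline looks like this (not taken from this server's config):

        # /etc/postfix/main.cf
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

    With a policy like that in place, an unauthenticated telnet session from outside mynetworks is expected to get exactly this 554, so the server may be behaving correctly and only the test is misleading.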


  • Errors to do with modules when using net-snmp utils

    - by bob
    I was using the net-snmp packages that come with my Linux distro (version 5.3.2.2), but wanted to do some work with the latest version of net-snmp (5.7), so I tried compiling and installing the new source. It seemed to work OK, but now I get a load of errors when using the net-snmp utils (snmpget, snmpset, snmpwalk etc.). For example:

        $ snmptranslate -On SNMPv2-MIB::system.sysDescr
        MIB search path: /home/me/.snmp/mibs:/usr/local/share/snmp/mibs
        Cannot find module (SNMPv2-SMI) At line 6 in /usr/local/share/snmp/mibs/SNMPv2-MIB.txt
        Cannot find module (SNMPv2-TC): At line 9 in /usr/local/share/snmp/mibs/SNMPv2-MIB.txt
        Cannot find module (SNMPv2-MIB): At line 9 in (none)
            : <a lot of similar lines>
        Cannot find module (NET-SNMP-VACM-MIB): At line 9 in (none)
        .1.3.6.1.2.1.1.1

    From this I assumed that I might be missing MIBs from the MIB search path, so I looked at the first error, 'Cannot find module (SNMPv2-SMI)'. However, it seems to be in the right directory:

        $ ls /usr/local/share/snmp/mibs/*SNMPv2-SMI*
        /usr/local/share/snmp/mibs/SNMPv2-SMI.txt

    And the same is true for the others in the list. So I'm wondering if anybody knows why it might not be finding the modules even though they seem to be in the search path?
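
    A hedged pair of checks: the environment variables below override whatever search path was compiled in, so they quickly show whether the path or the files themselves are at fault, and the version flag confirms which binary is actually first in $PATH (an old 5.3 snmptranslate reading the 5.7 install, or vice versa, could produce confusion much like this):

        export MIBDIRS=/usr/local/share/snmp/mibs
        export MIBS=ALL
        snmptranslate -On SNMPv2-MIB::system.sysDescr

        which snmptranslate
        snmptranslate -V    # prints the NET-SNMP version string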


  • Bare-metal virtualisation for the desktop

    - by Andrew Taylor
    Hi, does anyone have any knowledge of bare-metal virtualisation products? I'm interested in building a new desktop machine for home. I've been looking at the Intel quad-core processors and I'd like to put 8GB of RAM in there, which got me thinking about making the most of the available resources. I thought that if I could get a good 64-bit machine and put some bare-metal virtualisation on it, then besides a primary system I'd also be able to bring up extra virtualised systems as and when I needed them. I know most of the bare-metal hypervisors are designed for the server market, but is there anything out there that works well for a desktop? What are the caveats? I presume I won't be able to make the most of any video cards I could buy; what about just getting a decent screen resolution - will this be a problem? I run a single 24" screen. What about DVD/CD writing - is this possible? I'd like to re-rip my CD collection, and I was hoping the quad 64-bit goodness would help me out with the encoding. I currently use a Mac and couldn't go back to Windows, so that leaves Linux; I was thinking a primary OS of Ubuntu. Does this make a difference? Thanks, Andrew


  • Git push over http (using git-http-backend) and Apache is not working

    - by Ole_Brun
    I have desperately been trying to get git push working through the "smart HTTP" mode using git-http-backend. However, after many hours of testing and troubleshooting, I am still left with:

        error: Cannot access URL http://localhost/git/hello.git/, return code 22
        fatal: git-http-push failed

    I am using the latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5) and have followed different tutorials found on the Internet, like this one: http://www.parallelsymmetry.com/howto/git.jsp. My vhost file currently looks like this:

        <VirtualHost *:80>
            SetEnv GIT_PROJECT_ROOT /var/www/git
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            DocumentRoot /var/www/git

            ScriptAliasMatch \
                "(?x)^/(.*?)\.git/(HEAD | \
                info/refs | \
                objects/info/[^/]+ | \
                git-(upload|receive)-pack)$" \
                /usr/lib/git-core/git-http-backend/$1/$2

            <Directory /var/www/git>
                Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have changed the ownership of the /var/www/git folder to root.www-data, and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users, but with the same outcome. The repositories were created using:

        sudo git init --bare --shared [repo-name]

    Looking at the Apache2 access.log, it appears to me that WebDAV is being attempted and that git-http-backend never fires:

        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5"

    What am I doing wrong? Is it an issue with the versions of git and/or Apache that I am using, perhaps? BTW: I have read all the git-over-http questions on ServerFault and StackOverflow, and none of them provided me with a solution, so please don't mark this as a duplicate.
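
    A hedged simplification worth trying: those log lines fit a client that fell back to the dumb protocol because the ScriptAliasMatch never fired, with requests served as static files out of DocumentRoot instead. The git-http-backend man page's plain ScriptAlias form routes every /git/ URL through the CGI and leaves nothing to fall through:

        SetEnv GIT_PROJECT_ROOT /var/www/git
        SetEnv GIT_HTTP_EXPORT_ALL
        ScriptAlias /git/ /usr/lib/git-core/git-http-backend/

        <Directory "/usr/lib/git-core">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>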


  • Gray Progress Bar in Macbook Boot caused by Partition Fault

    - by Konstantin Bodnya
    Here's my problem: when I try to boot my MacBook, it shows the gray progress bar. It takes a while to fill the whole bar, and after that the MacBook just shuts down. I tried booting from the recovery partition and running Disk Utility to repair the disk; Disk Utility showed my "Macintosh HD" in gray and failed to repair it. I thought all my data was lost, but then I booted into Ubuntu from a live USB and it successfully mounted my Macintosh HD HFS+ partition. parted shows me the following partitions on the disk:

        Disk /dev/sda: 500GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start   End    Size   File system  Name                  Flags
         1      20.5kB  210MB  210MB  fat32        EFI System Partition  boot
         2      210MB   499GB  499GB  hfs+         Macintosh HD
         3      499GB   500GB  650MB  hfs+         Recovery HD

    Seems legit, except for FAT32 on the EFI System Partition. Since all my data is OK and backed up, what should I do to recover the system? I don't really want to reinstall the whole system, and I believe there's a command in Linux to set it right. Thank you, everyone!
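
    One hedged option, given that Ubuntu can already see the volume: run the HFS+ checker from Linux. fsck.hfsplus comes from the hfsprogs package; the device name matches the parted listing above but should be verified, and the volume must stay unmounted while checking:

        sudo apt-get install hfsprogs
        sudo umount /dev/sda2            # in case the live session auto-mounted it
        sudo fsck.hfsplus -f /dev/sda2   # -f forces a full check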


  • Setting up Red Hat Enterprise Linux Server as a mail exchange server

    - by Syedur
    I am a Unix/Linux/Windows Server noob, so keep that in mind before you throw your stones at my glass house. :P I have a Windows Server 2008 R2 machine that's acting as domain controller, Server A; it also runs a DNS server. I have a Red Hat Enterprise Linux Server 5.3 machine, Server B, that is intended to be the mail server. For mail delivery to happen, I understand that I have to set an MX record on Server A and point it at Server B. Well, I did: I manually added a host name on Server A, pointed it at Server B's IP address, then added an MX record and pointed it at that host name. That didn't do the trick. After taking the above steps, I used the dig command on Server B to look up the MX record coming back from Server A, and it wasn't what I was expecting. What am I doing wrong here? I have noticed that my Windows machines that are joined to the domain (Server A) are listed under the host names, and the machines that are not joined are not listed. This is fine; I am not worried about that. What does concern me: do I have to join Server B to the domain in order for Server A to recognize it as a valid host and resolve the MX record properly? If so, some simple steps on how to join Server B to the domain would also help.
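
    For reference, a hedged sketch of what a working setup looks like and how to verify it from Server B; all names here are placeholders. An MX record only needs a resolvable A record for the mail host - the Linux box does not have to join the domain for mail routing to work:

        # DNS data on Server A (conceptually):
        #   mail.example.com.   A    <Server B's IP>
        #   example.com.        MX   10 mail.example.com.

        # verification from Server B, querying Server A's DNS directly:
        dig @servera.example.com example.com MX +short
        # expected output: 10 mail.example.com.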


  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software that can transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD); the rest of the time, the HDD should not be spinning, due to noise concerns.

    Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU, and an SSD. The problem is that it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed in the night. Throughout every day, this computer will automatically download about 5 GB of data, which will be retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a noisy 3 TB hard disk drive that spins day and night. Sometimes I'll need to access data from several months ago, but most of the time I'll only need data from the last 14 days, which would fit on the SSD. Ideally, I'd like a transparent solution (all data on one filesystem) that caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SSD; otherwise the HDD would have to spin up. I have tried bcache without much success, using:

        cache_mode = writeback
        writeback_running = 0
        writeback_delay = 86400
        sequential_cutoff = 0
        congested_write_threshold_us = 0

    (anything missing?), and I have read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
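
    For anyone retracing those bcache experiments, a hedged sketch of where the knobs live and what a once-a-day flush could look like; it assumes the cached device shows up as bcache0, and it is untested against the spin-up behaviour described:

        # daytime: hold dirty data on the SSD
        echo writeback > /sys/block/bcache0/bcache/cache_mode
        echo 0 > /sys/block/bcache0/bcache/writeback_running

        # nightly flush script (run from cron): drain once, then hold again
        echo 1 > /sys/block/bcache0/bcache/writeback_running
        while [ "$(cat /sys/block/bcache0/bcache/dirty_data)" != "0.0k" ]; do
            sleep 60    # wait until the dirty data has reached the HDD
        done
        echo 0 > /sys/block/bcache0/bcache/writeback_running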


  • Why do I get a DegradedArray event with mdadm

    - by azera
    Hello. Just so we're clear on what's happening: I bought 4 new SATA II drives with the intent of using them in a RAID5. All drives are fully recognised by both my BIOS and my Linux box (Gentoo). I created a RAID5 array and fiddled a bit with it to understand how it works, how to monitor it, etc. At some point, this triggered a DegradedArray event, even though the array is brand new. I tried stopping the array and recreating a new array with the same drives, but the new array starts degraded too. Here is what I used to create it:

        mdadm --create -l5 -n4 /dev/md/md0-r5 /dev/sdb /dev/sdd /dev/sde /dev/sdf

    Here are the outputs from my /proc/mdstat and mdadm --detail --scan:

        Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md127 : active raid5 sdf[4] sde[2] sdd[1] sdb[0]
              4395415488 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
              [>....................]  recovery =  2.8% (41689732/1465138496) finish=890.3min speed=26645K/sec
        unused devices: <none>

        ARRAY /dev/md/md0-r5 metadata=0.90 spares=1 UUID=453e2833:81f22a74:64188b84:66721085

    As such I have a couple of questions: Does a RAID5 array always start in degraded mode at first? Why does sdf have the number 4 in brackets instead of 3, why does it see a spare disk, and why is the 4th drive marked with _ instead of U? (Bad configuration?) How can I recreate the array from scratch - do I have to format each drive on its own before recreating it? Thanks for any help; I'm not sure what I should do at the moment.
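
    A hedged note that may defuse the worry: mdadm builds a new RAID5 in degraded mode on purpose, treating the last member as a "spare" and syncing parity onto it as if it were a replacement drive - which would explain the spares=1, the [4/3] [UUU_], and sdf's odd device number all at once. If that is what's happening here, the array simply becomes clean when the recovery line hits 100%:

        # watch the initial sync finish, then confirm the state
        watch -n 60 cat /proc/mdstat
        mdadm --detail /dev/md127 | grep -E 'State|Rebuild'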


  • Messed up USB stick doesn't show in blkid

    - by Felix
    I was playing around with a USB stick (booting Arch Linux with qemu off of it and trying to perform an installation on the same stick at the same time - brave, I know, but I was just messing around). After failing to boot and install at the same time, it seems I have sort of messed up my stick. What I think happened is that I used cfdisk to wipe everything on it and create one big partition, but formatting it then failed, so now there's a big partition with no filesystem. Just to make it clear: I'm not worried about my stick; I know I can recover it at any point. What I find intriguing is that after plugging the stick into my computer (running Ubuntu), there's no (terminal) way I know of to find out which block device (/dev/sdX) it has been given. The only way I could determine that was with GParted (screenshot not reproduced here; it identified the stick as /dev/sdc). But blkid shows the following:

        /dev/sda1: UUID="12F695CFF695B387" LABEL="System Reserved" TYPE="ntfs"
        /dev/sda2: UUID="A0BAA6EABAA6BC62" TYPE="ntfs"
        /dev/sdb1: UUID="546aec8b-9ad6-4571-b07a-adba63e25820" TYPE="ext4"
        /dev/sdb2: UUID="2a8b82d8-6c6e-4053-a446-bab970d93d7c" TYPE="swap"
        /dev/sdb3: UUID="7cbede7d-c930-4e59-9d1b-01f2d79bd092" TYPE="ext4"

    No trace of /dev/sdc. My question is: if I didn't have a graphical interface (to use GParted), how would I have known which block device is my stick?
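
    To answer the terminal question in hedged form: blkid only lists devices that carry a recognizable filesystem signature, which is exactly what this stick lost, while the commands below enumerate block devices regardless of contents:

        lsblk                  # tree of every block device and partition, with sizes
        cat /proc/partitions   # the kernel's raw partition list
        dmesg | tail           # shows which sdX name was assigned when the stick was plugged in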


  • Where is my CPU usage going?

    - by Josh
    My Ubuntu 10.04 Lucid virtual machine is saying it's at 100% CPU usage... but all I'm running is Thunderbird. According to top, CPU usage should be ~25.9%... How do I interpret this conflicting output from top?

        top - 13:55:26 up 3:35, 4 users, load average: 3.03, 2.59, 2.48
        Tasks: 178 total, 1 running, 177 sleeping, 0 stopped, 0 zombie
        Cpu(s): 16.0%us, 79.7%sy, 0.0%ni, 0.0%id, 0.0%wa, 1.3%hi, 3.0%si, 0.0%st
        Mem: 509364k total, 479108k used, 30256k free, 3092k buffers
        Swap: 2096440k total, 58380k used, 2038060k free, 225116k cached

          PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         7708 jnet  20   0  480m 109m  17m S 18.4 22.1 21:59.14 thunderbird-bin
         4615 jnet  20   0  5488 1268 1040 S  2.3  0.2  5:00.03 nx-rootless-ses
         7124 jnet  20   0 56688  27m 4812 S  2.0  5.5  6:35.09 nxagent
         6724 nx    20   0  9628 1400  636 S  1.6  0.3  3:26.59 sshd
        30106 root  20   0  2544 1236  908 R  0.7  0.2  0:00.33 top
           19 root  20   0     0    0    0 S  0.3  0.0  0:22.45 ata/0
           38 root  20   0     0    0    0 S  0.3  0.0  0:05.53 scsi_eh_1
          345 root  20   0     0    0    0 S  0.3  0.0  0:04.72 kjournald
         1719 root  20   0  3260 1192  944 S  0.3  0.2  0:17.36 vmware-guestd
            1 root  20   0  2804 1356  940 S  0.0  0.3  0:01.99 init
            2 root  20   0     0    0    0 S  0.0  0.0  0:00.01 kthreadd
            3 root  RT   0     0    0    0 S  0.0  0.0  0:00.00 migration/0
            4 root  20   0     0    0    0 S  0.0  0.0  0:00.15 ksoftirqd/0
            5 root  RT   0     0    0    0 S  0.0  0.0  0:00.00 watchdog/0
        ...

    Specifically I'm referring to the fact that the CPU usage totals show 0% idle time:

        Cpu(s): 16.0%us, 79.7%sy, 0.0%ni, 0.0%id, 0.0%wa, 1.3%hi, 3.0%si, 0.0%st

    Yet when adding up the percentages in the %CPU column I get 25.9%, not 100%!
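
    A hedged way to chase the missing 74%: the summary line counts all CPU time, including kernel work (interrupts, softirq, hypervisor-driven system calls) that is never billed to any listed task, so the per-process column has no obligation to add up to it. The sysstat tools give a second opinion on where the system time goes:

        vmstat 1 5       # the "sy" column should mirror top's 79.7%sy
        pidstat -u 1 5   # per-task %system breakdown (sysstat package)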


  • Problem with PXE boot

    - by user70523
    I followed this guide for PXE boot: http://www.howtoforge.com/setting-up-a-pxe-install-server-on-ubuntu-9.10-p3. I was able to ping the client from the server, and when I booted up the client it got its IP address from the server. But later I got this error:

        PXELinux 3.82 2009-06-09  . . . [other information]
        !PXE Entry point found (we hope) at 9D3B:0109 via plan A
        UNDI code segment at 9D3B len 16C2
        UNDI data segment at 933B len A000
        Getting cached packet 01 02 03  . . . [other information]
        TFTP prefix:
        Trying to load: pxelinux.cfg/ec5db4c0-74fe-d511-b9e7-3d9235afe5a1
        Trying to load: pxelinux.cfg/01-00-17-31-b6-5e-a8
        Trying to load: pxelinux.cfg/0A64491E
        Trying to load: pxelinux.cfg/0A64491
        Trying to load: pxelinux.cfg/0A6449
        Trying to load: pxelinux.cfg/0A644
        Trying to load: pxelinux.cfg/0A64
        Trying to load: pxelinux.cfg/0A6
        Trying to load: pxelinux.cfg/0A
        Trying to load: pxelinux.cfg/0
        Trying to load: pxelinux.cfg/default
        Unable to locate configuration file
        Boot failed: press a key to retry or wait for reset

    I have put all the files mentioned in the guide in tftpboot. Can anyone explain what the problem could be? Thanks in advance.
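
    A hedged pointer for reading that transcript: the cascade is pxelinux probing for a config file by client UUID, then MAC address, then the client's IP in shrinking hex (0A64491E decodes to 10.100.73.30), finally falling back to pxelinux.cfg/default - so the message just means none of those files could be fetched from the TFTP root. A sketch, with the tftpd root directory and server IP as assumptions:

        # the default config must sit under the TFTP root
        sudo mkdir -p /var/lib/tftpboot/pxelinux.cfg
        sudo cp your-menu-config /var/lib/tftpboot/pxelinux.cfg/default

        # verify it is actually served (tftp-hpa client syntax)
        tftp <server-ip> -c get pxelinux.cfg/default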


  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically survive a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically, for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is that although you read a lot that it's "bad", you are never told how bad. How long would it last (provided, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups onto another key when the deadline gets close (for example).

    Notes: I am not talking about using a USB key as a live CD, but actually installing the OS on it. And when I say "USB key", I refer to the little USB sticks with flash memory, not an external USB hard drive.

    For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc. Currently I am synchronizing all this through a mix of Dropbox, git, and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without needing to install any OS on it.
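
    No honest number exists without the flash chip's datasheet, but routine writes can be cut dramatically, which stretches whatever endurance the key has. A hedged sketch of the usual tweaks (device name is a placeholder):

        # /etc/fstab -- no access-time writes on the root, churny dirs in RAM
        /dev/sdb1  /         ext4   noatime,errors=remount-ro  0  1
        tmpfs      /tmp      tmpfs  defaults,noatime           0  0
        tmpfs      /var/log  tmpfs  defaults,noatime           0  0

    Skipping a swap partition entirely is part of the same strategy.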


  • Geographically distributed file system with preferred locality

    - by dpb
    Hi all - I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:

    - Each site can store files in a shared "namespace"; that is, all the files show up in the same filesystem.
    - Each site does not send data over the WAN unless necessary, i.e. there is local storage on each side of the WAN that is "merged" into the same logical filesystem.
    - Linux and free ($$$) is a must.

    Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local - all data from remote sides of the WAN would be copied locally all the time. I have looked into Lustre and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage, and I have dug through the documentation without finding anything that automatically "prefers" local storage over remote storage. Even something that chose the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?


  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command-line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7,820 hours is about one start/stop per 2.5 minutes, around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically, I'd like to know:

    - Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble?
    - Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this?

    Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    And the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME       FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate  0x002f 200   200   051    Pre-fail Always  -           0
          4 Start_Stop_Count     0x0032 001   001   000    Old_age  Always  -           185875
          9 Power_On_Hours       0x0032 090   090   000    Old_age  Always  -           7820
         12 Power_Cycle_Count    0x0032 100   100   000    Old_age  Always  -           109
        193 Load_Cycle_Count     0x0032 118   118   000    Old_age  Always  -           246833
        194 Temperature_Celsius  0x0022 107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated.
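
    A hedged suspicion, with a sketch: apm = 127 permits power-saving head unloads, and laptop drives under a mostly idle Linux box are notorious for unloading every couple of minutes - the Load_Cycle_Count of 246,833 fits that pattern, and it may be feeding the start/stop statistics too. Raising the APM level is the usual countermeasure:

        # one-off test
        sudo hdparm -B 254 /dev/sda   # 254 = maximum performance, minimal parking

        # persistent version, in /etc/hdparm.conf
        /dev/sda {
            apm = 254
            spindown_time = 0
        }

    After a day, re-run smartctl and check whether Load_Cycle_Count has stopped climbing.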


  • Failed loading ioncube

    - by time
    I recently upgraded a small server to Ubuntu 12.10 (from 12.04), thus upgrading PHP from 5.3 to 5.4. However, I'm now getting this in root's mailbox several times a day:

        Subject: Cron <root@xxxxxxx> [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -ignore_readdir_race -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: xxxxxxxxxxxxxxxxxxxxxxxx
        Date: Sun, 9 Dec 2012 05:09:02 -0500 (EST)

        Failed loading /usr/lib/php5/20090626+lfs/ioncube_loader_lin_5.3.so:
        /usr/lib/php5/20090626+lfs/ioncube_loader_lin_5.3.so: undefined symbol: php_body_write

    I assume that's coming up because the loader is built for PHP 5.3. How can I just get rid of ionCube? I have no need for it; I don't even remember installing it. That .so file doesn't exist, and I've grepped several locations for "ioncube" but can't figure out how to stop that message from flooding the mailbox.
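
    A hedged sketch of the cleanup: the loader is pulled in by a zend_extension line in one of PHP's config fragments left over from 5.3, and removing that reference stops the load attempt. The filename below is hypothetical - use whatever the grep turns up:

        # find the config file that still references the 5.3 loader
        grep -rn ioncube /etc/php5/

        # then delete (or comment out) the matching file or line, e.g.:
        sudo rm /etc/php5/conf.d/ioncube.ini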

