Search Results

Search found 36498 results on 1460 pages for 'linux usb drive'.

  • Migrating away from LVM

    - by Kye
    I have an Ubuntu home media server with 4.5TB split across a few hard drives (1x3TB, 2x1TB), and I'm using LVM2 to manage the volumes. I have recently added a 60GB SSD to my server, and I wish to use it to house the root partition of my server (which is currently under the LVM group). I don't want to simply add it to the LVM volume group, because (AFAIK) there's no way to ensure that the SSD will be used for the root filesystem. If I just throw it at the VG, it may be used to house my media, which would defeat the purpose of having the SSD in the first place. I feel that my only solution is to somehow remove my root partition from the LVM setup and copy it across to the SSD. My boot partition is, of course, not part of the LVM group. My disk setup is as follows:
    60GB SSD: empty.
    1TB HDD: /boot, LVM space.
    1TB HDD: LVM space.
    3TB HDD: LVM space.
    I have a few logical volumes: my root (/), a 'media' volume for my media collection, a backup volume for my network backups, etc. Does anyone have any advice on how to go about this? My end goal is to have the 60GB SSD used for my boot and root partitions, with everything else on the 3TB/1TB/1TB hard drives.
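
    A minimal sketch of one way to pin the root LV to the SSD without leaving LVM (the volume group, LV, and device names below are hypothetical): pvmove can migrate only the extents belonging to a named LV, and the PV can then be marked non-allocatable so other LVs never spill onto it.
      # assuming the VG is 'vg0', the root LV is 'root', the old PV is /dev/sda2 and the SSD partition is /dev/sdd1
      pvcreate /dev/sdd1
      vgextend vg0 /dev/sdd1
      pvmove -n root /dev/sda2 /dev/sdd1   # move only root's extents onto the SSD
      pvchange -x n /dev/sdd1              # optionally forbid further allocation on the SSD
    If root really has to live outside LVM, the alternative is to create a filesystem on the SSD, rsync the root filesystem across from a live environment, and then update fstab and grub.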

  • Is it possible to either abort or interrupt and later continue a lvconvert -m1 operation?

    - by SLi
    I have run the command lvconvert -m1 rootvg/newroot /dev/sdb to convert a linear logical volume to a mirrored one. The operation has not yet finished; I interrupted the command with Ctrl-C at around the 10% mark, but the operation seems to be running in the background anyway. Is it possible to either:
    1) Abort the lvconvert operation and revert to the state before it? (This would be my preferred option.)
    2) Safely interrupt the operation and resume it later?
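
    A minimal sketch, assuming the conversion is still running in the background: dropping the mirror count back to 0 is the usual way to abandon a half-built mirror and return to a linear LV, and the sync progress can be checked first.
      lvs -a -o name,copy_percent rootvg   # see how far the mirror sync has got
      lvconvert -m0 rootvg/newroot         # remove the mirror leg again, reverting to a linear LV
    As far as I know there is no supported way to pause a half-finished lvconvert and resume it; re-running lvconvert -m1 later simply starts the mirror sync again.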

  • Xinetd, vncserver memory requirements

    - by JP19
    Hi, I am installing the following on a low-memory system: vnc4server, xinetd, xterm, openbox, obconf. I will only occasionally be logging into the VNC server for some admin work. My questions are:
    1) Does xinetd take memory/CPU even when vncserver is not running? If so, can I "run" xinetd on demand (how)? And if not, any idea how much memory it takes when vncserver is not running?
    2) Does vncserver take substantial memory when no clients are connected?
    3) Do openbox/obconf take memory when vncserver is running but no clients are connected?
    4) Do openbox/obconf take memory when no vncserver is running?
    thanks JP
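
    For what it's worth, xinetd itself is exactly the "on demand" piece: it stays resident (typically well under a megabyte of RSS) and only spawns the configured service when a connection arrives. A quick, hedged way to measure the actual footprint on the box (process names such as Xvnc4 are guesses and may differ on your system):
      ps -C xinetd,Xvnc4,openbox,obconf -o pid,rss,vsz,comm   # RSS is in KiB
      # run it once with no VNC client attached and once with one connected, and compare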

  • Adding a jar file to CLASSPATH is still not executable

    - by Simon O'Hanlon
    Perhaps I just don't understand how the whole CLASSPATH environment variable works when trying to find .jar files on your system. I thought that if you specified it, you could launch .jar files with java in much the same way that you can launch executables that are on your PATH. I have an executable Java archive (.jar file) on my system that I put in /usr/local/bin/gatk/. I added this to my CLASSPATH via: export CLASSPATH=/usr/local/bin/gatk/GenomeAnalysisTK.jar
    I thought this would make the .jar file visible to my JVM. When I try to invoke it with java -jar GenomeAnalysisTK.jar I get: Error: Unable to access jarfile .gatk/GenomeAnalysisTK.jar
    I can invoke it using the absolute path, e.g. java -jar /usr/local/bin/gatk/GenomeAnalysisTK.jar, however I'd rather not type the full path each time. I have read many of the linked tutorials, but somehow I don't seem to be getting this right and I can't understand what I am doing wrong.
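
    One detail worth noting: java -jar ignores CLASSPATH entirely; with -jar the class path comes only from the jar's own manifest. So the usual workaround is a small alias or wrapper script rather than CLASSPATH, for example (the alias/script names here are arbitrary):
      alias gatk='java -jar /usr/local/bin/gatk/GenomeAnalysisTK.jar'
      # or a wrapper script somewhere on the PATH:
      printf '#!/bin/sh\nexec java -jar /usr/local/bin/gatk/GenomeAnalysisTK.jar "$@"\n' > /usr/local/bin/gatk.sh
      chmod +x /usr/local/bin/gatk.sh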

  • Server hard disk read speed and client download speed, is there a connection? [closed]

    - by Mywiki Witwiki
    Ok so a client's download speed is only as fast as a server's upload speed, and vice versa. Based on the answers to this post: Does upload speed depend upon download speed of the server? In other words, the data transfer rate between the two computers is only as fast as the speed of the "bottleneck". Let's pretend the two computers are in two different networks and both have 100Mbps internet connection. Ben wants a copy of a file in Mark's computer hard disk with 30Mbps read speed. Does this mean that Ben can download the file at a speed of around 30Mbps only, despite having an internet connection faster than 30Mbps?
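
    As a rough worked example: a 100 Mbit/s link moves at most about 12.5 MB/s, so if Mark's disk really reads at 30 Mbit/s (about 3.75 MB/s) the disk is the bottleneck and Ben tops out around 30 Mbit/s. Watch the units, though: disk speeds are usually quoted in MB/s, and a 30 MB/s disk is roughly 240 Mbit/s, in which case the 100 Mbit/s connections would be the limiting factor instead.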

  • How can I log all traffic with its exact length?

    - by Legate
    I want to process all packets with their size going through our gateway server (running Debian 4.0). My idea is to use tcpdump, but I have two questions. The command I'm currently thinking of is tcpdump -i iface -n -t -q. Is it guaranteed that tcpdump will process all packets? What happens if the CPU is working at full capacity? The format of the output lines is IP ddd.ddd.ddd.ddd.port > ddd.ddd.ddd.ddd.port: tcp 1260. What exactly is 1260? I have the suspicion that it is the payload in bytes of the packet, which would be exactly what I need, but I'm not sure. It might be the TCP window size. Or perhaps there is an even better way of doing this? I thought about a LOG rule in iptables, but tcpdump seems easier and I don't know whether iptables can log packet lengths.
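
    For what it's worth, in tcpdump's -q output the trailing number on a tcp line is the TCP payload length in bytes (not the window size), and tcpdump is not guaranteed to keep up: under load it drops packets and reports "packets dropped by kernel" when it exits. A hedged sketch for a live running total (interface name is an example), plus the cheaper counter-based alternative:
      tcpdump -i eth0 -n -t -q -l tcp | awk '{sum += $NF; print sum " payload bytes so far"}'
      # for plain byte accounting, iptables packet/byte counters avoid capturing altogether:
      iptables -L FORWARD -v -x -n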

  • openSSL tutorial not fully working - Can sign but cannot restore original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates involved in an OpenSSL homework. We have a bunch of PDF files, I'm the CA, and each one should send me a signed PDF for me to verify. I've told them to do the following (and tried to do it by myself):
    1. Request and obtain a certificate (I'll skip this part).
    2. Create a MIME message with the PDF file in it: makemime -c "text/pdf" -a "Content-Disposition: attachment; filename=Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg
    3. Sign with OpenSSL: openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt
    4. Verify with: openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt
    5. Extract the attachment with: munpack Elaborato-verified.msg
    6. View with Acrobat Reader.
    The problem is that even if I get a file that (from its binary content) resembles a PDF file, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
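
    The usual culprit with this symptom is S/MIME canonicalisation: without -binary, openssl smime converts line endings in the input, which silently corrupts binary content such as PDFs (hence the slightly smaller extracted file). A hedged variant of the signing and verification steps that signs the PDF directly and skips the makemime wrapping:
      openssl smime -sign -binary -nodetach -in Elaborato.pdf -out Elaborato.pdf.p7m \
          -signer nomegruppo.crt -inkey nomegruppo.key -certfile ca.pem
      openssl smime -verify -in Elaborato.pdf.p7m -CAfile ca.pem -out Elaborato-verified.pdf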

  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
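
    That crash at a few hundred concurrent connections is very often the per-process file descriptor limit on the machine running ab (commonly 1024). A hedged sanity check before blaming node.js (URL and numbers are only examples):
      ulimit -n              # current fd limit for this shell
      ulimit -n 65536        # raise it (may require root or an entry in /etc/security/limits.conf)
      ab -n 20000 -c 1000 -r http://localhost:8000/   # -r: don't abort on socket receive errors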

  • Linux authentication via ADS -- allowing only specific groups in PAM

    - by Kenaniah
    I'm taking the samba / winbind / PAM route to authenticate users on our Linux servers from our Active Directory domain. Everything works, but I want to limit which AD groups are allowed to authenticate. Winbind / PAM currently allows any enabled user account in the Active Directory, and pam_winbind.so doesn't seem to heed the require_membership_of=MYDOMAIN\\mygroup parameter. It doesn't matter if I set it in /etc/pam.d/system-auth or /etc/security/pam_winbind.conf. How can I force winbind to honor the require_membership_of setting? Using CentOS 5.5 with up-to-date packages.
    Update: turns out that PAM always allows root to pass through auth, by virtue of the fact that it's root. So as long as the account exists, root will pass auth. Any other account is subjected to the auth constraints.
    Update 2: require_membership_of seems to be working, except when the requesting user has the root uid. In that case, the login succeeds regardless of the require_membership_of setting. This is not an issue for any other account. How can I configure PAM to force the require_membership_of check even when the current user is root? Current PAM config is below:
    auth sufficient pam_winbind.so
    auth sufficient pam_unix.so nullok try_first_pass
    auth requisite pam_succeed_if.so uid >= 500 quiet
    auth required pam_deny.so
    account sufficient pam_winbind.so
    account sufficient pam_localuser.so
    account required pam_unix.so broken_shadow
    password ..... (excluded for brevity)
    session required pam_winbind.so
    session required pam_mkhomedir.so skel=/etc/skel umask=0077
    session required pam_limits.so
    session required pam_unix.so
    require_membership_of is currently set in the /etc/security/pam_winbind.conf file, and is working (except for the root case outlined above).

  • OpenSSL missing during ./configure. How to fix?

    - by P K
    I was trying to install node.js and found OpenSSL support missing during ./configure. How can I fix it? Is it a mandatory step?
    # ./configure
    Checking for gcc                        : ok
    Checking for library dl                 : not found
    Checking for openssl                    : not found
    Checking for function SSL_library_init  : not found
    Checking for header openssl/crypto.h    : not found
    /home/ec2-user/node-v0.6.6/wscript:374: error: Could not autodetect OpenSSL support. Make sure OpenSSL development packages are installed. Use configure --without-ssl to disable this message.
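
    It isn't mandatory (hence the --without-ssl hint), but node's crypto/TLS support needs the OpenSSL development headers. Installing them and re-running configure is usually all it takes; the package name depends on the distribution:
      # RHEL / CentOS / Amazon Linux
      yum install -y openssl-devel
      # Debian / Ubuntu
      apt-get install -y libssl-dev
      ./configure && make && sudo make install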

  • How can I configure Cyrus IMAP to submit a default realm to SASL?

    - by piwi
    I have configured Postfix to work with SASL using plain text, where the former automatically submits a default realm to the latter when requesting authentication. Assuming the domain name is example.com and the user is foo, here is how I configured it on my Debian system so far. In the Postfix configuration file /etc/postfix/main.cf:
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_local_domain = $mydomain
    The SMTP configuration file /etc/postfix/smtpd.conf contains:
    pwcheck_method: saslauthd
    mech_list: PLAIN
    The SASL daemon is configured with the sasldb mechanism in /etc/default/saslauthd:
    MECHANISMS="sasldb"
    The SASL database file contains a single user, shown by sasldblistusers2:
    [email protected]: userPassword
    The authentication works well without having to provide a realm, as Postfix does that for me. However, I cannot find out how to tell the Cyrus IMAP daemon to do the same. I created a user cyrus in my SASL database, which uses the realm of the host domain name, not example.com, for administrative purposes. I used this account to create a mailbox through cyradm for the user foo: cm user.foo
    IMAP is configured in /etc/imapd.conf this way:
    allowplaintext: yes
    sasl_minimum_layer: 0
    sasl_pwcheck_method: saslauthd
    sasl_mech_list: PLAIN
    servername: mail.example.com
    If I enable cross-realm authentication (loginrealms: example.com), trying to authenticate using imtest works with these options: imtest -m login -a [email protected] localhost
    However, I would like to be able to authenticate without having to specify the realm, like this: imtest -m login -a foo localhost
    I thought that using virtdomains (setting it either to userid or on) and defaultdomain: example.com would do just that, but I cannot get it to work. I always end up with this error:
    cyrus/imap[11012]: badlogin: localhost [127.0.0.1] plaintext foo SASL(-13): authentication failure: checkpass failed
    What I understand is that cyrus-imapd never tries to submit the realm when trying to authenticate the user foo. My question: how can I tell cyrus-imapd to send the domain name as the realm automatically? Thanks for your insights!

  • How do I enable TUN/TAP forwarding?

    - by rafal
    I have a program which writes packets (destination address 10.3.0.2) to the TUN/TAP interface. Network:
    host1|tun0----eth1(10.3.0.1)|-------------------host2|eth1(10.3.0.2)|
    Wireshark captures these packets from interface tun0 but they are not forwarded to interface eth1. Commands:
    sysctl -w net.ipv4.ip_forward=1
    sysctl -p
    iptables -A INPUT -i tun+ -j ACCEPT
    iptables -A FORWARD -i tun+ -j ACCEPT
    iptables -A INPUT -i tap+ -j ACCEPT
    iptables -A FORWARD -i tap+ -j ACCEPT
    /etc/init.d/networking restart
    /etc/init.d/openvpn restart
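
    A hedged checklist for this kind of setup (interface names as in the diagram): besides ip_forward, reverse-path filtering frequently drops packets injected on a tun device, and host2 needs a route back to whatever source address the injected packets carry; if it has none, masquerading on eth1 papers over that.
      sysctl -w net.ipv4.conf.all.rp_filter=0
      sysctl -w net.ipv4.conf.tun0.rp_filter=0               # rp_filter silently discards 'unexpected' sources
      iptables -A FORWARD -i tun0 -o eth1 -j ACCEPT
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE   # only needed if host2 can't route replies back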

  • Why does mutt terminate with a segmentation error?

    - by hugemeow
    I pressed $ in order to sync the mailbox, but mutt just quit... In fact mutt doesn't quit every time I press $, it only quits sometimes, so how can I find out why mutt quit? Is this a bug in mutt? The error message is:
    Sorting mailbox... Segmentation fault
    Can I use strace with mutt if I want to know what happens? Or are there tools which are better for finding out more about the problem? Just now I replied to a mail, then I pressed $, then segmentation fault...
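
    strace will show the last system calls before the crash, but for a segfault a backtrace is usually more telling. A minimal sketch for capturing one (the core file name varies by system):
      ulimit -c unlimited        # allow a core dump in this shell
      mutt                       # reproduce the crash by pressing $
      gdb mutt core              # or core.<pid>; then type 'bt' at the (gdb) prompt for the backtrace
      # alternatively run it under gdb directly: gdb -ex run --args mutt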

  • send outgoing email via postfix from mail client

    - by Ey Jay
    I have installed Postfix on my Ubuntu server hosted on DigitalOcean. What I want to do: with my SMTP server set up, I want to use it to send mail from my email client. I don't need to receive, I just need to send. I can telnet example.com 25 successfully and I received the email in my inbox, but when I tried using it in an email client with
    smtp: example.com:25
    user: smtp1user
    password: smtp1userpassword
    I get an error that says "Server doesn't respond. Try changing the port." I don't know how to proceed.
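
    Two common causes here: many ISPs and client networks block outbound port 25, and the default smtpd on port 25 has no SASL auth enabled, so the client has nothing to log in with. A hedged sketch is to enable the submission service on port 587 in /etc/postfix/master.cf (SASL itself, e.g. via Dovecot or Cyrus, still has to be configured separately):
      # /etc/postfix/master.cf -- uncomment/add the submission stanza
      submission inet n       -       y       -       -       smtpd
        -o syslog_name=postfix/submission
        -o smtpd_tls_security_level=may
        -o smtpd_sasl_auth_enable=yes
        -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    Then restart Postfix and point the mail client at port 587 with STARTTLS and authentication enabled.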

  • SSH works in putty but not terminal

    - by Ryan Naddy
    When I try to ssh this in a terminal: ssh [email protected] I get the following error: Connection closed by 69.163.227.82
    When I use putty, I am able to connect to the server. Why is this happening, and how can I get this to work in a terminal?
    ssh -v [email protected]
    OpenSSH_6.0p1 (CentrifyDC build 5.1.0-472) (CentrifyDC build 5.1.0-472), OpenSSL 0.9.8w 23 Apr 2012
    debug1: Reading configuration data /etc/centrifydc/ssh/ssh_config
    debug1: /etc/centrifydc/ssh/ssh_config line 52: Applying options for *
    debug1: Connecting to sub.domain.com [69.163.227.82] port 22.
    debug1: Connection established.
    debug1: identity file /home/ryannaddy/.ssh/id_rsa type -1
    debug1: identity file /home/ryannaddy/.ssh/id_rsa-cert type -1
    debug1: identity file /home/ryannaddy/.ssh/id_dsa type -1
    debug1: identity file /home/ryannaddy/.ssh/id_dsa-cert type -1
    debug1: identity file /home/ryannaddy/.ssh/id_ecdsa type -1
    debug1: identity file /home/ryannaddy/.ssh/id_ecdsa-cert type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
    debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH_5*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_6.0
    debug1: Miscellaneous failure
    Cannot resolve network address for KDC in requested realm
    debug1: Miscellaneous failure
    Cannot resolve network address for KDC in requested realm
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    Connection closed by 69.163.227.82
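
    The verbose output points at the Centrify-built client: it reads /etc/centrifydc/ssh/ssh_config, tries Kerberos (the KDC resolution failures), and the connection dies during the Diffie-Hellman group exchange. A few hedged experiments to narrow it down (the stock client path in the last line is only a guess; it may live elsewhere or not be installed at all):
      ssh -v -o GSSAPIAuthentication=no user@sub.domain.com    # skip the Kerberos attempts
      ssh -v -F /dev/null user@sub.domain.com                  # ignore the Centrify ssh_config entirely
      /usr/bin/ssh -v user@sub.domain.com                      # try a non-Centrify OpenSSH binary, if present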

  • Is there a server distro with the capability of syncing live data to multiple machines?

    - by Adam Hart
    Scenario: I have a main server that is used for page building and storing master data, and is accessed by a few clients on site. This company also has multiple branches with their own servers that they connect to locally, but these need to work with all the same data and have it synchronized across all servers in real (or close to real) time. Is there a way, or a specific server OS, that can sync live data across all of these servers? These servers would also need to be able to:
    Configure AFP, FTP, CIFS, SMB
    Continue to host their web server and database server in a Microsoft environment, but move the file server off to commodity hardware
    Just wondering if this is even possible.

  • "Possible SYN flooding" in log despite low number of SYN_RECV connections

    - by al4
    Recently we had an apache server which was responding very slowly due to SYN flooding. The workaround for this was to enable tcp_syncookies (net.ipv4.tcp_syncookies=1 in /etc/sysctl.conf). I posted a question about this here if you want more background. After enabling syncookies we started seeing the following message in /var/log/messages approximately every 60 seconds:
    [84440.731929] possible SYN flooding on port 80. Sending cookies.
    Vinko Vrsalovic informed me that this means the syn backlog is getting full, so I raised tcp_max_syn_backlog to 4096. At some point I also lowered tcp_synack_retries to 3 (down from the default of 5) by issuing sysctl -w net.ipv4.tcp_synack_retries=3. After doing this, the frequency seemed to drop, with the interval of the messages varying between roughly 60 and 180 seconds. Next I issued sysctl -w net.ipv4.tcp_max_syn_backlog=65536, but am still getting the message in the log. Throughout all this I've been watching the number of connections in SYN_RECV state (by running watch --interval=5 'netstat -tuna |grep "SYN_RECV"|wc -l'), and it never goes higher than about 240, much lower than the size of the backlog. Yet I have a Red Hat server which hovers around 512 (the limit on this server is the default of 1024). Are there any other tcp settings which would limit the size of the backlog, or am I barking up the wrong tree? Should the number of SYN_RECV connections in netstat -tuna correlate to the size of the backlog?
    Update
    As best I can tell I'm dealing with legitimate connections here; netstat -tuna|wc -l hovers around 5000. I've been researching this today and found this post from a last.fm employee, which has been rather useful. I've also discovered that tcp_max_syn_backlog has no effect when syncookies are enabled (as per this link). So as a next step I set the following in sysctl.conf:
    net.ipv4.tcp_syn_retries = 3 # default=5
    net.ipv4.tcp_synack_retries = 3 # default=5
    net.ipv4.tcp_max_syn_backlog = 65536 # default=1024
    net.core.wmem_max = 8388608 # default=124928
    net.core.rmem_max = 8388608 # default=131071
    net.core.somaxconn = 512 # default = 128
    net.core.optmem_max = 81920 # default = 20480
    I then set up my response time test, ran sysctl -p, and disabled syncookies with sysctl -w net.ipv4.tcp_syncookies=0. After doing this the number of connections in the SYN_RECV state still remained around 220-250, but connections were starting to delay again. Once I noticed these delays I re-enabled syncookies and the delays stopped. I believe what I was seeing was still an improvement from the initial state, however some requests were still delayed, which is much worse than having syncookies enabled. So it looks like I'm stuck with them enabled until we can get some more servers online to cope with the load. Even then, I'm not sure I see a valid reason to disable them again, as they're only sent (apparently) when the server's buffers get full. But the syn backlog doesn't appear to be full with only ~250 connections in the SYN_RECV state! Is it possible that the SYN flooding message is a red herring and it's something other than the syn_backlog that's filling up? If anyone has any other tuning options I haven't tried yet I'd be more than happy to try them out, but I'm starting to wonder if the syn_backlog setting isn't being applied properly for some reason.
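
    One thing worth checking alongside the sysctls: the SYN queue is also bounded by the listening socket's own backlog, which is capped by net.core.somaxconn and by Apache's ListenBacklog (default 511), so raising tcp_max_syn_backlog alone may change nothing. A hedged way to inspect the effective queue on port 80:
      ss -lnt '( sport = :80 )'     # on a LISTEN socket, Send-Q is the configured backlog and Recv-Q the current queue
      sysctl net.core.somaxconn     # listen() backlogs are silently truncated to this value
      # Apache's own queue length is set with the ListenBacklog directive in the server config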

  • Apache security for multi-user development web server.

    - by mrmartinblue
    I've been searching and reading through documents all morning and understand that I need to use some combination of chown and probably 'jailing' to securely give programmers access to directories on my CentOS web server. Here's the situation: I have an Apache web server that hosts any number of virtual sites located in /var/www/site1, /var/www/site2, etc. I have different developers that need full access, both SSH and vsFTP, to only the site they are working on. What is the best way to create and maintain security in this scenario? My thought would be to create a new user for each coder, jail that user to the website directory they are allowed to work in, add their user to a group, and set the webroot's owner to that group. Any thoughts? Good, bad, ugly? Thanks!
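
    A hedged sketch of the ownership side, using made-up names (developer alice, site /var/www/site1); the FTP and SSH jailing then come from vsftpd's chroot_local_user and sshd's ChrootDirectory respectively, each with its own caveats (sshd in particular requires the chroot target itself to be root-owned and not group-writable):
      groupadd site1
      useradd -G site1 -d /var/www/site1 -s /bin/bash alice
      chown -R root:site1 /var/www/site1
      chmod -R 2775 /var/www/site1        # setgid keeps new files group-owned by site1
      # vsftpd.conf: chroot_local_user=YES confines FTP sessions to the user's home directory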

  • Postfix count relayed messages per user

    - by Martino Dino
    I would like to know if it's possible to count the outgoing (relayed) messages on a per-user basis in Postfix. I'm managing a small commercial SMTP relay and decided that it would be nice to have a detailed daily report on how much mail a single user has sent (and eventually enforce some limits), possibly in real time. I've looked almost everywhere and started to think that writing my own milter would be the way to go... Are you aware of anything that already exists for Postfix that can count and report relayed mail for authenticated users (a script, milter, or whatever)?
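
    Short of writing a milter, Postfix already logs the authenticated user on each smtpd connection line (sasl_username=...), so a daily per-user count can be pulled straight from the mail log. A hedged one-liner (log path varies by distribution):
      grep -o 'sasl_username=[^ ,]*' /var/log/mail.log | sort | uniq -c | sort -rn
    For enforcement rather than just reporting, a policy daemon such as policyd/cluebringer can apply per-sender quotas without custom milter code.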

  • invalid argument in bash script when port is bad

    - by user273689
    When I run these commands I get an error when there is something wrong with eth3:
    RESC="1234"
    RESD="1234"
    RESO="1234"
    RESC=$(ssh -q vmx@$1 cat /sys/class/net/$2/carrier)
    RESO=$(ssh -q vmx@$1 cat /sys/class/net/$2/operstate)
    RESD=$(ssh -q vmx@$1 cat /sys/class/net/$2/dormant)
    cat: /sys/class/net/eth3/carrier: Invalid argument
    cat: /sys/class/net/eth3/dormant: Invalid argument
    How can I handle the invalid argument so that the RESC and RESD variables keep usable values? Thanks
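
    Reading carrier on an interface that is down returns EINVAL, which is where the "Invalid argument" comes from. Since ssh propagates the remote command's exit status, the defaults can simply be kept on failure; a hedged sketch with a hypothetical helper function:
      get_attr() {   # prints the sysfs value, or the fallback "1234" if the read fails
          ssh -q vmx@"$1" cat "/sys/class/net/$2/$3" 2>/dev/null || echo "1234"
      }
      RESC=$(get_attr "$1" "$2" carrier)
      RESO=$(get_attr "$1" "$2" operstate)
      RESD=$(get_attr "$1" "$2" dormant)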

  • How do you install php-devel?

    - by user962449
    I keep getting dependency issues when I try to run yum install php-devel:
    yum install --skip-broken php-devel
    ....
    --> Finished Dependency Resolution
    php-5.1.6-32.el5.i386 from base has depsolving problems
    --> Missing Dependency: php-common = 5.1.6-32.el5 is needed by package php-5.1.6-32.el5.i386 (base)
    php-cli-5.1.6-32.el5.i386 from base has depsolving problems
    --> Missing Dependency: php-common = 5.1.6-32.el5 is needed by package php-cli-5.1.6-32.el5.i386 (base)
    --> Running transaction check
    ---> Package php.i386 0:5.1.6-32.el5 set to be updated
    --> Processing Dependency: php = 5.1.6-32.el5 for package: php-devel
    ---> Package php-cli.i386 0:5.1.6-32.el5 set to be updated
    --> Finished Dependency Resolution
    php-devel-5.1.6-32.el5.i386 from base has depsolving problems
    --> Missing Dependency: php = 5.1.6-32.el5 is needed by package php-devel-5.1.6-32.el5.i386 (base)
    Packages skipped because of dependency problems:
    autoconf-2.59-12.noarch from base
    automake-1.9.6-2.3.el5.noarch from base
    imake-1.0.2-3.i386 from base
    php-5.1.6-32.el5.i386 from base
    php-cli-5.1.6-32.el5.i386 from base
    php-devel-5.1.6-32.el5.i386 from base
    Any ideas?
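
    This pattern ("Missing Dependency: php-common = 5.1.6-32.el5") usually means php-common is already installed at a different version, often from a third-party repository, so the base packages can no longer line up. A hedged way to confirm and resolve it (the repository names below are only examples):
      yum list installed 'php*'                  # which php-common version/repo is actually installed?
      yum --showduplicates list php-common       # which versions are available where?
      # then either remove/downgrade the mismatched php-common, or install everything from one repo:
      yum install php-devel --disablerepo='*' --enablerepo=base,updates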

  • How to know how many files a process opened efficiently?

    - by Victor Lin
    I know I can use the command lsof -p xxxx | wc -l to count the open files of a process. It works, but it is just too inefficient. I have some server processes which have too many socket files, and the wc -l method never returns a result. So, what is an efficient way to find out how many files a process has open? Thanks.
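
    lsof is slow here because it resolves every descriptor (socket endpoints, file paths, and so on); just counting the entries under /proc is nearly instant and needs no name resolution:
      ls /proc/<pid>/fd | wc -l        # substitute the real PID; run as the process owner or root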

  • FreeRADIUS Default Answer

    - by jinanwow
    We are using FreeRADIUS with a MySQL database to authenticate users. We ran into an issue where our MySQL database was slow, causing the maximum number of threads to be reached. The problem is that when the server couldn't answer requests because no threads were available, it sent an Access-Reject response to the clients. Our devices cache client connections and periodically check with the server to see if they should still be allowed or be removed. The equipment is designed so that if there is no response from the server and a client is connected, it remains connected. The issue is that when the RADIUS server is at its max threads, its default answer is to send Access-Reject (verified via packet capture); however, we would like to change the default behavior to simply ignore the request (keeping the clients connected). We have fixed the MySQL database issue for now, but I would like to change the default from Access-Reject to ignoring the client altogether. I have done research but have not been able to find an answer to the question. Thanks in advance.
