Search Results

  • Estimating compressed file size using a list parameter

    - by Sai
    I am currently compressing a list of files from a directory with the following command: tar -cvjf test_1.tar.gz -T test_1.lst --no-recursion The above command compresses only the files named in the list. I do this because the list is generated to just fit a DVD. However, compression shrinks the output below the estimated size, leaving abundant free space on the DVD. This is something like a knapsack problem: I would like to estimate the compressed file size in advance and add more files to the list. I found that it is possible to estimate compressed size using the following command: tar -cjf - Folder/ | wc -c This command does not take a list parameter. Is there a way to estimate the compressed size of a file list? I am also looking into options like perl scripts. Edit: I think I should provide more information, since I have done a lot of web searching. I came across a perl script (link) that more or less emulates the knapsack algorithm. The problem with that script is that it splits the files in their original, uncompressed state; when I compress the files after splitting them, there is room to add more files, which I consider inefficient. There are 2 ways I could resolve the inefficiency: a) Compress the files individually into a directory using a script; the compressed files would give a good estimate, and I could apply the grouping computed from them to the uncompressed originals. b) Check whether the compressed file is smaller than the required size and, if so, keep adding files until the requirement is met. However, adding new files to an existing compressed archive is an optimization problem in itself.
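
    A minimal sketch, assuming GNU tar (test_1.lst is the list file from the question): tar accepts -T when writing the archive to stdout just as it does when writing to a file, so the compressed size of the exact list can be measured without creating the archive:

        # stream the archive for the listed files to stdout and count its bytes
        tar -cjf - -T test_1.lst --no-recursion | wc -c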

  • Give root password for maintenance

    - by Jevgeni Smirnov
    After entering shutdown now in a terminal, everything runs normally and then:

        All processes ended within 2 seconds...done
        INIT: Going single user
        INIT: Sending processes the TERM signal
        INIT: Sending processes the KILL signal
        Give root password for maintenance (or...

    I press Ctrl+D, and it shows me the Debian login screen. Shutdown through the GUI works properly. UPDATE 1: It seems some process hangs. Moreover, I have managed to power off the server after several retries. Recently I installed only ntp and ntpdate, nothing more. I suppose one of them might be conflicting with iptables.

  • Achieving/maxing out 1Gbps connectivity on an nginx server

    - by Ansell Cruz
    This page (http://www.remsys.com/nginx-on-1gbps) claims it can max out a 1Gbps line using a JBOD setup and no RAID. Currently I'm on a 1Gbps dedicated port with RAID 10 (4x2TB, the disks that come with 100tb.com servers), and I'm only peaking at 500-550Mbps. wa% shows around 20% at all times. I'm looking to max out this 1Gbps port because I have an unmetered service from them. Do you think the setup on the referenced page would do better than my current RAID setup? Do you have any other suggestions on how to max out the performance of this server? Thanks in advance.
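
    A first diagnostic sketch (device sampling intervals illustrative; sysstat package assumed installed): with wa% at 20%, it is worth confirming whether the disks or the NIC saturate first before changing the array layout:

        # per-device utilization and wait times
        iostat -x 1
        # per-interface throughput
        sar -n DEV 1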

  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
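
    A hedged aside (URL and counts illustrative): a common wall at high ab concurrency is the per-process open-file limit, often 1024, on either the client or the server; raising it on the test machine before running ab helps rule the client out:

        # raise the per-process open-file limit for this shell, then re-test
        # (going above the hard limit needs root or limits.conf changes)
        ulimit -n 65536
        ab -n 20000 -c 1000 http://localhost:8000/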

  • FreeRADIUS Default Answer

    - by jinanwow
    We are using FreeRADIUS with a MySQL database to authenticate users. We ran into an issue where our MySQL database was slow, causing the maximum number of threads to be reached. The problem with this is that when the server couldn't answer requests because no threads were available, it sent an Access-Reject response to the clients. Our devices cache client connections and periodically check with the server to see whether clients should still be allowed or be removed. The equipment is designed so that if there is no response from the server, a connected client remains connected. The issue is that when the RADIUS server is at its maximum threads, its default answer is to send Access-Reject (verified via packet capture), but we would like to change the default behavior to simply ignore the request (keeping the clients connected). We have fixed the MySQL issue for now, but I would like to change the default from Access-Reject to ignoring the client altogether. I have done research but have not been able to find an answer to the question. Thanks in advance.

  • getaddrinfo: command not found

    - by jebbie
    I've installed a new Ubuntu 12.04 on an AWS EC2 instance and everything worked fine until now. I followed the instructions in this great tutorial: http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ Now I'm at the point of installing monit, and when I restart the service I get this error message: monit: Cannot translate '(none)' to FQDN name -- Name or service not known I started googling, and someone wrote that monit uses getaddrinfo in its startup process to determine the hostname. So I thought I would try for myself what getaddrinfo delivers, and got: getaddrinfo: command not found I guess something is missing on my system. Can anyone help?
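
    A hedged note: getaddrinfo is a C library function, not a shell command, so "command not found" is expected. The '(none)' in monit's error suggests the machine's FQDN isn't configured; getent (part of glibc) exercises the same resolver from the command line:

        # check what the resolver returns for this host
        hostname --fqdn
        getent hosts "$(hostname)"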

  • Is it possible to either abort or interrupt and later continue a lvconvert -m1 operation?

    - by SLi
    I have run the command lvconvert -m1 rootvg/newroot /dev/sdb to convert a linear logical volume to a mirrored one. The operation has not yet finished; I interrupted the command with Ctrl-C at around the 10% mark, but the operation seems to be running in the background anyway. Is it possible to either: 1) abort the lvconvert operation and revert to the state before it (this would be my preferred option), or 2) safely interrupt the operation and resume it later?
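
    A hedged sketch for option 1 (names taken from the question; verify against a backup first): converting back to zero mirror images drops the half-built mirror leg and returns the LV to linear:

        # remove the mirror image living on /dev/sdb, reverting to linear
        lvconvert -m0 rootvg/newroot /dev/sdb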

  • How do you install php-devel?

    - by user962449
    I keep getting dependency issues when I try to run yum install php-devel:

        yum install --skip-broken php-devel
        ....
        --> Finished Dependency Resolution
        php-5.1.6-32.el5.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-32.el5 is needed by package php-5.1.6-32.el5.i386 (base)
        php-cli-5.1.6-32.el5.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-32.el5 is needed by package php-cli-5.1.6-32.el5.i386 (base)
        --> Running transaction check
        ---> Package php.i386 0:5.1.6-32.el5 set to be updated
        --> Processing Dependency: php = 5.1.6-32.el5 for package: php-devel
        ---> Package php-cli.i386 0:5.1.6-32.el5 set to be updated
        --> Finished Dependency Resolution
        php-devel-5.1.6-32.el5.i386 from base has depsolving problems
        --> Missing Dependency: php = 5.1.6-32.el5 is needed by package php-devel-5.1.6-32.el5.i386 (base)
        Packages skipped because of dependency problems:
            autoconf-2.59-12.noarch from base
            automake-1.9.6-2.3.el5.noarch from base
            imake-1.0.2-3.i386 from base
            php-5.1.6-32.el5.i386 from base
            php-cli-5.1.6-32.el5.i386 from base
            php-devel-5.1.6-32.el5.i386 from base

    Any ideas?
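
    A hedged first check: messages like these usually mean the installed php-common comes from a different version or repository than the php packages yum is trying to pull in; listing both sides makes the skew visible:

        # compare installed php packages against what the repos offer
        yum list installed 'php*'
        yum list available 'php*'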

  • How to word wrap and justify text like the output of man?

    - by cody
    What is the command that word-wraps and justifies a text file so that the output looks like that of a man page?

        All of these system calls are used to wait for state changes in a child of the calling process, and obtain information about the child whose state has changed. A state change is considered to be: the child terminated; the child was stopped by a signal; or the child was resumed by a signal. In the case of a terminated child, performing a wait allows the system to release the resources associated with the child; if a wait is not performed, then the termi- nated child remains in a "zombie" state (see NOTES below).

    Thanks.
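
    A partial sketch with coreutils (wrapping only; full right-justification as in man pages generally comes from a formatter such as nroff, or from par where installed):

        # reflow paragraphs at 72 columns; fmt wraps but does not justify
        fmt -w 72 input.txt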

  • SSL error: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch

    - by Tiffany Walker
    ERROR:

        SSL error: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch

    STEPS:

        openssl genrsa -out SITE.TLD.key 2048
        openssl req -new -key SITE.TLD.key -out SITE.TLD.csr
        (send CSR to SSL site to sign)
        add CERT to SITE.TLD.crt
        add CA to SITE.TLD.ca

    Then I chained them:

        cat SITE.TLD.crt SITE.TLD.ca > chained.cert

    Any idea what I am doing wrong? I am using LiteSpeed HTTPd.
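
    A standard diagnostic sketch (file names taken from the question): the certificate and private key match only if their RSA moduli match, so comparing digests shows which file is out of step:

        # the two digests must be identical for the pair to match
        openssl x509 -noout -modulus -in SITE.TLD.crt | openssl md5
        openssl rsa  -noout -modulus -in SITE.TLD.key | openssl md5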

  • How do I enable TUN/TAP forwarding?

    - by rafal
    I have a program which writes packets (destination address 10.3.0.2) to the TUN/TAP interface. Network:

        host1|tun0----eth1(10.3.0.1)|-------------------host2|eth1(10.3.0.2)|

    Wireshark captures these packets on interface tun0, but they are not forwarded to interface eth1. Commands:

        sysctl -w net.ipv4.ip_forward=1
        sysctl -p
        iptables -A INPUT -i tun+ -j ACCEPT
        iptables -A FORWARD -i tun+ -j ACCEPT
        iptables -A INPUT -i tap+ -j ACCEPT
        iptables -A FORWARD -i tap+ -j ACCEPT
        /etc/init.d/networking restart
        /etc/init.d/openvpn restart
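
    A hedged follow-up sketch (interface names from the question): forwarding also needs a working return path; if host2 has no route back to the tun0 subnet, masquerading forwarded traffic leaving eth1 is a quick way to test whether routing, rather than filtering, is the blocker:

        # rewrite the source address of forwarded packets leaving eth1
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE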

  • iOS 7 loops on the "trust this computer" dialog

    - by gcb
    I am trying to transfer files to the work iPad from my Debian 7 box. When I plug it into the computer's USB port, the iPad shows the dialog about trusting this computer, and the computer shows a GNOME alert that the iPad is locked and that I should unlock it and try again. I press "Trust" on the iPad and try again in GNOME, and it starts again, over and over, endlessly. There are dozens of threads about this on the Apple support forums with no solution, just dozens of "me too" flags, e.g. https://discussions.apple.com/message/23082859#23082859 (44 me-toos, 2k views). Here is the log/messages output I get:

        Oct 23 21:17:39 dotmatrix kernel: [ 1928.517766] usb 2-1.7: USB disconnect, device number 16
        Oct 23 21:17:39 dotmatrix kernel: [ 1928.715441] usb 2-1.7: new high-speed USB device number 17 using ehci_hcd
        Oct 23 21:17:40 dotmatrix kernel: [ 1928.811031] usb 2-1.7: New USB device found, idVendor=05ac, idProduct=12ab
        Oct 23 21:17:40 dotmatrix kernel: [ 1928.811036] usb 2-1.7: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Oct 23 21:17:40 dotmatrix kernel: [ 1928.811039] usb 2-1.7: Product: iPad
        Oct 23 21:17:40 dotmatrix kernel: [ 1928.811041] usb 2-1.7: Manufacturer: Apple Inc.
        Oct 23 21:17:40 dotmatrix kernel: [ 1928.811043] usb 2-1.7: SerialNumber: fec5e0f6a6fa18a936de3c53af661051d290275e
        Oct 23 21:17:40 dotmatrix mtp-probe: checking bus 2, device 17: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.7"
        Oct 23 21:17:40 dotmatrix mtp-probe: bus: 2, device: 17 was not an MTP device
        Oct 23 21:17:43 dotmatrix kernel: [ 1932.346505] usb 2-1.7: USB disconnect, device number 17

    If I never press the trust dialog, it will stay there until I remove the cable; but the log shows that it gave up 3 seconds after the cable was connected.
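
    A hedged sketch: on Linux the trust handshake is handled by usbmuxd, and libimobiledevice's pairing tool (if installed) drives it from the command line and reports why it fails, which is more informative than the looping GNOME dialog:

        # attempt the pairing handshake, then verify it
        idevicepair pair
        idevicepair validate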

  • Migrating away from LVM

    - by Kye
    I have an Ubuntu home media server with 4.5TB split across a few hard drives (1x3TB, 2x1TB), and I'm using LVM2 to manage the volumes. I recently added a 60GB SSD to the server, and I wish to use it to house the root partition (which is currently inside the LVM group). I don't want to simply add it to the LVM volume group, because (afaik) there's no way to ensure that the SSD will be used for the root filesystem; if I just throw it at the VG, it may end up housing my media, which would defeat the purpose of having the SSD in the first place. I feel my only solution is to somehow remove the root partition from the LVM setup and copy it across to the SSD. My boot partition is, of course, not part of the LVM group. My disk setup is as follows:

        60GB SSD: empty
        1TB HDD: /boot, LVM space
        1TB HDD: LVM space
        3TB HDD: LVM space

    I have a few logical volumes: my root (/), a 'media' volume for my media collection, a backup volume for my network backups, etc. Does anyone have any advice on how to go about this? My end goal is to have the 60GB SSD used for my boot and root partitions, with everything else on the 3TB/1TB/1TB hard drives.
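
    A hedged counterpoint sketch (device and LV names are placeholders): LVM can in fact pin a logical volume to a chosen PV, so an alternative is to add the SSD to the volume group and move only the root LV's extents onto it:

        # add the SSD to the VG, then migrate just the root LV's extents
        vgextend myvg /dev/ssd_device
        pvmove -n root /dev/current_pv /dev/ssd_device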

  • Is there a server distro with the capability of syncing live data to multiple machines?

    - by Adam Hart
    Scenario: I have a main server that is used for page building and storing master data, accessed by a few clients on site. The company also has multiple branches, each with its own server that staff connect to locally, but these need to work with all the same data and have it synchronized across all servers in real (or close to real) time. Is there a way, or a specific server OS, that can sync live data across all of these servers? These servers would also need to be able to:

        Configure AFP, FTP, CIFS, SMB
        Continue to host their web server and database server in a Microsoft environment, but move the file server off to commodity hardware

    Just wondering if this is even possible.

  • How to restore backed-up email files in qmail

    - by Maysam
    I have a problem restoring some old backed-up mail files on a mail server that uses qmail. When I copy a new email file into the /cur directory, the message count shown for the inbox increases, but when I click on the inbox I don't see the newly copied email; I can only see the old emails. I also deleted the maildirsize and courierimapuiddb files, and they were automatically created again, but that didn't help, and I still cannot see the email in my inbox. Is there something I am missing? How can I restore the backed-up email files? Please note that when I copy the email files into the /.sent-mail/cur directory, they are all displayed in my sent box, but that doesn't happen for inbox files in /cur.
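
    A hedged guess with a sketch (paths illustrative; standard Maildir convention): files under cur/ are expected to carry an info suffix of the form :2,<flags>, and some IMAP servers skip files without it, which would match new copies staying invisible while old messages show:

        # append a minimal "seen" info suffix to restored files that lack one
        for f in /path/to/Maildir/cur/*; do
            case "$f" in
                *:2,*) ;;                  # already has an info suffix
                *) mv "$f" "$f:2,S" ;;     # flags here are a guess
            esac
        done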

  • OpenOffice.org 3 waits 25 seconds before opening

    - by Joey Adams
    I'm on Fedora 14, and OpenOffice 3.3.0 takes a long time to open (about 30 seconds, sometimes less). It isn't a CPU or disk performance issue; there is simply a very long delay before the program opens. It appears to be a frivolous network connection timing out. According to Wireshark, it tries to look up dulcimer.(none), which fails, after which it tries to look up dulcimer.(none).mylitestream.com (dulcimer is my hostname, and LiteStream is my ISP). Is there a way to work around this bug in OpenOffice?
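
    A hedged workaround sketch (hostname taken from the question): the ".(none)" fragment suggests the lookup is assembled from an unset domain name, and making the bare hostname resolve locally usually short-circuits the DNS timeout:

        # map the hostname to loopback so the lookup returns instantly
        echo "127.0.0.1   dulcimer" | sudo tee -a /etc/hosts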

  • How can I log all traffic with its exact length?

    - by Legate
    I want to process all packets going through our gateway server (running Debian 4.0), together with their sizes. My idea is to use tcpdump, but I have two questions. The command I'm currently thinking of is tcpdump -i iface -n -t -q. 1) Is it guaranteed that tcpdump will process all packets? What happens if the CPU is working at full capacity? 2) The format of the output lines is IP ddd.ddd.ddd.ddd.port > ddd.ddd.ddd.ddd.port: tcp 1260. What exactly is 1260? I suspect it is the packet's payload in bytes, which would be exactly what I need, but I'm not sure; it might be the TCP window size. Or perhaps there is an even better way of doing this? I thought about a LOG rule in iptables, but tcpdump seems easier, and I don't know whether iptables can log packet lengths.
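
    A hedged sketch for the second question: adding -v makes tcpdump print the IP total length explicitly, which removes any ambiguity between payload size and window size in the -q output:

        # "length N" in this output is the IP packet's total length in bytes
        tcpdump -i iface -n -t -v tcp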

  • File system concepts (df command)

    - by mkab
    I'm finding it difficult to understand some things about the df command. Suppose I type df and get the following output:

        Filesystem   1k-blocks    Used         Avail        Capacity    Mounted on
        /dev/da0s1   some number  some number  number       percentage  /win
        /dev/da0s2   some number  some number  number       percentage  /win/home
        /dev/da0s3a  some number  some number  number       percentage  /
        devfs        some number  some number  number       percentage  /dev
        /dev/da0s3g  some number  some number  number       percentage  /local
        /dev/da0s3h  some number  some number  -number      102%        /reste
        /dev/da0s3d  some number  some number  number       percentage  /tmp
        /dev/da1s3f  some number  some number  number       percentage  /usr
        /dev/da1s3e  some number  some number  number       percentage  /var
        /dev/da1s1a  some number  some number  number       percentage  /public

    Are the answers to the following questions correct? 1) How many physical drives do I have? Ans: 2, da0s1 and da1s1. 2) How many physical partitions on each disk? Ans: 8 for da0s1 and 1 for da1s1. 3) How many BSD partitions on each physical partition? Ans: impossible to determine; we have to use -T to determine the type. 4) How is it possible for the file system /dev/da0s3h to be filled at 102%? And where is this overflowed data written? Ans: I have no idea for this one. Thanks.
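
    A hedged pointer for question 4 (FreeBSD assumed from the device names): UFS reserves a fraction of each file system (minfree, 8% by default) that only root may fill, so df can legitimately report over 100% when the reserve is in use; the data lives in the same file system, not anywhere special:

        # print the file system's current tuning, including minfree
        tunefs -p /dev/da0s3h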

  • "Possible SYN flooding" in log despite low number of SYN_RECV connections

    - by al4
    Recently we had an apache server which was responding very slowly due to SYN flooding. The workaround was to enable tcp_syncookies (net.ipv4.tcp_syncookies=1 in /etc/sysctl.conf). I posted a question about this here if you want more background. After enabling syncookies, we started seeing the following message in /var/log/messages approximately every 60 seconds:

        [84440.731929] possible SYN flooding on port 80. Sending cookies.

    Vinko Vrsalovic informed me that this means the syn backlog is getting full, so I raised tcp_max_syn_backlog to 4096. At some point I also lowered tcp_synack_retries to 3 (down from the default of 5) by issuing sysctl -w net.ipv4.tcp_synack_retries=3. After doing this, the frequency seemed to drop, with the interval of the messages varying between roughly 60 and 180 seconds. Next I issued sysctl -w net.ipv4.tcp_max_syn_backlog=65536, but I am still getting the message in the log. Throughout all this I've been watching the number of connections in the SYN_RECV state (by running watch --interval=5 'netstat -tuna |grep "SYN_RECV"|wc -l'), and it never goes higher than about 240, much lower than the size of the backlog. Yet I have a Red Hat server which hovers around 512 (the limit on this server is the default of 1024). Are there any other tcp settings which would limit the size of the backlog, or am I barking up the wrong tree? Should the number of SYN_RECV connections in netstat -tuna correlate to the size of the backlog?

    Update: As best I can tell I'm dealing with legitimate connections here; netstat -tuna|wc -l hovers around 5000. I've been researching this today and found this post from a last.fm employee, which has been rather useful. I've also discovered that tcp_max_syn_backlog has no effect when syncookies are enabled (as per this link). So as a next step I set the following in sysctl.conf:

        net.ipv4.tcp_syn_retries = 3          # default=5
        net.ipv4.tcp_synack_retries = 3       # default=5
        net.ipv4.tcp_max_syn_backlog = 65536  # default=1024
        net.core.wmem_max = 8388608           # default=124928
        net.core.rmem_max = 8388608           # default=131071
        net.core.somaxconn = 512              # default=128
        net.core.optmem_max = 81920           # default=20480

    I then set up my response time test, ran sysctl -p and disabled syncookies with sysctl -w net.ipv4.tcp_syncookies=0. After doing this the number of connections in the SYN_RECV state still remained around 220-250, but connections were starting to delay again. Once I noticed these delays I re-enabled syncookies and the delays stopped. I believe what I was seeing was still an improvement over the initial state, but some requests were still delayed, which is much worse than having syncookies enabled. So it looks like I'm stuck with them enabled until we can get some more servers online to cope with the load. Even then, I'm not sure I see a valid reason to disable them again, as they're only sent (apparently) when the server's buffers get full. But the syn backlog doesn't appear to be full with only ~250 connections in the SYN_RECV state! Is it possible that the SYN flooding message is a red herring and it's something other than the syn_backlog that's filling up? If anyone has any other tuning options I haven't tried yet, I'd be more than happy to try them out, but I'm starting to wonder if the syn_backlog setting isn't being applied properly for some reason.
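
    A hedged observational sketch: the kernel's cumulative counters show whether SYNs are actually being dropped, which helps separate "cookies sent as designed" from "backlog setting not applied":

        # cumulative counters for overflowed listen queues and dropped SYNs
        netstat -s | grep -i -E 'listen|syn'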

  • Change Audio title from English to Sinhalese using ffmpeg

    - by user330461
    I inserted an extra sound track in my video file and it works well:

        ffmpeg -i news.mov -i news.wav -map 0:0 -map 0:1 -map 1:0 -pass 1 \
          -vcodec libx264 -preset fast -b 512k -minrate 512k -maxrate 512k \
          -bufsize 512k -threads 0 -f mp4 -an -y /dev/null && \
        ffmpeg -i news.mov -i news.wav -map 0:0 -map 0:1 -map 1:0 -pass 2 \
          -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -preset fast -b 512k \
          -minrate 512k -maxrate 512k -bufsize 512k -threads 0 -f mp4 news.mp4

    The default audio track comes with the label "English", and I would like to give it the label "Sinhalese". The second audio track comes up without a label, as "track#1", and I would like to give it the label "Tamil". How do I do that?
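
    A hedged sketch (output name illustrative): ffmpeg's per-stream metadata options set the language tag most players display; it can be appended to the second pass above, or applied afterwards without re-encoding:

        # tag the audio streams with ISO 639-2 codes (sin=Sinhalese, tam=Tamil)
        ffmpeg -i news.mp4 -map 0 -c copy \
          -metadata:s:a:0 language=sin \
          -metadata:s:a:1 language=tam \
          news-tagged.mp4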

  • My control key doesn't work; how do I fix it?

    - by Blaine LaFreniere
    My right control key doesn't work as it should. E.g. right Ctrl+T won't open new tabs in Firefox, right Ctrl+W won't switch windows in vim, etc. I know the key isn't physically broken, because xev shows that the right Ctrl key generates events, but it just isn't responding as I expect in applications. Screenshot: http://i46.tinypic.com/33w1h76.png I tried Kim's answer but it still doesn't work:

        blaine@blaine-laptop ~ $ xmodmap -pke | grep 105
        keycode 105 = Control_R Control_R Control_R Control_R Control_R

    I tried mapping it as Control_L as well; that didn't work either. The computer is a laptop, so I am unable to plug the keyboard into another computer.
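
    A hedged thing to try: an active XKB option can leave a modifier inert even though xev still reports its events; clearing the option list for the session rules that out:

        # clear all XKB options, then re-test the right Ctrl key
        setxkbmap -option ''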

  • How to prevent unison from synchronizing a file that is still uploading

    - by user134600
    I use CentOS 5.8 Final. My situation is that I run unison from cron with the line below:

        */1 * * * * /usr/bin/unison > /dev/null 2>&1

    and a default profile like this:

        root = /var/www
        root = ssh://web02.example.com//var/www
        auto=true
        batch=true
        confirmbigdel=true
        fastcheck=true
        group=true
        owner=true
        prefer=newer
        silent=true
        times=true

    So the www folder is synchronized every minute. My problems are: 1) I upload a file larger than 10 MB to www from a client with user1 permissions (www is owned by user1). While the file is still uploading, unison runs in that minute, and the upload's owner suddenly changes to root:root. 2) When I edit a file in the www folder and save it while unison is running, the file's owner changes to root:root, where it should be user1:user1. Does anyone know about this problem?
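
    A hedged mitigation sketch (pattern illustrative): having uploads arrive under a temporary name that unison is told to ignore, then renaming into place once complete, keeps half-written files out of the sync:

        # in the unison profile: skip in-progress uploads named *.part
        ignore = Name *.part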

  • Multiple interfaces to one IP address?

    - by Delan Azabani
    At present, I have:

        a Netgear router with DHCP off, at 192.168.0.1
        my computer:
            eth0 at 192.168.0.2
            wlan0 at 192.168.0.2

    The wlan0 interface always connects to the router, while the eth0 interface connects to other computers over crossover and acts as a dnsmasq DHCP server for network boot and installation. If I use the GNOME NetworkManager to enable both connections, that is, with wlan0 connected to the router/internet and eth0 to another computer, both as 192.168.0.2, I cannot access the internet while eth0 is connected. Why is this? How can I configure my computer to use wlan0 for Internet traffic but keep eth0 for itself (the latter is working but blocking wlan0)?
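
    A hedged sketch of the usual fix (addresses illustrative): two interfaces sharing one address make the kernel's routing ambiguous; moving eth0 onto its own subnet lets wlan0 keep the default route while dnsmasq keeps serving the crossover link:

        # give eth0 a private subnet of its own; keep the default via wlan0
        ip addr flush dev eth0
        ip addr add 192.168.1.1/24 dev eth0
        ip route replace default via 192.168.0.1 dev wlan0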

  • Adding a jar file to CLASSPATH still doesn't make it executable

    - by Simon O'Hanlon
    Perhaps I just don't understand how the CLASSPATH environment variable works when it comes to finding .jar files on your system. I thought that if you specified it, you could launch .jar files with java in much the same way that you can launch executables that are on your PATH. I have an executable Java archive (.jar file) on my system, which I put in /usr/local/bin/gatk/. I added it to my CLASSPATH via: export CLASSPATH=/usr/local/bin/gatk/GenomeAnalysisTK.jar I thought this would make the .jar file visible to my JVM. But when I try to invoke it with java -jar GenomeAnalysisTK.jar I get: Error: Unable to access jarfile .gatk/GenomeAnalysisTK.jar I can invoke it with the absolute path, e.g. java -jar /usr/local/bin/gatk/GenomeAnalysisTK.jar, but I'd rather not type the full path each time. I have read many of the linked tutorials, but somehow I don't seem to be getting this right and can't understand what I am doing wrong.
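
    A hedged explanation with a sketch: when java runs with -jar, the CLASSPATH variable is ignored entirely and the argument is treated as a plain file path, so a shell alias (or a small wrapper script) is the usual way to get a short invocation:

        # -jar ignores CLASSPATH, so wrap the absolute path instead
        alias gatk='java -jar /usr/local/bin/gatk/GenomeAnalysisTK.jar'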
