Search Results

Search found 3160 results on 127 pages for 'nomad zero'.


  • On Linux, what does it mean when a directory has size 0 instead of 4096?

    - by kdt
    Here's a strange thing I haven't seen before -- a directory whose size is reported by ls as 0 instead of 4096, and I can't create any files within it.

        # ls -ld lib home
        drwxr-xr-x.  2 root root    0 Feb 7 03:10 home   <-- it has zero size
        dr-xr-xr-x. 11 root root 4096 Feb 4 09:28 lib
        # touch home/foo
        touch: cannot touch `home/foo': No such file or directory   <-- and I can't create files in it
        # rm home
        rm: cannot remove `home': Is a directory   <-- look, it really is a dir

    So what does it mean for a directory to have size 0 instead of 4096? Filesystem is ext4 on Fedora Core 14. The output of mount is:

        /dev/mapper/vg_dev-lv_root on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/vda1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    Output of du -s /home:

        0       /home

    Output of stat /home:

        File: `/home'
        Size: 0           Blocks: 0          IO Block: 1024   directory
        Device: 15h/21d   Inode: 34913      Links: 2
        Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
        Access: 2011-02-07 03:45:46.188995765 -0800
        Modify: 2011-02-07 03:11:59.980995019 -0800
        Change: 2011-02-06 07:58:45.874995002 -0800
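
    One thing worth checking first (a diagnostic sketch, not a definitive diagnosis): the stat output shows "IO Block: 1024" and device 15h, which don't look like the ext4 root volume, so /home may be the root of a separate (possibly dead or pseudo-) filesystem mounted there rather than an ordinary directory on /. Comparing device numbers makes that visible:

        # stat -c 'dev=%d  %n' / /home   <-- different numbers mean /home sits on another filesystem
        # mountpoint /home               <-- sysvinit tool, where available: says if /home is a mount point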

  • Do background processes get a SIGHUP when logging off?

    - by Massimo
    This is a followup to this question. I've run some more tests; it really doesn't seem to matter whether this is done at the physical console or via SSH, nor does it happen only with SCP; I also tested it with cat /dev/zero > /dev/null. The behaviour is exactly the same:

        1. Start a process in the background using & (or put it in the background after it's started, using CTRL-Z and bg); this is done without using nohup.
        2. Log off.
        3. Log on again.
        4. The process is still there, running happily, and is now a direct child of init.

    I can confirm that both scp and cat quit immediately if sent a SIGHUP; I tested this using kill -HUP. So it really looks like SIGHUP is not sent upon logoff, at least to background processes (I can't test with a foreground one for obvious reasons). This happened to me initially with the service console of VMware ESX 3.5 (which is based on Red Hat), but I was able to replicate it exactly on CentOS 5.4. The question is, again: shouldn't a SIGHUP be sent to processes, even if they're running in the background, upon logging off? Why is this not happening? Edit: I checked with strace, as per Kyle's answer. As I was expecting, the process doesn't get any signal when logging off from the shell where it was launched. This happens both when using the server's console and via SSH.
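
    A plausible explanation on RHEL/CentOS, where the login shell is bash (a sketch, assuming stock bash defaults): on a normal logout, bash only sends SIGHUP to its jobs when the huponexit shell option is set, and that option is off by default, so backgrounded jobs are simply reparented to init untouched. It is easy to test:

        shopt huponexit       # typically reports "huponexit  off"
        shopt -s huponexit    # turn it on for this login shell
        sleep 1000 &
        exit                  # this time the sleep receives SIGHUP and dies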

  • What can be the reason for a phpMyAdmin login not working (no reaction at all on submit)?

    - by Ivan
    When I open "http://localhost/phpmyadmin/", enter "root" as the user name and my MySQL root password, and press Go: in Firefox I was offered a download of an index.php file (of zero length); in Opera 11 it said "Connection closed by remote server". Following recommendations I've removed all packages related to phpMyAdmin, PHP, MySQL and Apache and then reinstalled them step by step (instead of just issuing apt-get install phpmyadmin and relying on the system to install the whole LAMP stack via dependencies, as I'd done before). The only change is that Firefox has stopped offering to download index.php - now when I press Go to submit my password, there is simply no visible reaction at all. What may the reason be, and how do I fix it? I use up-to-date Xubuntu 11.04. Reinstalling the whole LAMP stack and phpMyAdmin did not help, and neither did removing AppArmor. I've tried SQLBuddy instead, and it shows exactly the same problem, so I think the problem is not in phpMyAdmin but in MySQL, Apache or something else. MySQL works if I use the command line to access it. Apache & PHP also seem to work, as the login page of phpMyAdmin displays correctly.
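
    Since the login page renders but the POST produces nothing, Apache's error log is the first place to look (a sketch, assuming Ubuntu 11.04's default paths):

        sudo tail -f /var/log/apache2/error.log     # watch this while submitting the login form
        # a segfaulting PHP module on POST shows up here; also confirm PHP still executes:
        echo '<?php phpinfo();' | sudo tee /var/www/info.php
        # then browse to http://localhost/info.php (and delete the file afterwards)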

  • Windows 7: Wi-Fi connection drops intermittently - only returns after "Troubleshoot connection" resets the adapter

    - by sleske
    On our laptop (running Windows 7) the Wi-Fi connection drops intermittently. Symptoms: connectivity is suddenly lost, and the "signal strength" indicator in the tray shows zero strength and a yellow "star" symbol. What happens then: the problem does not resolve itself by just waiting. If I click on the tray icon, the "Windows network diagnostics" wizard pops up and tells me that there is a networking problem (duh). If I click on the "repair" button (not sure about the wording), the wizard works for a while, then reports that it has reset the network adapter. Then Wi-Fi works again. While the above procedure has worked every time so far, it is very annoying. It takes 10-20 s to repair the connection, and in the meantime downloads, video streams etc. may have been aborted. Some more details: the problem occurs without any apparent regularity, but usually a few minutes after powerup (though not every time). It happens frequently enough to be annoying. It is unlikely to be a router problem - another laptop running at the same time usually has no Wi-Fi problems. I am at a loss about what to try to troubleshoot this. Any ideas? Computer: Acer Aspire 7739Z. Wi-Fi card: Atheros AR5B125.
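
    Two things worth trying while the real cause is unknown (sketches, not confirmed fixes). First, untick "Allow the computer to turn off this device to save power" on the adapter's Power Management tab in Device Manager, since aggressive power saving is a common cause of drops shortly after powerup. Second, the reset the wizard performs can be scripted, which takes a second or two instead of 10-20 (run from an elevated command prompt; the interface name here is an assumption - netsh interface show interface lists the real one):

        netsh interface set interface name="Wireless Network Connection" admin=disabled
        netsh interface set interface name="Wireless Network Connection" admin=enabled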

  • Hung Java JVM failing to respond to kill -3

    - by Hans
    I have a Java VM that is hanging "randomly". I quote the "randomly" bit because there is obviously a reason for the VM hanging, but the hang does not occur periodically. We have the same software running in different customer environments, and in those environments the JVM is not hanging. While attempting to troubleshoot the hang, the process is still present but shows zero CPU utilization. I then attempt to execute kill -3, and the kill command hangs. No JVM thread dump is produced. I have spent time instrumenting the code to periodically log the thread stack traces, hoping to catch the JVM in a state that would indicate where the issue lies, but so far this attempt has not borne much fruit. Unfortunately I have not been able to reproduce this issue in my lab environment, so I am limited by what can be done at the customer site. The OSes in question are Red Hat Enterprise 5.4 and SUSE 10, running Java version 1.6.0_05-b13. Has anyone had this problem? Any ideas on why kill -3 is failing to produce a Java thread dump? Thanks!
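
    When the in-process dump mechanism is wedged (kill -3 relies on a JVM thread to write the dump), the JDK's Serviceability Agent can read the process from the outside instead (a sketch, assuming the full JDK is installed on the box; <pid> is the hung JVM):

        jstack -F <pid>                          # -F forces a dump via ptrace; no JVM cooperation needed
        # or snapshot the process and analyse the core offline:
        gcore <pid>                              # gdb's core dumper
        jstack $JAVA_HOME/bin/java core.<pid>    # jstack also accepts an executable plus core file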

  • Is it possible to use rsync over sftp (without an ssh shell) ?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error:

        rsync -av -e ssh /source user@remotehost:/target/
        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

        This message is usually caused by your startup scripts or remote shell
        facility producing unwanted garbage on the stream that rsync is using
        for its transport. The way to diagnose this problem is to run your
        remote shell like this:

            ssh remotehost /bin/true > out.dat

        then look at out.dat. If everything is working correctly then out.dat
        should be a zero length file. If you are getting the above error from
        rsync then you will probably find that out.dat contains some text or
        data. Look at the contents and try to work out what is producing it.
        The most common cause is incorrectly configured shell startup scripts
        (such as .cshrc or .profile) that contain output statements for
        non-interactive logins.

    Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs - however it is extremely slow, and not fit for production use. Is there any chance of getting rsync over sftp to work?
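
    One alternative that speaks the real sftp protocol, so no remote shell is ever invoked (a sketch, not rsync - you lose the delta-transfer algorithm, but lftp's mirror can still skip files that haven't changed):

        lftp -u user sftp://remotehost -e "mirror -R --only-newer /source /target; quit"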

  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd party app that "makes a call" to write files to a file share on our network using the currently logged-in credentials of the Windows domain user. Meaning the 3rd party app doesn't pass the app's own credentials, but simply issues a behind-the-scenes copy command to take a specified source file and copy/move it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for Document Control (think svn/git I guess, similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine... but here's my issue: I need a way to lock down the file share from being accessed/modified outside of the 3rd party app (meaning prevent Explorer/Word/Excel/etc. from getting to that share). I know I can do the following:

        1. Make the share a hidden share ($) - this definitely helps. Most users would have zero clue on how to get to such a share. Solves probably 95% of my issue.
        2. Go one step further and set the "Hidden" attribute on the folders in the hidden share - this goes a little further, in that even if a user knows the path to the hidden share, like \\server\hidden$, they still won't see folders in that share without changing their Explorer options to "show hidden files/folders".

    Any other ideas on how I can lock this down? The users still need modify rights to this share/folders, since the 3rd party app relies on their Windows permissions to that location when copying the files into it. I can't really use 3rd party tools to password-protect the folder/share without causing the 3rd party app functions to fail.
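
    For what it's worth, both of those steps can be scripted rather than clicked through (a sketch with illustrative names - the share name, path and group are assumptions):

        :: hidden share with modify-level access, then hide the folders inside it
        net share repo$=D:\Repository /grant:DOMAIN\DocUsers,CHANGE
        attrib +h D:\Repository\* /s /d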

  • How is network mounted software executed?

    - by CptSupermrkt
    I would like to understand how network-mounted software works. For example, at my place of work we have a software server. Each client machine (hundreds of them) automatically mounts directories from the software server on boot. For example, a program like MATLAB is installed just once on the software server, but each client machine can start up an instance of MATLAB. What is going on under the hood? Let's say I run /opt/bin/matlab, where /opt is mounted from the software server: what happens when I press Enter to execute matlab on a client machine? The process is on the client machine, and I've already narrowed down that there isn't any implicit or hidden file transfer (i.e. copying matlab to my machine temporarily for that session), by running matlab on a computer with nearly zero disk space (i.e. not enough room for such a transfer). Since MATLAB was installed on the server, how is my client computer executing it? What mechanism is controlling this? What is happening behind the scenes?
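
    In short (the standard mechanism, sketched): the kernel doesn't copy the binary anywhere - exec() memory-maps the executable and its libraries straight from the NFS-mounted path, and pages of code are fetched over the network on demand as they are first touched, then kept in the client's page cache. The client's own CPU runs the instructions. The mappings are visible on any running instance:

        pgrep -f matlab                  # find the pid (assumes MATLAB is running)
        grep /opt /proc/<pid>/maps       # its code segments are mapped from the mounted path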

  • How can I stop Excel from eating my delicious CSV files and excreting useless data?

    - by atroon
    I have a database which tracks sales of widgets by serial number. Users enter purchaser data and quantity, and scan each widget into a custom client program. They then finalize the order. This all works flawlessly. Some customers want an Excel-compatible spreadsheet of the widgets they have purchased. We generate this with a PHP script which queries the database and outputs the result as a CSV with the store name and associated data. This works perfectly well too. When opened in a text editor such as Notepad or vi, the file looks like this:

        "Account Number","Store Name","S1","S2","S3","Widget Type","Date"
        "4173","SpeedyCorp","268435459705526269","","268435459705526269","848 Model Widget","2011-01-17"

    As you can see, the serial numbers are present (in this case twice; not all secondary serials are the same) and are long strings of numbers. When this file is opened in Excel, the result becomes:

        Account Number  Store Name  S1           S2  S3           Widget Type       Date
        4173            SpeedyCorp  2.68435E+17      2.68435E+17  848 Model Widget  2011-01-17

    As you may have observed, the serial numbers are enclosed by double quotes. Excel does not seem to respect text qualifiers in .csv files. When importing these files into Access, we have zero difficulty. When opening them as text, no trouble at all. But Excel, without fail, converts these files into useless garbage. Trying to instruct end users in the art of opening a CSV file with a non-default application is becoming, shall we say, tiresome. Is there hope? Is there a setting I've been unable to find? This seems to be the case with Excel 2003, 2007, and 2010.
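
    One widely used workaround, given that Excel has no setting that changes how a double-clicked .csv is parsed (a sketch of what the PHP script could emit instead; the doubled quotes are standard CSV escaping): write the long serials as ="..." text formulas, which Excel evaluates to text cells and leaves intact. The trade-off is that non-Excel consumers, Access included, then see the literal ="..." string, so this variant is best generated only for the Excel-bound export:

        "Account Number","Store Name","S1","S2","S3","Widget Type","Date"
        "4173","SpeedyCorp","=""268435459705526269""","","=""268435459705526269""","848 Model Widget","2011-01-17"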

  • Trouble getting started with the STEALTH monitoring package

    - by dlanced
    Is anyone here familiar with the Linux-based STEALTH package (for monitoring FS integrity of client systems)? I'm trying to get started with a very simple configuration, but I'm running into trouble (this is running under Ubuntu 14.04):

        Config line `USE BASE/root/stealth/10.0.0.79' invalid
        STEALTH (2.11.02) started at Fri, 30 May 2014 15:25:00 +0000
        Program terminated due to non-zero exit value for
        -type f -exec /usr/bin/sha1sum {} \; (EOC Fri May 30 15:25:00 2014 127)

    Stealth is creating a binary tmp file in the Stealth server root and generating a "report" file in the start directory, but not much else. Regarding the "USE BASE...invalid" error, and just to be sure, I manually created the directories in /root, but it didn't help. And, by the way, I am running stealth with sudo. Everything seems to be configured correctly: I'm able to ssh into root@client from the stealth machine without a password. Here's my "policy" file (I've removed the email directives just for simplicity):

        DEFINE SSHCMD   /usr/bin/ssh root@10.0.0.79 -T -q exec /bin/bash --noprofile
        DEFINE EXECSHA1 -xdev -perm +u+s,g+s ( -user root -or -group root ) \
                        -type f -exec /usr/bin/sha1sum {} \;

        USE BASE/root/stealth/10.0.0.79
        USE SSH ${SSHCMD}
        USE DD /bin/dd
        USE DIFF /usr/bin/diff
        USE PIDFILE /var/run/stealth-
        USE REPORT report
        USE SH /bin/sh

        GET /usr/bin/sha1sum /root/tmp

        LABEL \nchecking the client's /usr/bin/find program
        CHECK LOG = remote/binfind /usr/bin/sha1sum /usr/bin/find

        LABEL \nsuid/sgid/executable files uid or gid root on the / partition
        CHECK LOG = remote/setuidgid /usr/bin/find / ${EXECSHA1}

        LABEL \nconfiguration files under /etc
        CHECK LOG = remote/etcfiles \
            /usr/bin/find /etc -type f -not -perm /6111 \
            -not -regex "/etc/(adjtime\|mtab)" \
            -exec /usr/bin/sha1sum {} \;

    Any ideas? Thanks!
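
    A hunch worth checking (an inference from the policy syntax itself, not from the stealth documentation): every other directive in the file separates the keyword from its value with whitespace, and the quoted error reproduces the one line where they are run together, so the parser may simply be failing to split `USE BASE/root/stealth/10.0.0.79`. A sketch of the corrected line:

        USE BASE /root/stealth/10.0.0.79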

  • How do I send e-mails with attachments to a Microsoft WebTV user?

    - by Petr 'PePa' Pavel
    My friend uses Microsoft WebTV (his e-mail address ends with @webtv.net) and I'd like to send him an e-mail with a picture attached to it. We went through a series of attempts, one of which ended in success, all others in failure. He just can't see my e-mail in his mailbox when it contains an attachment. E-mails without attachments always go through all right. What seemed to help in the first successful case was that he added my e-mail address to his address book, and my e-mail suddenly showed up - it seemed to have been delivered before, but hidden. He kept my address in his address book; however, that didn't help with the following trials. He did look into his junk folder; nothing there. I made sure the file name contains no spaces. It's a regular JPEG, named something-like-this.jpg. I downsized it to only about 50k, as I've read somewhere that that's a limit - I actually doubt this piece of information, because I think the successful attempt was larger. webtv.net contains zero information. I watched their video demo of the e-mail client, so I at least know what the user interface looks like; I've never laid my hands on the real thing. I'm an advanced user myself (a programmer), but I can't wrap my mind around this. He, on the other hand, is a very technically inexperienced user, and because he's half way across the globe, I can't come and look over his shoulder. He doesn't have a computer; afaik there's no way I could see what he sees. Any ideas on how to debug this? Thanks for your time, guys. P.S. I can't tag this "webtv" because such a tag doesn't exist yet and my reputation is too low, sorry.

  • Unrelated Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. In the data directories, I'm seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, in contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, none of which change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period - but renaming them, which works, does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a git problem. It's definitely not (just) git. Anyway, this is to help put some of the comments in context.
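
    The "could not copy extended attributes ... Operation not permitted" wording points at ACLs and extended attributes, which chown, mode-bit chmod and chflags all leave alone. A diagnostic sketch for 10.6 (these flags are OS X-specific):

        ls -le@ .git/objects/fe/                   # + marks ACL entries, @ marks extended attributes
        sudo chmod -RN .git                        # -N removes all ACL entries, recursively
        sudo find .git -exec xattr -c {} \;        # clear extended attributes file by file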

  • Microphone doesn't work

    - by mandy
    I'm having trouble with my built-in microphone. Even if I use a headphone with a mic, it doesn't really work. The weird thing is, if I clap, the green lines of the speaker icon jump, but if I speak, they don't. I have also tried some recordings, but I cannot hear myself, and adjusting the volume didn't help at all. I tried to restore it - still no change. I updated it in the device manager, but it said there that it's up to date and the devices are working properly. Then I decided to recover the whole system (wherein I pressed zero and the switch-on button), and to my surprise the settings became different, most programs were deleted, even my files. It's like it was formatted, and I'm so sad that the mic was not fixed. I really don't know what to do now. My laptop model is a Toshiba Satellite M840. I want it to return to the settings/setup it had just before I recovered the system, and to bring back all the programs that were installed and, of course, most of all, to fix my microphone so I can use Skype again and other video-calling applications. I hope someone can help me. Thanks a lot!

  • saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing - it's making ticking sounds when idle. I've acquired a replacement drive and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach. I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries - but not so many retries that they endanger other parts of the drive from being saved. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly - e.g. pausing every x MB/GB - would be better than just running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that isn't itself backed up anywhere else. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently - orphaned inodes, incorrect free block/inode counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
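
    What's described here - one fast pass, then targeted retries on the bad areas only - is exactly the strategy GNU ddrescue implements (a separate tool from plain dd; a sketch, with the device name and paths as assumptions):

        ddrescue -n /dev/sdb /mnt/new/disk.img /mnt/new/rescue.log    # first pass: grab everything readable, skip bad areas
        ddrescue -r3 /dev/sdb /mnt/new/disk.img /mnt/new/rescue.log   # then retry the remaining bad sectors up to 3 times

    The log file makes the whole operation resumable, so it can be stopped at any point (e.g. if the drive heats up) and continued later.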

  • Outlook 2010 PST not indexing

    - by kellyllek
    I've had this problem for months: Outlook is unable to search recent items in the inbox. I was running Outlook 2010 Beta, and I've just moved to the trial version, hoping to solve the issue. I have tons of PST files, but one central one I'm mainly concerned with; as of now it seems none of it is being indexed. I've been through all the sites and made all the changes: rebuilt the index, changed the names of the PST files, run scanpst, stopped and started the search services, made sure the indexing option is checked under Windows Features in Programs and Features, etc. Status now says 'zero items left to index', and 150,000 have been indexed - I think I have a lot more files than that, and also nothing is showing up in any search. I'm not sure what else to do. Side question: I'm going to be moving out of Outlook. However, I have 10+ GB of PST files accumulated over the years. I want to merge them and make them searchable in the easiest way possible. Any idea on how to do that? Could I even move over to Thunderbird right now and be able to index and search my PST files? Also, Google Desktop won't index Outlook 2010 email either...

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail: anything tagged in Gmail will appear in a folder related to that tag, in the "All Mail" folder, and possibly in the "Inbox" and "Sent Mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This can make searching difficult, and is obviously wasteful of disk space. The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates the extra copy in the inbox. Secondly, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd like only one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem - but it wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally, efficiently?

  • Relinking a deleted file

    - by mbac32768
    Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof). I can't find any Linux tools to do this, at least with cursory googling. What do you got, serverfault? Edit 1: The reason catting the file from /proc/<pid>/fd/N isn't awesome enough is that the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace; what I want is a way of re-creating the reference. Edit 2: 'debugfs ln' works, but the risk is too high since it frobs raw filesystem data. The recovered file is also crazy inconsistent: the link count is zero and I can't add links to it. I'm worse off this way, since I can just use /proc/<pid>/fd/N to access the data without corrupting my fs.
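
    Short of re-creating the link, there is a way to get a consistent copy out of /proc even while the writer keeps appending (a sketch: it pauses the process briefly rather than relinking the inode):

        kill -STOP <pid>                        # freeze the writer so the file stops changing
        cp /proc/<pid>/fd/<N> /recovered/file   # copy the still-open file to a new name
        kill -CONT <pid>                        # let the writer continue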

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case [1]. I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though not being a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turn it on... and it sounds like four professional-grade hair dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents. Basically, my last desktop build included a $45-at-the-time graphics card, and it's been MacBook Pros and workstations since then, so I have zero idea whether I'll just be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum framerate, or should I just send this thing back now and get the quietest card in my price range?

    [1] One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I'm only mentioning that because I suspect it could drop the overall output a few decibels, but obviously not that many.

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it. We are trying to decide if we should set up a single dual-quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing, because it provides a true hardware failover solution. The DB would be stored in a shared LUN on a SAN, via iSCSI. Plus, if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) if we add extra licenses, or with RAM. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual-processor boxes running these systems? If we build out the Oracle boxes with special hardware - hardware iSCSI cards, TOE NICs - will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

  • Are there tools available for trimming PDF margins?

    - by Charles Duffy
    I have an ebook I'm trying to read in PDF format on a Kindle. Unfortunately, the page headers and footers have some content (page number and copyright info, respectively) preventing the device from scaling the actual text to match its usable viewing area, thus leaving the actual content too small to read. Various tools are available which will trim off whitespace, but the Kindle already does this; my goal, by contrast, is to remove printed matter outside of a defined bounding box, and the only tool I've found for the purpose is moderately expensive commercial software. I could probably generate a mask in Inkscape, split out the individual pages using pdftk, apply the mask to each page individually (outputting to PostScript), and recombine the numerous PostScript files into a single PDF. However, the decode/re-encode steps would be pretty unfortunate in terms of document size; something able to operate with a bit more finesse would be ideal. I have all major operating systems handy (Windows, several modern Linux distros, a Mac, etc.), so solutions don't need to be constrained by platform. Suggestions? (I've reported the issue to the author, who mentioned it to his editor, who hasn't done anything about the issue over the course of more than a month, making the zero-work approach evidently nonproductive.)
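
    One finesse-level option (a sketch; the box numbers are placeholders, in PostScript points measured from the bottom-left corner): Ghostscript can stamp a CropBox onto every page via pdfmark, with no rasterizing and no PostScript round-trip, so the page content streams pass through largely intact:

        gs -o cropped.pdf -sDEVICE=pdfwrite \
           -c "[/CropBox [54 54 558 720] /PAGES pdfmark" \
           -f input.pdf

    The material outside the box is hidden rather than deleted, which is usually enough for a reader that scales pages to the crop area.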

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a Gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    Edit: I conducted some experiments.

    Write performance on the laptop

    The laptop has an xfs filesystem with full-disk encryption. It uses aes-cbc-essiv:sha256 cipher mode with a 256-bit key length. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation

    The files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (the cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance

    I ran iperf between the two clients. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
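
    With disks and raw network both ruled out, the remaining suspect is the ssh transport itself - its encryption and MAC run single-threaded, and unlike the FX-8150, a laptop CPU without AES-NI pays full price for them. A quick way to isolate it (a sketch; host names are placeholders):

        # push zeros through the same ssh channel rsync uses; if this also tops out
        # around 22 MB/s, ssh's cipher/MAC overhead is the bottleneck, not rsync:
        ssh user@workstation 'dd if=/dev/zero bs=1M count=1024' > /dev/null

        # 2012-era OpenSSH still allowed the much cheaper arcfour cipher as a test:
        rsync -av -e 'ssh -c arcfour' user@workstation:/src/ /dst/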

  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    The three partitions for my system are created with LVM on a LUKS partition (dm-crypt). These are /home, / and swap; the filesystem is ext4. They are encrypted because they are on my laptop and I don't want a laptop thief to get my data. But I often share my laptop with other people, so they can access my encrypted partitions, and I don't want them to be able to recover my cache and all the data I've deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.) But so far I haven't found any way to wipe this free space successfully. I tried:

        dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors]

    so my partition was completely full of data; df /dev/mapper/home said 100% was used and there were 0 sectors available. But I could still recover gigs of data with photorec, even though I selected recovery from the free space only. photorec displays:

        /dev/mapper/home - 340 GB / 317 GiB (RO)

    but df displays that the size of /home is just 313G. Why the difference, and what does the 340 GB mean? It looks like there is a place on my /dev/mapper/home partition that I can't access to overwrite, but that I can access to recover. I also checked for corrupted sectors, but there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of the loads of recoverable files, to securely delete them?
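
    Two things worth checking (sketches and assumptions, not a diagnosis). The unit mismatch is harmless: photorec's "340 GB / 317 GiB" is one size printed in decimal and binary units, and the gap down to df's 313G is the filesystem's own metadata (inode tables, journal), which photorec scans but df never counts. The likelier culprit for the recoverable gigs: ext4 reserves about 5% of blocks for root, and a dd run as an ordinary user hits "disk full" with those blocks untouched - roughly 15 GB on a filesystem this size. Both are easy to verify, and filling as root covers the reservation too:

        sudo tune2fs -l /dev/mapper/home | grep -i 'reserved block count'
        sudo dd if=/dev/zero of=/home/fillitup bs=1M    # as root, runs until ENOSPC
        sync && sudo rm /home/fillitup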

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! this uses lots of system memory very fast!):

        $ cat /dev/zero | less

    From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests it would limit itself in such a way. In addition, I couldn't find any documentation via Google on the subject. Any light shed on this quite surprising discovery would be great! System information: quad-core Intel i7, 8 GB RAM.

        $ uname -a
        Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

        $ less --version
        less 458 (GNU regular expressions)
        Copyright (C) 1984-2012 Mark Nudelman
        less comes with NO WARRANTY, to the extent permitted by law.
        For information about the terms of redistribution,
        see the file named README in the less distribution.
        Homepage: http://www.greenwoodsoftware.com/less

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04 LTS
        Release:        14.04
        Codename:       trusty
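
    A check worth running before blaming less itself (a sketch): when the input is a pipe, less has to keep everything it has read in RAM, which on an 8 GB machine makes it a prime candidate for the kernel's OOM killer - and an OOM kill leaves evidence in the kernel log:

        dmesg | grep -i -e 'out of memory' -e 'killed process'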

  • AWS instance went down for restart, came back up and has been "Waiting for network configuration... " for 12 hours

    - by kavisiegel
    About a month ago, I created a new instance, re-configured everything, and mounted the old instance's drive to the new one. I then detached the old drive from the instance, but it was still mounted, so it errored out. Now, come the outage last month: when the instance boots back up, the drive isn't there, because during the downtime it was dismounted, and the boot hangs because fstab is looking for it. I get word that the server's not up today, so I check the logs, re-attach the drive and restart. Now I'm getting "Waiting for network configuration..." and it doesn't seem to be doing anything. Some googling told me that I'm going to need to start from zero again, which I don't think is right. I created a new instance, which booted with no problem, then I stopped it and swapped the two old drives into the new instance. Still the same error. Using the fresh AMI, it worked. I figure it's just a misconfigured Amazon file. I can attach the drive to a functioning instance on the same AMI and copy some files over, then try again - but I don't know where to even start or what files to check.
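
    Separately from rescuing the current boot, the original hang is avoidable: on Ubuntu instances, marking secondary EBS volumes as non-critical in /etc/fstab keeps a missing volume from blocking boot again (a sketch; the device, mount point and filesystem are assumptions):

        /dev/xvdf   /data   ext4   defaults,nofail,nobootwait   0   2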
