Search Results

Search found 40479 results on 1620 pages for 'binary files'.

  • Cron job failing to back up a Postgres database

    - by user705142
    I'm unsure what's going on here: I've got a backup script which runs fine under root. It produces a 300kb database dump in the proper directory. When running it as a cron job with exactly the same command, however, an empty gzip file appears with nothing in it. The cron log shows no error, just that the command has been run. This is the script:

        #! /bin/bash
        DIR="/opt/backup"
        YMD=$(date "+%Y-%m-%d")
        su -c "pg_dump -U postgres mydatabasename | gzip -6 > "$DIR/database_backup.$YMD.gz" " postgres
        # delete backup files older than 60 days
        OLD=$(find $DIR -type d -mtime +60)
        if [ -n "$OLD" ] ; then
            echo deleting old backup files: $OLD
            echo $OLD | xargs rm -rfv
        fi

    And the cron job:

        01 10 * * * root sh /opt/daily_backup_script.sh

    It produces a database_backup file, just an empty one. Anyone know what's going on here?
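
    One common difference between an interactive root shell and cron is the environment: cron runs with a minimal PATH and no login profile, so a pipeline that works under root can silently produce an empty file under cron. Below is a minimal sketch of the same backup with absolute paths, the dump and the compression separated so the nested double quotes go away, and stderr captured to a log. The binary locations and the log path are assumptions for illustration, not taken from the question.

        #!/bin/bash
        # Sketch only: adjust paths to match the actual system.
        DIR="/opt/backup"
        YMD=$(date "+%Y-%m-%d")
        LOG="/var/log/pg_backup.log"

        # Run only the dump under the postgres account; the compression and
        # redirection happen outside of su, which sidesteps the quoting issue.
        su -c '/usr/bin/pg_dump -U postgres mydatabasename' postgres 2>> "$LOG" \
            | /bin/gzip -6 > "$DIR/database_backup.$YMD.gz"

        # In /etc/crontab, an explicit PATH avoids depending on cron's
        # minimal default environment:
        #   PATH=/usr/local/bin:/usr/bin:/bin
        #   01 10 * * * root /bin/bash /opt/daily_backup_script.sh >> /var/log/pg_backup.log 2>&1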

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100 GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up in the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
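
    One approach sometimes used for caches of this size is to keep htcacheclean running permanently in daemon mode at a low I/O priority, so it trims continuously instead of scanning half a million entries in one burst. The sketch below is an assumption about a possible setup rather than a tested fix; the interval and limit values are illustrative, and the -d/-n/-i flags should be checked against the htcacheclean man page for the Apache version in use.

        # Run htcacheclean as a daemon (-d: wake every 30 minutes), be "nice"
        # about I/O and CPU (-n), only act when the cache changed (-i),
        # delete empty directories (-t), and keep the cache under 15 GB (-l),
        # all at idle I/O priority via ionice.
        ionice -c3 nice -n19 htcacheclean -d30 -n -i -t -p/htcache -l15G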

    Read the article

  • What antivirus software supports updates without an internet connection?

    - by Michael Gundlach
    I'm putting antivirus software on Windows 7 computers in the middle of Africa. The computers don't have internet access, but still need to be protected against viruses from CDs and thumbdrives. Separate from these computers is one computer that does have extremely spotty internet access. What's the best AV software for this situation? The important part, as I see it, is that we need to keep the computers up to date, but can't let the AV software suck down updates at its leisure: the computers are disconnected, and getting emails onto the connected computer is a challenge enough. We thought we might transfer update files to the connected computer using a protocol that can handle repeated connection drops (e.g. FTP with resume.) Then we'd manually apply the update files to the disconnected computers. Does any AV software support this? Is there a better solution?
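
    On the resumable-transfer point: plain HTTP/FTP clients such as wget or curl can already restart an interrupted download where it left off, which fits the spotty connection described above. The sketch below only illustrates that mechanism; the URL is hypothetical, and whether it is usable depends on the chosen vendor publishing offline/manual definition packages that can be copied to disconnected machines.

        # -c / --continue resumes a partial download; --tries=0 retries
        # indefinitely and --waitretry pauses between attempts after drops.
        wget -c --tries=0 --waitretry=30 \
            "http://example.com/av-definitions/latest-update.zip"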

    Read the article

  • Slow WLAN file transfer between server and tablet

    - by user266985
    My file server is running Ubuntu 12.04 and I'm sharing files from it over Samba. It is connected via gigabit ethernet. My desktop, running Windows 8.1, is also connected via gigabit ethernet. I can transfer files between the two and completely saturate that gigabit pipe. However, I just got a Surface Pro 2, and I'm trying to stream HD movies from my server to the device over WiFi. For some reason, I can't break much past 1.5MB/s transferring files over the network. I've tried streaming through XBMC and a standard file copy; no difference. To add to the confusion, if I connect to my guest network and then use my VPN server (installed on the router) to access the file server, I get around 3.2MB/s. I've been running diagnostics to determine the root cause, and I think I've found it, but I have no idea what is causing it or how to fix it.

    Router: Asus RT-N66U
    Surface Pro 2 network card: Marvell Avastar 350N (driver 19/09/2013 v14.69.24044.150)
    inSSIDer: Link Score 100, Co-Channels 0, Overlapping 0, 5GHz network channel 48+44

    iperf with the file server as server and the Surface Pro 2 as client - TCP performance: acceptable

        ------------------------------------------------------------
        Server listening on TCP port 5001
        TCP window size: 85.3 KByte (default)
        ------------------------------------------------------------
        [  4] local 192.168.0.90 port 5001 connected with 192.168.0.56 port 57367
        [ ID] Interval       Transfer     Bandwidth
        [  4]  0.0- 1.0 sec  10.1 MBytes  84.7 Mbits/sec
        [  4]  1.0- 2.0 sec  10.4 MBytes  87.6 Mbits/sec
        [  4]  2.0- 3.0 sec  10.6 MBytes  88.8 Mbits/sec
        [  4]  3.0- 4.0 sec  10.7 MBytes  89.5 Mbits/sec
        [  4]  4.0- 5.0 sec  10.1 MBytes  84.4 Mbits/sec
        [  4]  5.0- 6.0 sec  10.2 MBytes  85.8 Mbits/sec
        [  4]  6.0- 7.0 sec  7.04 MBytes  59.1 Mbits/sec
        [  4]  7.0- 8.0 sec  10.8 MBytes  90.2 Mbits/sec
        [  4]  8.0- 9.0 sec  10.6 MBytes  89.1 Mbits/sec
        [  4]  9.0-10.0 sec  8.62 MBytes  72.3 Mbits/sec
        [  4]  0.0-10.0 sec  99.2 MBytes  83.1 Mbits/sec

    iperf with the Surface Pro 2 as server and the file server as client - performance: poor

        ------------------------------------------------------------
        Client connecting to 192.168.0.56, TCP port 5001
        TCP window size: 22.9 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.0.90 port 40233 connected with 192.168.0.56 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0- 1.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  1.0- 2.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  2.0- 3.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  3.0- 4.0 sec  1.25 MBytes  10.5 Mbits/sec
        [  3]  4.0- 5.0 sec  1.62 MBytes  13.6 Mbits/sec
        [  3]  5.0- 6.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  6.0- 7.0 sec  1.38 MBytes  11.5 Mbits/sec
        [  3]  7.0- 8.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  8.0- 9.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  9.0-10.0 sec  1.62 MBytes  13.6 Mbits/sec
        [  3]  0.0-10.1 sec  15.0 MBytes  12.4 Mbits/sec

    For some reason, it gets capped and I haven't got a clue why. Any suggestions? Edit: My link speed is reported as 270Mbps by Windows. I'm less than two metres from the router with a clear line of sight.
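
    One detail that stands out in the output above is the default TCP window in the slow direction (22.9 KByte vs. 85.3 KByte). A quick way to test whether that window is the bottleneck is to repeat the poor-direction run with an explicitly larger window on both ends. This is only a diagnostic sketch; the IP address is taken from the output above and the 256k value is an arbitrary test size.

        # On the Surface Pro 2 (the slow receive direction), listen with a
        # larger TCP window:
        iperf -s -w 256k

        # From the file server, run the same 10-second test toward the Surface:
        iperf -c 192.168.0.56 -w 256k -t 10

    If the throughput rises substantially with the larger window, the limit is the receive window negotiated by the Marvell adapter/driver rather than the radio link itself.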

    Read the article

  • SharePoint 2007: reset permission inheritance

    - by e-mre
    I have a SharePoint 2007 document library which has several levels of folders and files. Some folders in the middle of the hierarchy do not inherit permissions from their parents and have their own unique permissions defined. It is a huge library and there are many folders like this. I am currently changing the permission model of my library, and I want to reset all those unique permissions and have all of them inherit permissions from the library root. (Something like the "Replace child object permissions" checkbox available in the Windows file system security dialog.) If this is not possible, seeing a list of the folders that have unique permissions defined would also do.

    Read the article

  • Excel - Disable AutoFormatting on Import

    - by Philip Wales
    How can I stop Microsoft Excel from auto-formatting data when it is imported from a text file? Specifically, I want it to treat all of the values as text. I am auditing insurance data in Excel before it is uploaded to the new database. The files come to me as tab-delimited text files. When loaded, Excel auto-formats the data, causing the leading 0s on ZIP codes, routing numbers and other codes to be chopped off. I don't have the patience to reformat all of the columns as text and guess how many zeros need to be replaced. Nor do I want to click through the import wizard and specify that each column is text. Ideally I just want to turn off Excel's auto-formatting completely and edit every cell as if it were plain text. I don't do any formulas or charts, just plain grid text editing.

    Read the article

  • How can I make .vimrc read from an external file?

    - by GorillaSandwich
    I'd like to modify my .vimrc to read the value of a variable from an external file. How can I do this? Specifically, a friend and I share a git repo with our .vim files, but there are a few small differences in what we want in our configs. So most of the file is common, but we use if statements to determine whether to load user-specific sections, like this:

        let whoami = "user2"
        if whoami == "user1"
        ...

    After checking our common .vimrc out of source control, we each have to change the let whoami assignment so our own section will be loaded. Instead, I'd like to keep a separate file, which can be different for each of us, and from which vim will load that variable value. Maybe another angle on this is: will vim automatically read all the files in my .vim directory? If so, we could each put a symlink in there called username.vim, and link that to an external file that would be different for each of us.
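
    On the "will vim read all files in my .vim directory" angle: Vim sources every *.vim file under ~/.vim/plugin/ at startup, which makes the symlink idea workable. Below is a shell sketch of that approach; the file names are purely illustrative.

        # Each user keeps their own settings file outside the shared repo and
        # links it into ~/.vim/plugin/, which Vim sources automatically:
        mkdir -p ~/.vim/plugin
        ln -sf ~/vim-local-settings/whoami.vim ~/.vim/plugin/local.vim

    One caveat: plugin scripts are sourced after .vimrc runs, so a variable needed by the if statements inside .vimrc itself would have to be pulled in explicitly (e.g. with a source or runtime line near the top of the shared .vimrc) rather than via the plugin directory.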

    Read the article

  • Windows 7 - cannot access my own external disk

    - by Tomas
    I use Windows 7 Home Premium and an external USB disk with an NTFS partition. I cannot write-access my own files on it, even as a member of the Administrators group! Is there any way to get around this permission checking without actually writing permission information to every folder on it? I have 3 external disks (up to 1 TB), and I have hundreds of thousands of files on each! Doing a permission change that recursively goes through all folders on all my disks is plain brain damage! 1) Is there any way to change it somehow globally (like mount options...), or to get around this annoying permission checking? It was working normally in Win XP! 2) If not, and I must do the recursive operation on all folders, how do I do it PERMANENTLY, so that I don't need to do it again on another Windows 7 computer?

    Read the article

  • large RAID 10 vs small RAID1

    - by user116399
    The machine will store and serve millions of small files (<15 KB each), and all those files require a total storage space of 400 GB. Considering the exact same SATA hard drive maker and model, in the exact same environment (OS, CPU, RAM, RAID controller, etc.), which one of the setups below would be faster? A) RAID 1 with 2 drives of 2 TB each, making up a total storage of 2 TB. B) RAID 10 with 4 drives of 2 TB each, making up a total storage of 4 TB. [EDIT]: I'm aware RAID 10 is faster than RAID 1. The larger the disk, at least in theory, the longer seeks/writes will take. So, will the performance gain of RAID 10 be outweighed by the "drag" caused by the larger disk area when seek/write operations happen?
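
    If test hardware is available, a synthetic small-file random I/O run on each layout gives a more direct answer than theory. Below is a rough fio sketch; the directory, sizes, job count and the 4k block size (standing in for the <15 KB files) are all illustrative assumptions, not values from the question.

        # Random reads over a working set on the mounted array; run the same
        # job on a RAID 1 volume and a RAID 10 volume and compare IOPS/latency.
        fio --name=smallfile-randread --directory=/mnt/testarray \
            --rw=randread --bs=4k --size=4g --numjobs=4 \
            --ioengine=libaio --direct=1 --runtime=60 --time_based \
            --group_reporting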

    Read the article

  • Prioritize file sharing performance in Windows Server 2008

    - by cmbrnt
    I've got a server running Windows Server 2008, and I use it mainly for sharing files throughout the domain from a number of disks. It's running on VMware ESXi 4.0, in case that matters. My problem is that when I log in to the server to check user permissions etc., the access speed to the files on the remote disks almost grinds to a halt. I haven't been able to measure the speeds, but I would guess it slows down to about 100 kB/s as soon as I log in. This is on a gigabit network, and the problem is the same for all users, even the ones connected to the same switch as the server. I've assigned 2 GB of RAM to the server and reserved 1.5 GHz of processor power for it. I don't have to do anything special on the server for this halt to occur. How can I make sure file sharing is prioritized on the server, so that no matter what applications I'm running, file sharing always works properly? Could this be a VMware issue?

    Read the article

  • Accidentally deleted the software for a MyPassport Essential SE 1TB hard drive

    - by user26192
    Hi, I'm posting for a friend of mine. She bought a WD MyPassport Essential SE 1 TB hard drive the other day. When she plugged the USB cable into her laptop, the drive could not be recognized by the SmartWare software. While she was doing a backup of her files, McAfee was running in the background. Since the backup was taking so long to finish, she decided to pause it. She tried to delete the partially backed-up files, but instead she accidentally deleted everything in the folder, including the pre-installed software. Now, when she tries to start up the MyPassport, the SmartWare software doesn't show up anymore. Can someone please give us advice on what she can do about this? Thank you.

    Read the article

  • TextWrangler (OS X) -- Simple Text Macro Help Needed

    - by bobber205
    I often, when parsing log/error files, need to replace the HTML-escaped entities &lt; and &gt; with literal < and > in order to be able to efficiently understand what's going on in the files. I know TextWrangler has a macro ability, but I can't figure out an efficient way to do this. Since I have to do it so often, I'd love to just have a simple keybinding or menu item to do this simple find-and-replace-all for me. Anyone know how to do this? ^_^
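
    Outside of TextWrangler's own macro system, the same substitution can be scripted from a Terminal and bound to whatever launcher is convenient. The sketch below assumes the goal is turning HTML-escaped angle brackets back into literal ones; the file name is invented for the example.

        # In-place substitution with a .bak backup; BSD sed on OS X requires
        # an explicit argument to -i, hence the quoted '.bak'.
        sed -i '.bak' -e 's/&lt;/</g; s/&gt;/>/g' error.log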

    Read the article

  • Additional Hard Drives for Servers

    - by Abs
    Hello all, I am developing a web app where I will have to save lots of files, and I am just trying to work out the directory structure and where things should be saved to. I have had a look at the dedicated server I want to buy, and for storage it shows this: 2x 1TB SATA in RAID1. The space is enough, but I am guessing this will not be on one hard drive? Will I have to save files on one hard drive and, when that fills up, use the other? For the Fedora distro, what is the path for the second drive? Is there a primary drive where I will be able to set up my webroot? I am sorry, this is all new to me. It would be great to get links and advice on how things actually work when it comes to additional hard drives etc. Thanks all
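
    A quick note on the "two drives" point: RAID 1 mirrors the pair, so from Fedora's point of view there is a single block device (and a single filesystem/web root) rather than a second drive to fill up later. Below is a rough sketch of how to confirm that from a shell; the /dev/md0 name assumes Linux software RAID (mdadm), while a hardware controller would simply expose one ordinary disk.

        # Software RAID status (only present if mdadm is in use):
        cat /proc/mdstat

        # List block devices and mounted filesystems to see where the
        # mirrored volume is mounted (e.g. / or /var/www):
        lsblk
        df -h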

    Read the article

  • What to do when you cannot type a letter in Cygwin/bash

    - by Stenemo
    I had a very strange issue that happened as I was editing .bashrc or possibly .profile, which made it impossible to press the letter "a" (it was not showing up on screen, although I was able to type it in all other programs as usual). I am not sure, but I was trying to get aliases to work on my computer at the time, so it is possible that I somehow aliased "a" to "", although I am not sure how that would have happened. I solved this by copying all the files from "cygwin\etc\skel\" (these are the backup starting files, in case you ever need to replace them) into my home folder. I'm just leaving this question here so that other people who run into the same problem know what to do. I'm not sure why I am unable to press "solve your question" at the moment, but I hope that someone who reads this knows how to edit this question so that the next person with this problem knows what to do. Also, I'm not sure if this belongs in this forum or another one, but I guess it is more of a Unix question.
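
    An alias on its own would not normally stop a single letter from echoing, but a stray readline key binding or terminal setting could. If it happens again, a few quick checks from the broken shell (before replacing the dotfiles) can narrow it down; this is only a diagnostic sketch.

        # Any alias or function named "a"?
        alias | grep '^alias a='
        type a

        # Readline bindings for the "a" key (bash's line editor):
        bind -p | grep '"a"'

        # Terminal settings that could be swallowing characters:
        stty -a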

    Read the article

  • Best practice for administering a (hadoop) cluster

    - by Alex
    Dear all, I've recently been playing with Hadoop. I have a six-node cluster up and running, with HDFS, and have run a number of MapReduce jobs. So far, so good. However, I'm now looking to do this more systematically and with a larger number of nodes. Our base system is Ubuntu, and the current setup has been administered using apt (to install the correct Java runtime) and ssh/scp (to propagate the various conf files). This is clearly not scalable over time. Does anyone have any experience of good systems for administering (possibly slightly heterogeneous: different disk sizes, different numbers of CPUs on each node) Hadoop clusters automagically? I would consider diskless boot, but imagine that with a large cluster, getting the cluster up and running might be bottlenecked on the machine serving the OS. Or some form of distributed Debian apt to keep the machines' native environment synchronised? And how do people successfully manage the conf files over a number of (potentially heterogeneous) machines? Thanks very much in advance, Alex
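
    For the conf-file propagation specifically, a small step up from hand-run scp is a node list plus rsync in a loop (or a parallel wrapper such as pdsh); configuration-management tools like Puppet, Chef or Cfengine are the usual answer at larger scale. The sketch below is illustrative only; host names, the nodes.txt file and the target path are made up.

        #!/bin/bash
        # Push the local Hadoop conf directory to every node listed in nodes.txt.
        while read -r node; do
            rsync -az --delete conf/ "${node}:/opt/hadoop/conf/"
        done < nodes.txt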

    Read the article

  • stsadm farm backup exits with ffffffff

    - by overbyte
    I have a SharePoint 2007 farm that uses stsadm through Scheduled Tasks to run farm backups. It always worked fine; however, one day it ran for a couple of seconds and just exited with code ffffffff. I looked at Event Viewer and the SharePoint logs themselves, and nothing unusual happened at the time this job ran. No files were created, so an spbackup.log doesn't exist. I searched the net for batch file and STSADM return codes, but this error code doesn't even seem to be documented. Any other recommended places to look for issues like this?

    Read the article

  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a coworker who is responsible for such requests, I was given a series of PDF files. The point-of-sale program that is used, for some reason, answers requests for this type of information in the form of PDF files. The issue: the PDF files look to be in a format that should easily copy and paste into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day. The end of this Date/Time information is followed by all of the Total Sales values that should be attached to a Date and Time of transaction. (NOTE: There are no duplicated Dates in the Date column, i.e. multiple transactions for a day only have one yyyy/mm/dd listed for the first row but not the following rows.) While it was a huge pain, it was possible, in about four or five steps, to get the single column of data broken out into three columns that matched the PDF. The second page of the PDF file, when copy/pasted into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third being the Hours of the transactions, and the final third being the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales, due to the lack of duplicated Dates in the Date column mentioned above. My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites, so far with absolutely nothing remotely coming anywhere near usable output. (Both methods have so far done nothing but produce blank documents.) Before I go back and pester my coworker into figuring out a way to create a report in some other format than PDF, is there any method by which to take the data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet that will look the same? I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit to let somebody actually see what it is that needs to be dealt with, just let me know. The PDFs are less than 100 KB each, so sending them shouldn't be a burden to any interested party.
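
    If installing the poppler (or Xpdf) command-line utilities is an option, pdftotext with the -layout flag tries to preserve the physical column positions instead of flattening them, which often yields text a spreadsheet can import as fixed-width data with every column treated as text. A sketch, with file names invented for the example:

        # Keep the original page layout so the three columns stay aligned.
        pdftotext -layout daily_sales.pdf daily_sales.txt

        # The resulting .txt can then be imported into Excel/Calc using the
        # "fixed width" import option, with each column formatted as Text.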

    Read the article

  • Print each bookmark of a PDF separately

    - by Dave
    I have a very large (1000-page) PDF which contains about 100 ten-page documents, one after the other. I would like to send them to my office printer as individual files so the printer will print each one double-sided and staple it individually. I'm using Adobe Acrobat X and think the first step is to bookmark the start of each of those 100 documents, but I don't know the next step. I also have a batch printing program, so if I can extract each of those 100 bookmarks to individual files, that would work too. Thanks for all the help.
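
    Acrobat X also has a Split Document command (under the Pages tools) that can split by top-level bookmarks or by page count, which may already cover this. As a command-line alternative, the sketch below uses pdftk and assumes every document really is exactly ten pages; the input and output file names are illustrative.

        #!/bin/bash
        # Split a 1000-page PDF into 100 ten-page files: doc_001.pdf ... doc_100.pdf
        for i in $(seq 0 99); do
            first=$(( i * 10 + 1 ))
            last=$(( first + 9 ))
            pdftk big.pdf cat ${first}-${last} output "$(printf 'doc_%03d.pdf' $((i + 1)))"
        done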

    Read the article

  • How can I check the actual size used in an NTFS directory with many hardlinks?

    - by kbyrd
    On a Win7 NTFS volume, I'm using cwrsync, which supports --link-dest correctly, to create "snapshot" type backups. So I have: z:\backups\2010-11-28\cygdrive\c\Users\... z:\backups\2010-12-02\cygdrive\c\Users\... The content of 2010-12-02 is mostly hard links back to files in the 2010-11-28 directory, but there are a few new or changed files only in 2010-12-02. On Linux, the 'du' utility will tell me the actual size taken by each incremental snapshot. On Windows, Explorer and du under Cygwin are both fooled by hard links and show 2010-12-02 taking up a little more space than 2010-11-28. Is there a Windows utility that will show the correct space actually used?
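
    For reference, the Linux behaviour mentioned above comes from GNU du counting each hard-linked file only once per invocation, so listing the snapshots oldest-first charges each later snapshot only for its new or changed files. A sketch of that invocation (the paths are the layout above translated to a POSIX-style path, purely for illustration):

        # The figure for the second directory is the incremental space used by
        # the 2010-12-02 snapshot, because files hard-linked from 2010-11-28
        # have already been counted once.
        du -shc backups/2010-11-28 backups/2010-12-02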

    Read the article

  • Turn off write barriers on ext4 while the FS is mounted

    - by user462982
    I am doing some I/O-intensive DB imports that have been running for several days now, and the I/O performance has dropped tremendously over time. The DB data files (log files) are on an ext4-formatted logical volume which is mounted with default options (I did not specify anything special in fstab). Since I just learned that ext4 enables write barriers by default: Q: Is there some way to disable write barriers online (i.e. while the file system is in use)? I cannot interrupt the import and don't want to restart it again. I am aware that write barriers might not be the only thing impeding performance, and that it is a bad idea to have write barriers disabled on journalling file systems if data safety is important (e.g. on a production system).
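
    For what it's worth, mount options on ext4 can generally be changed with a remount, which does not unmount the filesystem or disturb open files; whether the barrier setting actually took effect should be verified in /proc/mounts afterwards. A sketch, with the mount point invented for the example:

        # Disable write barriers on the already-mounted volume:
        mount -o remount,barrier=0 /var/lib/db

        # Confirm the active mount options:
        grep /var/lib/db /proc/mounts

        # Re-enable barriers once the import is done:
        mount -o remount,barrier=1 /var/lib/db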

    Read the article

  • External Hard Drive needs format problem

    - by Saher
    I recently bought a new ADATA Classic external hard drive (500 GB). I had transferred around 29 GB of data onto it by the time I installed my new Windows 7 operating system. After some work with the hard drive (copying/deleting files, etc.), I closed it, and for some reason it couldn't be opened again: it now asks me to format. I don't want to format the hard drive; I have important data I need. Is there a way I can retrieve my data? Is the Recover My Files program from GetData the right choice? Part 2 of my question: why might such a thing happen (requiring a format before it will open)? Is it a hard drive problem, or is it just a corrupted file or folder? Thanks,

    Read the article

  • How do I log file system read/writes by filename in Linux?

    - by Casey
    I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified. I'm familiar with powertop, and it appears this works to an extent, in that it shows the user files that were written to. Are there any other utilities that support this feature? Some of my findings:

    powertop: best for write-access logging, but more focused on CPU activity
    iotop: shows real-time disk access by process, but not file names
    lsof: shows the open files per process, but not real-time file access
    iostat: shows the real-time I/O performance of disks/arrays, but does not indicate file or process
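
    One option in the same spirit, if installing a package is acceptable, is the kernel's inotify interface: inotifywait from the inotify-tools package prints one line per event with the affected file name (it watches paths rather than processes, and recursive watches on very large trees can be expensive). A sketch, with the watched path invented for the example:

        # Watch a directory tree and log each access/modify/create/delete
        # event with a timestamp and the file name.
        inotifywait -m -r \
            --timefmt '%F %T' --format '%T %w%f %e' \
            -e access -e modify -e create -e delete \
            /path/to/watch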

    Read the article

  • How to generate good serials for DNS zones with Puppet?

    - by Bittrance
    My tradition is to set all zone serials to the timestamp at modification. Now that Puppet is my new religion, I want to set serial timestamps when building zone files from exported resources. A somewhat trivialized example may look like this:

        file { "/tmp/dafile":
            content => inline_template("<%= Time.now.to_i %>"),
        }

    The problem with this approach is that the content will be different every time, which will (ultimately) provoke a rebuild of the zone files on each Puppet config poll. Is there some way I can insert a timestamp without it being included in the data that is compared against the previous state?

    Read the article

  • What will Time Machine do when the backup drive is full?

    - by Joel Budgor
    When Time Machine says "I will delete the oldest files first", does it mean this literally? Here is a theoretical example. Source drive: 300 GB, consisting of one 280 GB file and one 1 GB file. Backup drive: 300 GB. The initial backup will back up both files, using 281 GB. If I modify the 1 GB file 21 times, what will Time Machine do when I run out of room on the backup drive: delete the original 280 GB file because it is the oldest file, or delete the oldest version of the file I have modified 21 times? I hope it would delete the oldest version of the file I have modified 21 times, but I want to be sure. Thanks, Joel Budgor

    Read the article

  • Eclipse Juno Switch Editor in Order

    - by inspectorG4dget
    In case it matters: OS: Mac OS X Lion (10.7.4); Eclipse: Juno, build id 20120614-1722. I have several files open in my Eclipse workspace as tabs. The default shortcuts for the previous and next editors are ⌘F6 and ⌘Shift F6. I know how to change these shortcuts; that's not the issue. What I want to do is switch between editors in the order in which they're arranged in the tab bar. Currently, the editors cycle in order of last used/viewed. So, if I had three files open (A, B and C, in that order), I'm currently editing A, and I edited B last, then the "Previous Editor" shortcut takes me to B instead of C (and vice versa). Is there any way for me to get this functionality out of Eclipse (and if so, how)? Thank you

    Read the article
