Search Results

Search found 37883 results on 1516 pages for 'sparse files'.

  • Compress with Gzip or Deflate my CSS & JS files

    - by muhammad usman
    I have a fashion website running WordPress, and I want to compress (gzip or deflate) my CSS and JS files. I have tried many .htaccess snippets to enable compression, but none of them work. Would anybody help me, please? My phpinfo is at http://deemasfashion.co.uk/1.php. Below are the configurations I have tried without success; a few of them look similar, but there are differences in the syntax.

    mod_gzip:

        <ifModule mod_gzip.c>
        mod_gzip_on Yes
        mod_gzip_dechunk Yes
        mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
        mod_gzip_item_include handler ^cgi-script$
        mod_gzip_item_include mime ^text/.*
        mod_gzip_item_include mime ^application/x-javascript.*
        mod_gzip_item_exclude mime ^image/.*
        mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

    Other code I have tried, but it's not working:

        <files *.css>
        SetOutputFilter DEFLATE
        </files>
        <files *.js>
        SetOutputFilter DEFLATE
        </files>

    I have also tried this code (the same mod_gzip block, with the dot escaped in the file pattern), with no success:

        <ifModule mod_gzip.c>
        mod_gzip_on Yes
        mod_gzip_dechunk Yes
        mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
        mod_gzip_item_include handler ^cgi-script$
        mod_gzip_item_include mime ^text/.*
        mod_gzip_item_include mime ^application/x-javascript.*
        mod_gzip_item_exclude mime ^image/.*
        mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

    This code is also not working:

        <FilesMatch "\.(html?|txt|css|js|php|pl)$">
        SetOutputFilter DEFLATE
        </FilesMatch>

    Here is another code not working:

        <ifmodule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </ifmodule>

    Here is another code not working:

        <IFModule mod_deflate.c>
        <filesmatch "\.(js|css|html|jpg|png|php)$">
        SetOutputFilter DEFLATE
        </filesmatch>
        </IFModule>

    Here is another code not working:

        <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript text/javascript application/javascript application/json
        <FilesMatch "\.(css|js)$">
        SetOutputFilter DEFLATE
        </FilesMatch>
        </IfModule>

    Here is another code not working:

        #Gzip - compress text, html, javascript, css, xml
        <ifmodule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
        </ifmodule>
        #End Gzip

    Here is another code not working:

        <Location />
        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
        SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|gz2|sit|rar)$ no-gzip dont-vary
        </Location>
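
    Whichever variant is active, a quick header probe shows whether compression is actually being applied. A minimal check in Python 2 (the CSS path here is hypothetical; any real CSS/JS file on the site will do):

        # Ask for a compressed response and print the Content-Encoding header.
        # If it prints None, neither mod_deflate nor mod_gzip is acting, which
        # often means the module simply isn't enabled on the host.
        import urllib2

        req = urllib2.Request("http://deemasfashion.co.uk/style.css",
                              headers={"Accept-Encoding": "gzip, deflate"})
        resp = urllib2.urlopen(req)
        print resp.info().getheader("Content-Encoding")  # "gzip" or "deflate" on success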

  • Windows Media Player Object with Video_TS Files - No Sound Anymore

    - by user1624184
    I copied a bunch of my DVDs (yes, owned) to a drive and created a basic webpage to be able to search and play them. I am using the code at the bottom to create a Windows Media Player object and then play the video. The video files are pulled off the DVD in their original format, so each movie has files like this:

        VTS_01_0.BUP
        VTS_01_0.IFO
        VTS_01_1.VOB
        VTS_01_2.VOB
        VTS_01_3.VOB
        VTS_01_4.VOB
        VTS_01_5.VOB
        VTS_01_6.VOB

    This all worked great until just recently. On my dev machine I can play the videos, but now, all of a sudden, there is no sound. I have tried the following, but no luck:

    - The video is not muted and the sound is at 50%
    - Under the sound icon, under mixer, I checked IE and it is fine
    - Sound works fine using another program, say WinAmp
    - Opening Windows Media Player directly from the Start Menu and then playing the same movie works fine, and I get sound
    - I ensured that the option in IE to play sound on websites is checked
    - I can play sound on other people's websites where they have embedded WMP files
    - I tried resetting all of the IE settings on the Advanced tab in IE
    - I tried my website, with my code below, on another computer, AND IT WORKS FINE!
    - I tried copying the media files listed above from the computer that works fine to my dev computer, and it still doesn't work
    - If I use a ".wmv" file instead, the sound does work

    By the way, I am using Win7 with IE8. As you can see, this is driving me crazy! Why would it stop working on one computer when the same code and files work fine on another? Any help would be greatly appreciated.

        <OBJECT id="mediaPlayer" width="640px" height="480px" style="position:absolute; left:0px; top:0px;"
                CLASSID="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6" type="application/x-oleobject">
          <PARAM NAME="URL" VALUE="E:\test\VIDEO_TS\VIDEO_TS.IFO" />
          <PARAM NAME="SendPlayStateChangeEvents" VALUE="True"/>
          <PARAM NAME="AutoStart" VALUE="True"/>
          <PARAM NAME="uiMode" VALUE="full"/>
          <PARAM NAME="volume" VALUE="50" />
          <PARAM NAME="PlayCount" VALUE="9999"/>
          <PARAM NAME="fullScreen" VALUE="true"/>
          <PARAM NAME="enableContextMenu" VALUE="true"/>
        </OBJECT>

  • How to merge .rpmnew files in Pluggable Authentication Modules (PAM)?

    - by Question Overflow
    A few .rpmnew files are being created after performing an upgrade of the Fedora OS. The normal procedure for merging .rpmnew files into the original ones is to compare the differences, make the necessary configuration changes to the .rpmnew files, and replace the original files with the new ones. However, the files contained in /etc/pam.d are links to files with the same filename appended with -ac; for example, password-auth links to password-auth-ac and has password-auth.rpmnew as its upgrade. How do I go about merging these files?

  • Ubuntu 12.04 crash analysis - strange binary data on all open files at the moment of crash

    - by lanbo
    A couple of hours ago we got a system crash on Ubuntu 12.04. We checked all the log files and there is nothing suspicious to blame. The last thing that was logged was some dovecot activity. There are no kernel panic messages. Nothing. It is a new server (new hardware) that we are testing before production. And because it is new hardware, I suspect the problem may be due to some faulty component. We have already run memtester with no problem detected. I'll be happy to hear about other hardware testing tools (the machine has an SSD).

    Anyway, the thing I wanted to ask you is a different one. The strange thing is that every file that was open at the moment of the crash had the following sequence of symbols written into it: "^@^@^@^@^@^@^@...". For example, in the syslog log file we got:

        Apr 16 15:53:56 odyssey dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<info>, method=PLAIN, rip=46.29.255.73, lip=5.9.58.177
        ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ [this continues for about 1000 chars...]
        ^@^@^@^@Apr 16 15:55:12 odyssey kernel: imklog 5.8.6, log source = /proc/kmsg started.

    We got these symbols in all open files, including syslog, mail.log, kern.log, and so on, but also in some logs written by PHP scripts run from CRONs under user accounts (not root). So, any idea why all open files got these characters written into them during the crash? This is pretty bad, since the crash corrupted many files (we don't even know which other ones may be affected). We suspect that all files open (in write mode, maybe) at the moment of the crash got these symbols inserted. Why is that?

    BTW [in case it helps], the system automatically rebooted after the crash, but Apache did not start. There were no traces in /var/apache2/*log of why Apache did not start. After running "service apache2 start" it started with no problems. We also rebooted the machine manually, and Apache started on reboot as well. But it did not start after the crash, and no errors were reported. Thanks guys!

  • VIM "upgraded" to expandtab and tabstop=8 on Python files

    - by dotancohen
    After reinstalling my OS from Kubuntu 12.10 to Kubuntu 14.04, Vim has changed its behaviour when editing Python files. Though before the reinstall all file types had noexpandtab and tabstop=4 set, in Python files those values are now expandtab and tabstop=8, checked both by observing Vim's behaviour and by asking Vim with :set foo?. Non-Python files retain the noexpandtab and tabstop=4 behaviour that I prefer. The .vim directory and .vimrc were not touched during the reinstall. It can be seen that no files in .vim have been touched in months (with the exception of the irrelevant .netrwhist):

        bruno():~$ ls -lat ~/.vim
        total 68
        drwxr-xr-x 85 dotancohen dotancohen 12288 Aug 25 13:00 ..
        drwxr-xr-x 12 dotancohen dotancohen  4096 Aug 21 11:11 .
        -rw-r--r--  1 dotancohen dotancohen   268 Aug 21 11:11 .netrwhist
        drwxr-xr-x  2 dotancohen dotancohen  4096 Mar  6 18:31 plugin
        drwxr-xr-x  2 dotancohen dotancohen  4096 Mar  6 18:31 doc
        drwxrwxr-x  2 dotancohen dotancohen  4096 Nov 29  2013 syntax
        drwxrwxr-x  2 dotancohen dotancohen  4096 Nov 29  2013 ftplugin
        drwxr-xr-x  4 dotancohen dotancohen  4096 Nov 29  2013 autoload
        drwxrwxr-x  5 dotancohen dotancohen  4096 May 27  2013 after
        drwxr-xr-x  2 dotancohen dotancohen  4096 Nov  1  2012 spell
        -rw-------  1 dotancohen dotancohen   138 Aug 14  2012 .directory
        -rw-rw-r--  1 dotancohen dotancohen   190 Jul  3  2012 .VimballRecord
        drwxrwxr-x  2 dotancohen dotancohen  4096 May 12  2012 colors
        drwxrwxr-x  2 dotancohen dotancohen  4096 Mar 16  2012 mytags
        drwxrwxr-x  2 dotancohen dotancohen  4096 Feb 14  2012 keymap

    Though .vimrc has been touched since the reinstall, that was only me testing to see where the problem is. How can I tell what is setting expandtab and tabstop?

    Side note: I'm not even sure what I should read in the built-in help for this issue. I started with ":h plugin" but that did not help, other than showing me that the following plugins are loaded (possibly relevant):

        standard-plugin-list    Standard plugins
        pi_getscript.txt        Downloading latest version of Vim scripts
        pi_gzip.txt             Reading and writing compressed files
        pi_netrw.txt            Reading and writing files over a network
        pi_paren.txt            Highlight matching parens
        pi_tar.txt              Tar file explorer
        pi_vimball.txt          Create a self-installing Vim script
        pi_zip.txt              Zip archive explorer

        LOCAL ADDITIONS:        local-additions
        DynamicSigns.txt        Using Signs for different things
        NrrwRgn.txt             A Narrow Region Plugin (similar to Emacs)
        fugitive.txt            A Git wrapper so awesome, it should be illegal
        indent-object.txt       Text objects based on indent levels
        taglist.txt             Plugin for browsing source code
        vimwiki.txt             A Personal Wiki for Vim

  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. huge.txt has around 600M rows and is 14 GB; each line has four space-separated words (tokens) and finally another space-separated column with a number. small.txt has 150K rows and a size of ~3M, with a space-separated word and a number on each line.

    Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt, plus the second column (the number) from small.txt wherever the first word of huge.txt and the first word of small.txt match.

    My attempts below failed miserably with the following error:

        cat huge.txt|join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt

        join: memory exhausted

    What I suspect is that the sorting order isn't right somehow, even though the files are pre-sorted using:

        sort -k1 huge.unsorted.txt > huge.txt
        sort -k1 small.unsorted.txt > small.txt

    Problems seem to appear around words that have apostrophes (') or dashes (-). I also tried dictionary sorting using the -d option, bumping into the same error at the end. I see two ways out of this but don't know how to implement either of them.

    1) Any tips on how to sort the files in a way that the join command considers them to be sorted properly?

    2) I was thinking of calculating MD5 or some other hashes of the strings to get rid of the apostrophes and dashes, doing the sorting and joining with the hashes instead of the strings themselves, and then "translating" the hashes back to strings at the end, leaving the numbers at the end of the lines intact. Since there would be only 150K hashes, it's not that bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic?

    See file samples below. Thank you!

    Sample of huge.txt:

        had stirred me to 46
        had stirred my corruption 57
        had stirred old emotions 55
        had stirred something in 69
        had stirred something within 40

    Sample of small.txt:

        caley 114881
        calf 2757974
        calfed 137861
        calfee 71143
        calflora 154624
        calfskin 148347
        calgary 9416465
        calgon's 94846
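
    A rough sketch of the hashing idea in (2), assuming Python 2 is available rather than AWK (file names are the ones above; the original line is kept intact after the digest):

        # Prefix every line with an MD5 digest of its join key (the first word).
        # Hex digests contain only [0-9a-f], so sort and join can no longer
        # disagree over apostrophes, dashes, or locale collation rules.
        import hashlib

        def add_key(src_name, dst_name):
            with open(src_name) as src, open(dst_name, "w") as dst:
                for line in src:
                    key = line.split(" ", 1)[0]
                    dst.write("%s %s" % (hashlib.md5(key).hexdigest(), line))

        add_key("small.txt", "small.hashed.txt")
        add_key("huge.txt", "huge.hashed.txt")

    (For what it's worth, the usual cause of sort and join disagreeing is locale-dependent collation, so re-sorting both files with LC_ALL=C in the environment may already be enough, without any hashing.)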

  • Changing App.config at Runtime

    - by born to hula
    I'm writing a test WinForms / C# / .NET 3.5 application for the system we're developing, and we ran into the need to switch between .config files at runtime, but this is turning out to be a nightmare.

    Here's the scene: the WinForms application is aimed at testing a web app divided into 5 subsystems. The test process works with messages being sent between the subsystems, and for this process to be successful each subsystem has to have its own .config file. For my test application I wrote 5 separate configuration files. I wish I were able to switch between these 5 files during runtime, but the problem is: I can programmatically edit the application's .config file any number of times, but the changes will only take effect once.

    I've been searching a long time for a way to address this problem, but I still wasn't successful. I know the problem definition may be a bit confusing, but I would really appreciate it if someone helped me. Thanks in advance!

    --- UPDATE 01-06-10 ---

    There's something I didn't mention before. Originally, our system is a web application with WCF calls between each subsystem. For performance testing reasons (we're using ANTS 4), we had to create a local copy of the assemblies and reference them from the test project. It may sound a bit wrong, but we couldn't find a satisfying way to measure performance of a remote application.

    --- End Update ---

    Here's what I'm doing:

        public void UpdateAppSettings(string key, string value)
        {
            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            foreach (XmlElement item in xmlDoc.DocumentElement)
            {
                foreach (XmlNode node in item.ChildNodes)
                {
                    if (node.Name == key)
                    {
                        node.Attributes[0].Value = value;
                        break;
                    }
                }
            }
            xmlDoc.Save(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            System.Configuration.ConfigurationManager.RefreshSection("section/subSection");
        }

  • php & mySQL: Storing doc, xls, zip, etc. with limited access and archiving

    - by Devner
    Hi all,

    In my application, I have a provision for users to upload files like doc, xls, zip, etc. I would like to know how to store these files on my website and have only certain people access them. I may have a group of people, and only that group should be able to access the uploaded files. I know that someone may just copy the link to a document and pass it to a non-permitted user, who can then download it. So how can I prevent that? How can I check that the request to download the file was made by a legitimate user who has access to it? The usernames of the group members are stored in the database, along with the document name and location, so those members can access it. But how do I prevent non-permitted users from being able to reach that confidential data at all?

    With the above in mind, how do I store these documents? Do I store them in a blob column in the database, or just let the user upload to a folder and merely store the path to the file in the database? The security of the documents is of utmost importance, so any procedure that could facilitate this feature would definitely help. I am not into object-oriented programming, so if you have simpler code that you would like to share with me, I would greatly appreciate it.

    Also, how do I archive documents that are old? Say there are documents that are one year old and I want to conserve my website's space by archiving them, but still make them available to the user when needed. How do I go about this? Thank you.
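
    The gate I have in mind would look roughly like this: keep the files outside the web root (or in a blob column) and route every download through a script that checks the session user against the permissions table before streaming the file, so the URL alone grants nothing. The pattern is language-agnostic; a minimal sketch in Python with made-up table, column, and path names (the real thing would be PHP and would send proper download headers):

        # Hypothetical download gate: stream a file only if the logged-in user
        # belongs to the document's permitted group. All names are illustrative.
        import os

        DOC_ROOT = "/var/private_uploads"  # outside the web root: no direct links

        def serve_document(db, session_user, doc_id):
            row = db.execute(
                "SELECT filename FROM documents d"
                " JOIN document_users du ON du.doc_id = d.id"
                " WHERE d.id = ? AND du.username = ?",
                (doc_id, session_user)).fetchone()
            if row is None:
                raise Exception("403: no access to this document")
            with open(os.path.join(DOC_ROOT, row[0]), "rb") as f:
                return f.read()  # a real app would send this with download headers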

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

    - metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    - hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))
        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for,
                # print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line.

    ps. the csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
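
    The fix works because key lookups in a dict are effectively O(1), while the original `in stored_ids` rescanned the whole list for every line of the 1 GB file, which is why the script slowed to an apparent halt rather than crashing. A set states the same intent a bit more directly; an equivalent sketch (options comes from the script above):

        # Same lookup with a set instead of a dummy-valued dict.
        import csv

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = set(row[0] for row in entries if row[2] == options.vendor)

        for row in csv.reader(open(options.hashes, "rb")):
            if row[0] in stored_ids:  # O(1) per line
                print "%s,%s" % (row[2], row[4])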

  • Sharing storage between servers

    - by El Yobo
    I have a PHP-based web application which currently uses only one webserver but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of:

    Simple network storage
    - NFS
    - SMB/CIFS

    Clustered filesystems
    - Lustre
    - GFS/GFS2
    - GlusterFS
    - Hadoop DFS
    - MogileFS

    What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem).

    It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment. What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?

  • C headers: compiler specific vs library specific?

    - by leonbloy
    Is there some clear-cut distinction between standard C *.h header files that are provided by the C compiler, as opposed to those which are provided by a standard C library? Is there some list, or some standard locations?

    Motivation: in this answer I got a while ago, regarding a missing unistd.h in the latest TinyC compiler, the author argued that unistd.h (contrarily to sys/unistd.h) should not be provided by the compiler but by your C library. I could not make much sense of that response (for one thing, shouldn't that also apply to, say, stdio.h?), but I'm still wondering about it. Is that correct? Where is some authoritative reference for this?

    Looking at other compilers, I see that other "self-contained" POSIX C compilers that are hosted on Windows (like the GCC toolchain that comes with MinGW, in several incarnations, or the Digital Mars compiler) include all header files. And in a standard Linux distribution (say, CentOS 5.10) I see that the gcc package provides a few header files (e.g. stdbool.h, syslimits.h) in /usr/lib/gcc/i386-redhat-linux/4.1.1/include/, and the glibc-headers package provides the majority of the headers in /usr/include/ (including stdio.h, /usr/include/unistd.h and /usr/include/sys/unistd.h). So in neither case do I see support for the above claim.

  • Django: Serving a Download in a Generic View

    - by TheLizardKing
    So I want to serve a couple of mp3s from a folder in /home/username/music. I didn't think this would be such a big deal, but I am a bit confused about how to do it using generic views and my own URL.

    urls.py:

        url(r'^song/(?P<song_id>\d+)/download/$', song_download, name='song_download'),

    The example I am following is found in the generic views section of the Django documentation: http://docs.djangoproject.com/en/dev/topics/generic-views/ (it's all the way at the bottom). I am not 100% sure how to tailor it to my needs. Here is my views.py:

        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)
            response = object_detail(
                request,
                object_id = song_id,
                mimetype = "audio/mpeg",
            )
            response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
            return response

    I am at a loss for how to convey that I want it to spit out my mp3; what it does now is output an .mp3 that contains all of the current page's HTML. Should my template be my mp3? Do I need to set up Apache to serve the files, or is Django able to retrieve the mp3 from the filesystem (proper permissions of course) and serve that? If I do need to configure Apache, how do I tell Django that? Thanks in advance. These files are all on the HD, so I don't need to "generate" anything on the spot, and I'd like to avoid revealing the location of these files if at all possible. A simple /song/1234/download would be fantastic.
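
    One way around object_detail entirely is to have the view return the file bytes itself, so no template is involved and the filesystem location never appears in the URL. A minimal sketch, assuming a hypothetical filepath field on Song that holds the absolute path to the .mp3 (adjust to the real model):

        # Read the mp3 off disk and return it as the response body; the browser
        # only ever sees /song/<id>/download/, never the real path.
        from django.http import HttpResponse

        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)
            with open(song.filepath, "rb") as f:  # hypothetical field
                response = HttpResponse(f.read(), content_type="audio/mpeg")
            response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
            return response

    (For large files or heavy traffic, handing delivery to Apache via something like mod_xsendfile is kinder to memory, but the above shows the basic shape.)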

  • Windows 8 DLNA streaming of MKV files

    - by dbb
    I have a Samsung TV that is capable of playing MKV files. The Windows DLNA "Play To" menu that appears when you right-click a media file does not support MKV files, but a simple trick has been to change the file extension from .mkv to .avi so the Play To context menu item would appear. At that point I could successfully stream from my computer to my TV.

    However, this does not appear to work in Windows 8. Doing the same thing in Windows 8 causes the Play To window to open, but the file does not get played. Dragging and dropping the file into the Play To window causes it to be silently ignored. Using actual AVI, MP4, etc. files works, of course. It appears Windows 8 is now doing some kind of validation on the file that Windows 7 wasn't previously. The Play To window does not show any kind of obvious error message or warning, and there is nothing in the Windows event log.

    So, is there a way in Windows 8 to stream MKV files to a DLNA device without converting them to another container format? I would rather not use extra third-party software, but I would consider it if it's purposefully designed for this simple case rather than a more robust "media library/server" solution.

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.]

    This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me...

    I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them: copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files.

    (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.)

    Granted, my evidence is merely anecdotal; I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
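
    If I do get around to testing, something like this cross-platform sketch is what I have in mind (assuming Python is installed on each OS; the scratch directory name is made up). Run it on each filesystem and compare the per-phase times; per-file overhead shows up here far more clearly than raw throughput does:

        # Rough benchmark: create, stat, and delete a few thousand small files,
        # timing each phase. At 1 KB per file, metadata cost dominates.
        import os, time, shutil

        N = 5000
        root = "fs_bench_tmp"  # hypothetical scratch dir on the filesystem under test
        os.mkdir(root)

        start = time.time()
        for i in range(N):
            with open(os.path.join(root, "f%05d" % i), "wb") as f:
                f.write("x" * 1024)
        create = time.time() - start

        start = time.time()
        for i in range(N):
            os.stat(os.path.join(root, "f%05d" % i))
        stat = time.time() - start

        start = time.time()
        shutil.rmtree(root)
        delete = time.time() - start

        print "create: %.2fs  stat: %.2fs  delete: %.2fs" % (create, stat, delete)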

  • Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs (such as zip) instead take the files at face value and see no permission problems, but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, none of which change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, although renaming them, which works, does not change anything). What else can I do to take ownership of these files?

    Edit: This question comes from Stack Overflow because I originally thought it was a git problem. It's definitely not (just) git. Anyway, this is to help put some of the comments in context.

  • Creating an SSH tunnel to transfer files?

    - by Vincent
    For me, networks are a very "opaque" thing, and even after reading a lot of tutorials about SSH, I do not understand how to create a basic tunnel to transfer my files. The configuration is the following:

        My Computer --[Internet]--> Bridge Machine --[Local Network]--> Final Machine

    Currently I do the following:

    1) Connect to the bridge machine: ssh -X [email protected]
    2) Connect to the final machine: ssh -X username@finalmachine
    3) Copy the address of the files I need (for example .../mydirectory)
    4) Disconnect from the final machine: exit
    5) Copy the files to the bridge: scp -r username@finalmachine:/.../mydirectory .
    6) Disconnect from the bridge: exit
    7) Copy the files to my machine: scp -r [email protected]:/.../mydirectory .

    This is quite complicated. My question is basic: how can I simplify this using an SSH tunnel? (And please explain the meaning of each command line you write, so that I understand what each line really does and avoid using it like a magical thing. Furthermore, if any port numbers are used, explain whether I can pick a completely random number or whether I have to choose a specific one.)

  • Process files in a folder that haven't previously been processed

    - by Paul
    I have a series of files in a directory that I need to carry an action out on using a script. Once the action is done, I want to keep a log recording that the file has been processed, so that the next time the script is run it does not attempt to carry out the action again.

    So let's say I can find all the files that should be processed like this:

        for i in `find /logfolder -name '20*.log'` ; do
            process_log $i
            echo $i >> processedlogsfile
        done

    So I have a file containing the logs I have processed, and my goal would be to modify the for loop so that these processed logs are not processed a second time. Doing a manual scan each time seems inefficient, particularly as processedlogsfile gets bigger:

        if grep -iq "$i" processedlogsfile ; then continue; fi

    It would be good if these files could be excluded when setting up the for loop.

    Note that the OS in question is a Linux derivative, a management appliance, with a limited toolset (no attr command, for example) and no way to install additional utilities (well, it is possible, but not an option). Most common bash shell commands are available though. Also, the filenames and locations of the processed files must remain where they are: they can't be altered to reflect their processed status.

  • APC not caching many files

    - by tetranz
    Hello, I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory, even though Drupal has hundreds of PHP files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and PHP 5.3.2. The config settings are the same. What could be wrong? I'll paste the config from apc.php below. This is after hitting multiple parts of Drupal. Thanks.

        APC Version           3.1.6
        PHP Version           5.2.10-2ubuntu6.5
        APC Host              xxx.example.com
        Server Software       Apache/2.2.12 (Ubuntu)
        Shared Memory         1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
        Start Time            2010/12/02 11:32:17
        Uptime                3 minutes
        File Upload Support   1

        File Cache Information
        Cached Files                 21 ( 1.4 MBytes)
        Hits                         169
        Misses                       21
        Request Rate (hits, misses)  1.00 cache requests/second
        Hit Rate                     0.89 cache requests/second
        Miss Rate                    0.11 cache requests/second
        Insert Rate                  0.17 cache requests/second
        Cache full count             0

        User Cache Information
        Cached Variables             0 ( 0.0 Bytes)
        Hits                         0
        Misses                       0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate                     0.00 cache requests/second
        Miss Rate                    0.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

        Runtime Settings
        apc.cache_by_default         1
        apc.canonicalize             1
        apc.coredump_unmap           0
        apc.enable_cli               0
        apc.enabled                  1
        apc.file_md5                 0
        apc.file_update_protection   2
        apc.filters
        apc.gc_ttl                   3600
        apc.include_once_override    0
        apc.lazy_classes             0
        apc.lazy_functions           0
        apc.max_file_size            1M
        apc.mmap_file_mask
        apc.num_files_hint           1000
        apc.preload_path
        apc.report_autofilter        0
        apc.rfc1867                  0
        apc.rfc1867_freq             0
        apc.rfc1867_name             APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix           upload_
        apc.rfc1867_ttl              3600
        apc.shm_segments             1
        apc.shm_size                 32M
        apc.slam_defense             1
        apc.stat                     1
        apc.stat_ctime               0
        apc.ttl                      0
        apc.use_request_time         1
        apc.user_entries_hint        4096
        apc.user_ttl                 0
        apc.write_lock               1

  • Move postfix maildir files from one mail server to another

    - by Tauren
    I have a new mail server configured as described in this howto: http://howtoforge.com/virtual-users-domains-postfix-courier-mysql-squirrelmail-ubuntu-9.10

    I also have an ancient mail server configured very similarly (using the same howto, just for Fedora Core 6, if I recall correctly). Earlier today I had to switch from the old server to the new one, and the old one is no longer online. However, after I had migrated everything and switched it all over, I discovered a bunch of undelivered mail in the queue. It got delivered to the local mailboxes on the old server, so now there are a bunch of messages on it that I'd like to move to the new server. The new server has already received new messages, so I need to merge the files together somehow.

    For each user with an email of [email protected], there are files like this on both servers:

        /home/vmail/customer.com/username/maildirsize
        /home/vmail/customer.com/username/courierpop3dsizelist
        /home/vmail/customer.com/username/new/1271481177.Vca01I6006bM580357.mailhost.mydomain.com

    Can I simply copy the hundreds of files in the various new directories on the old server to the corresponding new directories on the new server? Will the maildirsize and courierpop3dsizelist files get updated automatically, or do I need to do something to update them?

  • Missing boot files in Windows 8

    - by Alex F. Sherman
    I had a partition with the Windows 8 Release Preview, Windows' System Reserved partition, and empty space at the beginning of the disk. I moved the two partitions to the beginning of the disk using an Ubuntu Live CD and GParted. After that, the Windows loader didn't boot and threw an error about missing files. I fixed it using the commands:

        bootsect /nt60 sys /force /mbr
        bootrec /rebuildbcd
        bootrec /fixboot
        bootrec /fixmbr

    When I used the "Automatic repair" option from the "Advanced boot" menu, it threw an error like:

        Windows can't fix your boot problems. For more information see file C:\Windows\System32\LogFiles\Srt\SrtTrail.txt

    In this file I found a description of the system repair actions, and at the end of the file:

        Boot status indicates that the OS booted successfully.

    Now, when I use the Advanced boot menu from Windows 8 (PC settings - General - Advanced startup), I receive an error:

        Restart your PC to try again. It looks like something didn't load correctly. Restarting might fix the problem. If this happens more than once, you might also be able to find help by searching online for the specific error code. Error code: 0x80070490.

    0x80070490 is the error code ERROR_NOT_FOUND. What are the missing boot files and how can I restore them? List of files in the System Reserved partition:

        B:\bootmgr
        B:\BOOTNXT
        B:\Boot\BCD
        B:\Boot\BCD.LOG
        B:\Boot\BCD.LOG1
        B:\Boot\BCD.LOG2
        B:\Boot\BOOTSTAT.DAT
        B:\Boot\Fonts
        B:\Boot\memtest.exe
        B:\Boot\qps-ploc
        B:\Boot\Resources
        B:\Boot\Resources\bootres.dll

    and many *.mui and *.ttf files.

  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

        diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

        #DiskShadow script file
        set verbose on
        #delete shadows all
        set context persistent
        writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
        set metadata C:\Backup_Scripts\shadowmetadata.cab
        begin backup
        add volume C: alias SH1
        create
        expose %SH1% P:
        exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
        end backup
        delete shadows exposed P:
        exit
        #End of script

    And this is exchangeserverbackupscript1.cmd:

        robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes, due to many, many log files for each day going back to 2009. Is there any way (I've Googled and haven't found anything) to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, "end backup" should cause Exchange to "flush the transaction logs for that storage group", but only "if a successful backup of a storage group occurred", which leaves my question as: what constitutes a "successful backup", and why is what I'm doing not it?

  • Recovering data from mongodb raw files

    - by Jin Chen
    We use mongodb for our database and set up a replset (two servers), but we mistakenly deleted some raw files under /path/to/dbdata on both servers. After we used a tool to get back the deleted files (we ran extundelete on both servers and mixed the results together), we had files like database.1, database.2, etc., but we could not start mongod. It raises the following error when starting mongod or executing mongodump; here is the console output:

        root@mongod:/opt/mongodb# mongodump --repair --dbpath /opt/mongodb -d database_production
        Thu Aug 21 16:22:43.258 [tools] warning: repair is a work in progress
        Thu Aug 21 16:22:43.258 [tools] going to try and recover data from: database_production
        Thu Aug 21 16:22:43.262 [tools] Assertion failure isOk() src/mongo/db/pdfile.h 392
        0xde1b01 0xda42fd 0x8ae325 0x8ac492 0x8bd8e0 0x8c1c51 0x80e345 0x80e607 0x80e6a4 0x6db92a 0x6dc1ff 0x6e0db9 0xd9e45e 0x6ccdc7 0x7f499d856ead 0x6ccc29
        mongodump(_ZN5mongo15printStackTraceERSo+0x21) [0xde1b01]
        mongodump(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda42fd]
        mongodump(_ZNK5mongo7Forward4nextERKNS_7DiskLocE+0x1a5) [0x8ae325]
        mongodump(_ZN5mongo11BasicCursor7advanceEv+0x82) [0x8ac492]
        mongodump(_ZN5mongo8Database19clearTmpCollectionsEv+0x160) [0x8bd8e0]
        mongodump(_ZN5mongo14DatabaseHolder11getOrCreateERKSsS2_Rb+0x7b1) [0x8c1c51]
        mongodump(_ZN5mongo6Client7Context11_finishInitEv+0x65) [0x80e345]
        mongodump(_ZN5mongo6Client7ContextC1ERKSsS3_b+0x87) [0x80e607]
        mongodump(ZN5mongo6Client12WriteContextC1ERKSsS3+0x54) [0x80e6a4]
        mongodump(_ZN4Dump7_repairESs+0x3a) [0x6db92a]
        mongodump(_ZN4Dump6repairEv+0x2df) [0x6dc1ff]
        mongodump(_ZN4Dump3runEv+0x1b9) [0x6e0db9]
        mongodump(_ZN5mongo4Tool4mainEiPPc+0x13de) [0xd9e45e]
        mongodump(main+0x37) [0x6ccdc7]
        /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f499d856ead]
        mongodump(__gxx_personality_v0+0x471) [0x6ccc29]
        assertion: 0 assertion src/mongo/db/pdfile.h:392
        Thu Aug 21 16:22:43.271 dbexit:
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to close listening sockets...
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to flush diaglog...
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to close sockets...
        Thu Aug 21 16:22:43.272 [tools] shutdown: waiting for fs preallocator...
        Thu Aug 21 16:22:43.272 [tools] shutdown: closing all files...
        Thu Aug 21 16:22:43.273 [tools] closeAllFiles() finished
        Thu Aug 21 16:22:43.273 [tools] shutdown: removing fs lock...
        Thu Aug 21 16:22:43.273 dbexit: really exiting now

    My environment:

    1) Debian 3.2.35-2 x86_64 (it's a XEN virtual machine)
    2) mongodb 2.4.6; we did not delete the .0 and .ns files

    We tried to create a new database with the same name and copy the recovered db.ns, db.2, and db.3 files into it, but we still hit the same error. Is there any way to check the validity of raw .ns files and datafiles, and how can we recover the database?

  • Automated Scanning for Corrupted Archive Files

    - by Synetech inc.
    Hi,

    I have all kinds of files that I have downloaded from everywhere over the years, scattered around my hard drives. I'm in the process of trying to organize them all and have run into a problem. Sometimes a file was not downloaded correctly (this was a big issue when Chromium first came out) and is thus corrupt.

    For media files this is relatively simple to determine: it requires actually opening and examining the file. For executables or other binaries it is relatively difficult (executables may or may not crash, other binaries could be completely unknown). Archive files (e.g. zip, rar, 7z, exe, ace, etc.), however, should be pretty simple; they have a built-in corruption-detection facility.

    My problem now is that I really don't want to open and test each and every single archived file throughout my drive; that would be a nightmare. I'm looking for a utility that can automate the process. Is there a program that can scan archive files on a drive and list the ones that are corrupt? Thanks a lot.
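
    If no ready-made utility turns up, the zip case at least is easy to script, assuming Python is available (the root path below is made up; rar/7z/ace would need their own command-line testers, e.g. running "7z t" via subprocess):

        # Walk a tree and test every zip with the stdlib's built-in CRC check.
        import os, zipfile

        def scan(root):
            for dirpath, dirnames, filenames in os.walk(root):
                for name in filenames:
                    if not name.lower().endswith(".zip"):
                        continue
                    path = os.path.join(dirpath, name)
                    try:
                        bad = zipfile.ZipFile(path).testzip()  # first bad member, or None
                    except zipfile.BadZipfile:
                        bad = "(not a valid zip)"
                    if bad is not None:
                        print "CORRUPT: %s (%s)" % (path, bad)

        scan("D:\\downloads")  # hypothetical root directory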

  • Copying compressed files from Server 2008 R2 network share to XP client via VPN fails

    - by Dejan Janjuševic
    At first sight, this question looks similar to this one. I have experienced odd behavior while trying to copy a certain file from a Windows Server 2008 R2 network share to a Windows XP Professional client via VPN. The VPN was set up using RRAS on the server machine. I will try to provide as much information as possible in order to make the issue clearer.

    When trying to copy a certain compressed file of ~2.5 MB (via Explorer or CMD, it doesn't matter), the process stalls after some 20%, producing an error message after a few seconds:

        Cannot copy filename: The specified network name is no longer available.

    If I run ping -t 192.168.2.1 (where the IP address specified belongs to the server) side by side with the copy command, I can clearly see that the ping command times out for a few seconds as the copy process stalls. When this happens, all network activity is frozen. After a few seconds the network recovers and ping continues to run normally; the copy process, however, stands still for a moment before displaying the above error message.

    Copying other files (I tried 4-5 files, some larger and some smaller) succeeds. It seems I can copy all uncompressed files; as soon as I try to copy an archive, the process freezes. Even a 707 KB archive can't be copied. I can only reproduce this behavior on 2 machines, both Windows XP Professional, one with SP2 and the other with SP3. Other XP clients don't have this problem, and neither do Windows 7 clients.

    If I connect to the server using Remote Desktop Connection without using the VPN from either of these 2 machines (using the same user account), I can copy anything I want normally, even these "problematic" files. Does anyone have any clue about what could possibly be going on?

  • Utility to LOGICALLY compare two xml files?

    - by Matthew
    Right now we are attempting to build golden configurations for our environment. One piece of software that we use relies on large XML files to contain the bulk of its configuration. We want to take our lab environment, catalog it as our "golden configuration", and then be able to audit against that configuration in the future.

    Since diff does a bytewise comparison and NOT a logical comparison, we can't use it to compare files in this case (XML is unordered, so it won't work). What I am looking for is something that can parse the two XML files and compare them element by element. So far we have yet to find any utilities that can do this. OS doesn't matter; I can do it on anything where it will work. The preference is something off the shelf. Any ideas?

    Edit: One issue we have run into is that one vendor's config files will occasionally mention the same element several times, each time with different attributes. Whatever diff utility we use would need to be able to identify either the set of attributes or identify them all as part of one element. Tall order :)
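
    If nothing off the shelf fits, the element-by-element comparison is scriptable with a stock Python install. A minimal sketch that canonicalizes each tree (sorting attributes and children, so document order, and the repeated-element case from the edit, stop mattering) before comparing; the file names are made up, and dedicated tools such as xmldiff presumably do this more carefully:

        # Represent each element as (tag, sorted attributes, text, sorted children)
        # and compare the two canonical forms for equality.
        import xml.etree.ElementTree as ET

        def canonical(elem):
            children = tuple(sorted(canonical(c) for c in elem))
            text = (elem.text or "").strip()
            return (elem.tag, tuple(sorted(elem.attrib.items())), text, children)

        def same(file_a, file_b):
            return canonical(ET.parse(file_a).getroot()) == canonical(ET.parse(file_b).getroot())

        print same("golden.xml", "current.xml")  # hypothetical file names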
