Search Results

Search found 59543 results on 2382 pages for 'solution files'.


  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (e.g. 30 MB) are silently discarded. On the server side it looks as if the attached file was received with a size of 0 bytes. On the client side it looks as if the upload succeeded (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged in the error log. An entry is logged in the access log as if everything were OK (a POST request and a 200 OK response).

    These uploads are posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file:

        [file5] => Array
            (
                [name] => MOV023.3gp
                [type] => video/3gpp
                [tmp_name] => /tmp/phpgOdvYQ
                [error] => 0
                [size] => 0
            )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file were empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files.

    I've already changed the PHP directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time spent transferring the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since that is the time taken to parse the input data, not the time taken to upload it.

    Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP runs? Some limit on size or on upload time? I've read about a default 300-second timeout, but that should apply to the time the connection is idle, not the time spent actually transferring data, right? Needless to say, uploads under exactly identical conditions (including file format, client and everything) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the best way to do this?

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer. The first is a 74 GB Western Digital 10K RPM Raptor. The second is a 1 TB Seagate Barracuda (couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage.

    With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. And although it is nice to have the quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5 GB of free space on the Raptor, I can always just temporarily move a disc image there.

    So, figuring that I have games like Borderlands and Mass Effect installed, as well as large files such as Linux distro DVD disc images in My Documents, I probably should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but that means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to point the install directory of any program installation to the alternative location, which I can't seem to get to work right with Steam.

    With that in mind, is there a way, nothing too drastic, to let me just change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda, but that would defeat the purpose of the Raptor except for running disc images off of it. I have also heard a bit about using symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported as elsewhere. An example a friend mentioned was that if a symlink on a small hard drive points to a large hard drive, and the contents the symlink points to are larger than the small drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without those issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?

    Read the article

  • Compress with Gzip or Deflate my CSS & JS files

    - by muhammad usman
    I have a fashion website running WordPress. I want to compress (Gzip or Deflate) my CSS & JS files. I have tried many snippets in .htaccess to enable compression, but none of them work. Would anybody help me please? My phpinfo is at http://deemasfashion.co.uk/1.php. Below are the snippets I have tried without success. A few of them might look the same, but there are small differences in the syntax.

        <ifModule mod_gzip.c>
          mod_gzip_on Yes
          mod_gzip_dechunk Yes
          mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
          mod_gzip_item_include handler ^cgi-script$
          mod_gzip_item_include mime ^text/.*
          mod_gzip_item_include mime ^application/x-javascript.*
          mod_gzip_item_exclude mime ^image/.*
          mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

    Other code I have tried but not working:

        <files *.css>
          SetOutputFilter DEFLATE
        </files>
        <files *.js>
          SetOutputFilter DEFLATE
        </files>

    I have also tried this code as well, but no success:

        <ifModule mod_gzip.c>
          mod_gzip_on Yes
          mod_gzip_dechunk Yes
          mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
          mod_gzip_item_include handler ^cgi-script$
          mod_gzip_item_include mime ^text/.*
          mod_gzip_item_include mime ^application/x-javascript.*
          mod_gzip_item_exclude mime ^image/.*
          mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

    This code is also not working:

        <FilesMatch "\.(html?|txt|css|js|php|pl)$">
          SetOutputFilter DEFLATE
        </FilesMatch>

    Here is another code not working:

        <ifmodule mod_deflate.c>
          AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </ifmodule>

    Here is another code not working:

        <IFModule mod_deflate.c>
          <filesmatch "\.(js|css|html|jpg|png|php)$">
            SetOutputFilter DEFLATE
          </filesmatch>
        </IFModule>

    Here is another code not working:

        <IfModule mod_deflate.c>
          AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript text/javascript application/javascript application/json
          <FilesMatch "\.(css|js)$">
            SetOutputFilter DEFLATE
          </FilesMatch>
        </IfModule>

    Here is another code not working:

        # Gzip - compress text, html, javascript, css, xml
        <ifmodule mod_deflate.c>
          AddOutputFilterByType DEFLATE text/plain
          AddOutputFilterByType DEFLATE text/html
          AddOutputFilterByType DEFLATE text/xml
          AddOutputFilterByType DEFLATE text/css
          AddOutputFilterByType DEFLATE application/xml
          AddOutputFilterByType DEFLATE application/xhtml+xml
          AddOutputFilterByType DEFLATE application/rss+xml
          AddOutputFilterByType DEFLATE application/javascript
          AddOutputFilterByType DEFLATE application/x-javascript
        </ifmodule>
        # End Gzip

    Here is another code not working:

        <Location />
          SetOutputFilter DEFLATE
          SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
          SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|gz2|sit|rar)$ no-gzip dont-vary
        </Location>

    Read the article

  • Windows Media Player Object with Video_TS Files - No Sound Anymore

    - by user1624184
    I copied a bunch of my DVDs (yes, owned) to a drive and created a basic web page to be able to search and play them. I am using the code at the bottom to create a Windows Media Player object and then play the video. The video files are pulled off the DVD in the original format, so each movie has files like this:

        VTS_01_0.BUP
        VTS_01_0.IFO
        VTS_01_1.VOB
        VTS_01_2.VOB
        VTS_01_3.VOB
        VTS_01_4.VOB
        VTS_01_5.VOB
        VTS_01_6.VOB

    This all worked great until just recently. On my dev machine I can play the videos, but now, all of a sudden, there is no sound. I have tried the following, with no luck:

    - The video is not muted and the sound is at 50%.
    - Under the sound icon, under the mixer, I checked IE and it is fine.
    - Sound works fine using another program, say WinAmp.
    - Opening Windows Media Player directly from the Start Menu and then playing the same movie works fine and I get sound.
    - I ensured that the option in IE to play sound on websites is checked.
    - I can play sound on other people's websites where they have embedded WMP files.
    - I tried resetting all of the IE settings on the Advanced tab in IE.
    - I tried my website, with my code below, on another computer, AND IT WORKS FINE!
    - I tried copying the media files listed above from the computer that works fine to my dev computer, and it still doesn't work.
    - If I try using a ".wmv" file, the sound does work.

    By the way, I am using Win7 with IE8. As you can see, this is driving me crazy! Why would it stop working on my one computer when the same code and files work fine on another computer? Any help would be greatly appreciated.

        <OBJECT id="mediaPlayer" width="640px" height="480px" style="position:absolute; left:0px; top:0px;"
                CLASSID="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6" type="application/x-oleobject">
          <PARAM NAME="URL" VALUE="E:\test\VIDEO_TS\VIDEO_TS.IFO" />
          <PARAM NAME="SendPlayStateChangeEvents" VALUE="True"/>
          <PARAM NAME="AutoStart" VALUE="True"/>
          <PARAM NAME="uiMode" VALUE="full"/>
          <PARAM NAME="volume" VALUE="50" />
          <PARAM NAME="PlayCount" VALUE="9999"/>
          <PARAM NAME="fullScreen" VALUE="true"/>
          <PARAM NAME="enableContextMenu" VALUE="true"/>
        </OBJECT>

    Read the article

  • How to merge .rpmnew files in Pluggable Authentication Modules (PAM)?

    - by Question Overflow
    A few .rpmnew files are created after performing an upgrade of the Fedora OS. The normal procedure for merging .rpmnew files into the original ones is to compare the differences, make the necessary changes to the configuration in the .rpmnew files, and replace the original files with the new ones. However, the files contained in /etc/pam.d are links to files with the same filename appended with -ac; for example, password-auth links to password-auth-ac and has password-auth.rpmnew as its upgrade. How do I go about merging these files?

    Read the article

  • Capturing time intervals when somebody was online? How would you implement this feature?

    - by Kirzilla
    Hello, our aim is to build timelines showing the periods of time when a user was online. (It really doesn't matter which user we are talking about or where he was online.) To get information about who is online we can call an API method:

        someservice.com/api/?call=whoIsOnline

    The whoIsOnline method gives us a list of users currently online. But there is no API method to get information about who IS NOT online. So we have to build our timelines using only the information we get from whoIsOnline. Of course there will be a measurement error (we can't track the information in real time). Let's suppose we call the whoIsOnline method every 2 minutes (yes, we will run our script by cron every 2 minutes).

    For example, calling whoIsOnline at 08:00 will return:

        Peter_id
        Michael_id
        Andy_id

    Calling whoIsOnline at 08:02 will return:

        Michael_id
        Andy_id
        George_id

    As you can see, Peter has gone offline, but we have a new onliner - George. The available instruments are a DB (MySQL), text files, and key-value storage (Redis/memcache); feel free to choose any of them (or even all of them). So, we have to end up with information like this:

        George_id was online...
        12 May: 08:02-08:30, 12:40-12:46, 20:14-22:36
        11 May: 09:10-12:30, 21:45-23:00
        10 May: was not online

    And now the question... How would you store the information to implement such timelines? How would you query/calculate the periods of time when a user was online?

    Additional information:
    - You cannot update information about offline users, only about users who are "currently" online.
    - The solution should be flexible: timeline information could be presented relative to any timezone.
    - We only need to keep information for the last 7 days.
    - Every user seen online automatically gets his own identifier in our database.

    Uff... it was really hard for me to write this because my English is pretty bad, but I hope my question is clear to you. Thank you.
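
    One way to picture the storage-agnostic part of this: each run of the cron script takes the whoIsOnline snapshot and either extends a user's currently open interval or opens a new one. Below is a minimal Python sketch of that folding step; the 2-minute step and the example IDs come from the question, while the in-memory dict stands in for whatever persistence is chosen (e.g. one Redis hash or MySQL row per user per day, pruned after 7 days), which is an assumption of the sketch rather than something shown here.

        from datetime import datetime, timedelta

        POLL_INTERVAL = timedelta(minutes=2)  # the assumed cron frequency

        def update_timelines(timelines, online_now, now):
            """Fold one whoIsOnline snapshot into per-user interval lists.

            timelines maps user_id -> list of [start, end] pairs; a user seen in
            consecutive polls has the end of their last interval extended,
            otherwise a new interval is opened.
            """
            for user_id in online_now:
                intervals = timelines.setdefault(user_id, [])
                if intervals and now - intervals[-1][1] <= POLL_INTERVAL:
                    intervals[-1][1] = now          # still online: extend the open interval
                else:
                    intervals.append([now, now])    # came (back) online: open a new interval
            return timelines

        # Example with the polls from the question: Peter drops off, George appears.
        timelines = {}
        t = datetime(2010, 5, 12, 8, 0)
        update_timelines(timelines, {"Peter_id", "Michael_id", "Andy_id"}, t)
        update_timelines(timelines, {"Michael_id", "Andy_id", "George_id"}, t + POLL_INTERVAL)
        update_timelines(timelines, {"Michael_id", "Andy_id", "George_id"}, t + 2 * POLL_INTERVAL)
        print(timelines["George_id"])   # one interval, 08:02 to 08:04 so far

    Presenting the timeline in any timezone then only requires shifting the stored timestamps at display time, which is why keeping raw datetimes (rather than pre-formatted day strings) keeps the solution flexible.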

    Read the article

  • Ubuntu 12.04 crash analysis - strange binary data on all open files at the moment of crash

    - by lanbo
    A couple of hours ago we got a system crash on Ubuntu 12.04. We checked all the log files and there is nothing suspicious to blame it on. The last thing that was logged was some dovecot activity. There are no kernel panic messages. Nothing. It is a new server (new hardware) that we are testing before production. And because the hardware is new, I suspect the problem may be due to a faulty component. We already ran memtester with no problem detected. I'll be happy to hear about other hardware testing tools (the machine has an SSD).

    Anyway, the thing I wanted to ask you is a different one. The strange thing is that in every file that was open at the moment of the crash we found the following sequence of symbols written: "@^@^@^@^@^@^@...". For example, in the syslog log file we got:

        Apr 16 15:53:56 odyssey dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<info>, method=PLAIN, rip=46.29.255.73, lip=5.9.58.177 ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ [these continue for about 1000 chars...] ^@^@^@^@Apr 16 15:55:12 odyssey kernel: imklog 5.8.6, log source = /proc/kmsg started.

    We got these symbols in all open files. These include syslog, mail.log, kern.log, ... but also some logs written by PHP scripts run from CRONs under user accounts (not root).

    So, any idea why all open files got these characters written into them during the crash? This is pretty bad, since the crash corrupted many files (we don't even know which other ones may be affected). We suspect that all files open in write mode at the moment of the crash got these symbols inserted. Why is that?

    BTW [in case it helps], the system automatically rebooted after the crash but Apache did not start. There were no traces in /var/apache2/*log of why Apache did not start. After running "service apache2 start" it started with no problems. Also, we rebooted the machine manually and Apache also started on reboot. But it did not start after the crash and no errors were reported. Thanks guys!

    Read the article

  • Visual Studio Config File Editor - Not Formatting

    - by Ian
    Hi all, my VS 2008 seems to be acting a bit weird and the solution is eluding me. The problem is that if I open a config file, app.config or web.config, it looks and behaves like a plain text document. I have no formatting, no coloring, no IntelliSense, and no collapsible or expandable regions. I have reset all settings and restored the default file associations. If I go into the settings menu, under Text Editor, XML, Formatting, I see the error "An error occurred loading this property page". Has anyone seen this before, and do you have a solution? Thanks

    Read the article

  • How can I detect if a file is binary (non-text) in python?

    - by grieve
    How can I tell if a file is binary (non-text) in python? I am searching through a large set of files in python, and keep getting matches in binary files. This makes the output look incredibly messy. I know I could use grep -I, but I am doing more with the data than what grep allows for. In the past I would have just searched for characters greater than 0x7f, but utf8 and the like make that impossible on modern systems. Ideally the solution would be fast, but any solution will do.
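
    A common heuristic (roughly what grep and Perl's -T file test do) is to read an initial chunk and treat the file as binary if it contains a NUL byte or too many bytes outside the printable/whitespace range. A minimal sketch, where the 1024-byte sample size and the 30% threshold are arbitrary choices rather than anything standard:

        def is_binary(path, sample_size=1024, threshold=0.30):
            """Guess whether a file is binary (non-text) from its first bytes.

            A NUL byte means binary; otherwise count bytes that are neither
            printable ASCII nor common whitespace. Mostly-ASCII UTF-8 passes;
            text dominated by non-ASCII characters may need a higher threshold
            or a proper decode-based check.
            """
            with open(path, "rb") as f:
                sample = f.read(sample_size)
            if not sample:
                return False                  # treat empty files as text
            if b"\x00" in sample:
                return True                   # NUL bytes essentially never occur in text
            text_bytes = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}
            suspicious = sum(1 for b in sample if b not in text_bytes)
            return suspicious / len(sample) > threshold

    In the search loop this becomes a one-line filter: skip any candidate for which is_binary() returns True before running the match logic, similar in spirit to what grep -I does.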

    Read the article

  • VIM "upgraded" to expandtab and tabstop=8 on Python files

    - by dotancohen
    After reinstalling my OS, going from Kubuntu 12.10 to Kubuntu 14.04, Vim has changed its behaviour when editing Python files. Before the reinstall, all file types had noexpandtab and tabstop=4 set; now, in Python files, those values are expandtab and tabstop=8, as verified both by Vim's behaviour and by querying the options with ":set expandtab?" and ":set tabstop?". Non-Python files retain the noexpandtab and tabstop=4 behaviour that I prefer. The .vim directory and .vimrc were not touched during the reinstall. It can be seen that no files in .vim have been touched in months (with the exception of the irrelevant .netrwhist):

        bruno():~$ ls -lat ~/.vim
        total 68
        drwxr-xr-x 85 dotancohen dotancohen 12288 Aug 25 13:00 ..
        drwxr-xr-x 12 dotancohen dotancohen  4096 Aug 21 11:11 .
        -rw-r--r--  1 dotancohen dotancohen   268 Aug 21 11:11 .netrwhist
        drwxr-xr-x  2 dotancohen dotancohen  4096 Mar  6 18:31 plugin
        drwxr-xr-x  2 dotancohen dotancohen  4096 Mar  6 18:31 doc
        drwxrwxr-x  2 dotancohen dotancohen  4096 Nov 29  2013 syntax
        drwxrwxr-x  2 dotancohen dotancohen  4096 Nov 29  2013 ftplugin
        drwxr-xr-x  4 dotancohen dotancohen  4096 Nov 29  2013 autoload
        drwxrwxr-x  5 dotancohen dotancohen  4096 May 27  2013 after
        drwxr-xr-x  2 dotancohen dotancohen  4096 Nov  1  2012 spell
        -rw-------  1 dotancohen dotancohen   138 Aug 14  2012 .directory
        -rw-rw-r--  1 dotancohen dotancohen   190 Jul  3  2012 .VimballRecord
        drwxrwxr-x  2 dotancohen dotancohen  4096 May 12  2012 colors
        drwxrwxr-x  2 dotancohen dotancohen  4096 Mar 16  2012 mytags
        drwxrwxr-x  2 dotancohen dotancohen  4096 Feb 14  2012 keymap

    Though .vimrc has been touched since the reinstall, that was only me testing to see where the problem is. How can I tell what is setting expandtab and tabstop?

    Side note: I'm not even sure what I should read in the built-in help for this issue. I started with ":h plugin", but that did not help other than showing me that the following plugins are loaded (possibly relevant):

        standard-plugin-list   Standard plugins
        pi_getscript.txt       Downloading latest version of Vim scripts
        pi_gzip.txt            Reading and writing compressed files
        pi_netrw.txt           Reading and writing files over a network
        pi_paren.txt           Highlight matching parens
        pi_tar.txt             Tar file explorer
        pi_vimball.txt         Create a self-installing Vim script
        pi_zip.txt             Zip archive explorer

        LOCAL ADDITIONS:       local-additions
        DynamicSigns.txt       Using Signs for different things
        NrrwRgn.txt            A Narrow Region Plugin (similar to Emacs)
        fugitive.txt           A Git wrapper so awesome, it should be illegal
        indent-object.txt      Text objects based on indent levels.
        taglist.txt            Plugin for browsing source code
        vimwiki.txt            A Personal Wiki for Vim

    Read the article

  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. huge.txt has around 600M rows and is 14 GB; each line has four space-separated words (tokens) and finally another space-separated column with a number. small.txt has 150K rows and a size of ~3 MB, with a space-separated word and a number on each line.

    Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt plus the second column (the number) from small.txt, where the first word of huge.txt and the first word of small.txt match.

    My attempts below failed miserably with the following error:

        cat huge.txt | join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt
        join: memory exhausted

    What I suspect is that the sorting order isn't right somehow, even though the files were pre-sorted using:

        sort -k1 huge.unsorted.txt > huge.txt
        sort -k1 small.unsorted.txt > small.txt

    The problems seem to appear around words that have apostrophes (') or dashes (-). I also tried dictionary sorting using the -d option, bumping into the same error at the end. I see two ways out of this but don't know how to implement either of them.

    1) Any tips on how to sort the files in a way that the join command considers them properly sorted?

    2) I was thinking of calculating MD5 or some other hashes of the strings to get rid of the apostrophes and dashes, doing the sorting and joining with the hashes instead of the strings themselves, and then "translating" the hashes back to strings, leaving the numbers intact at the end of the lines. Since there would be only 150K hashes it's not that bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic?

    See file samples below. Thank you!

    Sample of huge.txt:

        had stirred me to 46
        had stirred my corruption 57
        had stirred old emotions 55
        had stirred something in 69
        had stirred something within 40

    Sample of small.txt:

        caley 114881
        calf 2757974
        calfed 137861
        calfee 71143
        calflora 154624
        calfskin 148347
        calgary 9416465
        calgon's 94846
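
    Since small.txt is only ~150K rows, one way to sidestep both the collation problems and the hashing idea is to drop sort/join entirely and stream the big file in Python, keeping only the small file in memory. The sketch below is an illustration of that approach rather than a fix for join itself; it assumes the whitespace-separated layout shown in the samples and reproduces the field list from the -o 1.1 1.2 1.3 1.4 2.2 option:

        # Load small.txt into a dict: first word -> number (~150K entries fits easily in RAM).
        lookup = {}
        with open("small.txt") as small:
            for line in small:
                word, number = line.split()
                lookup[word] = number

        # Stream huge.txt once; no sorting needed, apostrophes and dashes are just bytes here.
        with open("huge.txt") as huge, open("output.txt", "w") as out:
            for line in huge:
                fields = line.split()              # four words plus a number
                number = lookup.get(fields[0])
                if number is not None:
                    out.write(" ".join(fields[:4] + [number]) + "\n")

    If staying with sort and join is preferred, the hashing variant would be the same pattern with hashlib.md5(word.encode()).hexdigest() substituted for the key in both files before sorting.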

    Read the article

  • Copying files from one machine to another

    - by arex1337
    I'm currently generating documentation on one machine, and publishing it to a web server using the following commands in a script:

        net use "\\someShare" PASSWORD /user:username
        del /S /Q "\\someShare"
        xcopy /E /Y Documentation\html\* "\\someShare\"

    However, it feels like a really bad idea to have a password as plain text in the script, so I'm looking for alternatives to my current solution. Ideas? I definitely would appreciate a solution that uses some kind of access control, as many different people should be allowed to publish their own documentation to the web server, but not mess with each other's docs.

    Read the article

  • How to Transfer Large File from MS Word Add-In (VBA) to Web Server?

    - by Ian Robinson
    Overview: I have a Microsoft Word add-in, written in VBA (Visual Basic for Applications), that compresses a document and all of its related contents (embedded media) into a zip archive. After creating the zip archive it turns the file into a byte array and posts it to an ASMX web service. This mostly works.

    Issues: The main issue I have is transferring large files to the web site. I can successfully upload a file that is around 40 MB, but not one that is 140 MB (timeout/general failure). A secondary issue is that building the byte array in the VBA Word add-in can fail by running out of memory on the client machine if the zip archive is too large.

    Potential solutions: I am considering the following options and am looking for feedback on either option or any other suggestions.

    Option One: Open a file stream on the client (MS Word VBA), read one "chunk" at a time, and transmit the chunks to the ASMX web service, which assembles them into a file on the server. This has the benefit of not adding any additional dependencies or components to the application; I would only be modifying existing functionality. (Fewer dependencies is better, as this solution should work in a variety of server environments and be relatively easy to set up.) Question: Are there examples of doing this, or any recommended techniques (either on the client in VBA or in the web service in C#/VB.NET)?

    Option Two: I understand WCF may provide a solution to the issue of transferring large files by "chunking" or streaming data. However, I am not very familiar with WCF, and am not sure exactly what it is capable of or whether I can communicate with a WCF service from VBA. This has the downside of adding another dependency (.NET 3.0), but if using WCF is definitely a better solution I may not mind taking that dependency. Questions: Does WCF reliably support large file transfers of this nature? If so, what does this involve? Any resources or examples? Are you able to call a WCF service from VBA? Any examples?
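
    Stripped of the VBA and ASMX specifics, Option One is just "read a slice, POST it, repeat until the file is exhausted". Below is a minimal Python sketch of that pattern, purely to illustrate the flow; the URL, the 2 MB chunk size, and the idea that the service appends chunks identified by file_name/chunk_index are assumptions of the sketch, not an existing API:

        import os
        import requests  # any HTTP client works; requests just keeps the sketch short

        CHUNK_SIZE = 2 * 1024 * 1024   # 2 MB per request keeps each POST well under timeout limits

        def upload_in_chunks(path, url):
            """POST a large file one slice at a time instead of as a single byte array."""
            total = os.path.getsize(path)
            total_chunks = (total + CHUNK_SIZE - 1) // CHUNK_SIZE
            with open(path, "rb") as f:
                for index in range(total_chunks):
                    chunk = f.read(CHUNK_SIZE)
                    resp = requests.post(
                        url,
                        params={
                            "file_name": os.path.basename(path),
                            "chunk_index": index,
                            "total_chunks": total_chunks,
                        },
                        data=chunk,
                        timeout=120,
                    )
                    resp.raise_for_status()   # a retry of the failed chunk could go here

        upload_in_chunks("archive.zip", "https://example.com/upload")

    Besides avoiding one long-running request, this also removes the need to build the whole byte array on the client, which addresses the secondary out-of-memory issue.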

    Read the article

  • Automatically copy files to USB drive when connected

    - by Daphna
    I am looking for a solution for copying all the files from a specific directory on the hard drive to a specific directory on a USB memory device once the device is connected. I have a program that downloads podcast episodes for me. I would like these files to be automatically moved (or at least copied) to my MP3 player once I connect it to the computer. I have both Windows XP and Linux machines, so a solution for either of them will work for me.
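
    The copy step itself is simple once the player's mount point is known; what varies per OS is only the trigger (for example a udev rule or a cron job on Linux, a scheduled task on Windows). A minimal Python sketch of the copy step, where both directory paths are made-up placeholders:

        import shutil
        from pathlib import Path

        PODCAST_DIR = Path.home() / "podcasts"            # assumed download directory
        PLAYER_DIR = Path("/media/mp3player/podcasts")    # assumed mount point of the player

        def sync_podcasts(move=False):
            """Copy (or move) any episode not already present on the player."""
            if not PLAYER_DIR.is_dir():
                return                                    # player not connected; nothing to do
            for episode in PODCAST_DIR.glob("*.mp3"):
                target = PLAYER_DIR / episode.name
                if target.exists():
                    continue                              # already on the player
                if move:
                    shutil.move(str(episode), str(target))
                else:
                    shutil.copy2(episode, target)

        if __name__ == "__main__":
            sync_podcasts()

    Run from whatever fires when the drive appears, this gives the "copy on connect" behaviour without any manual step.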

    Read the article

  • Changing App.config at Runtime

    - by born to hula
    I'm writing a test WinForms / C# / .NET 3.5 application for the system we're developing, and we found ourselves needing to switch between .config files at runtime, but this is turning out to be a nightmare.

    Here's the scene: the WinForms application is aimed at testing a web app divided into 5 subsystems. The test process works with messages being sent between the subsystems, and for this process to be successful each subsystem has to have its own .config file. For my test application I wrote 5 separate configuration files. I wish I were able to switch between these 5 files during runtime, but the problem is: I can programmatically edit the application .config file any number of times, but the changes will only take effect once. I've been searching for a long time for a way to address this problem but I still haven't been successful. I know the problem definition may be a bit confusing, but I would really appreciate it if someone helped me. Thanks in advance!

    --- UPDATE 01-06-10 ---

    There's something I didn't mention before. Originally, our system is a web application with WCF calls between each subsystem. For performance-testing reasons (we're using ANTS 4), we had to create a local copy of the assemblies and reference them from the test project. It may sound a bit wrong, but we couldn't find a satisfying way to measure the performance of a remote application.

    --- End update ---

    Here's what I'm doing:

        public void UpdateAppSettings(string key, string value)
        {
            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            foreach (XmlElement item in xmlDoc.DocumentElement)
            {
                foreach (XmlNode node in item.ChildNodes)
                {
                    if (node.Name == key)
                    {
                        node.Attributes[0].Value = value;
                        break;
                    }
                }
            }
            xmlDoc.Save(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            System.Configuration.ConfigurationManager.RefreshSection("section/subSection");
        }

    Read the article

  • php & mySQL: Storing doc, xls, zip, etc. with limited access and archiving

    - by Devner
    Hi all, in my application I have a provision for users to upload files like doc, xls, zip, etc. I would like to know how to store these files on my website and have only restricted people access them. I may have a group of people and want only this group to access the uploaded files. I know that someone may just copy the link to the document or file and pass it to another (non-permitted) user, who could then download it. So how can I prevent that? How can I check that the request to download the file was made by a legitimate user who has access to the file? The usernames of the group members are stored in the database along with the document name and location, so those users can access it. But how do I prevent non-permitted users from being able to access that confidential data in all ways?

    With the above in mind, how do I store these documents? Do I store the documents in a BLOB column in the database, or just let the user upload to a folder and merely store the path to the file in the database? The security of the documents is of utmost importance, so any procedure that could facilitate this feature would definitely help. I am not into object-oriented programming, so if you have simpler code that you would like to share with me, I would greatly appreciate it.

    Also, how do I archive documents that are old? Say there are documents that are 1 year old and I want to conserve my website's space by archiving them, but still make them available to the user when they need them. How do I go about this? Thank you.

    Read the article

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

    - metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    - hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract out all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))

        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if its one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line.

    ps. the csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)

    Read the article

  • Sharing storage between servers

    - by El Yobo
    I have a PHP-based web application which currently uses only one web server but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of:

    - Simple network storage: NFS, SMB/CIFS
    - Clustered filesystems: Lustre, GFS/GFS2, GlusterFS
    - Hadoop DFS
    - MogileFS

    What I want is for a file uploaded via one web server to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment. What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?

    Read the article

  • C headers: compiler specific vs library specific?

    - by leonbloy
    Is there some clear-cut distinction between the standard C *.h header files provided by the C compiler, as opposed to those provided by the standard C library? Is there some list, or some standard locations?

    Motivation: in this answer I got a while ago, regarding a missing unistd.h in the latest TinyC compiler, the author argued that unistd.h (contrarily to sys/unistd.h) should not be provided by the compiler but by your C library. I could not make much sense of that response (for one thing, shouldn't that also apply to, say, stdio.h?) but I'm still wondering about it. Is that correct? Where is some authoritative reference for this?

    Looking at other compilers, I see that other "self-contained" POSIX C compilers hosted on Windows (like the GCC toolchain that comes with MinGW, in several incarnations, or the Digital Mars compiler) include all header files. And in a standard Linux distribution (say, CentOS 5.10) I see that the gcc package provides a few header files (e.g. stdbool.h, syslimits.h) in /usr/lib/gcc/i386-redhat-linux/4.1.1/include/, and the glibc-headers package provides the majority of the headers in /usr/include/ (including stdio.h, /usr/include/unistd.h and /usr/include/sys/unistd.h). So, in neither case do I see support for the above claim.

    Read the article

  • Django: Serving a Download in a Generic View

    - by TheLizardKing
    So I want to serve a couple of mp3s from a folder in /home/username/music. I didn't think this would be such a big deal, but I am a bit confused about how to do it using generic views and my own URL.

    urls.py:

        url(r'^song/(?P<song_id>\d+)/download/$', song_download, name='song_download'),

    The example I am following is found in the generic views section of the Django documentation: http://docs.djangoproject.com/en/dev/topics/generic-views/ (it's all the way at the bottom). I am not 100% sure how to tailor this to my needs. Here is my views.py:

        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)
            response = object_detail(
                request,
                object_id = song_id,
                mimetype = "audio/mpeg",
            )
            response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
            return response

    I am actually at a loss about how to convey that I want it to spit out my mp3; what it does now is output an .mp3 file containing all of the current page's HTML. Should my template be my mp3? Do I need to set up Apache to serve the files, or is Django able to retrieve the mp3 from the filesystem (proper permissions of course) and serve that? If I do need to configure Apache, how do I tell Django that? Thanks in advance. These files are all on the HD so I don't need to "generate" anything on the spot, and I'd like to avoid revealing the location of these files if at all possible. A simple /song/1234/download would be fantastic.
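
    Since the mp3s already exist on disk, a plain view (no template, no generic view) can open the file and hand it back with the right headers; the filesystem path never appears in the URL, so the file location stays hidden. A minimal sketch along those lines - the Song fields used, the music directory, and the module layout are assumptions, so adjust to the actual model:

        import os
        from django.http import HttpResponse, Http404
        from django.shortcuts import get_object_or_404
        from myapp.models import Song   # assumed app layout

        MUSIC_ROOT = "/home/username/music"   # assumed location of the mp3s

        def song_download(request, song_id):
            song = get_object_or_404(Song, id=song_id)
            path = os.path.join(MUSIC_ROOT, song.filename)   # assumes a filename field on Song
            if not os.path.exists(path):
                raise Http404("File is missing on disk")
            with open(path, "rb") as f:
                response = HttpResponse(f.read(), content_type="audio/mpeg")
            response['Content-Disposition'] = 'attachment; filename="%s - %s.mp3"' % (
                song.artist, song.title)
            return response

    Reading the whole file is fine for a few-MB mp3; for very large files, letting Apache serve the bytes via something like mod_xsendfile (the view only sets an X-Sendfile header after doing its checks) keeps Django out of the data path.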

    Read the article

  • Windows 8 DLNA streaming of MKV files

    - by dbb
    I have a Samsung TV that is capable of playing MKV files. The Windows DLNA Play To menu that appears when you right-click a media file does not support MKV files, but a simple trick has been to change the file extension from .mkv to .avi so the Play To context menu item would appear. At that point I could successfully stream from my computer to my TV. However, this does not appear to work in Windows 8. Doing the same thing in Windows 8 causes the Play To window to open but the file does not get played. Dragging and dropping the file in the Play To window causes it to be silently ignored. Using actual AVI, MP4, etc. files works, of course. It appears Windows 8 is now doing some kind of validation on the file that Windows 7 wasn't previously. The Play To window does not show any kind of obvious error message or warning and there is nothing in the Windows event log. So, is there a way in Windows 8 to stream MKV files to a DLNA device without converting it to another container format? I would rather not use extra third-party software, but I would consider it if it's purposefully designed for this simple case rather than a more robust "media library/server" solution.

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes, and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, differently, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files?

    Edit: This question comes from Stack Overflow because I originally thought it was a git problem. It's definitely not (just) git. Anyway, this is to help put some of the comments in context.

    Read the article

  • Creating a ssh tunnel to transfer files?

    - by Vincent
    For me, networks are a very "opaque" thing, and even after reading a lot of tutorials about SSH, I do not understand how to create a basic tunnel to transfer my files. The configuration is the following:

        My Computer --[Internet]--> Bridge Machine --[Local Network]--> Final Machine

    Currently I do the following:

    1) Connect to the bridge machine with: ssh -X [email protected]
    2) Connect to the final machine with: ssh -X username@finalmachine
    3) Copy the path of the files I need (for example .../mydirectory)
    4) Disconnect from the final machine with: exit
    5) Copy the files to the bridge: scp -r username@finalmachine:/.../mydirectory .
    6) Disconnect from the bridge with: exit
    7) Copy the files to my machine: scp -r [email protected]:/.../mydirectory .

    This is quite complicated. My question is basic: how can I simplify this using an SSH tunnel? (Please explain the meaning of each command line you write, so that I understand what each line really does and avoid using it as a magic incantation. Furthermore, if some port numbers are used, explain whether I can pick a completely random number or whether I have to choose a specific one.)

    Read the article

  • Process files in a folder that haven't previously been processed

    - by Paul
    I have a series of files in a directory that I need to carry out an action on using a script. Once the action is done, I want to keep a log recording that the file has been processed, so that the next time the script is run it does not attempt to carry out the action again. So let's say I can find all the files that should be processed like this:

        for i in `find /logfolder -name '20*.log'` ; do
            process_log $i
            echo $i >> processedlogsfile
        done

    So I have a file containing the logs I have processed, and my goal is to modify the for loop so that these processed logs are not processed a second time. Doing a manual scan each time seems inefficient, particularly as processedlogsfile gets bigger:

        if grep -iq "$i" processedlogsfile ; then continue; fi

    It would be good if these files could be excluded when setting up the for loop. Note that the OS in question is a Linux derivative, a management appliance with a limited toolset (no attr command, for example) and no way to install additional utilities (well, it is possible, but not an option). Most common bash shell commands are available though. Also, the filenames and locations of the processed files must remain where they are - they can't be altered to reflect their processed status.

    Read the article
