Search Results

Search found 43856 results on 1755 pages for 'header files'.


  • Fixing corrupt files or corrupt file table on a USB drive?

    - by Kelsey
    I was running a virus scan on an external USB drive while copying data over to it. While AVG was scanning, my system locked up (I think because the USB drive ran out of space) and required a reboot. Since then, none of the data on the external drive is accessible. I can see all the files in the root and the directories, but I cannot browse into any of them; Windows 7 gives an error stating they are corrupt. I think the file table, or whatever the drive uses to store its index of files, has been corrupted, since it still shows the drive as almost full but everything I run a properties check on reports 0 bytes. Does anyone know how to 'unlock' or recover this data? Is there a way to rebuild the file table? Luckily I can recover the data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.
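
    A common first step, assuming the drive still mounts and has a letter assigned (E: below is only an example), is to let Windows' built-in check-disk try to repair the file table from an elevated command prompt:

        chkdsk E: /f

    If the data were not recoverable elsewhere, it would be safer to image the drive or run a file-recovery tool before letting chkdsk rewrite anything, since repairs on a badly damaged table can make files harder to carve out later.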

    Read the article

  • How do I parse a header with two different version [ID3] avoiding code duplication?

    - by user66141
    I really hope you can give me some interesting viewpoints on my situation, because I am not satisfied with my current approach. I am writing an MP3 parser, starting with an ID3v2 parser. Right now I'm working on the extended header parsing; my issue is that the optional header is defined differently in versions 2.3 and 2.4 of the tag. The 2.3 extended header is defined as follows: struct ID3_3_EXTENDED_HEADER{ DWORD dwExtHeaderSize; //Extended header size (either 6 or 8 bytes, excluded) WORD wExtFlags; //Extended header flags DWORD dwSizeOfPadding; //Size of padding (size of the tag excluding the frames and headers) }; while the 2.4 version is defined as: struct ID3_4_EXTENDED_HEADER{ DWORD dwExtHeaderSize; //Extended header size (synchsafe int) BYTE bNumberOfFlagBytes; //Number of flag bytes BYTE bFlags; //Flags }; How can I parse the header while minimizing code duplication? Writing two separate functions, one per version, doesn't feel great, and a single function that branches on the version is not much better. Are there any good practices for this kind of issue? Any tips for avoiding code duplication? Any help would be appreciated.
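
    One common pattern is to keep the shared work (reading the tag header, locating the extended header) in one place and isolate only the divergent part behind a small per-version parser chosen from a dispatch table. A minimal Python sketch of the idea, with the field layouts taken from the structs above (the function names and the surrounding parser are hypothetical):

        import struct

        def unsynchsafe(data):
            # ID3v2.4 stores the size as a 28-bit synchsafe integer (7 bits per byte)
            value = 0
            for byte in data:
                value = (value << 7) | (byte & 0x7F)
            return value

        def parse_ext_v3(data):
            size, flags, padding = struct.unpack(">IHI", data[:10])
            return {"size": size, "flags": flags, "padding": padding}

        def parse_ext_v4(data):
            size = unsynchsafe(data[:4])
            flag_bytes, flags = struct.unpack(">BB", data[4:6])
            return {"size": size, "flag_bytes": flag_bytes, "flags": flags}

        # only this table knows about versions; the caller is identical for both
        EXT_HEADER_PARSERS = {3: parse_ext_v3, 4: parse_ext_v4}

        def parse_extended_header(major_version, data):
            try:
                return EXT_HEADER_PARSERS[major_version](data)
            except KeyError:
                raise ValueError("unsupported ID3v2.%d tag" % major_version)

    The same shape translates directly to C or C++: a table of function pointers (or a small strategy object) keyed by the tag's major version, so supporting 2.2 or a future revision means adding one entry rather than touching every call site.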

    Read the article

  • Recursive function with for loop python

    - by user134743
    I have a question that should not be too hard, but it has been bugging me for a long time. I am trying to write a function that searches a directory and its subfolders for all files that have the .jpg extension and a size bigger than 0, and then prints the sum of the sizes of those files. What I am doing right now is def myFuntion(myPath, fileSize): for myfile in glob.glob(myPath): if os.path.isdir(myFile): myFunction(myFile, fileSize) if (fnmatch.fnmatch(myFile, '*.jpg')): if (os.path.getsize(myFile) > 1): fileSize = fileSize + os.path.getsize(myFile) print "totalSize: " + str(fileSize) This is not giving me the right result. It sums the sizes of the files in one directory but it does not keep summing the rest. For example, if I have these paths C:/trial/trial1/trial11/pic.jpg C:/trial/trial1/trial11/pic1.jpg C:/trial/trial1/trial11/pic2.jpg and C:/trial/trial2/trial11/pic.jpg C:/trial/trial2/trial11/pic1.jpg C:/trial/trial2/trial11/pic2.jpg I will get the sum of the first three and the sum of the last three, but I won't get the size of all 6 together, if that makes sense. Thank you so much for your help!
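
    The recursive call loses its work because Python integers are passed by value: the fileSize the inner call adds to is a new local binding, so the caller never sees the subtotal. Returning the accumulated size (or letting os.walk do the recursion, as in this minimal sketch) fixes it; the root path below is just the example from the question:

        import os

        def total_jpg_size(root):
            # sum the sizes of every non-empty .jpg file under root, at any depth
            total = 0
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    if name.lower().endswith(".jpg"):
                        size = os.path.getsize(os.path.join(dirpath, name))
                        if size > 0:
                            total += size
            return total

        print("totalSize: " + str(total_jpg_size("C:/trial")))

    Note also that glob.glob(myPath) in the original is only ever given the directory itself; it would need a wildcard such as myPath + '/*' to list the directory's contents.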

    Read the article

  • Rate My Script: Finding Flash Files Embedded in Office Files

    - by Shaun Johnson
    Can anyone improve on this? Requires Sysinternals Strings date /T >N:\output.txt net use z: /delete net use z: \\svr-002\rmstudentwork @cd /d "z:\" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xls | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.ppt | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.doc | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xlsx | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.pptx | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.docx | findstr \.swf >> "N:\output.txt" date /T >>N:\output.txt net use z: /delete /yes >>N:\output.txt net use z: \\svr-003\rmstudentwork "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xls | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.ppt | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.doc | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xlsx | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.pptx | findstr \.swf >> "N:\output.txt" "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.docx | findstr \.swf >> "N:\output.txt" net use z: /delete /yes Basically it mounts a share as a network drive then runs through the share looking for swf files inside office documents.
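
    If moving away from a batch file is acceptable, the same scan can be done without Sysinternals Strings: the legacy binary formats can be searched for the '.swf' marker directly (Strings finds both ASCII and UTF-16 text, so both encodings are checked below), and the newer x-formats are zip archives whose entry names can be listed. A rough Python sketch, run against the mounted share (the paths and the marker heuristics are assumptions, not a drop-in replacement):

        import os
        import zipfile

        LEGACY = (".doc", ".xls", ".ppt")       # OLE2 binary formats
        OOXML = (".docx", ".xlsx", ".pptx")     # zip-based formats
        MARKERS = (b".swf", ".swf".encode("utf-16-le"))

        def find_embedded_swf(root):
            hits = []
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    ext = os.path.splitext(name)[1].lower()
                    try:
                        if ext in LEGACY:
                            with open(path, "rb") as f:
                                data = f.read()
                            if any(marker in data for marker in MARKERS):
                                hits.append(path)
                        elif ext in OOXML and zipfile.is_zipfile(path):
                            with zipfile.ZipFile(path) as zf:
                                if any(n.lower().endswith(".swf") for n in zf.namelist()):
                                    hits.append(path)
                    except (OSError, zipfile.BadZipfile):
                        pass   # unreadable or truncated file; skip it
            return hits

        for hit in find_embedded_swf("Z:\\"):
            print(hit)

    It could also be pointed at the UNC paths directly, which would remove the need to map and unmap Z: for each server.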

    Read the article

  • read files from directory and filter files from Java

    - by Adnan
    The following code goes through all directories and sub-directories and outputs just .java files: import java.io.File; public class DirectoryReader { private static String extension = "none"; private static String fileName; public static void main(String[] args ){ String dir = "C:/tmp"; File aFile = new File(dir); ReadDirectory(aFile); } private static void ReadDirectory(File aFile) { File[] listOfFiles = aFile.listFiles(); if (aFile.isDirectory()) { listOfFiles = aFile.listFiles(); if(listOfFiles!=null) { for(int i=0; i < listOfFiles.length; i++ ) { if (listOfFiles[i].isFile()) { fileName = listOfFiles[i].toString(); int dotPos = fileName.lastIndexOf("."); if (dotPos > 0) { extension = fileName.substring(dotPos); } if (extension.equals(".java")) { System.out.println("FILE:" + listOfFiles[i] ); } } if(listOfFiles[i].isDirectory()) { ReadDirectory(listOfFiles[i]); } } } } } } Is this efficient? What could be done to increase the speed? All ideas are welcome.

    Read the article

  • My files disappeared from the UbuntuOne synced folder

    - by Junji
    I set up an Ubuntu One account on PC1 (Ubuntu 10.10) and the same account on PC2 (Ubuntu 10.04). I did the following: created a file named maverick.txt in PC1's ~/Ubuntu One/log, created a file named venus.txt in PC2's ~/Ubuntu One/log, and both files appeared on one.ubuntu.com. A few hours later, those two files had disappeared from PC1's Ubuntu One/log, from PC2's Ubuntu One/log, and from one.ubuntu.com. So, my files are gone forever. Why did this happen? Is there any way to recover those files?

    Read the article

  • Media server...serving files...............for a limited time only

    - by Craig
    I’m new to Ubuntu and am seeking help with a media server I have built. I have a couple of HTPCs in my house running XBMC. I wanted to build one for the family room working double duty as a HTPC, and media server to share movies, TV shows, music, etc. on my Windows network. So using some spare old parts I had lying around I decided to go CRAZY and build my first Linux box. I used Ubuntu because it seemed to be the most user friendly variant, especially for people that are new to Linux. I had to do a few things to get the media files shared properly on my network: Made sure my two media drives auto-mount every time I boot the computer by editing the “fstab” file – “sudo nano /etc/fstab” Installed Samba - “sudo apt-get install samba” Set a password for Samba - “sudo smbpasswd –a USERNAME” Edited the Samba configuration file to make sure the computer was in my networks workgroup – “sudo nano /etc/samba/smb.conf” In the file manager (not sure if that’s the right name for it), I right-clicked my media folders and set the sharing and permissions. The sharing was done without guest access, and permissions were set to; Owner, Group, and Others - Access: Create and Delete Files. Adjusted the Power Management settings to never put the system into sleep mode. I checked to see if I had access to the files from a Windows 7 machine and I did (Woo Hoo!). But when I tried to play any of my video files from the Windows machine (using VLC media player), they would only play for about 2-5 minutes and then they would stop with an error message saying that the file could not be accessed (Booo...). I tried playing some files through XBMC running in Windows and they worked for a bit longer (about 10-15mins), but they also stopped playing. I installed the Linux version of XBMC on the server and played the files locally with no problems. It doesn’t seem to be an issue with the files themselves, it seems to be a sharing problem on my network. So my question to the Ubuntu gurus out there is: Did I miss adding/editing something in the Samba configuration file? Did I use the right method to share my media files (file manager vs. using the terminal)? Is it possible for the computer to still go to sleep without the screen going black (does that even make sense?). Are there any special settings in Ubuntu that I should be using since this computer as a media server (is there a media server mode?...!...?). Any help on this matter would be greatly appreciated. Thanks
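
    For reference, a share defined through the file manager usually ends up either in Samba's usershares directory or in /etc/samba/smb.conf as something like the block below; the share name, path and user are placeholders, but defining it by hand makes the permissions explicit and easy to check when access drops out mid-stream:

        [media]
            path = /media/movies
            browseable = yes
            read only = no
            guest ok = no
            valid users = craig

    After editing smb.conf, sudo testparm will point out syntax problems, and restarting the Samba daemon (sudo service smbd restart on recent Ubuntu releases; the service name may differ on older ones) reloads the configuration.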

    Read the article

  • How to identify doc, ppt, xls files

    - by Shelby. S
    So I was wondering how you would differentiate ppt, xls and doc files from each other on Linux, regardless of their extensions. I tried 'file', but from the looks of it all MS Office files are categorized under the same file type. Similarly, I'm having trouble with docx, xlsx and pptx files, since they're essentially all zip files containing a bunch of XML. Thank you for your help! P.S. I also tried a Python script importing the magic module, but no go.
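
    For the zip-based formats, the internal directory layout gives the application away even when libmagic does not: a .docx has a word/ folder, an .xlsx has xl/, and a .pptx has ppt/. A small Python sketch of that check (the legacy binary formats are OLE2 compound files instead; their stream names, such as WordDocument, Workbook and PowerPoint Document, can be read with the third-party olefile package, which is not shown here):

        import zipfile

        OOXML_MARKERS = {
            "word/": "docx (Word)",
            "xl/": "xlsx (Excel)",
            "ppt/": "pptx (PowerPoint)",
        }

        def identify_ooxml(path):
            # returns a label for Office Open XML files, or None if it is not one
            if not zipfile.is_zipfile(path):
                return None
            with zipfile.ZipFile(path) as zf:
                names = zf.namelist()
            for prefix, label in OOXML_MARKERS.items():
                if any(n.startswith(prefix) for n in names):
                    return label
            return None

        print(identify_ooxml("mystery_file"))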

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files. One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB. I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so. It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second. You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes. ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable. The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file. You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!

    Read the article

  • How to remove configuration files completely

    - by Jasper Loy
    Recently I uninstalled some software using sudo apt-get --purge autoremove, thinking that this would remove all traces of it, including unused dependencies and configuration files. However, I discovered that a configuration file was left behind in my home folder. Is there a more powerful command that would remove even that? As a related question about keeping things clean, is it safe to delete the hidden files and folders under home if they are merely configuration files, or are there other kinds of files there?
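
    No apt command will touch those leftovers: the package manager only tracks files it installed, and per-user configuration under your home directory is created by the application itself at run time, so it has to be removed by hand. A typical cleanup (the package and dot-directory names are placeholders):

        # system-wide configuration that dpkg knows about
        sudo apt-get purge somepackage
        # per-user leftovers that apt never manages
        rm -rf ~/.somepackage ~/.config/somepackage

    Deleting hidden folders is usually safe for pure configuration (the application recreates defaults), but some of them hold real data rather than settings, for example mail profiles or SSH keys, so it is worth checking what a dot-directory contains before removing it.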

    Read the article

  • How do I not show files on the main screen

    - by ChuckMcM
    OK, so I upgraded to 12.10 and have been trying out Unity, and my screen has become a complete mess of folders and files. "Back in the day" the folders on my screen were the ones in the Desktop directory; now it seems like all the files in my home directory are there (and that is a lot of files). Is there some way to make the files being displayed come from a specific directory, and if so, how? I think I've gone through every panel of the System Settings application.
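
    The desktop shows the contents of whatever folder the XDG "Desktop" user directory points at, so one likely cause (an assumption, not a confirmed diagnosis) is that it has been reset to $HOME itself. It is worth checking ~/.config/user-dirs.dirs and making sure the entry reads:

        XDG_DESKTOP_DIR="$HOME/Desktop"

    After correcting it, logging out and back in should leave the desktop showing only ~/Desktop again.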

    Read the article

  • What exactly is a X-YMailISG header?

    - by iainH
    Finally ... our emails are being seen by Yahoo! not as junk anymore. Hurray! However I notice that the Yahoo! receiving MTA adds in a X-YMailISG header. It's very large ... 2**10 bits? Now that I've invested too large a chunk of my waking life in crafting our email headers I'm curious to know what an X-YMailISG header is. Can anybody tell me? Does it pose any security / authenticity issues? There's very little intelligible from Google results. Background: After many days tweaking TXT records in our domain's DNS zone file for SPF and DKIM, I have at last succeeded in generating email from our Drupal site that Yahoo! no longer marks as X-YahooFilteredBulk and the excellent service [email protected] returns results that show the emails passing SPF, DKIM and Sender-ID checks and appearing to SpamAssassin as ham. Yahoo! even adds a Received-SPF: pass header. Useful links: http://www.goldfisch.at/knowwiki/howtos/dkim-filter http://old.openspf.org/wizard.html Strangely enough the SPF TXT record needed / allowed a blank key / name field in our registrar's DNS management panel whereas the DKIM record needed the {selector}._domainkey as the key /name of the DKIM strings.

    Read the article

  • Get Squid to pass X-Requested-With header

    - by tftd
    I have configured a squid 3.1 proxy server. Everything works great except for the X-Requested-With header. I can't manage to figure out how to pass that header to the site I'm attempting to open via the proxy. This is my current configuration: request_header_access Allow allow all request_header_access Authorization allow all request_header_access WWW-Authenticate allow all request_header_access Proxy-Authorization allow all request_header_access Proxy-Authenticate allow all request_header_access Cache-Control allow all request_header_access Content-Encoding allow all request_header_access Content-Length allow all request_header_access Content-Type allow all request_header_access Date allow all request_header_access Expires allow all request_header_access Host allow all request_header_access If-Modified-Since allow all request_header_access Last-Modified allow all request_header_access Location allow all request_header_access Pragma allow all request_header_access Accept allow all request_header_access Accept-Charset allow all request_header_access Accept-Encoding allow all request_header_access Accept-Language allow all request_header_access Content-Language allow all request_header_access Cookie allow all request_header_access Mime-Version allow all request_header_access Retry-After allow all request_header_access Title allow all request_header_access Connection allow all request_header_access User-Agent allow all request_header_access All deny all #remove all other headers # delete "x-forwarder-for.." headers forwarded_for delete request_header_access Via deny all request_header_access X-Forwarded-For deny all I tried to add this line request_header_access X-Requested-With allow all to the configuration but apparently X-Requested-With is an unknown header name... Apparently I'm missing something?
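
    In Squid, request_header_access only accepts header names that Squid itself knows about, which is most likely why X-Requested-With is rejected as unknown. Headers Squid does not recognise are covered by the special name Other, so (assuming that is indeed the cause) adding an Other rule alongside the existing allows, ahead of the final deny, should let custom headers such as X-Requested-With through:

        request_header_access Other allow all
        request_header_access All deny all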

    Read the article

  • "Delivered-To" Header in Exchange

    - by Kaii
    In some SMTP server implementations (i.e. Postfix) you can enable Delivered-To and X-Original-To headers that will be added to your email. (or [X-]Envelope-To) This is very helpful with distribution lists to determine which e-mail address the mail has been redirected to. So, when the mail has been sent to [email protected], you can see in the Delivered-To or Envelope-To header that it has been redirected (distributed) to [email protected], which is one of many other e-mail addresses that are linked to a single mailbox. How do I find which address was used to deliver this mail to a specific mailbox on Microsoft Exchange 2010? Looking at the plain message (with all headers) i can not find any information that the mail arrived via address [email protected] I think I need the Delivered-To header (or a similar one) to be set on Microsoft Exchange when a mail is delivered via distribution lists. Is there any way to enable such header in Exchange 2010? I need it so that our Ticket system (OTRS) correctly recognizes where the ticket belongs to. Adding all the e-mail addresses of all distribution lists to the system configuration is not the right solution. And if there is a solution for Exchange 2010, is this possibly also applicable to Exchange 2007?

    Read the article

  • Hide/Replace Nginx Location Header?

    - by Steven Ou
    I am trying to pass a PCI compliance test, and I'm getting a single "high risk vulnerability". The problem is described as: Information on the machine which a web server is located is sometimes included in the header of a web page. Under certain circumstances that information may include local information from behind a firewall or proxy server such as the local IP address. It looks like Nginx is responding with: Service: https Received: HTTP/1.1 302 Found Cache-Control: no-cache Content-Type: text/html; charset=utf-8 Location: http://ip-10-194-73-254/ Server: nginx/1.0.4 + Phusion Passenger 3.0.7 (mod_rails/mod_rack) Status: 302 X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7 X-Runtime: 0 Content-Length: 90 Connection: Close <html><body>You are being <a href="http://ip-10-194-73-254/">redirected</a>.</body></html> I'm no expert, so please correct me if I'm wrong, but from what I gathered, I think the problem is that the Location header is returning http://ip-10-194-73-254/, which is a private address, when it should be returning our domain name (which is ravn.com). So, I'm guessing I need to either hide or replace the Location header somehow? I'm a programmer and not a server admin, so I have no idea what to do... Any help would be greatly appreciated! Also, might I add that we're running more than 1 server, so the configuration would need to be transferable to any server with any private address.
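
    Your reading is right: the application builds the redirect from the host name it was addressed by, and a PCI scanner typically hits the box by IP, so the internal name leaks into Location. One common mitigation (a sketch, not site-specific advice; the certificate paths are placeholders) is a catch-all default_server block that bounces anything not addressed as the canonical domain before it ever reaches Passenger, and because it never mentions a private address it transfers unchanged to every server:

        server {
            listen 443 default_server ssl;
            server_name _;
            ssl_certificate     /etc/nginx/ssl/ravn.com.crt;   # placeholder paths
            ssl_certificate_key /etc/nginx/ssl/ravn.com.key;
            return 301 https://ravn.com$request_uri;
        }

    The real application server block then keeps server_name ravn.com; and only answers requests that already carry the right host.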

    Read the article

  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir and delete some of the source files in src:path/to/dir that match a pattern (or size limit) but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit. Update1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like to have some files (those that match a regexp or size limit) in src:path/to/dir deleted but don't want to have anything deleted in dest:/path/to/other/dir. Update2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:. The files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete after the transfer in src: when rsync starts. By the time rsync finishes there will be N+M such files there. If I now delete them manually, I'll lose the M files that were created while rsync was running. Hence I'd like to have a solution that guarantees that the only files deleted from src: are those known to be successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, and compare that list of files with what I have in src:, and then do the removal manually. But I was wondering if rsync can do this by itself.
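
    rsync's filter rules can stand in for the missing pattern option on --remove-source-files if the job is split into two passes: copy everything except the pattern first, then transfer just the matching files with removal enabled. Because --remove-source-files only deletes files that particular run transferred successfully, anything created while it is running is left alone. A sketch with placeholder paths and an example pattern:

        # pass 1: copy everything except the files to be moved, delete nothing
        rsync -a --exclude='*.log' /src/dir/ dest:/path/to/other/dir/
        # pass 2: copy only the pattern and remove the source copies this run transferred
        rsync -a --remove-source-files --include='*/' --include='*.log' --exclude='*' \
            /src/dir/ dest:/path/to/other/dir/

    For a size cutoff instead of a name pattern, --max-size or --min-size can limit what pass 2 transfers (and therefore what it removes).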

    Read the article

  • SharePoint OCR image files indexing

    Introduction This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint, but can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different. Background One of the projects I was working on required storage of old documents scanned into PDF files. A separate team of people was then responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path. OCR The first search I fired off was for open-source OCR products. Pretty quickly, I narrowed it down to TESSERACT (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brainchild of HP, which worked on it from 1985 to 1995. It was then moved to open source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores some of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was able to get it to work only with BMP files. Image file conversion So now that I have an OCR engine that can convert BMP files into text, how do I get text out of the image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful open-source utility that can convert just about any file into an image. It worked out of the box, converting TIFF files into bitmaps, but to convert PDF files it requires Ghostscript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe). Dealing with text PDFs With that utility installed, I was cooking: I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only consideration was to somehow treat PDF files that already contain text differently; after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So another quick search, and I have PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for open source! With it, I can pull text out of a PDF in the blink of an eye. It returns nothing for pure image PDFs, but I already have a solution for that! Batch process It took another 15 minutes to set up a batch script to automate the process: check the file extension; if the file is a PDF, try to extract text out of it; if there is more than a certain amount of text in the file, we are done; if there is no text, convert the first page into a bitmap and run OCR on it; for any other file type, convert the file into a bitmap and run OCR on it. Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name plus the '.txt' extension.
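
    For anyone who would rather not maintain a batch file, the same decision flow is easy to script around the three command-line tools the article names; this is a compressed sketch of that flow (the text-length threshold and the option values are assumptions to tune, not the article's exact OCR.BAT):

        import os
        import subprocess

        MIN_TEXT_BYTES = 200   # below this, treat the PDF as image-only

        def extract_text(path):
            base, ext = os.path.splitext(path)
            txt = base + ".txt"
            if ext.lower() == ".pdf":
                # try the embedded text layer first (xpdf's pdftotext)
                subprocess.call(["pdftotext", path, txt])
                if os.path.exists(txt) and os.path.getsize(txt) >= MIN_TEXT_BYTES:
                    return txt
                # image-only PDF: rasterise page 1 with ImageMagick (needs Ghostscript)
                bmp = base + ".bmp"
                subprocess.call(["convert", "-density", "300", path + "[0]", bmp])
            else:
                # any other image type goes straight to a bitmap
                bmp = base + ".bmp"
                subprocess.call(["convert", path, bmp])
            # Tesseract writes <base>.txt next to the source file
            subprocess.call(["tesseract", bmp, base])
            return txt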

    Read the article

  • Convert .3GP and .3G2 Files to AVI / MPEG for Free

    - by DigitalGeekery
    3GP and .3G2 are common video capture formats used on many mobile phones, but they may not be supported by your favorite media player. Today we’ll show you a quick and easy way to convert those files to AVI or MPG format with the free Windows application, Pazera Free 3GP to AVI Converter. Download the Pazera Free 3GP to AVI Converter. You’ll have to unzip the download folder, but there is no need to install the application. Just double-click the 3gptoavi.exe file to run the application. To add your 3GP or 3G2 files to the queue to be converted, click on the Add files  button at the top left. Browse for your file, and click Open.   Your video will be added to the Queue. You can add multiple files to the queue and convert them all at one time.   Most users will find it preferable to use one of the pre-configured profiles for their conversion settings. To load a profile, choose one from the Profile drop down list and then click the Load button. You will see the profile update the settings in the panels at the bottom of the application. We tested Pazera Free 3GP to AVI Converter with 3GP files recorded on a Motorola Droid, and found the AVI H.264 Very High Q. profile to return the best results for AVI output, and the MPG – DVD NTSC: MPEG-2 the best results for MPG output. Other profiles produced smaller file sizes, but at a cost of reduced quality video output.   More advanced users may tweak video and audio settings to their liking in the lower panels. Click on the AVI button under Output file format / Video settings to adjust settings AVI… Or the MPG button to adjust the settings for MPG output. By default, the converted file will be output to the same location as the input directory. You can change it by clicking the text box input radio button and browsing for a different folder. When you’ve chosen your settings, click Convert to begin the conversion process.   A conversion output box will open and display the progress. When finished, click Close. Now you’re ready to enjoy your video in your favorite media player. Pazera Free 3GP to AVI Converter isn’t the most robust media conversion tool, but it does what it is intended to do. It handles the task of 3GP to AVI / MPG conversion very well. It’s easy enough for the beginner to manage without much trouble, but also has enough options to please more experienced users. Download Pazera Free 3GP to AVI Converter Similar Articles Productive Geek Tips How To Convert Video Files to MP3 with VLCEasily Change Audio File Formats with XRECODEConvert PDF Files to Word Documents and Other FormatsConvert Video and Remove Commercials in Windows 7 Media Center with MCEBuddy 1.1Compress Large Video Files with DivX / Xvid and AutoGK TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Install, Remove and HIDE Fonts in Windows 7 Need Help with Your Home Network? Awesome Lyrics Finder for Winamp & Windows Media Player Download Videos from Hulu Pixels invade Manhattan Convert PDF files to ePub to read on your iPad

    Read the article

  • Copy New Files Only in .NET

    - by psheriff
    Recently I had a client that had a need to copy files from one folder to another. However, there was a process that was running that would dump new files into the original folder every minute or so. So, we needed to be able to copy over all the files one time, then also be able to go back a little later and grab just the new files. After looking into the System.IO namespace, none of the classes within here met my needs exactly. Of course I could build it out of the various File and Directory classes, but then I remembered back to my old DOS days (yes, I am that old!). The XCopy command in DOS (or the command prompt for you pure Windows people) is very powerful. One of the options you can pass to this command is to grab only newer files when copying from one folder to another. So instead of writing a ton of code I decided to simply call the XCopy command using the Process class in .NET. The command I needed to run at the command prompt looked like this: XCopy C:\Original\*.* D:\Backup\*.* /q /d /y What this command does is to copy all files from the Original folder on the C drive to the Backup folder on the D drive. The /q option says to do it quitely without repeating all the file names as it copies them. The /d option says to get any newer files it finds in the Original folder that are not in the Backup folder, or any files that have a newer date/time stamp. The /y option will automatically overwrite any existing files without prompting the user to press the "Y" key to overwrite the file. To translate this into code that we can call from our .NET programs, you can write the CopyFiles method presented below. C# using System.Diagnostics public void CopyFiles(string source, string destination){  ProcessStartInfo si = new ProcessStartInfo();  string args = @"{0}\*.* {1}\*.* /q /d /y";   args = string.Format(args, source, destination);   si.FileName = "xcopy";  si.Arguments = args;  Process.Start(si);} VB.NET Imports System.Diagnostics Public Sub CopyFiles(source As String, destination As String)  Dim si As New ProcessStartInfo()  Dim args As String = "{0}\*.* {1}\*.* /q /d /y"   args = String.Format(args, source, destination)   si.FileName = "xcopy"  si.Arguments = args  Process.Start(si)End Sub The CopyFiles method first creates a ProcessStartInfo object. This object is where you fill in name of the command you wish to run and also the arguments that you wish to pass to the command. I created a string with the arguments then filled in the source and destination folders using the string.Format() method. Finally you call the Start method of the Process class passing in the ProcessStartInfo object. That's all there is to calling any command in the operating system. Very simple, and much less code than it would have taken had I coded it using the various File and Directory classes. Good Luck with your Coding,Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS **Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.  

    Read the article

  • Kill all the project files!

    - by jamiet
    Like many folks I’m a keen podcast listener and yesterday my commute was filled by listening to Scott Hunter being interviewed on .Net Rocks about the next version of ASP.Net. One thing Scott said really struck a chord with me. I don’t remember the full quote but he was talking about how the ASP.Net project file (i.e. the .csproj file) is going away. The rationale being that the main purpose of that file is to list all the other files in the project, and that’s something that the file system is pretty good at. In Scott’s own words (that someone helpfully put in the comments): A file that lists files is really redundant when the OS already does this Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file however no longer will there be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary rather than the current approach where the project file is a list of inclusions. On the face of it this seems like a pretty good idea. I’ve long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is if its in the directory, its part of the project. Simple. Now I’m not an ASP.net developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I’ve long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below the project has Miscellaneous and Connection Managers folders but no such folders exist on the file system: This may seem like a minor thing but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don’t work and quite frankly it makes me feel like a second class citizen in the Microsoft ecosystem. I’m a developer, treat me like one. Don’t try and hide the detail of how a project works under the covers, show it to me. I’m a big boy, I can handle it! Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would, that’s pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order – something else that wildly irritates me), so why not just get rid of the .dtproj file? In the case of SSDT the .sqlproj actually does a whole lot more than simply list files because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc…) but I see no reason why the convention over configuration approach can’t help us there either. Want to know which is the Post-deployment script? Well, its the one called Post-DeploymentScript.sql! Simple! So that’s my new crusade. Let’s kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me? @Jamiet

    Read the article

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site, that serves a large amount of semi-randomly accessed files. The setup is as follows: High horsepower front-end +DB server that also does encoding for files that need encoding Fresh file server, which stores newly uploaded content, thats probably (and usually) rapidly accessible, which has 500GB of raided SSD storage, that can push over 3GBit of traffic. 3 cheap node servers, containing 2 x 750GB SATA drives in raid1, where files older than 2 weeks are archived, from the SSD server (mentioned above). Files on each server are accessed via subdomains (via modsec) in a straight forward fashion (server1.domain.com, server2.domain.com, etc) Where I have the problem is this. I introduced a "premium" service where people pay a small fee every month, and get ad-free, quick accesses to stuff on the site. Once they are logged in, they access same files via premium.server1.domain.com via a different modsec script, with a different pass phrase. That all works fine and dandy.... except the cheap node servers are all IO bound, so accessing the files on them via a different, unsaturated network makes no difference, since it cannot read off the drive fast enough. What would be a good way to make files on the site be accessible via 2 different network routes, 1 of which will be saturated (the "free network") while all other files are on an un-saturated "premium" network?

    Read the article

  • How to simulate inner join on very large files in java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file so would need to account for the Cartesian product situations ... left_01, 1 left_02, 1 right_01, 1 right_02, 1 right_03, 1 left_01 joins to right_01 using key 1 left_01 joins to right_02 using key 1 left_01 joins to right_03 using key 1 left_02 joins to right_01 using key 1 left_02 joins to right_02 using key 1 left_02 joins to right_03 using key 1 My concern is one of memory. I will run out of memory if i use the approach below but still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part keeping in mind that these files may potentially be huge public class Joiner { private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable { BufferedReader _left = left; BufferedReader _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _rightRecord = read(_right); } else { List<Record> leftList = new ArrayList<Record>(); List<Record> rightList = new ArrayList<Record>(); _leftRecord = readRecords(leftList, _leftRecord, _left); _rightRecord = readRecords(rightList, _rightRecord, _right); for( Record equalKeyLeftRecord : leftList ){ for( Record equalKeyRightRecord : rightList ){ write(_output, equalKeyLeftRecord, equalKeyRightRecord); } } } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } private Record read(BufferedReader reader) throws Throwable { Record record = null; String data = reader.readLine(); if( data != null ) { record = new Record(data.split("\t")); } return record; } private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable { int key = record.getKey(); list.add(record); record = read(reader); while( record != null && record.getKey() == key) { list.add(record); record = read(reader); } return record; } private void write(BufferedWriter writer, Record left, Record right) throws Throwable { String leftKey = (left == null ? "null" : Integer.toString(left.getKey())); String leftData = (left == null ? "null" : left.getData()); String rightKey = (right == null ? "null" : Integer.toString(right.getKey())); String rightData = (right == null ? 
"null" : right.getData()); writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n"); } public static void main(String[] args) { try { BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT")); BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT")); BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT")); Joiner joiner = new Joiner(); joiner.join(leftReader, rightReader, output); } catch (Throwable e) { e.printStackTrace(); } } } After applying the ideas from the proposed answer, I changed the loop to this private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable { long _pointer = 0; RandomAccessFile _left = left; RandomAccessFile _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _pointer = _right.getFilePointer(); _rightRecord = read(_right); } else { long _tempPointer = 0; int key = _leftRecord.getKey(); while( _leftRecord != null && _leftRecord.getKey() == key ) { _right.seek(_pointer); _rightRecord = read(_right); while( _rightRecord != null && _rightRecord.getKey() == key ) { write(_output, _leftRecord, _rightRecord ); _tempPointer = _right.getFilePointer(); _rightRecord = read(_right); } _leftRecord = read(_left); } _pointer = _tempPointer; } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } UPDATE While this approach worked, it was terribly slow and so I have modified this to create files as buffers and this works very well. Here is the update ... 
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * consume all records having key (either to a single list or multiple files) each file will * store a buffer full of data. The right record returned represents the outside flow (key is * already positioned to next one or null) so we can't use this record in below while loop or * within this block in general when comparing current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will be * ready to be processed (either it will be null or it points to the next biggest key * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in List of Record objects to a file. Then, add that * file to List of File objects * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + files.size() + 1; String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }

    Read the article

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP Headers for caching. Our webserver is generating all dynamic content as XML and we're using semi-static XSL files to display it with some dynamic JSON requests thrown in for good measure along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update which might change the XSL and image files. Here's what needs to be done: cache the XSL and image files and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently: Using ETags with the XSL and image files, using the modified time and size to generate the ETag Setting Cache-Control: no-cache on the XML and JSON responses As I said, everything works dandy until a firmware update when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE. I know one solution to this problem would be simply rename the XSL and image files after each version (eg. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future but this would be difficult with the XSL files and I'd like to avoid this. Note: There is a clock on the unit but requires the user to set it and might not be 100% reliable which is what might be causing my caching issues when using ETags. What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible but invalidating old XSL and image files after a software update is the #1 priority.
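
    One hedged arrangement that usually behaves well across browsers, including older IE: give the semi-static XSL and image responses an explicit validator plus a directive that forces revalidation, and mark the XML/JSON as not storable at all. no-cache means "revalidate before reuse", so unchanged files cost only a 304 round trip while a firmware update is picked up on the next request; basing the ETag on a firmware version string rather than the unit's unreliable clock is an assumption worth considering. Example response headers:

        For the semi-static XSL and image responses:
            Cache-Control: no-cache
            ETag: "fw-2.1.0-logo-4096"

        For the dynamic XML and JSON responses:
            Cache-Control: no-store, no-cache, must-revalidate
            Pragma: no-cache

    IE's trouble typically comes from heuristic caching when a response carries no explicit freshness information, so the key point is that every semi-static response states its policy rather than leaving the browser to guess.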

    Read the article
