Search Results


  • Uncompress OpenOffice files for better storage in version control

    - by Craig McQueen
    I've heard discussion about how OpenOffice (ODF) files are compressed zip files of XML and other data, so a tiny change to the document can change much of the compressed byte stream, which means delta compression doesn't work well in version control systems. I've done basic testing on an OpenOffice file: unzipping it and then rezipping it with zero compression (I used the Linux zip utility for my testing), and OpenOffice will still happily open it. So I'm wondering if it's worth developing a small utility to run on ODF files each time just before I commit to version control. Any thoughts on this idea? Possible better alternatives? Secondly, what would be a good and robust way to implement this little utility? Bash shell that calls zip (probably Linux only)? Python? Any gotchas you can think of? Obviously I don't want to accidentally mangle a file, and there are several ways that could happen. Possible gotchas I can think of:
    - Insufficient disk space
    - Some other permissions issue that prevents writing the file or temporary files
    - The ODF document is encrypted (probably should just leave these alone; the encryption probably also causes large file changes and thus prevents efficient delta compression)
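
    A minimal Python sketch of the rezip step (my illustration, not from the thread). It rewrites the archive with every entry stored uncompressed, preserving entry order (ODF expects 'mimetype' first), and works on a temp copy so a failure never mangles the original:

        import os
        import shutil
        import tempfile
        import zipfile

        def rezip_uncompressed(path):
            # Write the replacement next to the original so the final move
            # stays on one filesystem.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
            os.close(fd)
            with zipfile.ZipFile(path) as src:
                with zipfile.ZipFile(tmp, "w", zipfile.ZIP_STORED) as dst:
                    for info in src.infolist():
                        info.compress_type = zipfile.ZIP_STORED
                        dst.writestr(info, src.read(info.filename))
            shutil.move(tmp, path)   # replace only after a full, clean rewrite

    Encrypted documents could be detected and skipped before this runs, which also sidesteps the last gotcha above.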

    Read the article

  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a subversion repository, but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the subversion repo quite glaring, and that creates problems for me. There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately, the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts at some point to the subversion repository). Since these are modifications to tracked files, adding them to my ignore file does nothing for me. I can avoid checking these changes back in; I simply don't stage or commit them. But having unstaged local changes means I can't rebase without first cleaning them up. What I would like to know is whether there is any way to ignore future changes to a set of tracked files. Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?
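
    One approach worth trying (my suggestion, not from the post): git can be told to pretend a tracked file is unchanged with git update-index --assume-unchanged, and --no-assume-unchanged reverses it. A small Python wrapper, as a sketch:

        import subprocess

        def ignore_local_changes(paths, undo=False):
            # Set (or clear) git's assume-unchanged bit so local edits to
            # these tracked files stop showing up as unstaged changes.
            flag = "--no-assume-unchanged" if undo else "--assume-unchanged"
            subprocess.check_call(["git", "update-index", flag, "--"] + list(paths))

        ignore_local_changes(["build/generated.config"])   # hypothetical build artifact

    The bit is local to your clone, and git may still surface the files during some operations, so this is a convenience rather than enforcement; getting the artifacts removed from subversion is still the real fix.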

    Read the article

  • On Windows 7: Same path but Explorer & Java see different files than Powershell

    - by Ukko
    Submitted for your approval, a story about a poor little Java process trapped in the twilight zone... Before I throw up my hands and just say that the NTFS partition is borked, is there any rational explanation for what I am seeing? I have a file with a path like this: C:\Program Files\Company\product\config\file.xml. I am reading this file after an upgrade and seeing something wonky: Eclipse and my Java app are still seeing the old version of this file, while some other programs see the new version. The test that convinced me it was not my fat finger that caused the problem was this: in Explorer I entered the above path and Explorer displayed the old version of the file. Forcing Explorer to reload via Ctrl-F5 still yields the old version. This is the behavior I get in Java. Now in PowerShell I enter more "C:\Program Files\Company\product\config\file.xml" (I cut and paste the path from Explorer to make sure I am not screwing anything up) and it shows me the new version of the file. So for the programming aspect of this: is there a cache or some system component that would be storing this stale reference? Am I responsible for checking or resetting that for some class of files? I can imagine somebody being "creative" in how XML files are processed to provide some bell or whistle. But it could be a case of just being borked. Any insights appreciated... Thanks!
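
    One culprit worth ruling out (my guess, not something confirmed in the post): on Vista and Windows 7, UAC file virtualization silently redirects writes made under C:\Program Files by legacy 32-bit processes into the per-user VirtualStore, after which some programs read the redirected copy while others read the real file. A quick Python check for a shadow copy:

        import filecmp
        import os

        real = r"C:\Program Files\Company\product\config\file.xml"
        virtual = os.path.expandvars(
            r"%LOCALAPPDATA%\VirtualStore\Program Files\Company\product\config\file.xml")

        if os.path.exists(virtual):
            same = filecmp.cmp(real, virtual, shallow=False)
            print("VirtualStore shadow copy exists; identical to real file:", same)
        else:
            print("No VirtualStore copy; look elsewhere.")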

    Read the article

  • Website --> Add Reference also creates files with pdb extensions

    - by SourceC
    Hello,

    Q1: Any assembly stored in the Bin directory will automatically be referenced by the web application. We can add an assembly reference via Website --> Add Reference, or simply by copying the dll into the Bin folder. But I noticed that when we add a reference via Website --> Add Reference, additional files with a .pdb extension are placed inside Bin. If these files are also needed, then why does the reference still work even when we only place the referenced dll into Bin, but not the .pdb files?

    Q2: It appears that if you add a new item to a web project, this class will automatically be added to the project list and we can reference it from all pages in this project. So are all files added to the project list automatically referenced? Thanks.

    EDIT: On your second question: you are adding a public class to a namespace, so it will be visible to other classes in that project and in that namespace. I don't know much about assemblies, but I'd assume the reason an item (class) added to the project is visible to other classes in that project is simply that in a web project all classes get compiled into a single assembly, and public classes contained in the same assembly are always visible to each other? Much appreciated.

    Read the article

  • WPD on XP, Vista, and 7 (need to transfer photo and video files)

    - by Bradley Dean
    I need to transfer files (still photos and videos) from any portable device that a user may connect (still camera, video camera, mobile phone, etc.). I don't need to worry about plain storage devices, as these have drive letters, and I only care about existing files; I don't care about live video, preview video, taking new pictures, etc. I originally tried WIA, which works great except it cannot transfer video files. So then I tried WPD, following along with dimeby8's tutorial: http://blogs.msdn.com/b/dimeby8/archive/2006/09/27/774259.aspx I haven't gotten the transfer working yet (I'm converting it over to C#), but I can at least see the device and enumerate the files in Win7. In XP I get nothing. It's pointed out in this thread that WPD won't enumerate devices on XP (see Lisa O [MSFT]'s post): http://social.msdn.microsoft.com/Forums/en/windowssdk/thread/56459945-b757-45df-8c9f-4ebdbbb18a2c So WIA is out because it won't do video, and WPD is out because it won't do XP. Has anyone gotten this to work? Am I missing something simple here? Thanks.

    Read the article

  • ZIP Numerous Blob Files

    - by Michael
    I have a database table that contains numerous PDF blob files. I am attempting to combine all of the files into a single ZIP file that I can download and then print. Please help!

        <?php
        include 'config.php';
        include 'connect.php';

        $session = mysql_real_escape_string($_GET['session']);

        $query = "SELECT $tbl_uploads.username, $tbl_uploads.description,
                         $tbl_uploads.type, $tbl_uploads.size, $tbl_uploads.content
                  FROM $tbl_uploads
                  LEFT JOIN $tbl_members ON $tbl_uploads.username = $tbl_members.username
                  WHERE $tbl_members.session = '$session'";

        $result = mysql_query($query) or die('Error, query failed');

        // Build the archive first; echoing each PDF (with PDF headers) before
        // zipping, as an earlier version of this code did, corrupts the response.
        $zipname = tempnam(sys_get_temp_dir(), 'zip');
        $zip = new ZipArchive;
        $zip->open($zipname, ZipArchive::OVERWRITE);

        while (list($username, $description, $type, $size, $content) = mysql_fetch_array($result)) {
            // The blobs are in memory, not on disk, so addFromString, not addFile.
            $zip->addFromString("$username-$description.pdf", $content);
        }
        $zip->close();

        header('Content-Type: application/zip');
        header('Content-Disposition: attachment; filename="filename.zip"');
        header('Content-Length: ' . filesize($zipname));
        readfile($zipname);

        unlink($zipname);
        mysql_close($link);
        exit;
        ?>

    Read the article

  • Moving a large number of files in one directory to multiple directories

    - by Axsuul
    I am looking to create a Windows batch script to move about 2,000 files, splitting them up so that there are 10 files per folder. I have attempted creating a batch script, but the syntax really boggles my mind. Here is what I have so far:

        @echo off
        :: Config parameters
        set /a groupsize = 10
        :: initial counter, everytime counter is 1, we create new folder
        set /a n = 1
        :: folder counter
        set /a nf = 1
        for %%f in (*.txt) do (
            :: if counter is 1, create new folder
            if %n% == 1 (
                md folder%nf%
                set /a n += 1
            )
            :: move file into folder
            mv -Y %%f folder%nf%\%%f
            :: reset counter if larger than group size
            if %n% == %groupsize% (
                set /a n = 1
            ) else (
                set /a n += 1
            )
        )
        pause

    Basically what this script does is loop through each .txt file in the directory. It creates a new directory in the beginning and moves 10 files into that directory, then creates a new folder again and moves another 10 files into that directory, and so on. However, I'm having problems where the n variable is not being incremented in the loop? I'm sure there are other errors too, since the CMD window closes on me even with pause. Any help or guidance is appreciated, thanks for your time!
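
    The counter symptom has a classic cause (my diagnosis, not from the post): inside a parenthesized for block, cmd expands %n% once at parse time, so the script needs setlocal enabledelayedexpansion and !n! instead; mv -Y is also not a standard Windows command (move is). If batch keeps fighting back, the same 10-per-folder split is a few lines of Python, as a sketch:

        import os
        import shutil

        group_size = 10
        files = sorted(f for f in os.listdir(".") if f.endswith(".txt"))

        for i, name in enumerate(files):
            # Files 0-9 go to folder1, 10-19 to folder2, and so on.
            folder = "folder%d" % (i // group_size + 1)
            if not os.path.isdir(folder):
                os.mkdir(folder)
            shutil.move(name, os.path.join(folder, name))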

    Read the article

  • Missing files on branch after cvs2svn import

    - by cafebabe
    A colleague has imported a CVS repository into a pre-existing SVN repository using a cvs2svn dumpfile (like "svnadmin load --parent-dir /path < dumpfile") , which I originally created from the CVS repo. Now that I'm trying to checkout and build from SVN, I've noticed that some files seem to be missing in the SVN checkout that were present when I checked out the same branch from CVS, although the majority are present. They are mostly but not exclusively binary files (jars and gifs etc.) and I think (though I haven't checked exhaustively) that they are also files that have not been modified on the branch that I'm trying to check out. I should also point out that they don't show up using cvsweb (I would provide a link to the cvsweb documentation but I have no way of knowing its version etc), although they do appear doing a standard checkout of the branch. If anyone has any idea what's wrong here, or where to start looking to address this, I'd be very grateful! New to SVN so not sure if this is normal! Also, I know I could fairly easily "fix" it by copying over the files but I'd ideally like to keep their revision history so a more complete solution would be preferable. Thanks!

    Read the article

  • Visual Studio opening .xml files in Notepad

    - by Portman
    So I'm happily working on a project making heavy use of custom .xml configuration files this morning. All of a sudden, whenever I double-click an .xml file in Solution Explorer, it opens in Notepad instead of within Visual Studio. Thinking that it was the Windows file associations, I right-clicked on a file in Explorer, selected Open With --> Choose Defaults, and selected Visual Studio 2008. But the problem remains: now when I open a file from Explorer, Visual Studio opens, and then it opens Notepad. Needless to say, this is very frustrating, and Google is not much help. Has anyone else ever had this problem, and what did you do about it? Notes:
    - This only happens for .xml files. Other text files (.config, .txt) open within Visual Studio just fine.
    - This has nothing to do with Windows file associations, as Windows opens VS2008 just as it should. This is some crazy problem internal to Visual Studio.
    - I've also tried Tools --> Options --> General --> Restore File Associations. No luck.
    - Nothing is present in Tools --> Options --> Text Editor --> File Extension.
    - This is what my "Open With" menu looks like for .xml files. As you can see, "XML Editor" is set to the default.

    Read the article

  • IO operation taking long time for files in remote server

    - by user841311
    I have files of 150 MB each on a remote server in a different domain on the network, which I access through a UNC path. I want to read the file content and perform a basic string search. When I try reading the files line by line, the operation just doesn't finish; it takes more than 30 minutes. However, when I copy those files to my local machine, the same code reads and performs the string search in less than 5 seconds. I don't have the .NET framework installed on the server, so I have to do this from my machine. I want to perform all this through C# code in .NET framework 3.5, so I don't want to explicitly ftp all the files to my machine before performing this operation. Sample code:

        DirectoryInfo dir = new DirectoryInfo(strFilePath);
        FileInfo[] fiArray = dir.GetFiles("*.txt");
        foreach (FileInfo fi in fiArray)
        {
            // Reading the file content from the server takes a long time,
            // but is fast on the local machine.
            // ... perform string search ...
        }

    Let me know if my requirement is not clear. Thanks in advance!
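
    The usual explanation (my take, not confirmed in the post) is that each small read over a UNC path becomes an SMB round trip, so a line-by-line loop pays network latency thousands of times per file; one big sequential read per file, or copying the file once and searching locally, avoids that. The shape of the fix, sketched in Python (the same idea applies to a .NET StreamReader constructed with a large buffer, or File.ReadAllText):

        import glob
        import os

        def search_remote(unc_dir, needle):
            # One large sequential read per file keeps round trips low.
            hits = []
            for path in glob.glob(os.path.join(unc_dir, "*.txt")):
                with open(path, "rb") as f:
                    data = f.read()
                if needle.encode("utf-8") in data:
                    hits.append(path)
            return hits

        print(search_remote(r"\\server\share\logs", "ERROR"))   # hypothetical share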

    Read the article

  • Created files on Archos 5 invisible on Windows Xp

    - by user352042
    I am fairly new to Android and this is my first post, so I apologise in advance if I am breaking protocol or posting to the wrong board. Please feel free to move this post somewhere more appropriate if required. I am developing for the 160 GB Archos 5 Internet tablet. Not ideal as a development platform, I know, but customer requirements mean we have no choice. It is running Android 1.6. I have updated the device firmware to the most recent available; updating the version of Android is not an option at this point. Part of my app's requirement is to write information out to .txt files on the external storage directory so that these can be copied over the USB connection to a Windows XP PC using the Mobile media device (MTP) mode. I have followed all the instructions I have come across carefully, e.g. I check that the storage is available using the technique described at http://developer.android.com/guide/topics/data/data-storage.html#filesExternal. However, although the files are created successfully on the device (I can browse them and open them using the device's File Explorer; they are fine), when I connect the device to a Windows XP computer none of the directories or files I created appear, and the sizes of their parent folders suggest they do not exist. I have tried running over the ADB, checked logcat, tried a (signed) release version, and even written a second test application which just creates a folder (this behaves the same, i.e. it creates the folder but it is not visible in Windows Explorer). Nothing anywhere gives me any suggestion as to what the problem might be. If anyone has heard of this before or has any ideas as to what else I could try to fix it, please get in touch! We do not have any other devices to test on at the moment, although I hope to remedy this soon, customer permitting.

    Read the article

  • Stage untracked files for commit without staging tracked file changes

    - by Blair Holloway
    Oftentimes, when using git, I find myself in this situation: I have changes to several files, but I only want to commit parts of them. I have added several untracked files, which I want to track and commit. Solving the first part is easy; I run:

        git add -p

    Then, I choose which hunks to stage, and which hunks remain in my working tree, but unstaged. However, git's patch mode skips over untracked files. What I would like to do is something like:

        git add --untracked

    But no such option appears to exist. If I have, say, six untracked files, I could stage them using add in interactive mode and the "add untracked" option, like so:

        git add -i
        a<CR>
        1<CR> 2<CR> 3<CR> 4<CR> 5<CR> 6<CR> <CR>
        q<CR>

    I feel like there is, or should be, a quicker way of doing this, though. What am I missing?
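
    For what it's worth (my suggestion, not part of the post): git ls-files --others --exclude-standard lists exactly the untracked, non-ignored files, and feeding that list back to git add stages them without touching tracked modifications. A small Python sketch:

        import subprocess

        def stage_untracked():
            # NUL-separated output survives spaces and unusual characters.
            out = subprocess.check_output(
                ["git", "ls-files", "--others", "--exclude-standard", "-z"])
            paths = [p.decode("utf-8") for p in out.split(b"\0") if p]
            if paths:
                subprocess.check_call(["git", "add", "--"] + paths)

        stage_untracked()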

    Read the article

  • Specifying both multiple targets and multiple build files with ant or subant in 1.6

    - by Paul Marshall
    I'm trying to unify a build process, running one build to get multiple packages. My first shot at this is just having a central build script call <ant> or <subant> on each project's build.xml file. I'm using Ant 1.6, and I've run into a funny problem: either I use the <ant> task, and I can specify multiple targets but not multiple build files, or I use the <subant> task, and I can specify multiple build files but not multiple targets. I realize there's a few solutions here already:
    - Just upgrade to Ant 1.7 already; <antcall> can do multiple targets there.
    - Edit the separate project build files to have a variety of top-level targets, so I can call each individual file with just one target, and use <antcall>.
    - Copy-paste a lot of <ant> tasks, with a little help from <macrodef> to help the sanity.
    Is there something I've missed, that will allow me to do what I want from this single central build.xml without a) editing individual project files, b) writing lots of repetitive code, or c) upgrading Ant, and that d) doesn't require editing every time I add a new project?

    Read the article

  • Problem with filefield module after migrating drupal site to a new server: cant upload files

    - by oalo
    We have a content type with two imagefield/filefield fields, and after migrating our site to a new server we have the following problem: when we submit a new item for this content type, with two images for those fields, Drupal gives the following errors (the warning appears twice, once per image, each followed by the thumbnail message) and does not upload the images:

        warning: fopen(sites/default/files/.htaccess) [function.fopen]: failed to open stream: Permission denied in /websites/sitename/data/sites/all/modules/filefield/field_file.inc on line 349.
        An image thumbnail was not able to be created.

    I understand this is a permissions error, but it is not clear to me where I have to change permissions. Line 349 of field_file.inc has the following code:

        if (($fp = fopen("$directory/.htaccess", 'w')) && fputs($fp, $htaccess_lines)) {
          fclose($fp);
          chmod($directory .'/.htaccess', 0664);
        }
        else {
          $repl = array('%directory' => $directory, '!htaccess' => nl2br(check_plain($htaccess_lines)));
          form_set_error($form_item, t("Security warning: Couldn't write .htaccess file. Please create a .htaccess file in your %directory directory which contains the following lines: !htaccess", $repl));
        }
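
    Since the fopen is failing at sites/default/files/.htaccess, the fix is usually to give the web server user ownership of, and write access to, that tree on the new server. A Python sketch of the permission pass (run as root; "www-data" is an assumption, substitute your server's web user):

        import grp
        import os
        import pwd

        files_dir = "sites/default/files"        # relative to the Drupal root
        uid = pwd.getpwnam("www-data").pw_uid    # assumed web server account
        gid = grp.getgrnam("www-data").gr_gid

        # Directories need the execute bit; plain files do not.
        os.chown(files_dir, uid, gid)
        os.chmod(files_dir, 0o775)
        for root, dirs, names in os.walk(files_dir):
            for d in dirs:
                p = os.path.join(root, d)
                os.chown(p, uid, gid)
                os.chmod(p, 0o775)
            for f in names:
                p = os.path.join(root, f)
                os.chown(p, uid, gid)
                os.chmod(p, 0o664)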

    Read the article

  • Parsing multiple files at a time in Perl

    - by sfactor
    I have a large data set (around 90 GB) to work with. There are data files (tab delimited) for each hour of each day, and I need to perform operations on the entire data set. For example, get the share of OSes, which are given in one of the columns. I tried merging all the files into one huge file and performing the simple count operation, but it was simply too huge for the server memory. So, I guess I need to perform the operation one file at a time and then add up the results at the end. I am new to Perl and am especially naive about the performance issues. How do I do such operations in a case like this? As an example, two columns of the file are:

        ID  OS
        1   Windows
        2   Linux
        3   Windows
        4   Windows

    Let's do something simple: counting the share of the OSes in the data set. Each .txt file has millions of these lines, and there are many such files. What would be the most efficient way to operate on all the files?
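
    A sketch of a shape that scales (in Python purely for a runnable illustration; the structure maps one-to-one onto a Perl while loop with a hash). Stream each file line by line and keep a single running tally, so the merged 90 GB never has to fit in memory:

        import glob
        from collections import Counter

        counts = Counter()
        for path in glob.glob("data/*.txt"):   # hypothetical per-hour files
            with open(path) as f:
                next(f, None)                  # skip the "ID  OS" header row
                for line in f:
                    fields = line.rstrip("\n").split("\t")
                    if len(fields) >= 2:
                        counts[fields[1]] += 1

        total = sum(counts.values())
        for os_name, n in counts.most_common():
            print("%-12s %10d %7.2f%%" % (os_name, n, 100.0 * n / total))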

    Read the article

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. for running tests against specific XML files) just give me System.IO.DirectoryNotFoundExceptions. The reason for this is clear: it's looking for those supporting XML files, loaded by various unit tests, in the wrong folder. The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
            |   +-- test.xml
            |   +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do this.

    Read the article

  • Installing a group of .deb files in Ubuntu

    - by p00ya
    Hi. I have a directory of .deb files which I copied from apt's cache folder. There are many applications and Ubuntu updates among them, but there's no dependency failure because they were all downloaded automatically by 'add/remove applications' and 'update manager'. Now I have installed the same version of Ubuntu (9.04) again, and I want to install those apps and updates once more (though they are not new versions). In other words, I want to make this fresh Ubuntu install exactly like the old one, but without downloading anything and using only those .deb files that I copied. All I have is an archive folder containing the .deb files and a 'pkgcache.bin' file. I know I can double-click the .deb files and install them manually, but then I have to work out and follow the dependencies one by one from the installer errors. I have also tried adding an offline repository, but it didn't work; I think because all of my .debs are in one folder, and there are no separate 'main', 'restricted', ... folders?! Is there a way to do all of this automatically? Thanks.
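
    One route (my suggestion, not from the post): dpkg can install an entire directory recursively, and a follow-up apt-get -f install resolves anything dpkg could not configure on the first pass. As a sketch, run as root:

        import subprocess

        deb_dir = "/home/user/deb-archive"   # hypothetical path to the copied folder

        # Unpack and install every .deb under deb_dir, then let apt fix up
        # any packages left unconfigured because of install ordering.
        subprocess.check_call(["dpkg", "-i", "-R", deb_dir])
        subprocess.check_call(["apt-get", "-f", "install", "-y"])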

    Read the article

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests I would like to automatically associate data files with each test. To clarify: if my tests each require an XML file as input data, and then some XPath statements to validate the responses they get back, I would like to externalize the XML and XPath as files and have the testing framework easily associate them with the particular test being run, by using the unique ID of the test as the file(s) name. I tried to get this behavior, but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly. If I have an RSpec example that looks like this:

        describe "Basic functions of this server I'm testing" do
          it "should give me back a response" do
            # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response"
            TestHelper::who_am_i __FILE__, description
            ...
          end
        end

    Is there some better/cleaner/slicker way I can get a unique ID for each test that I could use to associate data files with? Perhaps something built into RSpec I'm unaware of? Thank you, -Bill

    Read the article

  • Git add not working with .png files?

    - by D Lawson
    I have a dirty working tree: dirty because I made changes to source files and touched up some images. I was trying to add just the images to the index, so I ran this command:

        git add *.png

    But this doesn't add the files. There were a few new image files that were added, but none of the ones that were modified/pre-existing were added. What gives? Edit: Here is some relevant terminal output:

        $ git status
        # On branch master
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   src/main/java/net/plugins/analysis/FormMatcher.java
        #       modified:   src/main/resources/icons/doctor_edit_male.png
        #       modified:   src/main/resources/icons/doctor_female.png
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       src/main/resources/icons/arrow_up.png
        #       src/main/resources/icons/bullet_arrow_down.png
        #       src/main/resources/icons/bullet_arrow_up.png
        no changes added to commit (use "git add" and/or "git commit -a")

    Then executed "git add *.png" (no output after command). Then:

        $ git status
        # On branch master
        #
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       new file:   src/main/resources/icons/arrow_up.png
        #       new file:   src/main/resources/icons/bullet_arrow_down.png
        #       new file:   src/main/resources/icons/bullet_arrow_up.png
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   src/main/java/net/plugins/analysis/FormMatcher.java
        #       modified:   src/main/resources/icons/doctor_edit_female.png
        #       modified:   src/main/resources/icons/doctor_edit_male.png

    Read the article

  • Finding missing files by checksum

    - by grw
    Hi there, I'm doing a large data migration between two file systems (let's call them F1 and F2) on a Linux system, which will necessarily involve copying the data verbatim into a differently-structured hierarchy on F2 and changing the file names. I'd like to write a script to generate a list of files which are in F1 but not in F2, i.e. the ones which weren't copied by the migration script into the new hierarchy, so that I can go back and migrate them manually. Unfortunately, for reasons not worth going into, the migration script can't be modified to list files that it doesn't migrate. My question differs from this previously answered one because of the fact that I cannot rely on filenames as a comparison. I know the basic outline of the process would be:
    1. Generate a list of checksums for all files, recursing through F1
    2. Do the same for F2
    3. Compare the lists and generate a negative intersection of the checksums, ignoring the file names, to find files which are in F1 but not in F2.
    I'm kind of stuck getting past that stage, so I'd appreciate any pointers on which tools to use. I think I need to use the 'comm' command to compare the lists of file checksums, but since md5sum, sha512sum and the like put the file name next to the checksum, I can't see a way to get it to bring me a useful comparison. Maybe awk is the way to go? I'm using Red Hat Enterprise Linux 5.x. Thanks.
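
    On the filename problem: stripping the name column is exactly what awk '{print $1}' is for (pipe md5sum output through it and sort), after which comm -23 F1.sums F2.sums prints checksums present only in F1. The same negative intersection in Python, as a sketch that also remembers an example path per checksum:

        import hashlib
        import os

        def checksums(root):
            # Map content digest -> one example path under root.
            sums = {}
            for dirpath, _, names in os.walk(root):
                for name in names:
                    path = os.path.join(dirpath, name)
                    h = hashlib.md5()
                    with open(path, 'rb') as f:
                        for chunk in iter(lambda: f.read(1 << 20), b''):
                            h.update(chunk)
                    sums.setdefault(h.hexdigest(), path)
            return sums

        f1 = checksums('/mnt/F1')   # hypothetical mount points
        f2 = checksums('/mnt/F2')
        for digest in sorted(set(f1) - set(f2)):
            print(f1[digest])       # in F1 but missing from F2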

    Read the article

  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    Hey, We've got a process currently which causes ASP.NET websites to be redeployed. The code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. This has in the past been intermittent to the point it didn't matter but on some of our higher traffic sites it happens the majority of the time now. I'm wondering if anyone has a workaround or alternative approach to this that I haven't thought of. Currently my ideas are: Simply retry each file until it works. That's going to cause errors for a short time though which isn't really that good. Deploy to a new folder and update IIS's webroot to the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy. Does anyone know what the best way to do this is, or if it's possible to do #2 without running the publishing application as a user who has admin access (Willing to grant it special privileges, but I'd prefer to stop short of administrator)?

    Read the article

  • Python: problem with tiny script to delete files

    - by Rosarch
    I have a project that used to be under SVN, but now I'm moving it to Mercurial, so I want to clear out all the .svn files. It's a little too big to do it by hand, so I decided to write a python script to do it. But it isn't working.

        def cleandir(dir_path):
            print "Cleaning %s\n" % dir_path
            toDelete = []
            files = os.listdir(dir_path)
            for filedir in files:
                print "considering %s" % filedir
                # continue
                if filedir == '.' or filedir == '..':
                    print "skipping %s" % filedir
                    continue
                path = dir_path + os.sep + filedir
                if os.path.isdir(path):
                    cleandir(path)
                else:
                    print "not dir: %s" % path
                if 'svn' in filedir:
                    toDelete.append(path)
            print "Files to be deleted:"
            for candidate in toDelete:
                print candidate
            print "Delete all? [y|n]"
            choice = raw_input()
            if choice == 'y':
                for filedir in toDelete:
                    if os.path.isdir(filedir):
                        os.rmdir(filedir)
                    else:
                        os.unlink(filedir)
            exit()

        if __name__ == "__main__":
            cleandir(dir)

    The print statements show that it's only "considering" the filedirs whose names start with ".". However, if I uncomment the continue statement, all the filedirs are "considered". Why is this? Or is there some other utility that already exists to recursively de-SVN-ify a directory tree?
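
    Setting aside the print mystery, the underlying job is shorter with os.walk (a sketch of my own, not a fix of the code above). Note that os.rmdir only removes empty directories, and .svn directories always have contents, so shutil.rmtree is needed:

        import os
        import shutil

        def remove_svn_dirs(root):
            # Walk bottom-up so deleting a directory never disturbs the
            # traversal of its parent.
            for dirpath, dirnames, _ in os.walk(root, topdown=False):
                for d in dirnames:
                    if d == '.svn':
                        shutil.rmtree(os.path.join(dirpath, d))

        remove_svn_dirs('.')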

    Read the article

  • How to handle too many files in Qt

    - by mree
    I'm not sure how to ask this, but here goes the question: I'm migrating from J2SE to Qt. After creating some small applications in Qt, I noticed that I've created way too many files compared to what I would have created if I was developing in Java (I use NetBeans). For example, for a GUI for Orders, I'd have to create:
    - Main Order Search Window
    - Edit Order Dialog
    - Manage Order Dialog
    - Maybe some other dialogs...
    In Java, I don't have to create a new file for every new dialog; the dialog can be created in the JFrame class itself. So, I will only be seeing 1 file for Orders, which has the other dialogs in it. However, in Qt, I'd have to create 1 ui file, 1 header file, and 1 cpp file for each of the dialogs (I know I can just put the cpp in the header, but it's easier to view code in separate files). So, in the end, I might end up with 3 dialogs x 3 files = 9 files for the GUI in Qt, compared to Java, which is only 1 file. I do know that I can create a GUI by coding it manually, but that seems easy on small GUIs and not on complicated GUIs with lots of inputs, tabs and so on. So, is there any suggestion on how to minimize the files created in Qt?

    Read the article

  • download large files using servlet

    - by niks
    I am using Apache Tomcat Server 6 and Java 1.6 and am trying to write large mp3 files to the ServletOutputStream for a user to download. Files range from 50 to 750 MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files I am getting a broken-pipe socket exception.

        File fileMp3 = new File(objDownloadSong.getStrSongFolder() + "/" + strSongIdName);
        FileInputStream fis = new FileInputStream(fileMp3);
        response.setContentType("audio/mpeg");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + strSongName + ".mp3\";");
        response.setContentLength((int) fileMp3.length());
        OutputStream os = response.getOutputStream();
        try {
            // Copy in 8 KB blocks; a one-byte-at-a-time read() loop is
            // extremely slow on files this large, which makes client
            // timeouts (and hence broken pipes) far more likely.
            byte[] buf = new byte[8192];
            int n;
            while ((n = fis.read(buf)) != -1) {
                os.write(buf, 0, n);
            }
            os.flush();
        } catch (Exception excp) {
            // A broken pipe here typically means the client cancelled or
            // lost the connection mid-download.
            downloadComplete = "-1";
            excp.printStackTrace();
        } finally {
            os.close();
            fis.close();
        }

    Read the article

  • Process a set of files from a source directory to a destination directory in Python

    - by Spoike
    Being completely new to Python, I'm trying to run a command over a set of files. The command requires both a source and a destination file (I'm actually using ImageMagick's convert, as in the example below). I can supply both source and destination directories; however, I can't figure out how to easily retain the directory structure from the source in the destination directory. E.g. say srcdir contains the following:

        srcdir/
            file1
            file3
            dir1/
                file1
                file2

    Then I want the program to create the following destination files: destdir/file1, destdir/file3, destdir/dir1/file1 and destdir/dir1/file2. So far this is what I came up with:

        import os
        from subprocess import call

        srcdir = os.curdir  # just use the current directory
        destdir = 'path/to/destination'

        for root, dirs, files in os.walk(srcdir):
            for filename in files:
                sourceFile = os.path.join(root, filename)
                destFile = '???'
                cmd = "convert %s -resize 50%% %s" % (sourceFile, destFile)
                call(cmd, shell=True)

    The walk method doesn't directly tell me what directory the file is under srcdir, other than by concatenating the root directory string with the file name. Is there some easy way to get the destination file, or do I have to do some string manipulation to do this?
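
    One way to fill in destFile (my completion, not the poster's code): compute root's path relative to srcdir with os.path.relpath (Python 2.6+), graft it onto destdir, and create the directory before converting. Passing convert an argument list instead of a shell string also avoids quoting trouble with spaces in names:

        import os
        from subprocess import call

        srcdir = os.curdir
        destdir = 'path/to/destination'

        for root, dirs, files in os.walk(srcdir):
            # Mirror the sub-path below srcdir onto destdir.
            outdir = os.path.join(destdir, os.path.relpath(root, srcdir))
            if not os.path.isdir(outdir):
                os.makedirs(outdir)
            for filename in files:
                sourceFile = os.path.join(root, filename)
                destFile = os.path.join(outdir, filename)
                call(['convert', sourceFile, '-resize', '50%', destFile])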

    Read the article
