Search Results

Search found 39,200 results on 1,568 pages for 'zip files'.

Page 127/1568 | < Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >

  • summing up excel files in matlab

    - by AP
    Is there an easy, reliable way to sum up various Excel files in MATLAB? What I really want is something similar to the DOS command: type file*.xls > sumfile.xls. I have 10-100 Excel files with similar file-name formats except for the date: XXXXX_2010_03_03.xls, XXXXX_2010_03_04.xls, and so on. Is there a command to copy the files one after another? All files have different lengths, so I cannot know the row positions after each file in advance. I would like to have them copied into the same Excel sheet. Thanks
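
    A minimal MATLAB sketch of one way to do the stacking, assuming every workbook keeps its data on its first sheet and shares the same column count (the wildcard pattern is illustrative):

        % Stack the raw contents of every matching workbook vertically.
        files = dir('XXXXX_*.xls');                   % XXXXX_2010_03_03.xls, ...
        combined = {};
        for k = 1:numel(files)
            [num, txt, raw] = xlsread(files(k).name); % raw keeps text and numbers
            combined = [combined; raw];               % row counts may differ per file
        end
        xlswrite('sumfile.xls', combined);            % everything lands on one sheet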

    Read the article

  • including files without having to specify $_SERVER['DOCUMENT_ROOT']

    - by pâtissier
    Hi... Before the PHP version update I used to be able to include files as follows, without specifying the document root: <?php include '/absolute/path/to/files/file1.php'; ?> However, I now have to include the same file like this: <?php include $_SERVER['DOCUMENT_ROOT'].'/absolute/path/to/files/file1.php'; ?> What php.ini setting could have overridden the former behaviour?
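
    Whatever the ini change was, one workaround that sidesteps both include_path and the document root is to anchor includes to the including file's own directory; a sketch (dirname(__FILE__) also works on PHP versions older than 5.3, where __DIR__ is unavailable):

        <?php
        // Resolve the include relative to this file, independent of
        // include_path or $_SERVER['DOCUMENT_ROOT'].
        include dirname(__FILE__) . '/files/file1.php';
        ?>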

    Read the article

  • Downloading files from an http server in python

    - by cellular
    Using urllib2, we can get the HTTP response from a web server. If that server simply holds a list of files, we could parse through the list and download each file individually. However, I'm not sure what the easiest, most Pythonic way to do that parsing would be. When you get a whole HTTP response of the generic file-server listing through urllib2's urlopen() method, how can we neatly download each file?
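
    A minimal Python 2 sketch, assuming the server returns an auto-generated HTML directory listing whose anchors point at the files; the regex link extraction is a crude stand-in for a real HTML parser, and the base URL is hypothetical:

        import re
        import urllib2
        import urlparse

        base = 'http://example.com/files/'        # hypothetical listing URL
        listing = urllib2.urlopen(base).read()

        for name in re.findall(r'href="([^"]+)"', listing):
            if name.endswith('/') or name.startswith('?'):
                continue                          # skip subdirectories and sort links
            url = urlparse.urljoin(base, name)
            data = urllib2.urlopen(url).read()
            with open(name, 'wb') as f:           # save under the same name
                f.write(data)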

    Read the article

  • Issue with Administratively Assigned Offline Files

    - by ZnewmaN
    I need to use Administratively Assigned Offline Files in conjunction with folder redirection, but user home folders live on 26 different shares. Do I just need to add 52 file paths similar to the following?

        \\server\shareA\%username%\Desktop
        \\server\shareA\%username%\My Documents
        \\server\shareB\%username%\Desktop
        \\server\shareB\%username%\My Documents
        ... and so on

    Or do I need to create 26 GPOs, one for each share, or is there an easier way to do it? Edit: The solution provided by @berniewhite in the comments, using %homeshare%, has resolved the issue, and Administratively Assigned Offline Files is now working well.
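
    For reference, the %homeshare% approach needs only two entries rather than 52, since the variable expands per user to that user's own home share (assuming home directories are set on the AD accounts):

        %HOMESHARE%\Desktop
        %HOMESHARE%\My Documents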

    Read the article

  • Deleting files associated with model - django

    - by alexBrand
    I have the following code in one of my models:

        class PostImage(models.Model):
            post = models.ForeignKey(Post, related_name="images")
            # @@@@ figure out a way to have image folders per user...
            image = models.ImageField(upload_to='images')
            image_infowindow = models.ImageField(upload_to='images')
            image_thumb = models.ImageField(upload_to='images')
            image_web = models.ImageField(upload_to='images')
            description = models.CharField(max_length=100)
            order = models.IntegerField(null=True)

            IMAGE_SIZES = {
                'image_infowindow': (70, 70),
                'image_thumb': (100, 100),
                'image_web': (640, 480),
            }

            def delete(self, *args, **kwargs):
                # delete files..
                self.image.delete(save=False)
                self.image_thumb.delete(save=False)
                self.image_web.delete(save=False)
                self.image_infowindow.delete(save=False)
                super(PostImage, self).delete(*args, **kwargs)

    I override delete() so that the image files are removed when a PostImage is deleted, and I call delete(save=False) on each ImageField, yet the files are never removed from disk.
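
    One likely culprit: bulk queryset deletes and cascades from deleting the parent Post never call a model's delete() method. A post_delete signal fires in both cases; a sketch, assuming Django 1.3+ for the @receiver decorator (older versions can call post_delete.connect() directly):

        from django.db.models.signals import post_delete
        from django.dispatch import receiver

        @receiver(post_delete, sender=PostImage)
        def delete_postimage_files(sender, instance, **kwargs):
            # Runs for instance.delete(), bulk deletes, and cascades alike.
            for field in ('image', 'image_thumb', 'image_web', 'image_infowindow'):
                f = getattr(instance, field)
                if f:
                    f.delete(save=False)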

    Read the article

  • How to find files older than N days from a given timestamp

    - by JGeZau
    I want to find files older than N days from a given timestamp in the format YYYYMMDDHH. I can find files older than 2 days with the command below, but it measures from the present time:

        find /path/to/dir -mtime -2 -type f -ls

    Let's say I give the input timeSamp=2011093009; I want to find files older than 2 days from 2011093009. Been doing my research, but I can't seem to figure it out.
    ==========================================
    Found the solution... see below for my answer. Thanks
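
    One common approach is to materialize the cutoff as a reference file with touch -t and let find compare against it; a sketch, assuming GNU date for the timestamp arithmetic:

        #!/bin/bash
        timeStamp=2011093009                      # YYYYMMDDHH
        days=2

        # Rewrite YYYYMMDDHH so GNU date can subtract from it, then emit a
        # touch-compatible stamp (YYYYMMDDhhmm).
        y=${timeStamp:0:4}; m=${timeStamp:4:2}; d=${timeStamp:6:2}; h=${timeStamp:8:2}
        cutoff=$(date -d "$y-$m-$d $h:00 $days days ago" +%Y%m%d%H%M)

        ref=$(mktemp)
        touch -t "$cutoff" "$ref"

        # -mtime compares against "now"; ! -newer compares against our reference.
        find /path/to/dir -type f ! -newer "$ref" -ls

        rm -f "$ref"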

    Read the article

  • Member classes versus #includes

    - by ShallowThoughts
    I've recently discovered that it is considered bad form to have #includes in your header files, because anyone who uses your code gets all those extra includes whether they want them or not. However, for classes that have member variables defined as a type of another class, what's the alternative? For example, I was doing things the following way for the longest time:

        /* Header file for class myGrades */
        #include <vector>          // bad
        #include "classResult.h"   // bad

        class myGrades {
            vector<classResult> grades;
            int average;
            int bestScore;
        };

    (Please excuse the fact that this is a highly artificial example.) So, if I want to get rid of the #include lines, is there any way I can keep the vector, or do I have to approach my code in an entirely different way?
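
    Forward declarations only help when the header needs nothing more than pointers or references to the type; a by-value member (or a vector of such members) requires the complete definition, so classResult.h genuinely belongs here, and standard headers like <vector> are normally considered fine to include. A sketch of the pointer-based alternative, for cases where hiding the dependency is worth the changed ownership semantics:

        // myGrades.h -- sketch: storing pointers lets a forward
        // declaration replace the include of classResult.h.
        #include <vector>              // still needed: the vector is by value

        class classResult;             // forward declaration

        class myGrades {
            std::vector<classResult*> grades;  // incomplete type is fine here
            int average;
            int bestScore;
        };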

    Read the article

  • .net File.Copy very slow when copying many small files (not over network)

    - by Guavaman
    I'm making a simple folder-sync backup tool for myself and ran into quite a roadblock using File.Copy. In tests copying a folder of ~44,000 small files (Windows mail folders) to another drive in my system, I found that File.Copy was over 3x slower than running xcopy from a command line on the same files/folders: my C# version takes over 16 minutes, whereas xcopy takes only 5. I've tried searching for help on this topic, but all I find is people complaining about slow copying of large files over a network; this is neither a large-file problem nor a network-copying problem. I found an interesting article about a better File.Copy replacement, but the code as posted has some errors which cause problems with the stack, and I am nowhere near knowledgeable enough to fix them myself. Are there any common or easy ways to replace File.Copy with something more speedy?
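
    One avenue worth measuring is a manual stream copy with a modest buffer and a sequential-scan hint, which trims some of File.Copy's per-call overhead; a sketch, not a guaranteed match for xcopy:

        using System.IO;

        static readonly byte[] buffer = new byte[64 * 1024];  // shared across files

        static void CopyFileBuffered(string source, string dest)
        {
            using (var src = new FileStream(source, FileMode.Open, FileAccess.Read,
                                            FileShare.Read, buffer.Length,
                                            FileOptions.SequentialScan))
            using (var dst = new FileStream(dest, FileMode.Create, FileAccess.Write,
                                            FileShare.None, buffer.Length))
            {
                int read;
                while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
                    dst.Write(buffer, 0, read);
            }
        }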

    Read the article

  • BASH statements execute alone but return "no such file" in for loop.

    - by reve_etrange
    Another one I can't find an answer for, and it feels like I've gone mad. I have a bash script using a for loop to run a complex command (many protein sequence alignments) on a lot of files (~5000). The loop produces statements that execute when given alone (i.e. copy-pasted from the error message to the command prompt), but which return "no such file or directory" inside the loop. The script is below; there are actually several more arguments, but this includes some representative ones plus the file arguments.

        #!/bin/bash
        # Pass directory with targets as FASTA sequences as argument.

        # Arguments to psiblast
        # Common
        db=local/db/nr/nr
        outfile="/mnt/scratch/psi-blast"
        e=0.001
        threads=8
        itnum=5
        pssm="/mnt/scratch/psi-blast/pssm."
        pssm_txt="/mnt/scratch/psi-blast/pssm."
        pseudo=0
        pwa_inclusion=0.002

        for i in ${1}/*
        do
            filename=$(basename $i)
            "local/ncbi-blast-2.2.23+/bin/psiblast\
            -query ${i}\
            -db $db\
            -out ${outfile}/${filename}.out\
            -evalue $e\
            -num_threads $threads\
            -num_iterations $itnum\
            -out_pssm ${pssm}$filename\
            -out_ascii_pssm ${pssm_txt}${filename}.txt\
            -pseudocount $pseudo\
            -inclusion_ethresh $pwa_inclusion"
        done

    Running this script gives "<scriptname> line <last line before 'done'>: <attempted command>: No such file or directory". If I then paste the attempted command onto the prompt, it runs. Each of these commands takes a couple of minutes to complete.
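
    The command runs when pasted because the shell then parses it normally; inside the script, the surrounding double quotes turn the entire command line, arguments and all, into a single word, which bash tries to execute as one (nonexistent) file name, hence "no such file or directory". A sketch of the fix: drop the outer quotes and quote only the expansions:

        for i in "${1}"/*
        do
            filename=$(basename "$i")
            local/ncbi-blast-2.2.23+/bin/psiblast \
                -query "$i" \
                -db "$db" \
                -out "${outfile}/${filename}.out" \
                -evalue "$e" \
                -num_threads "$threads" \
                -num_iterations "$itnum" \
                -out_pssm "${pssm}${filename}" \
                -out_ascii_pssm "${pssm_txt}${filename}.txt" \
                -pseudocount "$pseudo" \
                -inclusion_ethresh "$pwa_inclusion"
        done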

    Read the article

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard-disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness and performance)? Thanks in advance for any ideas, Adam
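
    For what it's worth, buffers in the tens of KB usually capture most of the throughput win; one sketch of the read loop with an 80 KB buffer (chosen to stay under .NET's 85 KB large-object-heap threshold, an assumption worth re-measuring for your workload; OnProgress is a hypothetical event raiser):

        const int BufferSize = 80 * 1024;

        void Process(Stream input, Stream output)
        {
            var buffer = new byte[BufferSize];   // stays off the large object heap
            long done = 0;
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);   // or encrypt, hash, etc.
                done += read;
                OnProgress(done);                // hypothetical progress event
            }
        }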

    Read the article

  • Declaring struct in header file

    - by wrongusername
    I've been trying to declare a structure called Student in a student.h file, but I'm not quite sure how to do it. My student.h file consists entirely of:

        #include <string>
        using namespace std;

        struct Student;

    while the student.cpp file consists entirely of:

        #include <string>
        using namespace std;

        struct Student {
            string lastName, firstName;
            // long list of other strings... just strings though
        };

    Unfortunately, files that use #include "student.h" come up with numerous errors, like error C2027: use of undefined type 'Student'; error C2079: 'newStudent' uses undefined struct 'Student' (where newStudent is a function with a Student parameter); and error C2228: left of '.lastName' must have class/struct/union. It appears the compiler (VC++) does not recognize struct Student from "student.h". I have tried declaring the whole struct in "student.h", but it didn't help either. How can I declare struct Student in "student.h" so that I can just #include "student.h" and start using the struct? BTW, there seem to be no compiler errors in student.h itself.
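
    The undefined-type errors are expected: "struct Student;" only introduces the name, while every translation unit that creates a Student or touches its members needs the full definition at compile time. Putting the definition itself in the header, behind an include guard, is the usual fix; a sketch (using std:: explicitly, since "using namespace std;" in a header leaks into every includer):

        // student.h
        #ifndef STUDENT_H
        #define STUDENT_H

        #include <string>

        struct Student {
            std::string lastName, firstName;
            // ...the rest of the string fields...
        };

        #endif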

    Read the article

  • Not able to use 7-Zip to compress stdin and output with stdout?

    - by acidzombie24
    I get the error "Not implemented". I want to compress a file using 7-Zip via stdin, then take the data via stdout and do more conversions in my application. The man page shows this example:

        % echo foo | 7z a dummy -tgzip -si -so > /dev/null

    I am using Windows and C#. Results:

        7-Zip 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Creating archive StdOut
        System error: Not implemented

    Code:

        public static byte[] a7zipBuf(byte[] b)
        {
            string line;
            var p = new Process();
            line = string.Format("a dummy -t7z -si -so ");
            p.StartInfo.Arguments = line;
            p.StartInfo.FileName = @"C:\Program Files\7-Zip\7z.exe";
            p.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            p.StartInfo.CreateNoWindow = true;
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardOutput = true;
            p.StartInfo.RedirectStandardError = true;
            p.StartInfo.RedirectStandardInput = true;
            p.Start();
            p.StandardInput.BaseStream.Write(b, 0, b.Length);
            p.StandardInput.Close();
            Console.Write(p.StandardError.ReadToEnd());
            //Console.Write(p.StandardOutput.ReadToEnd());
            return p.StandardOutput.BaseStream.ReadFully();
        }

    Is there another simple way to read the file into memory? Right now I can 1) write to a temporary file and read it (easy, and I can copy/paste some code), 2) use a pipe (medium? I have never done it), or 3) something else.
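
    Note the man-page example uses -tgzip while the code passes -t7z; the .7z container needs to seek in the output to finalize its headers, so writing it to a non-seekable stdout is what raises "Not implemented" (an inference from the format, worth verifying against the 7-Zip docs). Switching to a stream-friendly format is a one-line change:

        // gzip can be produced on a non-seekable stdout; .7z cannot.
        line = string.Format("a dummy -tgzip -si -so ");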

    Read the article

  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string-prefix lookups in a 500+ GB sorted text file using binary search (essentially: seek to a byte offset in the middle of the file, backtrack to the nearest newline, compare the line prefix with the search string, seek to half/double that byte offset, repeat until found...). I have experimented with several database solutions but found that nothing beats this in sheer lookup speed for data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random-access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries, but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.
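
    For reference, a single MappedByteBuffer is capped at 2 GB, so a 500 GB file would need a mapping window per 2 GB chunk; a plain RandomAccessFile already provides the seek/backtrack primitive without that bookkeeping. A sketch of the offset-based binary search, assuming one-byte line terminators and an ASCII-compatible encoding (RandomAccessFile.readLine decodes bytes as ISO-8859-1):

        import java.io.IOException;
        import java.io.RandomAccessFile;

        // First full line starting at or after byte offset pos.
        static String lineAt(RandomAccessFile f, long pos) throws IOException {
            f.seek(pos);
            if (pos > 0) f.readLine();   // discard the partial line we landed in
            return f.readLine();         // null at end of file
        }

        // Smallest offset whose following line compares >= prefix.
        static long search(RandomAccessFile f, String prefix) throws IOException {
            long lo = 0, hi = f.length();
            while (lo < hi) {
                long mid = lo + (hi - lo) / 2;
                String line = lineAt(f, mid);
                if (line == null || line.compareTo(prefix) >= 0) hi = mid;
                else lo = mid + 1;
            }
            return lo;   // scan forward from here with startsWith(prefix)
        }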

    Read the article

  • Control pdb file output from build defintion file

    - by Urvi
    Hello, I am trying to generate a release build with no pdb files. I have seen numerous posts that suggest right-clicking the project, selecting Properties, going to the Build tab, clicking the Advanced... button, and changing Debug Info to none. This works, but I need to do it for a build of ~50 solutions which contain ~25 projects each! Other posts mention editing the appropriate .csproj file, but again, with so many projects this would take a long time. Is there any way to achieve this via the TFSBuild.proj file? I have tried adding the following to TFSBuild.proj, with no luck:

        <PropertyGroup>
          <Configuration>Release</Configuration>
          <Platform>AnyCPU</Platform>
        </PropertyGroup>
        <PropertyGroup>
          <DebugSymbols>false</DebugSymbols>
          <DebugType>none</DebugType>
          <Optimize>true</Optimize>
        </PropertyGroup>

    The following lines print out Release|AnyCPU, none, and false, but I still see .pdb files in the $(OutputDir) folder:

        <Message Text="Configuration|Platform: $(Configuration)|$(Platform)" />
        <Message Text="DebugType is: $(DebugType)"/>
        <Message Text="DebugSymbols is: $(DebugSymbols)"/>

    Thanks in advance, Urvi
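
    Properties set in a plain PropertyGroup in TFSBuild.proj don't flow into each solution's compile step on their own; in TFS 2005/2008-style team builds, CustomPropertiesForBuild is the hook that passes MSBuild properties down to every solution being built. A sketch:

        <PropertyGroup>
          <!-- Forwarded to each solution's MSBuild invocation. -->
          <CustomPropertiesForBuild>DebugType=none;DebugSymbols=false</CustomPropertiesForBuild>
        </PropertyGroup>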

    Read the article

  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our data access layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings. To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .settings files... With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc.) was easy to achieve. The only problem is that the settings project (including the .settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio. So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions occurred to us, such as storing settings in our own XML format in a fixed directory on the server, but I would rather avoid having to re-create encryption for sensitive values and so on, and I would rather keep the solution self-contained if possible. EDIT: The original question did not contain the key reason that we can't (I think) use web.config... That puts a few (very good) answers out of context; my bad.
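
    One built-in middle ground: most configuration sections, connectionStrings included, accept a configSource attribute pointing at an external file, which can be swapped per environment without recompiling and still works with protected-configuration encryption. A sketch (file and key names are illustrative):

        <!-- web.config -->
        <connectionStrings configSource="ConnectionStrings.config" />

        <!-- ConnectionStrings.config, deployed per environment -->
        <connectionStrings>
          <add name="Main" connectionString="..." />
        </connectionStrings>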

    Read the article

  • Opening a Unicode file with Perl

    - by Jaco Pretorius
    I'm using osql to run several SQL scripts against a database, and then I need to look at the results file to check whether any errors occurred. The problem is that Perl doesn't seem to like the fact that the results files are Unicode. I wrote a little test script, and its output comes out all garbled:

        $file = shift;
        open OUTPUT, $file or die "Can't open $file: $!\n";
        while (<OUTPUT>) {
            print $_;
            if (/Invalid|invalid|Cannot|cannot/) {
                push(@invalids, $file);
                print "invalid file - $inputfile - schedule for retry\n";
                last;
            }
        }

    Any ideas? I've tried decoding with decode_utf8 but it makes no difference. I've also tried to set the encoding when opening the file. I think the problem might be that osql puts the result file in UTF-16 format, but I'm not sure; when I open the file in TextPad it just says 'Unicode'. Edit: Using perl v5.8.8
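
    If osql wrote the results as UTF-16 (its usual Unicode output), decode_utf8 won't help; opening the file with the right encoding layer should. A sketch: :encoding(UTF-16) honours the byte-order mark, and for BOM-less little-endian files UTF-16LE can be used instead:

        open my $fh, '<:encoding(UTF-16)', $file
            or die "Can't open $file: $!\n";
        while (<$fh>) {
            print;
            if (/invalid|cannot/i) {        # case-insensitive match
                push @invalids, $file;
                print "invalid file - $file - schedule for retry\n";
                last;
            }
        }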

    Read the article

  • Does SetFileBandwidthReservation affect memory-mapped file performance?

    - by Ghostrider
    Does this function affect memory-mapped file performance? Here's the problem I need to solve: I have two applications competing for disk access, a "reader" and an "updater". The whole system runs on Windows Server 2008 R2 x64. The "updater" constantly accesses the disk in a linear manner, updating data; the system is set up in such a way that the updater always has infinite data to update. Consider that it is constantly approximating the solution of a huge set of equations that takes up an entire 2 TB disk drive. The updater uses ReadFile and WriteFile to process data in a linear fashion. The "reader" is occasionally invoked by the user to get some pieces of data. Usually the user reads several 4 KB blocks from the drive and stops; occasionally up to 100 MB sequentially, and in exceptional cases up to several gigabytes. The reader maps files into memory to get the data it needs. What I would like to achieve is for the reader to have absolute priority, so that the updater would completely stop if needed and the reader could get the data the user wants ASAP. Is this problem solvable by using the SetPriorityClass and SetFileBandwidthReservation calls? I would really hate to put synchronization logic into the reader and updater, and would rather have the OS take care of priorities.
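
    As I read the API, SetFileBandwidthReservation guarantees a floor for the reserving handle rather than imposing a ceiling on anyone else, so it is an awkward fit here (worth verifying). An alternative on Vista/Server 2008 and later is to drop the updater into background mode so the kernel deprioritizes its I/O whenever the reader is active; a sketch:

        #include <windows.h>

        // In the updater: lowers CPU, page, and I/O priority for this
        // thread until explicitly ended.
        SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);
        // ... linear ReadFile/WriteFile pass ...
        SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);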

    Read the article

  • Changing where a resource is pulled during runtime?

    - by Brandon
    I have a website that goes out to multiple clients. Sometimes a client will insist on minor changes, and for reasons beyond my control I have to comply, no matter how minor the request. Usually this isn't a problem: I would just create a client-specific version of the user control or page and overwrite the default one at build time, or make a configuration setting to handle it. Now that I am localizing the site, I'm curious about the best way to make minor wording changes. Let's say I have a resource file called Resources.resx that has 300 resources in it. It has a resource called Continue: the English value is "Continue", the French value is "Continuez". Now one client, for whatever reason, wants it to say "Next" and "Après", and the others want to keep it the same. What is the best way to accommodate a request like this? (This is just a simple example.) The only two ways I can think of are to: 1. Create another Resources.resx specific to the client and replace the .dll at build time. Since I'd be completely replacing the dll, the new resource file would have to contain all 300 strings; the obvious problem is that I now have two resource files, each with 300 strings, to maintain. 2. Create a custom user control/page and change it to use a custom resource file; e.g., SignIn.ascx would be replaced during the build and would pull its resources from ClientName.resx instead of Resources.resx. Are there any other things I could try? Is there any way to make the application always look in a ClientResources.resx file for overridden values before actually looking at the specified resource file?
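
    A third option is to keep only the client's deltas in an override resource and fall back to the stock strings at lookup time, so nothing is duplicated; a sketch with hypothetical resource base names:

        using System.Resources;

        public static class Strings
        {
            // Client overrides first, stock strings second (names illustrative).
            static readonly ResourceManager overrides =
                new ResourceManager("MyApp.ClientResources", typeof(Strings).Assembly);
            static readonly ResourceManager defaults =
                new ResourceManager("MyApp.Resources", typeof(Strings).Assembly);

            public static string Get(string key)
            {
                // GetString returns null for a missing key, so the
                // fallback is a one-liner.
                return overrides.GetString(key) ?? defaults.GetString(key);
            }
        }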

    Read the article

  • With Maven2, how would I zip a directory in my resources?

    - by Benny
    I'm trying to upgrade my project from Maven 1 to Maven 2, but I'm having a problem with one of the goals. I have the following Maven 1 goal in my maven.xml file:

        <goal name="jar:init">
          <ant:delete file="${basedir}/src/installpack.zip"/>
          <ant:zip destfile="${basedir}/src/installpack.zip"
                   basedir="${basedir}/src/installpack" />
          <copy todir="${destination.location}">
            <fileset dir="${source.location}">
              <exclude name="installpack/**"/>
            </fileset>
          </copy>
        </goal>

    I'm unable to find a way to do this in Maven 2, however. Right now, the installpack directory is in the resources directory of my standard Maven 2 directory structure, which works well since it just gets copied over; I need it to be zipped, though. I found this page on creating Ant plugins: http://maven.apache.org/guides/plugin/guide-ant-plugin-development.html. It looks like I could create my own Ant plugin to do what I need, but I was wondering if there is a way to do it using only Maven 2. Any suggestions would be much appreciated. Thanks, B.J.
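
    The maven-antrun-plugin can run the same Ant <zip> task inside a Maven 2 build phase, which preserves the Maven 1 behaviour without writing a custom plugin; a sketch, bound here to generate-resources (the phase choice is an assumption):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <executions>
            <execution>
              <phase>generate-resources</phase>
              <goals><goal>run</goal></goals>
              <configuration>
                <tasks>
                  <delete file="${basedir}/src/installpack.zip"/>
                  <zip destfile="${basedir}/src/installpack.zip"
                       basedir="${basedir}/src/installpack"/>
                </tasks>
              </configuration>
            </execution>
          </executions>
        </plugin>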

    Read the article

  • File.Replace throwing IOException

    - by WebDevHobo
    I have an app that modifies images. In some cases this makes the file size smaller, in some cases bigger, and the program doesn't have an option to "not replace the file if the result has a bigger file size". So I wrote a little C# app to try to solve this. Instead of overwriting the files, I make the app write the result to a folder named Test under the current one. The C# app grabs the contents of both folders and puts the full path to the file(s) in two List objects; I then compare and replace. The replacing isn't working, however. I get the following IOException: "Unable to remove the file to be replaced". The location is an external hard drive, on which I have full rights. Now, I know I can just do File.Delete and File.Move in that order, but this exception has gotten me interested in why this particular setup won't work. Here's the source code: http://pastebin.com/4Vq82Umu And yes, the file specified as the last argument of the Replace function does exist.
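
    For the record, File.Replace is documented to require source and destination on the same volume, and transient locks from antivirus or the indexing service are a classic cause of this exact message on external drives (plausible causes here, not certainties). A hedged fallback that degrades to delete-and-move:

        try
        {
            File.Replace(source, destination, null);   // no backup file
        }
        catch (IOException)
        {
            // Not atomic, but immune to Replace's volume and lock quirks.
            File.Delete(destination);
            File.Move(source, destination);
        }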

    Read the article

  • Proper QUuid usage in Qt ? (7-Zip DLL usage problems (QLibrary, QUuid GUID conversion, interfaces))

    - by whipsnap
    Hi, I'm trying to write a program that uses the 7-Zip DLL for reading files from inside archive files (7z, zip, etc.). Here's where I am so far:

        #include <QtCore/QCoreApplication>
        #include <QLibrary>
        #include <QUuid>
        #include <iostream>
        using namespace std;

        #include "7z910/CPP/7zip/Archive/IArchive.h"
        #include "7z910/CPP/7zip/IStream.h"
        #include "MyCom.h"

        // {23170F69-40C1-278A-1000-000110070000}
        QUuid CLSID_CFormat7z(0x23170F69, 0x40C1, 0x278A, 0x10, 0x00,
                              0x00, 0x01, 0x10, 0x07, 0x00, 0x00);

        typedef int (*CreateObjectFunc)(
            const GUID *clsID,
            const GUID *interfaceID,
            void **outObject);

        void readFileInArchive()
        {
            QLibrary myLib("7z.dll");
            CreateObjectFunc myFunction =
                (CreateObjectFunc)myLib.resolve("CreateObject");
            if (myFunction == 0) {
                cout << "could not resolve CreateObject" << endl;
                return;
            }
            CMyComPtr<IOutArchive> outArchive;
            myFunction(&CLSID_CFormat7z, &IID_IOutArchive, (void **)&outArchive);
        }

        int main(int argc, char *argv[])
        {
            QCoreApplication a(argc, argv);
            readFileInArchive();
            return a.exec();
        }

    Trying to build this in Qt Creator leads to the following error: cannot convert 'QUuid*' to 'const GUID*' in argument passing. How should QUuid be used correctly in this context? Also, being a C++ and Qt newbie, I haven't yet quite grasped templates or interfaces, so overall I'm having trouble getting through these first steps. If someone could give tips or even example code on how, for example, an image file could be extracted from a ZIP file (to be shown in a Qt GUI later on), I would highly appreciate it. My main goal at the moment is to write a program with a GUI for selecting archive files containing image files (PNG, JPG, etc.) and displaying those files one at a time in the GUI. A Qt-based CDisplayEx, in short.
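
    On Windows builds Qt gives QUuid an implicit conversion to GUID (QUuid::operator GUID()), but that conversion yields a temporary value, not a GUID*, so taking the address of the QUuid itself is what the compiler rejects. Materializing a named GUID first is enough; a sketch:

        // The CreateObject signature wants const GUID*.
        const GUID clsid = CLSID_CFormat7z;   // implicit QUuid -> GUID
        myFunction(&clsid, &IID_IOutArchive, (void **)&outArchive);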

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large files (hundreds of MB or more) that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read( void* data_out, __int64 start_position,
                                   __int64 size_bytes ) const
        {
            if( !m_open ) {
                // return error
            } else {
                seekPosition( start_position );
                DWORD bytes_read;
                BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ),
                                        &bytes_read, NULL );
                if( size_bytes != bytes_read || result != TRUE ) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition( __int64 position ) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG( position );
            SetFilePointerEx( m_file, target, NULL, FILE_BEGIN );
        }

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file; some reads are coherent, most are not. A couple of questions: Is there a good way to profile the reads? What might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
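
    Two cheap experiments before deeper surgery: pass the cache manager an access-pattern hint when opening the file, and wrap ReadFile in QueryPerformanceCounter timings to separate seek cost from transfer cost. A sketch of the open call (FILE_FLAG_RANDOM_ACCESS suits mostly incoherent 4K reads; a sequential pass would prefer FILE_FLAG_SEQUENTIAL_SCAN; the path is illustrative):

        #include <windows.h>

        // Hint the cache manager about the access pattern at open time.
        HANDLE h = CreateFileW(L"D:\\data\\large.bin",   // hypothetical path
                               GENERIC_READ,
                               FILE_SHARE_READ,
                               NULL,
                               OPEN_EXISTING,
                               FILE_FLAG_RANDOM_ACCESS,
                               NULL);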

    Read the article

  • Problem with unlink() in php!

    - by Holicreature
    I'm creating a temp image, always named 1.png, under a specific folder. Once I've read the image contents and processed them, I use unlink() to delete that image from the folder. But sometimes the image file is not deleted, and the same image file is read and processed again. The script otherwise works fine, and there are no permission-related issues, since the files are deleted some of the time. Could there be a problem when the script is called repeatedly and an image with that name is already present and not yet deleted? Please suggest what the problem might be.

        extension_loaded('ffmpeg');
        $max_width = 120;
        $max_height = 72;
        $path = "/home/fff99/public_html/temp/";
        .....
        $nname = "/home/friend99/public_html/temp/".$imgname;
        $fileo = fopen($nname,"rb");
        if($fileo)
        {
            $imgData = addslashes(file_get_contents($nname));
            ....
            ...
            ..
        }
        unlink('$nname');
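
    One concrete bug stands out in the last line, assuming the posted code matches what actually runs: unlink('$nname') passes the literal six characters $nname, because single-quoted PHP strings don't interpolate variables, so the temp image itself is never the file being deleted (which would explain the stale image being re-read). The fix is the quoting:

        // Pass the variable, not a single-quoted literal, and check the result.
        if (!unlink($nname)) {
            error_log("unlink failed for $nname");
        }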

    Read the article

  • Any board-like platform but only for files instead of text posts? [on hold]

    - by Janwillhaus
    I am looking for a CMS platform (preferably open source or free, but commercially available options are fine too) that is built like a bulletin board (for example phpBB), but where registered users upload files instead of text posts; the files can then be rated, commented on, and so on. I am aware of the number of board CMSes that have file-database plugins, but that is not what I want: I want a system that focuses on the files rather than on the postings. Or do you have any alternative ideas on solutions to the problem? I need the following functions:

    - a CMS focused on file management
    - files that can be categorized (in a tree-like view, for example)
    - comments and ratings that can be added to files
    - users that can be granted various rights around the platform (moderation, commenting, exclusion of non-registered users, etc.)
    - users that can upload files themselves (for further moderation)

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >