Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

Page 230 of 1640

  • IIS not serving up .dat files.

    - by Stu
    Hi all, I have an ASP.NET MVC web application that uses a plugin to load images and points for a 3D application. When debugging with the Visual Studio development server, the images and the points are served up great... http://i148.photobucket.com/albums/s19/littleniv/Debugging/local.png Second image: same URL but iis.png. When running in IIS 7, though, the .dat point files do not get served and produce a 404. I've noticed the caching is marked as private in Fiddler, but I don't know what this means. Can anyone help? Cheers, Stu

    Read the article

  • Newbie Question: Read and Process a List of Text Files

    - by johnv
    I'm completely new to .NET and am trying, as a first step, to write a text-processing program. The task is simple: I have 10,000 text files stored in one folder, and I want to read each one, store it as a string variable, run it through a series of functions, then save the final output to another folder. So far I can only manage to input the file path manually, like this (in VB.NET):

        Dim tRead As System.IO.StreamReader

        Public Function ReadFile() As String
            Dim EntireFile As String
            tRead = File.OpenText("c:\textexample\00001.txt")
            EntireFile = tRead.ReadToEnd
            Return EntireFile
        End Function

        Public Function Step1()
            .....
        End Function

        Public Function Step2()
            .....
        End Function

        ..............

    I'm wondering, therefore, if there's a way to automate this process. Perhaps, for example, store all input file paths in a text file, read each entry in turn, then save the final output to the output path, again listed in a text file. Any help is greatly appreciated.
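
    A minimal sketch of the directory-iteration idea, shown in Python for brevity (the VB.NET equivalent would loop over System.IO.Directory.GetFiles("c:\textexample", "*.txt") and use File.ReadAllText / File.WriteAllText); the output folder name is a placeholder:

        from pathlib import Path

        SRC = Path("c:/textexample")   # input folder from the question
        DST = Path("c:/textoutput")    # placeholder output folder
        DST.mkdir(exist_ok=True)

        def process(text):
            # stand-in for Step1, Step2, ... from the question
            return text

        for path in sorted(SRC.glob("*.txt")):
            result = process(path.read_text())
            (DST / path.name).write_text(result)

    The same pattern also covers the "list of paths in a text file" variant: read the list with File.ReadAllLines (or Python's readlines) and loop over it instead of over the directory listing.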

    Read the article

  • Logical Python question - handling directories and the files in them

    - by Konstantin
    Hello! I'm using this function to extract files from a .zip archive and store them on the server:

        def unzip_file_into_dir(file, dir):
            import sys, zipfile, os, os.path
            os.makedirs(dir, 0777)
            zfobj = zipfile.ZipFile(file)
            for name in zfobj.namelist():
                if name.endswith('/'):
                    os.mkdir(os.path.join(dir, name))
                else:
                    outfile = open(os.path.join(dir, name), 'wb')
                    outfile.write(zfobj.read(name))
                    outfile.close()

    And the usage:

        unzip_file_into_dir('/var/zips/somearchive.zip', '/var/www/extracted_zip')

    somearchive.zip has this structure:

        somearchive.zip
            1.jpeg
            2.jpeg
            another.jpeg

    or, sometimes, this one:

        somearchive.zip
            somedir/
                1.jpeg
                2.jpeg
                another.jpeg

    Question is: how do I modify my function so that my extracted_zip directory always contains just the images, not images in a subdirectory, even if the images are stored in somedir inside the archive?
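
    One way to do this is to skip directory entries entirely and strip the path from every file member before writing it out. A minimal sketch (assuming, as in the question's examples, that no two members share the same base name):

        import os
        import zipfile

        def unzip_images_flat(archive_path, dest_dir):
            # Extract every file member directly into dest_dir,
            # ignoring any directories inside the archive.
            os.makedirs(dest_dir)
            zf = zipfile.ZipFile(archive_path)
            for name in zf.namelist():
                if name.endswith('/'):
                    continue  # skip directory entries such as somedir/
                flat_name = os.path.basename(name)  # 'somedir/1.jpeg' -> '1.jpeg'
                outfile = open(os.path.join(dest_dir, flat_name), 'wb')
                outfile.write(zf.read(name))
                outfile.close()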

    Read the article

  • Delphi: fastest algorithm to search files and directories

    - by radu-barbu
    Hi, I'm using Delphi 7 and I need a solution to a big problem. Can someone suggest a faster way to search through files and folders than using FindFirst and FindNext? I also process the data for each file/folder (creation date, author, size, etc.) and it takes a lot of time... I've searched a lot in the WinAPI but probably haven't seen the best function to accomplish this. All the Delphi examples I've found use FindFirst and FindNext... Also, I don't want to buy components or use free ones... Thanks in advance!

    Read the article

  • How to open many files simultaneously for reading in C

    - by monkeyking
    I'm trying to port some of my C++ code to C. I have the following construct:

        class reader {
        private:
            FILE *fp;
            alot_of_data data;  // updated by the read_until() method
        public:
            reader(const char *filename)
            read_until(some conditional dependent on the contents of the file, and the arg supplied)
        }

    I'm then instantiating hundreds of these objects and iterating over them, calling read_until() several times for each file until all files are at EOF. I'm failing to see any clever way to do this in C; the only solution I can come up with is making an array of FILE pointers, and doing the same with all the private member data from my class. But this seems very messy. Can I implement the functionality of my class with a function pointer, or anything better? I think I'm missing a fundamental design pattern. The files are way too big to have all in memory, so reading everything from every file is not feasible. Thanks

    Read the article

  • How to add multiple files to py2app?

    - by Niek de Klein
    I have a Python script which makes a GUI. When the 'Run' button is pressed in this GUI, it runs a function from an imported package (which I made) like this:

        from predictmiP import predictor

        class MiPFrame(wx.Frame):
            [...]
            def runmiP(self, event):
                predictor.runPrediction(self.uploadProtInterestField.GetValue(),
                                        self.uploadAllProteinsField.GetValue(),
                                        self.uploadPfamTextField.GetValue(),
                                        self.edit_eval_all.Value, self.edit_eval_small.Value,
                                        self.saveOutputField)

    When I run the GUI directly from Python it all works well and the program writes an output file. However, when I make it into an app, the GUI starts, but when I press the button nothing happens. predictmiP does get included in build/bdist.macosx-10.3-fat/python2.7-standalone/app/collect/, like all the other imports I'm using (although it is empty, but that's the same as all the other imports I have). How can I get multiple Python files, or an imported package, to work with py2app?
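
    One thing worth checking is whether the package is listed explicitly in the py2app options, so that all of its modules (not just the top-level import) get bundled. A minimal setup.py sketch; the script name is a placeholder, and 'packages'/'includes' are standard py2app options:

        from setuptools import setup

        setup(
            app=['mip_gui.py'],  # placeholder: the script that builds the wx GUI
            options={
                'py2app': {
                    'packages': ['predictmiP'],  # bundle the whole package, submodules included
                    'includes': ['wx'],
                }
            },
            setup_requires=['py2app'],
        )

    Running the built app's executable directly from a terminal (the binary inside dist/<AppName>.app/Contents/MacOS/) also surfaces the ImportError or traceback that is silently swallowed when the button appears to do nothing.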

    Read the article

  • Edit PDF files dynamically from Flash or Flex

    - by TandemAdam
    I am planning to do a CD-ROM in either Flash or Flex, possibly using the Adobe AIR runtime. This interactive CD will have a bunch of forms on it for the user to fill out. After they fill in a form, they will have the option of saving or printing a PDF that is based on their information. I am trying to find a way of editing the content of the PDF from Flash, so when the user fills out the form, the application will fill in the PDF with their details from the form fields. Is this possible? It would be great if there were some way of having template PDFs (either on the CD as their own files, or in a Flash library) that Flash could then fill in the specific fields of. Can Adobe AIR help me in any way here?

    Read the article

  • How to securely serve S3 files to a blog

    - by Hugo Palma
    I'm starting a blog and I'm in the process of choosing where to host it. For now I want a free solution like Blogger or WordPress.com. The problem I'm facing is that I want to use files I have in an S3 bucket on my blog, but none of the blog solutions I found supports any kind of server code, which means that in order to use S3 query string authentication I would have to put vulnerable information in the client. For obvious reasons I don't want to do that. So I'm looking for ideas on how I can safely include content from S3 in a free blog host.
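
    One common approach that needs no server code at publish time is to pre-generate time-limited signed URLs offline and paste those into the post, so the secret key never reaches the client. A minimal sketch using today's boto3 (an assumption - any S3 client that supports query-string authentication works the same way; bucket and key names are placeholders):

        import boto3

        # Credentials stay on the machine that generates the URL, never in the blog page.
        s3 = boto3.client("s3")

        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-blog-assets", "Key": "images/diagram.png"},
            ExpiresIn=7 * 24 * 3600,  # the link stays valid for one week
        )
        print(url)  # paste this URL into the blog post

    The trade-off is expiry: links eventually stop working and must be regenerated, so for content that should stay up indefinitely the alternative is simply making those objects public-read and linking to them directly.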

    Read the article

  • More efficient way to find & tar millions of files

    - by Stu Thompson
    I've had a job running at the command line prompt on my server for two days now:

        find data/ -name filepattern-*2009* -exec tar uf 2008.tar {} \;

    It is taking forever, and then some. Yes, there are millions of files in the target directory. But just running...

        find data/ -name filepattern-*2009* -print > filesOfInterest.txt

    ...takes only two hours or so. At the rate my job is running, it won't be finished for a couple of weeks. That seems unreasonable. Is there a more efficient way to do this? Maybe with a more complicated bash script? A secondary question is: why is my current approach so slow?
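
    The usual culprit is that -exec ... \; spawns one tar process per matching file, and tar's update mode rescans the growing archive each time. A hedged Python sketch of the batch idea, walking the tree once and appending every match to a single open archive (pattern and names copied from the question):

        import fnmatch
        import os
        import tarfile

        # Open the archive once instead of launching tar for every file.
        with tarfile.open("2008.tar", "a") as archive:
            for root, _dirs, files in os.walk("data"):
                for name in files:
                    if fnmatch.fnmatch(name, "filepattern-*2009*"):
                        archive.add(os.path.join(root, name))

    The shell equivalent of the same idea is to hand one tar invocation a list of files (for example, find ... -print0 piped to xargs -0 tar) rather than one invocation per file.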

    Read the article

  • Performance testing on .xap files...

    - by Radhi
    Hi All, I want to know whether I can use a profiler to do performance testing of .xap files. If you have any articles on this topic, please point me to them, and if there are any other tools available to do this, please tell me. In my project we have to check why, when we log into the Silverlight 4.0 application, the screen takes 5 seconds to load, so I have to find out which method is taking the time. In our project there are services which call other services too, and we have used CAL, so I need to identify the bottleneck... please help...

    Read the article

  • Programmatically changing code files

    - by Carra
    I'm changing our web services to an async model, and for that I have to change over a hundred methods. Doing it manually is one (unappealing) option. Is there no way to programmatically parse and change multiple functions/code files? Example:

        [WebMethod]
        public void MyWebservice(string parameter1, string parameter2, string parameter3)
        {
            // Logic here
        }

    And change this to:

        public void InternalMyWebservice(string parameter1, string parameter2, string parameter3, AsyncCallback callback)
        {
            // Logic here
        }

        [WebMethod]
        public void BeginMyWebservice(string parameter1, string parameter2, string parameter3, AsyncCallback callback, object asyncState)
        {
            // Queue InternalMyWebservice in a thread pool
        }

        public void EndMyWebservice(IAsyncResult asyncResult)
        {
            // Set return values
        }

    It's basically the same thing I have to do for each web service: change the name to "InternalX", add a parameter and create the begin and end methods.
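
    For a mechanical transformation like this, one lightweight option is a throwaway script that finds each [WebMethod] declaration and emits the three wrapper skeletons from a template. A hedged Python sketch; the regex, the folder name and the assumption that every target method is a plain "public void Name(args)" are illustrative and would need tuning (or a real C# parser) before trusting it on production code:

        import re
        from pathlib import Path

        # Illustrative pattern: "[WebMethod]" followed by "public void Name(args)".
        DECL = re.compile(r"\[WebMethod\]\s*public\s+void\s+(\w+)\s*\(([^)]*)\)", re.S)

        TEMPLATE = """\
        public void Internal{name}({args}, AsyncCallback callback)
        {{
            // move the original body here
        }}

        [WebMethod]
        public void Begin{name}({args}, AsyncCallback callback, object asyncState)
        {{
            // queue Internal{name} on a thread pool
        }}

        public void End{name}(IAsyncResult asyncResult)
        {{
            // set return values
        }}
        """

        for path in Path("WebServices").rglob("*.cs"):  # hypothetical source folder
            for name, args in DECL.findall(path.read_text()):
                print(TEMPLATE.format(name=name, args=args))

    Printing skeletons and pasting them back by hand is safer than rewriting the files in place when the signatures are not perfectly uniform.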

    Read the article

  • Renaming files: Visual Studio vs Version control

    - by Benjol
    The problem with renaming files is that if you want to take advantage of Visual Studio refactoring, you really need to do it from inside Visual Studio. But most (not all*) version control systems also want to be the ones doing the renaming. One solution is to use integrated source control, but this is not always available, and in some cases is pretty clunky. I'd personally be more comfortable using source control separately, outside of Visual Studio, but I'm not sure how to manage this question of file renames. So, for those of you who use Visual Studio, which source control do you use? Do you use a VS integration (which one?), and otherwise, how do you resolve this renaming problem? (* Git is smart enough to work it out for itself)

    Read the article

  • Adobe Reader 9.0 memory leak while loading and unloading PDF files every second, indefinitely

    - by Total Starnger
    I have a C++, MFC-based application that has a PDF object viewer as part of the implementation. The whole thing works just fine with Adobe Reader 8.0. Once I switched to Adobe Reader 9.0 as the default PDF reader, I keep experiencing a small memory leak that forces my application to crash within half an hour of continuously loading and unloading different PDF files. Any ideas what might cause this memory leak, and is there any cure besides replacing Adobe Reader 9.0 with something else? (It works fine with the Foxit PDF reader as well, by the way.)

    Read the article

  • Consolidating files in a single directory before you link them into the final executable

    - by David
    I am working on Solaris 10, Sun Studio 11. I am refactoring some old code and trying to write unit tests for it. My makefile looks like:

        my_model.o: my_model.cc
                CC -c my_model.cc -I/../../include -library=stlport4 -instances=extern

        unit_test: unit_test.o my_model.o symbol_dictionary.o
                CC -o unit_test unit_test.o my_model.o symbol_dictionary.o -I../../include \
                   -library=stlport4 -instances=extern

        unit_test.o: unit_test.cc
                CC -c unit_test.cc -I/../../include -library=stlport4 -instances=extern

        symbol_dictionary.o:
                cd ../../test-fixtures && $(MAKE) symbol_dictionary.o
                mv ../../test-fixtures/symbol_dictionary.o .

    In the ../../test-fixtures makefile, I have the following target:

        symbol_dictionary.o:
                CC -c symbol_dictionary.cc -I/../../include -library=stlport4 -instances=extern

    I do the -instances=extern because I had linking problems before, and this was the recommended solution. The consequence is that in each directory being compiled, a SunWS_Cache directory is created to store the template instances. This is the long way of getting to the question: is it a standard practice to consolidate object files in a single directory before you link them?

    Read the article

  • Remote stream multiple files in SOLR

    - by Mark
    I want to use SOLR's remote-streaming facility to extract and index the content of files. This works fine if I pass stream.file=xxx as a parameter to an HTTP GET. However, I have a lot of these files and want to batch them up (i.e. not have to issue a GET per file). Is there a way I can do this in SOLR? For example, I'd like to be able to POST some XML like this:

        <add>
          <doc stream_file="filename">
            <field name="id">123</field>
          </doc>
          <doc>...

    Read the article

  • C#.NET: FTP Several Files

    - by Steve Kiss
    Hi, I have what would seem like a common problem, but I cannot find an appropriate solution on any forum. I need to FTP an entire directory structure using .NET. I have found several code examples, all of which show how you can FTP a single file by creating an FtpWebRequest object. Unfortunately, there is no information on how to deal with several files. Do I simply create an FtpWebRequest object for every single file? Any help would be appreciated. Thanks
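
    As far as I know, that is essentially the answer on the .NET side: FtpWebRequest represents a single FTP operation, so a directory upload becomes a loop that creates each remote directory and then issues one upload request per file. Purely to illustrate that walk-and-upload loop, a minimal Python ftplib sketch (host, credentials and paths are placeholders):

        import os
        from ftplib import FTP

        LOCAL_ROOT = "C:/data/site"    # placeholder local directory
        REMOTE_ROOT = "/upload"        # placeholder remote directory

        ftp = FTP("ftp.example.com")   # placeholder host
        ftp.login("user", "password")  # placeholder credentials

        for dirpath, _dirnames, filenames in os.walk(LOCAL_ROOT):
            rel = os.path.relpath(dirpath, LOCAL_ROOT).replace(os.sep, "/")
            remote_dir = REMOTE_ROOT if rel == "." else REMOTE_ROOT + "/" + rel
            try:
                ftp.mkd(remote_dir)    # create the directory; ignore "already exists" errors
            except Exception:
                pass
            for name in filenames:
                f = open(os.path.join(dirpath, name), "rb")
                ftp.storbinary("STOR " + remote_dir + "/" + name, f)
                f.close()

        ftp.quit()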

    Read the article

  • Apache server-side file caching via .htaccess?

    - by purpler
    Hi, I'm starting a new website that will include several JS libraries, and I'd like to know what an .htaccess template should look like with caching of media and JS files turned on. What's better for compression, gzip or deflate? Is it a better/faster solution to serve those JS libraries off the Google CDN rather than locally? I'm asking the CDN question because some of the scripts served off the Google CDN may get updated and eventually break the website layout, so I thought it would be better to host them locally and cache them via the web server, provided that works at the same or nearly the same speed.

    Read the article

  • Same HTML file for different topics in help files

    - by Gpoy Gwapo
    I have a problem with the help files. For example, I have an HTML file named search.html, and there are two topics in my table of contents, namely Name Search and Account Search. The same search.html is used for both topics. Now, I want to open the help file and point it to Name Search, but instead the tree-view item that is opened in the table of contents is the one for Account Search. How can I set it so that Name Search is the one selected in the table of contents?

    Read the article

  • Use multiple CSS files or a single file organised with comments?

    - by David
    Hi, what is regarded as the best approach to organising CSS? At the moment I am using a single link in the head of my XHTML documents, as follows:

        <link rel="stylesheet" type="text/css" href="style/imports.css" />

    In this file I'm importing several different CSS files, i.e. reset.css, structure.css, skin.css. I know there is an overhead in doing this, as each import requires an extra trip to the server, but it makes things much more logical and organised in my opinion. Does anyone have an opinion on how best to organise their CSS? Would it be better to put all these separate CSS files into one single file? Also, is it best practice to minify CSS?

    Read the article

  • No such directory when committing files using CVS from Eclipse

    - by Peter
    Hi all, I am using Eclipse solely as a CVS client. Mostly it works very well, but once in a while, when I right-click a file to commit it (a file that Eclipse itself says is changed and ready for commit), it tells me "cvs [server aborted]: no such directory `...'". But the directory DOES exist?! If I navigate to the folder using TortoiseCVS and right-click to commit, it works fine (this is the way I finally have to get those files committed). Has anyone experienced this as well, and more importantly - has anyone solved it? :) I am running Vista on the client PC and the CVS server is Windows Server 2003 (please don't tell me to just switch to Linux - that is not much help). Thank you very much for your help!

    Read the article

  • PHP readdir() not returning files in alphabetical order

    - by Buggabill
    I am reading through a directory with some pictures and such, using a pretty simple implementation of readdir() like the following:

        if ($handle = opendir($path)) {
            while (false !== ($szFilename = readdir($handle))) {
                if ($szFilename[0] !== '.') {
                    if (is_file($path.$szFilename)) {
                        // do stuff
                    }
                }
            }
        }

    The problem I am having is that the files are not being read in alphabetical order. The docs for readdir() state:

        Returns the filename of the next file from the directory. The filenames are returned in the order in which they are stored by the filesystem.

    Another weird thing is that, on the local testing server, the same code works great. This is running on a LAMP stack in both cases. I know that I can build an array and just sort it, but I was wondering if I was missing something in what I was doing. Thanks for any insight!

    Read the article

  • How to use files/streams as source/sink in PulseAudio

    - by Nilesh
    I'm a PulseAudio noob, and I'm not sure if I'm even using the correct terminology. I've seen that PulseAudio can perform echo cancellation, but it needs a source and a sink to filter from, and a new source and sink. I can provide my mic and my audio-out as the source and sink, right? Now, here's my situation: I have two video streams, say rtmp streams, or consider two flv files. Say that at any given moment, stream X is the input stream coming from another computer's webcam+mic, and stream Y is the output stream that I'm sending (coming from my computer's webcam+mic). Question: back to the first paragraph - I don't want to use my mic and my audio-out. Instead, I want to use these two "input" and "output" streams as my source and sink, so to speak (of course, I'll maybe use Xuggler to extract just the audio from X and Y). It may be a strange question, and I have my reasons for doing this strange thing - I need to experiment and verify the results to see.

    Read the article

  • Android: Does decreasing the size of .png files have any effect on the resulting Bitmap in memory?

    - by nahab
    I'm writing a game with a large number of .png pictures. All worked fine. Then I added a new activity with a WebView and got a memory shortage. After that I ran an experiment: I replaced the game's .png images with ones just filled with a solid colour. As a result, the memory shortage was gone. But I would suppose that a Bitmap internally holds each pixel separately, so such a change should have had no effect. Maybe this is because the initial images have an alpha channel and my test images do not? But the actual question is: will decreasing the .png file sizes have any effect on the application's VM heap usage, or not?

    Read the article

  • Comparing two XML files

    - by Ragini
    I have two large XML files, almost 1.4 MB each. I want to compare them and see the differing parts. I am using Linux. Is there any free tool which can do this for me? Or any other technique? I used the "diff" command in Linux and tried to output the result to another file (diff file1.xml file2.xml result.xml). But the resulting file showed "Could not parse the xml". However, it showed something on screen. I would like the differing part to be stored somewhere if possible (or at least I should be able to see it properly). Thanks Ragini
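
    Part of the trouble is that diff only takes two file operands; a third is not treated as an output file, so the result needs to be redirected instead (diff file1.xml file2.xml > result.txt). If a scripted, line-by-line report is wanted, a minimal Python difflib sketch (input file names taken from the question, output name is a placeholder):

        import difflib

        with open("file1.xml") as a, open("file2.xml") as b:
            left, right = a.readlines(), b.readlines()

        # Write only the differing lines, with +/- markers, to a separate file.
        with open("result.txt", "w") as out:
            out.writelines(difflib.unified_diff(left, right,
                                                fromfile="file1.xml", tofile="file2.xml"))

    Note that this is still a textual comparison; an XML-aware diff tool is needed if attribute order or whitespace differences should be ignored.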

    Read the article

  • Import a number of CSV files and replace NA with zeros

    - by tao.hong
    I know how to do this for files individually. However, I have more than 1000 files, so I decided to use a for loop. It seems, though, that I have not found the correct way to evaluate my variables. Here is my code:

        setwd('C:/data')
        filenames = dir()  # find file names
        for (i in filenames) {
            adt = substr(x = i, start = 1, stop = nchar(i) - 4)
            name = paste("data_", adt, sep = "")
            # read each file and assign it to a variable name starting with data_
            assign(name, read.csv(i, header = T, sep = ","))
            # here is the place I have a problem: R will not treat 'name' as a
            # variable whose value changes in each iteration
            func = paste('name[is.na(name)] <- 0', sep = "")
            eval((text = func))
        }
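
    For comparison only (the question is R, where the usual fix is to keep the data frames in a named list and index it by the generated name rather than building variable names with assign/eval), here is the same batch pattern sketched in Python with pandas, replacing NA with zeros as each file is read:

        from pathlib import Path
        import pandas as pd

        frames = {}
        for path in Path("C:/data").glob("*.csv"):
            # Key each frame by file name instead of generating variable names dynamically.
            frames["data_" + path.stem] = pd.read_csv(path).fillna(0)

    Keeping the frames in a dictionary (or, in R, a named list) sidesteps the eval/assign problem entirely.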

    Read the article
