Search Results

Search found 80596 results on 3224 pages for 'simplexml load file'.

Page 247 of 3224

  • shell_exec() Doesn't Show The Output

    - by Nathan Campos
    I'm building a PHP site that uses shell_exec() like this:

        $file = "upload/" . $_FILES["file"]["name"];
        $output = shell_exec("leaf $file");
        echo "<pre>$output</pre>";

    Here leaf is a program located in the same directory as my script, but when I run the script on the server I get no output at all. What is wrong?

    Read the article

  • Writing multiple NSData to File

    - by user326943
    I need a hint on how to write multiple NSData chunks to a single file. I am downloading a file in chunks using NSURLConnection, and each chunk is downloaded in a separate NSOperation thread. As the chunks finish downloading they need to be written to a file so that the combined result is the complete downloaded file. What would be the best way to manage the NSData that is returned and write it to a single file?

    Read the article

  • How to change filetype association in the registry?

    - by Sergio Tapia
    Hi there, first time posting on Stack Overflow. :D I need my software to add a couple of things to the registry. My program will use Process.Start(@"blblabla.smc"); to launch a file, but the problem is that the user most likely will not have a program set as the default application for that particular file extension. How can I add file associations to the Windows Registry?

    Read the article

  • Test if file exists

    - by klaus-vlad
    Hi, I'm trying to open a file in Android like this:

        try {
            FileInputStream fIn = context.openFileInput(FILE);
            DataInputStream in = new DataInputStream(fIn);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));
            if (in != null) in.close();
        } catch (Exception e) {
        }

    But in case the file does not exist, a FileNotFoundException is thrown. I'd like to know how I could test whether the file exists before attempting to open it.

    Read the article

  • Rails: Accessing /lib Modules from Controller

    - by Dex
    I have a module in /lib/string_parser.rb. It looks like:

        module StringParser
          def wrap_lines(input, chars)
            ...
          end

          # make available to views
          def self.included(base)
            base.send :helper_method, :my_method_for_views if base.respond_to? :helper_method
          end
        end

    I'm trying to call wrap_lines from the create method of my controller, but no matter what I do I keep getting a NoMethodError for an undefined method.

    Read the article

  • NSUserDefaults: Saved Number Always 0, iPhone

    - by Stumf
    Hello all, I have looked at other answers and the docs. Maybe I am missing something, or maybe I have another issue. I am trying to save a number when the app exits, and then when the app is loaded I want to check if this value exists and take action accordingly. This is what I have tried.

    To save on exiting:

        - (void)applicationWillTerminate:(UIApplication *)application {
            double save = [label.text doubleValue];
            [[NSUserDefaults standardUserDefaults] setDouble:save forKey:@"savedNumber"];
            [[NSUserDefaults standardUserDefaults] synchronize];
        }

    To check:

        - (IBAction)buttonclickSkip {
            double save = [[NSUserDefaults standardUserDefaults] doubleForKey:@"savedNumber"];
            if (save == 0) {
                [self performSelector:@selector(displayAlert) withObject:nil];
                test.enabled = YES;
                test.alpha = 1.0;
                skip.enabled = NO;
                skip.alpha = 0.0;
            } else {
                label.text = [NSString stringWithFormat:@"%.1f %%", save];
            }
        }

    The problem is that my alert message is always displayed and the saved value is never put into the label, so somehow == 0 is always true. If it makes any difference, I am testing this on the iPhone simulator. Many thanks, Stu

    Read the article

  • Minify an Entire Directory While Keeping Element/Style/Script Relationships?

    - by Jonathan Sampson
    Do any solutions currently exist that can minify an entire project directory? More importantly, do any solutions exist that can shorten class names and ids and keep them consistent throughout all documents? Something that can turn this:

        Index.html
        ---
        <div class="fooBar">
          <!-- Code -->
        </div>

        Styles.css
        ---
        .fooBar {
          // Comments and Messages
          background-color:#000000;
        }

        Index.js
        ---
        $(".fooBar").click(function(){
          /* More Comments */
          alert("fooBar");
        });

    into this:

        Index.html
        ---
        <div class="a"></div>

        Styles.css
        ---
        .a{background-color:#000;}

        Index.js
        ---
        $(".a").click(function(){alert("fooBar");});

    Read the article

  • Bulkloading schema less entities on Google App Engine

    - by Rahul
    The new bulkloader added in SDK 1.3.4 works great for models that have a schema. For models inheriting from db.Expando (or with loosely defined schemas), I would like to understand what I would have to do to bulk upload them. I defined a custom connector that implements the ConnectorInterface and converts my data to the required Python dict. How can I use this dict to define entities that get uploaded to the datastore? In the documentation there seems to be a post_import_function that can be used to return the entities that get uploaded. Is there an example of how this function is used?
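
    For illustration only, a heavily hedged sketch of what such a post_import_function might look like, assuming the SDK 1.3.4 bulkloader conventions (the function is named in bulkloader.yaml, receives the connector's dict, and the entity it returns is what gets saved); the module name and kind name here are hypothetical:

        # bulkloader_helpers.py -- hypothetical module referenced from bulkloader.yaml
        from google.appengine.api import datastore

        def post_import_function(input_dict, instance, bulkload_state):
            # Ignore the model-based instance and build a schema-less
            # datastore.Entity directly from the connector's dict.
            entity = datastore.Entity('MyExpandoKind')  # hypothetical kind name
            for name, value in input_dict.items():
                entity[name] = value
            return entity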

    Read the article

  • SQL Server 2008 BULK INSERT causes more reads than writes. Why?

    - by sh1ng
    I have a huge table (a few billion rows) with a clustered index and two non-clustered indexes. A BULK INSERT operation produces 112000 reads and only 383 writes (duration 19948 ms). This is very confusing to me. Why do reads exceed writes? How can I reduce the reads?

    Update - the query:

        insert bulk DenormalizedPrice4 ([DP_ID] BigInt, [DP_CountryID] Int, [DP_OperatorID] SmallInt,
            [DP_OperatorPriceID] BigInt, [DP_SpoID] Int, [DP_TourTypeID] Int, [DP_CheckinDate] Date,
            [DP_CurrencyID] SmallInt, [DP_Cost] Decimal(9,2), [DP_FirstCityID] Int, [DP_FirstHotelID] Int,
            [DP_FirstBuildingID] Int, [DP_FirstHotelGlobalStarID] Int, [DP_FirstHotelGlobalMealID] Int,
            [DP_FirstHotelAccommodationTypeID] Int, [DP_FirstHotelRoomCategoryID] Int,
            [DP_FirstHotelRoomTypeID] Int, [DP_Days] TinyInt, [DP_Nights] TinyInt, [DP_ChildrenCount] TinyInt,
            [DP_AdultsCount] TinyInt, [DP_TariffID] Int, [DP_DepartureCityID] Int, [DP_DateCreated] SmallDateTime,
            [DP_DateDenormalized] SmallDateTime, [DP_IsHide] Bit, [DP_FirstHotelAccommodationID] Int)
            with (CHECK_CONSTRAINTS)

    No triggers and no foreign keys. The clustered index is on DP_ID, plus two non-unique indexes (with fillfactor = 90%). One more thing: the DB is stored on RAID 50 with a 256K stripe size.

    Read the article

  • File Upload via PHP and AntiVirus in Linux?

    - by wag2639
    I was wondering, if I were making a file or image hosting/transfer site, whether or not there is a good approach to checking the files that users upload for viruses? I was thinking of this:

      1. Use a traditional PHP file upload form to upload the file to the server.
      2. Put the files in a queue folder.
      3. Move the queue folder to a "process" folder, and replace the queue folder after a predetermined limit (time, cron job, file count, collective file size).
      4. Run a command-line virus scan on the files in the process folder.
      5. Place safe files in a holding area for use.

    Is this a good approach?
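
    For illustration, a minimal sketch of the scan-and-sort step (step 4 above), written in Python rather than PHP and assuming ClamAV's clamscan is installed (it exits 0 for a clean file and 1 when a virus is found); all folder paths are hypothetical:

        import shutil
        import subprocess
        from pathlib import Path

        PROCESS_DIR = Path("/var/uploads/process")       # hypothetical folders
        SAFE_DIR = Path("/var/uploads/safe")
        QUARANTINE_DIR = Path("/var/uploads/quarantine")

        def scan_process_folder():
            for f in PROCESS_DIR.iterdir():
                if not f.is_file():
                    continue
                # clamscan returns 0 when the file is clean, 1 when infected
                result = subprocess.run(["clamscan", "--no-summary", str(f)])
                dest = SAFE_DIR if result.returncode == 0 else QUARANTINE_DIR
                shutil.move(str(f), str(dest / f.name))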

    Read the article

  • Signable, streamable, "readable" archive format?

    - by alexvoda
    Is there any archive format that offers the following:

      1. Be digitally signable with a certificate from a trusted source like Verisign - to prevent changes to the file (I am not referring to read-only; rather, if the file was changed it should no longer be signed, telling the user this is not the original file).
      2. Be streamable - able to be opened even if not all of the content has been transferred (and not strictly linearly).
      3. Be "readable" - able to read the data without extracting to a temporary folder (AFAIK if you open a file in a zip archive it is extracted first, and this stays true even for zip-based formats like OOXML. This is not what I want).
      4. Be portable - support on at least Windows, Linux and Mac OS X is a must, or at least future support.
      5. Be free of patents.
      6. Be open source - preferably with a license that allows commercial use (as far as I know the GPL is a share-alike license so it doesn't allow commercial use, while BSD on the other hand allows it).

    Note: though it may come in handy eventually, I cannot think right now of a scenario that would require both point 1 and point 2 simultaneously. Or let's leave it at: be able to check the signature only once the whole file has been downloaded.

    I am not interested in:

      - being able to be compressed
      - being supported on legacy systems

    Does any existing archive format fit this description (tar evolutions like DAR and pax come to mind)? If there is, are there programming libraries available for the above mentioned OSs? If not, would it be hard to create such a thing?

    EDIT: clarified point 5
    EDIT 2: added a note to clarify points 1 and 2
    P.S.: This is my first question on StackOverflow

    Read the article

  • Apache and PHP write permission?

    - by thedp
    Hello, I have a PHP script that writes to a file, but when I try to actually write to the file I get "permission denied". How can I tell which user name I need to add to the file's permissions so that PHP can write to it? Thank you.

    Read the article

  • How to navigate through folders without opening a new buffer in Emacs?

    - by Vivi
    (I don't know if it is important, but I am using Aquamacs - and yes, I am a newbie.) When I want to open a file I use C-x C-f and usually just choose my home folder. From there I go clicking through folders until I find the file I want. I really don't like, though, that each new file I open creates a new buffer. This is especially annoying because I am using Aquamacs with the window (tab) option enabled. I know I could simply get rid of the window option, but I quite like it, except of course when I open a file and a zillion tabs are created. So my question is: is there a way to disable the new buffer creation when "navigating" through folders? Thanks :)

    Read the article

  • Automatically place downloaded files in a folder based on the download's extension suffix

    - by Bart van Tuÿl
    I'm making a simple web browser for work. What I'd like to know is whether it is possible to save a file of a particular extension to a particular folder. I currently use Google Chrome; when downloading a file it places it (regardless of extension) in a Downloads folder without asking where I want to download it to. I want to achieve the same, except that downloads with the extension .dwg are placed automatically in a folder named DWG DOWNLOADS. Does anybody know how to achieve this in VB.NET?

    Read the article

  • Creating your own UTI for an iOS app.

    - by Kalle
    Hello all, The app I'm developing has a custom file format for its files, and I'd like to be able to use the "Open In ..." feature of iOS, which lets users e.g. email each other a file and then open it directly in the app. I've gotten as far as adding the CFBundleDocumentTypes entries in the Info.plist file, but the problem is the LSItemContentTypes. From what I've read, I need to declare the actual file type as a UTI rather than just saying ".myfileextension", and I can't find much about how to create UTIs in an iOS app. Anyone know?

    Read the article

  • Python: Most efficient way to concatenate and rearrange files

    - by user300890
    Hi, I am reading from several files; each file is divided into two pieces, first a header section of a few thousand lines followed by a body of a few thousand. My problem is that I need to concatenate these files into one file where all the headers are at the top, followed by the bodies. Currently I am using two loops: one to pull out all the headers and write them, and a second to write the body of each file (I also include a tmp_count variable to limit the number of lines loaded into memory before dumping to the file). This is pretty slow - about 6 min for a 13 GB file. Can anyone tell me how to optimize this, or if there is a faster way to do this in Python? Thanks! Here is my code:

        def cat_files_sam(final_file_name, work_directory_master, file_count):
            final_file = open(final_file_name, "w")
            if len(file_count) > 1:
                file_count = sort_output_files(file_count)
            # only for @ headers
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master, bowtie_file)):
                    if line.startswith("@"):
                        if tmp_count == 1000000:
                            final_file.writelines(tmp_list)
                            tmp_list = []
                            tmp_count = 0
                        tmp_list.append(line)
                        tmp_count += 1
                    else:
                        final_file.writelines(tmp_list)
                        break
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master, bowtie_file)):
                    if line.startswith("@"):
                        continue
                    if tmp_count == 1000000:
                        final_file.writelines(tmp_list)
                        tmp_list = []
                        tmp_count = 0
                    tmp_list.append(line)
                    tmp_count += 1
                final_file.writelines(tmp_list)
            final_file.close()
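
    For comparison, a minimal single-pass sketch (not the asker's code): header lines go straight to the output while body lines are parked in a temporary file that is appended afterwards, so each input file is read only once; the temp-file name is an arbitrary choice:

        import os
        import shutil

        def cat_sam_files(final_file_name, work_dir, file_names):
            body_tmp = final_file_name + ".body.tmp"
            with open(final_file_name, "w") as headers, open(body_tmp, "w") as body:
                for name in file_names:
                    with open(os.path.join(work_dir, name)) as src:
                        for line in src:
                            # "@" marks a header line; everything else is body
                            (headers if line.startswith("@") else body).write(line)
            # append the collected body lines after all of the headers
            with open(final_file_name, "a") as out, open(body_tmp) as body:
                shutil.copyfileobj(body, out)
            os.remove(body_tmp)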

    Read the article

  • Asp.net Site on GoDaddy Loads Slowly

    - by Randy
    I just published a very small site to GoDaddy that I programmed in Microsoft Visual Web Developer 2008. The site only contains 6 pages, none of which are data intensive. Each has a few small images that serve mostly to navigate to the other pages. There is no data, no SQL, nothing like that. There is one master page that governs the layout for all of the pages. Everything works fine, for the most part. I am posting because the site loads quite slowly, particularly considering how little content is being loaded. Can anybody give me advice about what I should look at to speed this thing up a bit? Is this an ASP.NET issue? Use of master pages? GoDaddy? Something else? I'm a newbie, so speak slowly. Thanks.

    Read the article

  • How can I detect 'any' ajax request being completed using jQuery?

    - by Brian Scott
    I have a page where I can insert some JavaScript / jQuery to manipulate the output. I don't have any other control over the page markup, etc. I need to add an extra element via jQuery after each element present on the page. The issue is that those elements are generated via an asynchronous call on the existing page, which occurs after $(document).ready has completed. Essentially, I need a way of calling my jQuery after the page has loaded and the subsequent AJAX calls have completed. Is there a way to detect the completion of any AJAX call on the page and then call my own custom function to insert the additional elements after the newly created ones?

    Read the article

  • Write problem - losing the original data

    - by John
    Every time I write to the text file I lose the original data. How can I read the file and enter the new data on the empty line, or on the next line which is empty?

        public void writeToFile() {
            try {
                output = new Formatter(myFile);
            } catch (SecurityException securityException) {
                System.err.println("Error creating file");
                System.exit(1);
            } catch (FileNotFoundException fileNotFoundException) {
                System.err.println("Error creating file");
                System.exit(1);
            }

            Scanner scanner = new Scanner(System.in);
            String number = "";
            String name = "";

            System.out.println("Please enter number:");
            number = scanner.next();
            System.out.println("Please enter name:");
            name = scanner.next();

            output.format("%s,%s \r\n", number, name);
            output.close();
        }

    Read the article

  • Using altered RewriteCond to check if file exists in multiple locations

    - by futuraprime
    I want to use mod_rewrite to check for a file in two separate locations: at the requested URL, and at the requested URL within the "public" directory. Here's what I have so far:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # this line isn't working:
        RewriteCond %{DOCUMENT_ROOT}/public/$0 !-f
        RewriteRule ^(.*)$ index.php?$1 [L]

    It can correctly determine whether the file is at the requested path, but it's not correctly determining whether the file is at /public/supplied/file/path/file.extension. How do I test for that?

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts into a 34 GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions.

    How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see what that achieves, though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?

    Thanks, Ben
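
    To make the trade-off concrete, here is a hedged sketch (using the mysql-connector-python client, chosen arbitrarily) of the idea being weighed: generate the GUIDs client-side so every row's key is known before the batch is sent, which is what makes batched inserts compatible with needing the IDs immediately. Table name, column names, and credentials are hypothetical; the id column is assumed to be BINARY(16):

        import uuid
        import mysql.connector  # assumes the mysql-connector-python package

        conn = mysql.connector.connect(host="db-master", user="app",
                                       password="secret", database="prod")  # hypothetical credentials
        cur = conn.cursor()

        def insert_batch(rows):
            """rows: list of (field_a, field_b) tuples; returns the generated ids."""
            ids = [uuid.uuid4().bytes for _ in rows]          # 16-byte GUIDs, known up front
            params = [(i, a, b) for i, (a, b) in zip(ids, rows)]
            cur.executemany(
                "INSERT INTO events (id, field_a, field_b) VALUES (%s, %s, %s)",
                params)
            conn.commit()
            return ids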

    Read the article

  • How to test my GAE site for performance

    - by Sergey Basharov
    I am building a GAE site that uses AJAX/JSON for almost all of its tasks, including building the UI elements, all interactions, and client-server requests. What is a good way to test it under high load, so that I could get some statistics about how many resources 1000 average users over some period of time would consume? I think I can write some Python functions for this purpose. What can you advise? Thanks.
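
    As one possible starting point, a minimal sketch of such a Python load-test function using a thread pool (written for a Python 3 client); the base URL, endpoints, user count, and concurrency are hypothetical placeholders:

        import json
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        BASE_URL = "https://your-app.appspot.com"      # hypothetical app
        ENDPOINTS = ["/api/ui/home", "/api/items"]     # hypothetical JSON routes

        def one_request(path):
            start = time.time()
            with urllib.request.urlopen(BASE_URL + path) as resp:
                json.loads(resp.read())                # make sure the payload parses
            return time.time() - start

        def run(users=1000, concurrency=50):
            paths = (ENDPOINTS[i % len(ENDPOINTS)] for i in range(users))
            with ThreadPoolExecutor(max_workers=concurrency) as pool:
                timings = sorted(pool.map(one_request, paths))
            print("median %.3fs  p95 %.3fs" % (timings[len(timings) // 2],
                                               timings[int(len(timings) * 0.95)]))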

    Read the article

  • Android - can't read TXT files from SD card on a real machine?

    - by JustMe
    Hello! When I run the code below in the Android emulator (1.5) it works well: the TextSwitcher shows the first 80 chars from each txt file in /sdcard/documents/. But when I run it on my Samsung Galaxy i7500 (1.6) no contents show up in the TextSwitcher, although LogCat does list the file names of the txt files. My code:

        public void getTxtFiles() {
            // Scan /sdcard/documents and put .txt files in array File TxtFiles[]
            String path = Environment.getExternalStorageDirectory().toString() + "/documents/";
            String files;
            File folder = new File(path);
            if (folder.exists() == false) {
                if (!folder.mkdirs()) {
                    Log.e("TAG", "Create dir in sdcard failed");
                    return;
                }
            } else {
                File listOfFiles[] = folder.listFiles();
                for (int i = 0; i < listOfFiles.length; i++) {
                    if (listOfFiles[i].isFile()) {
                        files = listOfFiles[i].getName();
                        if (files.endsWith(".txt") || files.endsWith(".TXT")) {
                            if ((files.length() - 1) > i) { resizeArray(TxtFiles, files.length() + 10); }
                            TxtFiles[i] = listOfFiles[i];
                            System.out.println(TxtFiles[i]);
                        }
                    }
                }
            }
        }

        private void updateCounter(int Pozicija) {
            if (Pozicija < 0) {
                Toast.makeText(getApplicationContext(), R.string.LastTxt, 5).show();
                mCounter++;
            } else if (TxtFiles[mCounter] != null) {
                TextToShow = getContents(TxtFiles[mCounter]);
                if (TextToShow.length() > 80) TextToShow = TextToShow.substring(0, 80);
                mSwitcher.setText(TextToShow);
                System.out.println(Pozicija);
            } else mCounter--;
        }

        static public String getContents(File aFile) {
            // ...checks on aFile are elided
            StringBuilder contents = new StringBuilder();
            try {
                // use buffering, reading one line at a time
                // FileReader always assumes default encoding is OK!
                BufferedReader input = new BufferedReader(new FileReader(aFile));
                try {
                    String line = null; // not declared within while loop
                    /*
                     * readLine is a bit quirky:
                     * it returns the content of a line MINUS the newline.
                     * it returns null only for the END of the stream.
                     * it returns an empty String if two newlines appear in a row.
                     */
                    while ((line = input.readLine()) != null) {
                        contents.append(line);
                        contents.append(System.getProperty("line.separator"));
                    }
                } finally {
                    input.close();
                }
            } catch (IOException ex) {
                ex.printStackTrace();
            }
            return contents.toString();
        }

    And I am able to write the contents of those files to LogCat! Any ideas?

    Read the article

  • .NET read binary contents of .lnk file

    - by Flores
    I want to read the binary contents of a .lnk file. As long as the target of the shortcut (.lnk file) exists, this works fine with IO.File.ReadAllBytes(string file). BUT if the target of the shortcut does not exist (believe me, I want this), the method only returns zeros. I guess this is because the OS follows the link and, if the target does not exist, returns zeros. Is there some way to bypass the fact that the framework follows the target of the .lnk before returning the contents of the .lnk file?

    Read the article
