Search Results

Search found 85647 results on 3426 pages for 'file write'.


  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: We need a rock-solid, reliable file-mover process. We have source directories, often still being written to, that we need to move files out of. The files come in pairs: a big binary, and a small XML index. We get a CTL file that defines these file bundles. A process operates on the files once they are in the destination directory and gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story: We have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now (the SFTP servers) that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, copies that to the AIX machines, and then deletes the files from the source. However, we keep finding bugs and poorly handled error checking. None of us are Java experts, so fixing or improving this may be difficult.

    Our main concern: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems rsync is very careful about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: we also care about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are still copying them; it waits until the small XML index file is present, so our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us, and sometimes we typo the config files so that a destination directory doesn't exist. We never want to lose a file to this sort of error, and we need good logs. If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
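
    For reference, a minimal sketch of the copy-then-verify-then-delete ordering described above: the big binary goes first and the XML index last, and the source is deleted only after both verified copies. Local paths stand in for the SFTP legs here, and all names are illustrative, not the team's actual tooling:

        import os
        import shutil

        def move_bundle(src_bin, src_xml, dest_dir):
            """Copy a binary/XML pair so the consumer never sees a partial bundle."""
            os.makedirs(dest_dir, exist_ok=True)   # a typo'd destination must not lose files
            for src in (src_bin, src_xml):         # big binary first, XML index last
                dest = os.path.join(dest_dir, os.path.basename(src))
                tmp = dest + ".part"
                shutil.copyfile(src, tmp)          # write under a temporary name...
                if os.path.getsize(tmp) != os.path.getsize(src):
                    raise IOError("size mismatch copying %s" % src)
                os.rename(tmp, dest)               # ...then rename atomically into place
            os.remove(src_bin)                     # delete sources only after both copies
            os.remove(src_xml)                     # are verified at the destination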

  • How to download a .txt file from a URL?

    - by Colin Roe
    I produce a text file and save it to a location in the project folder. How do I redirect users to the URL that contains that text file so they can download it? CreateCSVFile writes the CSV file to a file path based on a DataTable. The call:

        string pth = "C:\\Work\\PG\\AI Handheld Website\\AI Handheld Website\\Reports\\Files\\report.txt";
        CreateCSVFile(data, pth);

    And the function:

        public void CreateCSVFile(DataTable dt, string strFilePath)
        {
            StreamWriter sw = new StreamWriter(strFilePath, false);
            int iColCount = dt.Columns.Count;

            // Write the header row.
            for (int i = 0; i < iColCount; i++)
            {
                sw.Write(dt.Columns[i]);
                if (i < iColCount - 1)
                {
                    sw.Write(",");
                }
            }
            sw.Write(sw.NewLine);

            // Now write all the rows.
            foreach (DataRow dr in dt.Rows)
            {
                for (int i = 0; i < iColCount; i++)
                {
                    if (!Convert.IsDBNull(dr[i]))
                    {
                        sw.Write(dr[i].ToString());
                    }
                    if (i < iColCount - 1)
                    {
                        sw.Write(",");
                    }
                }
                sw.Write(sw.NewLine);
            }
            sw.Close();

            Response.WriteFile(strFilePath);
            FileInfo fileInfo = new FileInfo(strFilePath);
            if (fileInfo.Exists)
            {
                //Response.Clear();
                //Response.AddHeader("Content-Disposition", "attachment; filename=" + fileInfo.Name);
                //Response.AddHeader("Content-Length", fileInfo.Length.ToString());
                //Response.ContentType = "application/octet-stream";
                //Response.Flush();
                //Response.TransmitFile(fileInfo.FullName);
            }
        }
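
    As an aside, a hand-rolled comma join breaks as soon as a field contains a comma or a quote; any CSV library handles the quoting. The same export sketched in Python (purely illustrative; the rows stand in for the DataTable):

        import csv

        rows = [("id", "name"), (1, "Widget, large"), (2, 'He said "hi"')]  # stand-in data
        with open("report.txt", "w", newline="") as f:
            csv.writer(f).writerows(rows)  # commas and quotes get escaped for us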

  • Copy an .mdf file and use it at run time

    - by Anibas
    After I copy an .mdf file (and its log file), I try to insert data and receive the following message: "An attempt to attach an auto-named database for file [fileName].mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share." When I copied the file manually, everything worked normally. Is it true that File.Copy leaves the file locked?

  • Parse a text file into multiple text files

    - by Vijay Kumar Singh
    I want to generate multiple files by parsing an input file in Java. The input file contains thousands of protein sequences in FASTA format, and I want to generate a raw version of each protein sequence (i.e., without any commas or semicolons, and without any extra symbols like ">", "[", "]", etc.). A FASTA sequence starts with a ">" symbol, followed by a description of the protein and then the sequence itself. For example:

        >lcl|NC_000001.10_cdsid_XP_003403591.1 [gene=LOC100652771] [protein=hypothetical protein LOC100652771] [protein_id=XP_003403591.1] [location=join(12190..12227,12595..12721,13403..13639)]
        MSESINFSHNLGQLLSPPRCVVMPGMPFPSIRSPELQKTTADLDHTLVSVPSVAESLHHPEITFLTAFCL
        PSFTRSRPLPDRQLHHCLALCPSFALPAGDGVCHGPGLQGSCYKGETQESVESRVLPGPRHRH

    In that format the input file contains thousands of protein sequences, and I have to generate thousands of raw files, each containing an individual protein sequence without any special symbols or gaps. I developed Java code for this, but the output is "cannot open a file" followed by "cannot find file". Please help me solve my problem. Regards, Vijay Kumar Garg, Varanasi, Bharat (India). The code is:

        /* Java code to convert FASTA format to a raw format */
        import java.io.*;
        import java.util.*;
        import java.util.regex.*;      // java package for using regular expressions
        import java.io.FileInputStream;

        public class Arrayren
        {
            public static void main(String args[]) throws IOException
            {
                String a[] = new String[1000];
                String b[][] = new String[1000][1000];
                /* open the id file */
                try
                {
                    File f = new File("input.txt");                         // the text document containing genbank ids
                    FileInputStream fis = new FileInputStream("input.txt"); // reading the file contents through an input stream
                    BufferedInputStream bis = new BufferedInputStream(fis); // wrapping the contents in a buffered stream
                    DataInputStream dis = new DataInputStream(bis);         // method for reading Java standard data types
                    String inputline;
                    String line;
                    String separator = System.getProperty("line.separator");
                    int i = 0;
                    while ((inputline = dis.readLine()) != null)            // reads a line until the next line separator is found
                    {
                        i++;
                        a[i] = inputline;
                        a[i] = a[i].replaceAll(separator, "");              // replaces unwanted patterns like \n
                        a[i] = a[i].trim();                                 // trims out any surrounding space
                        a[i] = a[i] + ".txt";                               // takes the file name into an array
                        try // to handle a runtime error
                        /* take the sequence into an array */
                        {
                            BufferedReader in = new BufferedReader(new FileReader(a[i]));
                            String inline = null;
                            int j = 0;
                            while ((inline = in.readLine()) != null)
                            {
                                j++;
                                b[i][j] = inline;
                                Pattern q = Pattern.compile(">");           // compiling the regular expression
                                Matcher n = q.matcher(inline);              // creates the matcher for the above pattern
                                if (n.find())
                                {
                                    /* stripping the comment line */
                                    b[i][j] = b[i][j].replaceAll(">gi", ""); // identify the pattern and replace it
                                    b[i][j] = b[i][j].replaceAll("[a-zA-Z]", "");
                                    b[i][j] = b[i][j].replaceAll("|", "");
                                    b[i][j] = b[i][j].replaceAll("\\d{1,15}", "");
                                    b[i][j] = b[i][j].replaceAll(".", "");
                                    b[i][j] = b[i][j].replaceAll("_", "");
                                    b[i][j] = b[i][j].replaceAll("\\(", "");
                                    b[i][j] = b[i][j].replaceAll("\\)", "");
                                }
                                /* printing the sequence to a text file */
                                b[i][j] = b[i][j].replaceAll(separator, "");
                                b[i][j] = b[i][j].trim();
                                File create = new File(inputline + "R.txt");
                                try
                                {
                                    if (!create.exists())
                                    {
                                        create.createNewFile();              // creates a new file
                                    }
                                    else
                                    {
                                        System.out.println("file already exists");
                                    }
                                }
                                catch (IOException e)                        // print the error if a file cannot be created
                                {
                                    System.err.println("cannot create a file");
                                }
                                BufferedWriter outt = new BufferedWriter(new FileWriter(inputline + "R.txt", true));
                                outt.write(b[i][j]);                         // printing the contents to the text file
                                outt.close();                                // closing the text file
                                System.out.println(b[i][j]);
                            }
                        }
                        catch (Exception e)
                        {
                            System.out.println("cannot open a file");
                        }
                    }
                }
                catch (Exception ex)                                         // print the error if the file cannot be found
                {
                    System.out.println("cannot find file ");
                }
            }
        }

    If you can provide corrected code, it will be much easier for me to understand.
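
    For comparison, the record-splitting itself is small; a sketch in Python (file names are illustrative), where header lines are detected by the leading ">" and everything up to the next header is written out as one raw sequence:

        def split_fasta(path):
            """Write each FASTA record's bare sequence to its own file."""
            out = None
            count = 0
            with open(path) as fasta:
                for line in fasta:
                    line = line.strip()
                    if line.startswith(">"):   # new record: close the old file, open a new one
                        if out:
                            out.close()
                        count += 1
                        out = open("protein_%d_raw.txt" % count, "w")
                    elif line and out:
                        out.write(line)        # raw sequence only, no separators or symbols
            if out:
                out.close()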

  • Designing a database file format

    - by RoliSoft
    I would like to design my own database engine, for educational purposes for the time being. Designing a binary file format is not hard, nor is it the question; I've done it in the past. But while designing a database file format, I have come across a very important question: how do you handle the deletion of an item? So far, I've thought of the following options:

    1. Give each item a "deleted" bit which is set to 1 upon deletion. Pro: relatively fast. Con: potentially sensitive data will remain in the file.
    2. 0x00 out the whole item upon deletion. Pro: potentially sensitive data will be removed from the file. Con: relatively slow.
    3. Recreate the whole database. Pro: no empty blocks, which makes the follow-up question void. Con: it's a really good idea to overwrite a whole 4 GB database file because a user corrected a typo. I will sell this method to Twitter ASAP!

    Now let's say you already have a few empty blocks in your database (deleted items). The follow-up question is how to handle the insertion of a new item:

    1. Append the item to the end of the file. Pro: fastest possible. Con: the file will get huge, because of all the empty blocks that remain since deleted items aren't actually removed.
    2. Search for an empty block exactly the size of the one you're inserting. Pro: may get rid of some blocks. Con: you may end up scanning the whole file on each insert, only to find that a perfectly fitting empty block is very unlikely to exist.
    3. Find the first empty block which is equal to or larger than the item you're inserting. Pro: you probably won't end up scanning the whole file, as you'll find an empty block somewhere mid-way, and this keeps the file size relatively low. Con: there will still be lots of leftover 0x00 bytes at the end of items that were inserted into empty blocks bigger than themselves.

    Right now, I think the first deletion method and the last insertion method are probably the "best" mix, but they would still have their own small issues. Alternatively: the first insertion method plus scheduled full database recreation. (Probably not a good idea when working with really large databases. Also, each small update in that scheme clones the whole item to the end of the file, accelerating file growth at a potentially insane rate.) Unless there is a way of deleting and inserting blocks in the middle of the file in a file-system-approved way, what's the best way to do this? More importantly, how do databases currently used in production usually handle this?
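
    For the "first empty block that fits" strategy, the usual trick is to keep the holes in a free list rather than scanning the file. A minimal in-memory sketch (Python, illustrative names; production engines tend to use fixed-size pages with slotted layouts and free-space maps instead):

        import bisect

        class FreeList:
            """Offsets/sizes of deleted (zeroed-out) blocks, kept sorted by offset."""

            def __init__(self):
                self.holes = []  # sorted list of (offset, size)

            def free(self, offset, size):
                bisect.insort(self.holes, (offset, size))
                merged = []
                for off, sz in self.holes:
                    if merged and merged[-1][0] + merged[-1][1] == off:
                        # neighbouring holes: coalesce so larger inserts can reuse them
                        merged[-1] = (merged[-1][0], merged[-1][1] + sz)
                    else:
                        merged.append((off, sz))
                self.holes = merged

            def allocate(self, size, end_of_file):
                for idx, (off, sz) in enumerate(self.holes):  # first fit
                    if sz >= size:
                        if sz == size:
                            del self.holes[idx]
                        else:
                            self.holes[idx] = (off + size, sz - size)  # shrink the hole
                        return off
                return end_of_file  # nothing fits: append to the file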

  • Reading data from text file in C

    - by themake
    I have a text file which contains words separated by spaces. I want to take each word from the file and store it. So I have opened the file, but I am unsure how to assign each word to a char:

        FILE *fp;
        fp = fopen("file.txt", "r");
        /* then I want:
           char one = the first word in the file
           char two = the second word in the file */
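
    The job is just whitespace-splitting; in C that is typically a loop over fscanf("%s", buf) into char arrays, since a single char holds one character, not a word. The same idea sketched in Python for comparison:

        with open("file.txt") as f:
            words = f.read().split()   # split on any run of whitespace
        one, two = words[0], words[1]  # the first and second words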

  • Flex - How to edit an XML file on the server

    - by BS_C3
    Hi! I was wondering whether it is possible to edit an XML file on the server side from a web-based Flex application. When you use XML files in a Flex application and then compile it for upload to the server, Flex Builder generates a SWF file with the XML data embedded. What should I do to get access to those XML files? Thanks for your answers. Regards, BS_C3

  • Opening and viewing a file in PHP

    - by Christian Burgos
    How do I open/view an uploaded file for editing in PHP? I have tried this, but it doesn't open the file:

        $my_file = 'file.txt';
        $handle = fopen($my_file, 'r');
        $data = fread($handle, filesize($my_file));

    I've also tried this, but it won't work:

        $my_file = 'file.txt';
        $handle = fopen($my_file, 'w') or die('Cannot open file: '.$my_file);
        $data = 'This is the data';
        fwrite($handle, $data);

    What I have in mind is viewing an uploaded resume, document, or any other MS Office file (.docx, .xls, .pptx, etc.) and being able to edit it, save it, and close it. Edit: the latest code I tried:

        <?php
        // Connects to your Database
        include "configdb.php";

        //Retrieves data from MySQL
        $data = mysql_query("SELECT * FROM employees") or die(mysql_error());

        //Puts it into an array
        while($info = mysql_fetch_array( $data ))
        {
            //Outputs the image and other data
            //Echo "<img src=localhost/uploadfile/images".$info['photo'] ."> <br>";
            Echo "<b>Name:</b> ".$info['name'] . "<br> ";
            Echo "<b>Email:</b> ".$info['email'] . " <br>";
            Echo "<b>Phone:</b> ".$info['phone'] . " <hr>";
            //$file=fopen("uploadfile/images/".$info['photo'],"r+");
            $file=fopen("Applications/XAMPP/xamppfiles/htdocs/uploadfile/images/file.odt","r") or exit("unable to open file");;
        }
        ?>

    I am getting the error:

        Warning: fopen(Applications/XAMPP/xamppfiles/htdocs/uploadfile/images/file.odt): failed to open stream: No such file or directory in /Applications/XAMPP/xamppfiles/htdocs/uploadfile/view.php on line 17
        unable to open file

    The file is in that folder; I don't know why it won't find it.

  • Copied a file with WinSCP; only WinSCP can see it

    - by nilbus
    I recently copied a 25.5 GB file from another machine using WinSCP. I copied it to C:\beth.tar.gz, and WinSCP can still see the file, but no other app (including Explorer) can. What might cause this, and how can I fix it? Details that might or might not matter:

    - WinSCP shows the size of the file (C:\beth.tar.gz) correctly as 27,460,124,080 bytes, which matches the file size on the remote host.
    - Neither Explorer, cmd (dir C:\), the 7-Zip archive program, nor any other File Open dialog can see beth.tar.gz under C:\.
    - I have configured Explorer to show hidden files.
    - I can move the file to other directories using WinSCP.
    - If I try to move the file to Users\, UAC prompts me for administrative rights, which I grant, and then I get this error: "Could not find this item. The item is no longer located in C:\."
    - When I try to transfer the file back to the remote host in a new directory, the transfer starts successfully and transfers data. It had about 30 minutes remaining when I left it for the night; the next morning I was greeted with a message saying the connection to the server had been lost. I don't think this is relevant, since I did not tell it to disconnect after the file was done transferring, and it likely disconnected after the transfer finished.
    - I'm using an old version of WinSCP, v4.1.8 from 2008.
    - I can view the file properties in WinSCP: type of file: 7zip (.gz); location: C:\; attributes: none (not Read-only, Hidden, Archive, or Ready for indexing); security: SYSTEM, my user, and the Administrators group have full permissions, with everything other than "special permissions" checked under Allow for all three users/groups.

    What's going on?!

  • C++, ifstream opens a local file but not a file on an HTTP server

    - by fammi
    Hi, I am using ifstream to open a file and then read from it. My program works fine when I give it the location of a local file on my system, e.g. /root/Desktop/abc.xxx. But once the location is on an HTTP server, the file fails to open, e.g. http://192.168.0.10/abc.xxx. Is there an alternative to ifstream when using a URL address? Thanks. The part of the code where I'm having the problem:

        bool readTillEof = (endIndex == -1) ? true : false;

        // Open the file in binary mode and seek to the end to determine file size
        ifstream file ( fileName.c_str ( ), ios::in|ios::ate|ios::binary );
        if ( file.is_open ( ) )
        {
            long size = (long) file.tellg ( );
            long numBytesRead;
            if ( readTillEof )
            {
                numBytesRead = size - startIndex;
            }
            else
            {
                numBytesRead = endIndex - startIndex + 1;
            }

            // Allocate a new buffer ptr to read in the file data
            BufferSptr buf (new Buffer ( numBytesRead ) );
            mpStreamingClientEngine->SetResponseBuffer ( nextRequest, buf );

            // Seek to the start index of the byte range and read the data
            file.seekg ( startIndex, ios::beg );
            file.read ( (char *)buf->GetData(), numBytesRead );

            // Pass on the data to the SCE and signal completion of request
            mpStreamingClientEngine->HandleDataReceived( nextRequest, numBytesRead );
            mpStreamingClientEngine->MarkRequestCompleted( nextRequest );

            // Close the file
            file.close ( );
        }
        else
        {
            // Report error to the Streaming Client Engine as unable to open file
            AHS_ERROR ( ConnectionManager, " Error while opening file \"%s\"\n", fileName.c_str ( ) );
            mpStreamingClientEngine->HandleRequestFailed( nextRequest, CONNECTION_FAILED );
        }
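
    For context, ifstream only talks to the local filesystem; fetching an http:// URL needs an HTTP client library (libcurl is a common choice in C++). The byte-range logic in the code maps onto an HTTP Range request, sketched here in Python with a placeholder URL and offsets:

        import urllib.request

        # ask the server for bytes 100..199 of the file; a server that ignores
        # Range returns the whole body with status 200 instead of 206
        req = urllib.request.Request(
            "http://192.168.0.10/abc.xxx",
            headers={"Range": "bytes=100-199"},
        )
        with urllib.request.urlopen(req) as resp:
            chunk = resp.read()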

  • How do I open a file in such a way that if the file doesn't exist it will be created and opened automatically?

    - by snakile
    Here's how I open a file for writing ("w+"):

        if( fopen_s( &f, fileName, "w+" ) != 0 )
        {
            printf("Open file failed\n");
            return;
        }
        fprintf_s(f, "content");

    If the file doesn't exist, the open operation fails. What's the right way to call fopen if I want the file created automatically when it doesn't already exist? Edit: if the file does exist, I would like fprintf to overwrite the file, not append to it.
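
    For reference, standard C's "w+" mode already creates a missing file and truncates an existing one ("r+" is the mode that fails when the file does not exist), so a failing open here is worth checking against the path and permissions. Python's open() modes mirror C's, as a one-line sketch:

        f = open("out.txt", "w+")  # like fopen "w+": create if missing, truncate if present
        f.write("content")         # overwrites; "a+" would append instead
        f.close()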

  • Problem with file upload in JavaScript

    - by Nikhil
    I have used JavaScript to upload more than one file. When the user clicks on 'add more', the JavaScript appends a new file input to the existing div using innerHTML. Now the problem: if I select a file and then click on 'add more', the new file button appears, but the previously selected file is cleared and two blank file buttons are displayed. I want the old file to stay selected when the user adds a new file button. If anybody can help, please do. Thanks!

  • Write, Read and Update Oracle CLOBs with PL/SQL

    - by robertphyatt
    Fun with CLOBs! If you are using Oracle and have to deal with text over 4000 bytes, you will probably find yourself dealing with CLOBs, which can go up to 4 GB. They are pretty tricky, and it took me a long time to figure out these lessons learned. I hope they will help some down-trodden developer out there somehow.

    Here is my original code, which worked great on my Oracle Express Edition. (For all examples, the first procedure writes a new CLOB, the next one updates an existing CLOB, and the final one reads a CLOB back.)

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN  VARCHAR2,
            p_id       OUT NUMBER)
        IS
            lob_loc CLOB;
        BEGIN
            INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)
                VALUES (empty_CLOB())
                RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;
            DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1,
                           UTL_RAW.CAST_TO_RAW(p_document));
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER)
        IS
            lob_loc CLOB;
        BEGIN
            SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC
                WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;
            DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1,
                           UTL_RAW.CAST_TO_RAW(p_document));
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN  NUMBER,
            p_clob OUT VARCHAR2)
        IS
            lob_loc CLOB;
        BEGIN
            SELECT CLOBHOLDERDDOC INTO lob_loc
            FROM   TBL_CLOBHOLDERDDOC
            WHERE  CLOBHOLDERDDOCID = p_id;
            p_clob := UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1));
        END;
        /

    As you can see, I had originally been casting everything back and forth between RAW formats using the UTL_RAW.CAST_TO_VARCHAR2() and UTL_RAW.CAST_TO_RAW() functions all over the place. That had the nasty side effect of working great on my Oracle Express Edition on my developer box, but on the Oracle test database server every CLOB above a certain size came back as garbage when read. So I kept working at it and came up with the following, which ALSO worked on my Oracle Express Edition on my developer box:

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN  VARCHAR2,
            p_id       OUT NUMBER)
        IS
            lob_loc CLOB;
        BEGIN
            INSERT INTO TBL_CLOBHOLDERDOC (CLOBHOLDERDOC)
                VALUES (empty_CLOB())
                RETURNING CLOBHOLDERDOC, CLOBHOLDERDOCID INTO lob_loc, p_id;
            DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER)
        IS
            lob_loc CLOB;
        BEGIN
            SELECT CLOBHOLDERDOC INTO lob_loc FROM TBL_CLOBHOLDERDOC
                WHERE CLOBHOLDERDOCID = p_id FOR UPDATE;
            DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN  NUMBER,
            p_clob OUT VARCHAR2)
        IS
            lob_loc CLOB;
        BEGIN
            SELECT CLOBHOLDERDOC INTO lob_loc
            FROM   TBL_CLOBHOLDERDOC
            WHERE  CLOBHOLDERDOCID = p_id;
            p_clob := DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1);
        END;
        /

    Unfortunately, even though the code above kept working on my Express Edition, everything over a certain size started truncating after about 7950 characters on the test server! Here is what I came up with in the end, which is actually the simplest solution and this time worked on both my Express Edition and the database server. (Note that only the read procedure changed to fix the truncation issue; I let Oracle worry about converting the CLOB into a VARCHAR2 internally.)

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN  VARCHAR2,
            p_id       OUT NUMBER)
        IS
            lob_loc CLOB;
        BEGIN
            INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)
                VALUES (empty_CLOB())
                RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;
            DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER)
        IS
            lob_loc CLOB;
        BEGIN
            SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC
                WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;
            DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN  NUMBER,
            p_clob OUT VARCHAR2)
        IS
        BEGIN
            SELECT CLOBHOLDERDDOC INTO p_clob
            FROM   TBL_CLOBHOLDERDDOC
            WHERE  CLOBHOLDERDDOCID = p_id;
        END;
        /

    I hope that is useful to someone!
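
    If it helps anyone calling these procedures from client code, here is a hedged sketch using Python's cx_Oracle driver. The DSN and document text are placeholders, and note that a PL/SQL VARCHAR2 parameter tops out at 32767 bytes, so truly huge CLOBs would still need piecewise reads:

        import cx_Oracle  # assumes the cx_Oracle driver is installed

        conn = cx_Oracle.connect("scott/tiger@localhost/XE")  # placeholder credentials
        cur = conn.cursor()

        # write a new CLOB; the generated id comes back through the OUT parameter
        doc_id = cur.var(cx_Oracle.NUMBER)
        cur.callproc("PRC_WR_CLOB", ["a document well over 4000 bytes ...", doc_id])

        # read it back; size the OUT variable up to the VARCHAR2 ceiling
        text = cur.var(str, 32767)
        cur.callproc("PRC_RD_CLOB", [doc_id.getvalue(), text])
        print(text.getvalue())
        conn.commit()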

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup

    I have an entity-component architecture where entities can have a set of attributes (which are pure data with no behavior), and there exist systems that run the entity logic and act on that data. Essentially, in somewhat pseudo-code:

        Entity
        {
            id;
            map<id_type, Attribute> attributes;
        }

        System
        {
            update();
            vector<Entity> entities;
        }

    A system that just moves all entities along at a constant rate might be:

        MovementSystem extends System
        {
            update()
            {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }

    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so that different threads can execute the update of the same system, but for a different subset of entities registered with that system.

    Problem

    In reality, these systems sometimes require that entities interact with (read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (e.g. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) from entities is read and used to compute something, and the phase in which the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not counting just "critical section"ing everything) to avoid races is to serialize the parts of the update process that depend on other parts. This seems ugly.

    To me it would seem more elegant to (ideally) have all processing run in parallel, where a system may read data from all entities as it wishes but doesn't write modifications back until some later point. The fact that this is even possible rests on the assumption that modification write-backs are usually very small in complexity and cheap to perform, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (threads spending more of their time working instead of waiting).

    A concrete example might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, all available threads would update a subset of the entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (ones that don't modify the same data as the physics system); otherwise the remaining threads are left waiting and wasting time. However, that approach has disadvantages:

    - Practically, the L3 cache is almost always utilized better when updating one large system with multiple threads than when updating multiple systems at once, which all act on different sets of data.
    - Finding and assembling other systems to run in parallel can be extremely time-consuming to design well enough to optimize performance. Sometimes it might not even be possible at all, because a system may simply depend on data that is touched by every other system.

    Solution?

    In my thinking, a possible solution would be a system where reading/updating and writing of data are separated: in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap write phase, the attributes of entities that needed modification are finally written back.

    The Question

    How might such a system be implemented to achieve optimal performance, as well as making the programmer's life easier? What are the implementation details of such a system, and what might have to change in the existing EC architecture to accommodate this solution?
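
    A toy sketch of the deferred-write idea (Python for brevity, names illustrative): each system's update() runs against a read-only snapshot and returns its writes as data instead of applying them, and a serial, cheap write phase applies them afterwards with a last-write-wins conflict policy:

        from concurrent.futures import ThreadPoolExecutor

        class MovementSystem:
            def update(self, snapshot):
                """Read/compute phase: no mutation, just a list of pending writes."""
                writes = []
                for eid, attrs in snapshot.items():
                    x, y, z = attrs["position"]
                    writes.append((eid, "position", (x + 1, y + 1, z + 1)))
                return writes

        def tick(world, systems):
            # freeze a read-only view so every system sees the same consistent state
            snapshot = {eid: dict(attrs) for eid, attrs in world.items()}
            with ThreadPoolExecutor() as pool:
                # expensive compute phase: all systems (or entity slices) in parallel
                pending = list(pool.map(lambda s: s.update(snapshot), systems))
            for writes in pending:
                # cheap write phase, applied serially; later systems win on conflicts
                for eid, key, value in writes:
                    world[eid][key] = value

        world = {1: {"position": (0, 0, 0)}}
        tick(world, [MovementSystem()])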

  • manage a hosted file by email

    - by Toc
    I need a service that lets me edit a hosted text file by email. For example: I write an email to [email protected] with the subject "ADD LINE 10", and the mail body gets inserted into myfile, which is hosted on a server at someservice.ext. The same goes for "DELETE LINE 12" or "SUBSTITUTE LINE 15". If I write "ECHO FILE" or something similar, the service should send me an email with the updated content of the text file. Does such a service exist?
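
    Whether a hosted service exists is the actual question, but the command grammar above is small enough to sketch the server-side applier (Python, illustrative only), where the subject selects the operation and the mail body supplies the line:

        import re

        def apply_command(lines, subject, body):
            """Apply an ADD/DELETE/SUBSTITUTE LINE n command to a list of lines."""
            m = re.match(r"(ADD|DELETE|SUBSTITUTE) LINE (\d+)", subject.strip(), re.I)
            if not m:
                return lines                       # unrecognized subject: leave file alone
            verb, n = m.group(1).upper(), int(m.group(2)) - 1  # 1-based to 0-based
            if verb == "ADD":
                lines.insert(n, body)              # body becomes the new line n
            elif verb == "DELETE":
                del lines[n]
            else:
                lines[n] = body                    # SUBSTITUTE replaces line n
            return lines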

  • "Excel cannot open the file xxx.xlsx because the file format is not valid" error

    - by Yavuz
    I am suddenly having difficulty opening Word and Excel files. Only particular Office files give me the problem; these files were previously scanned by ComboFix, and I believe they were damaged. The error I get from Office is: "Excel cannot open the file xxx.xlsx because the file format is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file." This is for Excel, and a similar error comes from Word. The file looks fine, at least size-wise... Please help me with this problem. I really appreciate your help and time.

  • Upon clicking on a file, Excel opens but not the file itself

    - by william
    Platform: Windows XP SP2, Excel 2007. Problem description: upon clicking on a file in Windows Explorer (either .xls or .xlsx), Excel 2007 opens, but the file itself does not. I then need either to click on the file again in Windows Explorer or to open it manually with File/Open from within Excel. Does anyone know what could cause this rather strange behaviour? Old versions of Excel worked "normally", i.e. clicking on a file opened Excel along with the file. Please help!

  • Synchronization requirements for FileStream.(Begin/End)(Read/Write)

    - by Doug McClean
    Is the following pattern of multi-threaded calls acceptable to a .NET FileStream? Several threads calling a method like this:

        ulong offset = whatever;          // different for each thread
        byte[] buffer = new byte[8192];
        object state = someState;         // unique for each call, hence also for each thread
        lock(theFile)
        {
            theFile.Seek(whatever, SeekOrigin.Begin);
            IAsyncResult result = theFile.BeginRead(buffer, 0, 8192, AcceptResults, state);
        }
        if(result.CompletedSynchronously)
        {
            // is it required for us to call AcceptResults ourselves in this case?
            // or did BeginRead already call it for us, on this thread or another?
        }

    Where AcceptResults is:

        void AcceptResults(IAsyncResult result)
        {
            lock(theFile)
            {
                int bytesRead = theFile.EndRead(result);
                // if we guarantee that the offset of the original call was at least 8192 bytes from
                // the end of the file, and thus all 8192 bytes exist, can the FileStream read still
                // actually read fewer bytes than that?

                // either:
                if(bytesRead != 8192) { Panic("Page read borked"); }
                // or:
                // issue a new call to BeginRead, moving the offsets into the FileStream and
                // the buffer, and decreasing the requested size of the read to whatever
                // remains of the buffer
            }
        }

    I'm confused because the documentation seems unclear to me. For example, the FileStream class says: "Any public static members of this type are thread safe. Any instance members are not guaranteed to be thread safe." But the documentation for BeginRead seems to contemplate having multiple read requests in flight: "Multiple simultaneous asynchronous requests render the request completion order uncertain." Are multiple reads permitted to be in flight or not? Writes? Is this the appropriate way to secure the Position of the stream between the call to Seek and the call to BeginRead? Or does that lock need to be held all the way to EndRead, hence allowing only one read or write in flight at a time? I understand that the callback will occur on a different thread, and my handling of state and buffer allows for multiple in-flight reads. Further, does anyone know where in the documentation to find the answers to these questions? Or an article written by someone in the know? I've been searching and can't find anything. Relevant documentation: the FileStream class, the Seek, BeginRead, and EndRead methods, and the IAsyncResult interface.
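
    Not an answer from the .NET docs, but the underlying hazard here is the shared Position: Seek and BeginRead are paired under the lock precisely because another thread could move the position in between. APIs that take the offset as an explicit parameter sidestep that entirely; a sketch of the same page-read pattern with Python's os.pread (POSIX-only), where each call carries its own offset and may still legitimately return fewer bytes than requested:

        import os
        from concurrent.futures import ThreadPoolExecutor

        def read_page(fd, offset, size=8192):
            # the offset is a parameter, so concurrent readers never race on a
            # shared file position; note pread may still return a short read
            return os.pread(fd, size, offset)

        fd = os.open("the.file", os.O_RDONLY)
        with ThreadPoolExecutor(max_workers=4) as pool:
            pages = list(pool.map(lambda off: read_page(fd, off),
                                  range(0, 8192 * 8, 8192)))
        os.close(fd)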

  • rename files with the same name

    - by snorpey
    Hi. I use the following function to rename thumbnails. For example, if I upload a file called image.png to an upload folder, and this folder already has a file named image.png in it, the new file automatically gets renamed to image-copy-1.png. If there is also a file called image-copy-1.png, it gets renamed to image-copy-2.png, and so on. The following function returns the new filename. At least, that's what it is supposed to do... The renaming doesn't seem to work correctly, though. Sometimes it produces strange results, like:

        1.png
        1-copy-1.png
        1-copy-2.png
        1-copy-2-copy-1.png
        1-copy-2-copy-3.png

    I hope you understand my problem, despite my description being somewhat complex... Can you tell me what went wrong here? (Bonus question: are regular expressions the right tool for doing this kind of stuff?)

        <?php
        function renameDuplicates($path, $file)
        {
            $fileName = pathinfo($path . $file, PATHINFO_FILENAME);
            $fileExtension = "." . pathinfo($path . $file, PATHINFO_EXTENSION);

            if (file_exists($path . $file)) {
                $fileCopy = $fileName . "-copy-1";
                if (file_exists($path . $fileCopy . $fileExtension)) {
                    if ($contains = preg_match_all("/.*?(copy)(-)(\\d+)/is", $fileCopy, $matches)) {
                        $copyIndex = $matches[3][0];
                        $fileName = substr($fileCopy, 0, -(strlen("-copy-" . $copyIndex))) . "-copy-" . ($copyIndex + 1);
                    }
                } else {
                    $fileName .= "-copy-1";
                }
            }

            $returnValue = $fileName . $fileExtension;
            return $returnValue;
        }
        ?>
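
    The nesting in the sample output comes from treating an existing -copy-N suffix as part of the base name. One way to see the intended behaviour, sketched in Python (illustrative, not a drop-in fix): strip any -copy-N suffix first, then probe for the smallest free index:

        import os
        import re

        def unique_name(directory, filename):
            """Return filename, or name-copy-N.ext with the smallest unused N."""
            stem, ext = os.path.splitext(filename)
            # strip any existing -copy-N suffix so copies of copies don't nest
            stem = re.sub(r"-copy-\d+$", "", stem)
            if not os.path.exists(os.path.join(directory, stem + ext)):
                return stem + ext
            n = 1
            while os.path.exists(os.path.join(directory, "%s-copy-%d%s" % (stem, n, ext))):
                n += 1
            return "%s-copy-%d%s" % (stem, n, ext)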
