Search Results

Search found 24117 results on 965 pages for 'write'.

  • Write C++ in LaTeX (beginner LaTeX question)

    - by voodoomsr
    Maybe this is a beginner question, but I can't find the solution on the web: I need to write C++ in LaTeX. If I write C$++$ the result looks bad; the plus signs are too big and there is too much space between the C and the first plus sign. Earlier I needed the sharp symbol for C#, and C$\sharp$ also looked bad, but with an escape character it came out nicely. For the plus signs I can't do the same.

    Read the article
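
    A minimal LaTeX sketch of the usual fix for the question above: typeset the name as text (typewriter or upright) instead of math mode, optionally tightening the gap with a small negative kern. The macro name \Cpp below is just an illustrative choice, not a standard command.

        \documentclass{article}
        % "C++" set as text; a small negative kern closes the gap after the C.
        \newcommand{\Cpp}{C\kern-0.05em\texttt{++}}
        \begin{document}
        Plain typewriter text usually looks fine: \texttt{C++}.
        With the tightened macro: \Cpp{} templates.
        \end{document}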

  • How does DataContractSerializer write to private fields?

    - by Eric
    I understand how XMLSerializer could work: it uses reflection to figure out which public read/write fields or properties it should use to serialize or deserialize XML. But XMLSerializer requires that those fields be public and read/write, whereas DataContractSerializer is able to read and write completely private fields in a class. So I'm wondering how this is even possible without explicitly giving DataContractSerializer additional access rights to my class(es).

    Read the article
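
    This is not the .NET internals themselves, but the mechanism is the same one reflection-based serializers rely on across managed platforms: trusted serialization code is allowed to suppress the normal access checks. A hedged sketch of that idea in Java terms (the Person class and its field are made up for illustration):

        import java.lang.reflect.Field;

        public class PrivateFieldAccessDemo {
            static class Person {
                private String name = "initial";   // no setter, no public access
            }

            public static void main(String[] args) throws Exception {
                Person p = new Person();
                Field f = Person.class.getDeclaredField("name");
                f.setAccessible(true);             // suppress the access check, as serializers do
                f.set(p, "written via reflection");
                System.out.println(f.get(p));      // prints: written via reflection
            }
        }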

  • String search and write to a file in Jython

    - by kdev
    Hi everyone, I want to write a program that reads a file and, whenever a particular str_to_find occurs in a bigger string such as AACATGCCACCTGAATTGGATGGAATTCATGCGGGACACGCGGATTACACCTATGAGCAGAAATACGGCCTGCGCGATTACCGTGGCGGTGGACGTTCTTCCGCGCGTGAAACCGCGATGCGCGTAGCGGCAGGGGCGATCGCCAAGAAATACCTGGCGGAAAAGTTCGGCATCGAAATCCGCGGCTGCCTGACCCAGATGGGCGACATTCCGCTGGAGATTAAAGACTGGCGTCAGGTTGAGCTTAATCCGTTTTC, writes that line and the line above it into an output file, and keeps doing that for every match. Please suggest how: I have written the part that prints the matching line, but I don't know how to also write the line above it. Thanks everyone for your help. Here is what I have so far:

        import re
        import string

        file = open('C:/Users/Administrator/Desktop/input.txt', 'r')
        output = open('C:/Users/Administrator/Desktop/output.txt', 'w')
        count_record = file.readline()
        str_to_find = 'AACCATGC'
        while count_record:
            if string.find(list, str_to_find) == 0:
                output.write(count_record)
        file.close()
        output.close()

    Read the article
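
    The logic being asked for is simply "remember the previous line while scanning". A hedged sketch of that pattern, written in Java for illustration (file names are placeholders); the same shape carries over to a Jython readline loop:

        import java.io.*;

        public class MatchWithPreviousLine {
            public static void main(String[] args) throws IOException {
                String strToFind = "AACCATGC";
                try (BufferedReader in = new BufferedReader(new FileReader("input.txt"));
                     PrintWriter out = new PrintWriter(new FileWriter("output.txt"))) {
                    String previous = null;
                    String line;
                    while ((line = in.readLine()) != null) {
                        if (line.contains(strToFind)) {
                            if (previous != null) {
                                out.println(previous);  // the line above the match
                            }
                            out.println(line);          // the matching line itself
                        }
                        previous = line;                // remember for the next iteration
                    }
                }
            }
        }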

  • Using Java PDFBox library to write Russian PDF

    - by Brad
    Hello, I am using a Java library called PDFBox to write text to a PDF. It works perfectly for English text, but when I tried to write Russian text inside the PDF the letters came out looking very strange. It seems the problem is the font being used, but I'm not sure about that, so I hope someone can guide me through this. Here are the important code lines:

        PDTrueTypeFont font = PDTrueTypeFont.loadTTF( pdfFile, new File( "fonts/VREMACCI.TTF" ) ); // Windows Russian font imported to write the Russian text.
        font.setEncoding( new WinAnsiEncoding() ); // Define the encoding used for writing.
        // Some code here to open the PDF and define a new page.
        contentStream.drawString( "??????? ????????????" ); // Write the Russian text.

    The WinAnsiEncoding source code is: Click here.

    Edit on 18 November 2009: After some investigation I am now sure it is an encoding problem. It could be solved by defining my own encoding using the helpful PDFBox class DictionaryEncoding. I am not sure how to use it, but here is what I have tried so far:

        COSDictionary cosDic = new COSDictionary();
        cosDic.setString( COSName.getPDFName("Ercyrillic"), "0420 " ); // Russian letter.
        font.setEncoding( new DictionaryEncoding( cosDic ) );

    This does not work; it seems I am filling the dictionary the wrong way, because a PDF page written with it comes out blank. The DictionaryEncoding source code is: Click here. Thanks.

    Read the article
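
    For reference, a hedged sketch of how Cyrillic text is usually written with a current PDFBox release (2.x assumed; the question uses the much older 0.x/1.x API, where method names differ): load the TrueType font as an embedded PDType0Font, which carries its own Unicode mapping, instead of forcing WinAnsiEncoding. The font path is a placeholder for any TTF that actually contains Cyrillic glyphs.

        import java.io.File;
        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.pdmodel.PDPage;
        import org.apache.pdfbox.pdmodel.PDPageContentStream;
        import org.apache.pdfbox.pdmodel.font.PDType0Font;

        public class RussianPdfSketch {
            public static void main(String[] args) throws Exception {
                try (PDDocument doc = new PDDocument()) {
                    PDPage page = new PDPage();
                    doc.addPage(page);
                    // Embed a TTF that contains Cyrillic glyphs (path is illustrative).
                    PDType0Font font = PDType0Font.load(doc, new File("fonts/arial.ttf"));
                    try (PDPageContentStream cs = new PDPageContentStream(doc, page)) {
                        cs.beginText();
                        cs.setFont(font, 12);
                        cs.newLineAtOffset(50, 700);
                        cs.showText("Привет, мир");   // renders as long as the font has the glyphs
                        cs.endText();
                    }
                    doc.save("russian.pdf");
                }
            }
        }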

  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM WebSphere Application Server v6 and Java 1.4, and am trying to write large CSV files to the ServletOutputStream for a user to download. Files currently range from 50-750MB. The smaller files aren't causing too much of a problem, but with the larger files it appears the data is being buffered in the heap, which then causes an OutOfMemory error and brings down the entire server. These files can only be served to authenticated users over HTTPS, which is why I am serving them through a servlet instead of just sticking them in Apache. The code I am using is (some fluff removed around this):

        resp.setHeader("Content-length", "" + fileLength);
        resp.setContentType("application/vnd.ms-excel");
        resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\"");
        FileInputStream inputStream = null;
        try {
            inputStream = new FileInputStream(path);
            byte[] buffer = new byte[1024];
            int bytesRead = 0;
            do {
                bytesRead = inputStream.read(buffer, offset, buffer.length);
                resp.getOutputStream().write(buffer, 0, bytesRead);
            } while (bytesRead == buffer.length);
            resp.getOutputStream().flush();
        } finally {
            if (inputStream != null) inputStream.close();
        }

    The FileInputStream doesn't seem to be causing the problem: if I write to another file or just remove the write completely, memory usage is fine. What I am thinking is that the data passed to resp.getOutputStream().write() is being stored in memory until it can be sent to the client, so the entire file might end up read and buffered in resp.getOutputStream(), causing my memory issues and the crash. I have tried buffering these streams and also tried using channels from java.nio, none of which made any difference to the memory problem. I have also flushed the output stream once per iteration of the loop and after the loop, which didn't help.

    Read the article
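
    Independent of the container's response buffering (which in most servlet containers is governed by the response buffer size), the copy loop itself is worth tightening: reading with a non-zero offset into a fresh buffer and stopping on bytesRead == buffer.length can both misbehave. A hedged sketch of the conventional streaming copy, flushing periodically so data is pushed out rather than accumulated:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public final class StreamCopy {
            /** Copies in to out in 8 KB chunks, flushing periodically so data is not held in memory. */
            public static void copy(InputStream in, OutputStream out) throws IOException {
                byte[] buffer = new byte[8192];
                long sinceFlush = 0;
                int bytesRead;
                while ((bytesRead = in.read(buffer)) != -1) {   // -1 is the only reliable end-of-stream signal
                    out.write(buffer, 0, bytesRead);
                    sinceFlush += bytesRead;
                    if (sinceFlush >= 1 << 20) {                // flush roughly every 1 MB
                        out.flush();
                        sinceFlush = 0;
                    }
                }
                out.flush();
            }
        }

    In the servlet this would be called as copy(inputStream, resp.getOutputStream()) inside the existing try/finally; whether flushed bytes actually leave the server promptly still depends on the container's own response buffering settings.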

  • How do I write this JPQL query?

    - by Nitesh Panchal
    Hello, say I have 5 tables: tblBlogs (BlogId, BlogTitle), tblBlogPosts (BlogPostsId, BlogId, PostTitle), tblBlogPostComment (BlogPostCommentId, CommentText, BlogPostsId, BlogMemberId), tblUser (UserId, FirstName) and tblBlogMember (BlogMemberId, UserId, BlogId). Now I want to retrieve only those blogs and posts on which the blog member has actually commented. In short, how do I write this plain old SQL:

        Select b.BlogTitle, bp.PostTitle, bpc.CommentText
        from tblBlogs b
        Inner Join tblBlogPosts bp on b.BlogId = bp.BlogId
        Inner Join tblBlogPostComment bpc on bp.BlogPostsId = bpc.BlogPostsId
        Inner Join tblBlogMember bm On bpc.BlogMemberId = bm.BlogMemberId
        Where bm.UserId = 1;

    As you can see, everything is an inner join, so only rows where the user has commented on some post of some blog are retrieved. Suppose he has joined 3 blogs with ids 1, 2 and 3 (the blogs a user has joined are in tblBlogMember) but has only commented in blog 2 (say on BlogPostId = 1); then only that row is retrieved, and blogs 1 and 3 are not, because it is an inner join. How do I write this kind of query in JPQL? So far I can only write simple JPQL queries like:

        Select bm.blogId from tblBlogMember bm Where bm.UserId = objUser

    where objUser is supplied using em.find(User.class, 1). Once I have all the blogs the user has joined (here blogId represents a Blog object), I could loop through them and do all the fancy things, but I don't want to fall into that looping business and write all of it in my Java code; I'd rather leave it to the database engine. So how do I translate the plain SQL above into JPQL, and what type of object will the query return? Since I am only selecting a few fields from several tables, which class should I cast the result to? I think I posted my requirement correctly; if I am not clear please let me know. Thanks in advance.

    Read the article
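
    A hedged sketch of what the JPQL typically looks like, assuming the tables are mapped to entities named Blog, BlogPost, BlogPostComment and BlogMember with the association properties shown in the comments (all of these names are illustrative and must match the actual @Entity mappings):

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.Query;

        public class BlogCommentQuery {
            // Rows of {blogTitle, postTitle, commentText} for posts this user commented on.
            @SuppressWarnings("unchecked")
            public List<Object[]> findCommentedPosts(EntityManager em, Object user) {
                Query q = em.createQuery(
                    "SELECT b.blogTitle, bp.postTitle, bpc.commentText "
                  + "FROM BlogPostComment bpc "
                  + "JOIN bpc.blogPost bp "      // assumed BlogPostComment -> BlogPost association
                  + "JOIN bp.blog b "            // assumed BlogPost -> Blog association
                  + "JOIN bpc.blogMember bm "    // assumed BlogPostComment -> BlogMember association
                  + "WHERE bm.user = :user");    // assumed BlogMember -> User association
                q.setParameter("user", user);
                return q.getResultList();
            }
        }

    Each element of the returned list is an Object[] holding the three selected values; a JPQL constructor expression (SELECT NEW ...) is the usual way to get a typed result class instead.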

  • Write simple data to the iPhone sandbox?

    - by fuzzygoat
    I want to write a small bit of data from my app to the iPhone so I can load it when the app next starts. I am going to write the data using NSCoding, but I don't know what I should specify as the path. I understand I should write the data to the application sandbox; I'm just not sure how to specify that. gary

    Read the article

  • .NET Single Line Logging (ala Trace.Write/WriteLine) using Instrumentation.Logging

    - by KnownColor
    Hello everyone, my question is whether it is possible to get the line/multi-line behaviour of the Trace.Write and Trace.WriteLine methods, but using the Microsoft Instrumentation Logging framework in .NET 2.0. Desired output:

        Hello World!
        Oh Hai.

    What I currently have:

        Trace.Write("Hello ");
        Trace.WriteLine("World!");
        Trace.Write("Oh Hai.");

    I would prefer to use instrumentation to log rather than writing to a log file using Debug.Trace. Edit: by Instrumentation Logging I mean using a 'loggingConfiguration' block in my App.config and writing log entries using Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write(LogEntry logEntry), with, for example, Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.FlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=2.0.0.0. Ta, KnownColor

    Read the article

  • Writing/Reading struct w/ dynamic array through pipe in C

    - by anrui
    I have a struct with a dynamic array inside it:

        struct mystruct{
            int count;
            int *arr;
        } mystruct_t;

    and I want to pass this struct down a pipe in C, around a ring of processes. When I alter the value of count in each process, it is changed correctly. My problem is with the dynamic array. I am allocating the array like this:

        mystruct_t x;
        x.arr = malloc( howManyItemsDoINeedToStore * sizeof( int ) );

    Each process should read from the pipe, do something to that array, and then write it to another pipe. The ring is set up correctly; there's no problem there. My problem is that all of the processes except the first one do not get a correct copy of the array. I initialize all of the values to, say, 10 in the first process, yet they all show up as 0 in the subsequent ones:

        for( j = 0; j < howManyItemsDoINeedToStore; j++ ){
            x.arr[j] = 10;
        }

        Initially:     10 10 10 10 10
        After Proc 1:   9 10 10 10 15
        After Proc 2:   0  0  0  0  0
        After Proc 3:   0  0  0  0  0
        After Proc 4:   0  0  0  0  0
        After Proc 5:   0  0  0  0  0
        After Proc 1:   9 10 10 10 15
        After Proc 2:   0  0  0  0  0
        After Proc 3:   0  0  0  0  0
        After Proc 4:   0  0  0  0  0
        After Proc 5:   0  0  0  0  0

    Now, if I alter my code to

        struct mystruct{
            int count;
            int arr[10];
        } mystruct_t;

    everything is passed correctly down the pipe, no problem. I am using read and write in C:

        write( STDOUT_FILENO, &x, sizeof( mystruct_t ) );
        read( STDIN_FILENO, &x, sizeof( mystruct_t ) );

    Any help would be appreciated. Thanks in advance!

    Read the article
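
    The underlying issue is that writing the struct sends the pointer value, not the ints it points to, so the receiving process sees a meaningless address; the usual fix is to serialize the count followed by the elements themselves and rebuild the array on the other side. A hedged illustration of that length-prefixed framing, written in Java over an in-memory byte stream for brevity (the real fix in C is two write calls: one for count, one for count*sizeof(int) bytes of the array):

        import java.io.*;

        public class LengthPrefixedArrayDemo {
            static void writeArray(DataOutputStream out, int[] arr) throws IOException {
                out.writeInt(arr.length);          // send the count first...
                for (int v : arr) out.writeInt(v); // ...then every element, never a pointer
            }

            static int[] readArray(DataInputStream in) throws IOException {
                int count = in.readInt();
                int[] arr = new int[count];        // allocate on the receiving side
                for (int i = 0; i < count; i++) arr[i] = in.readInt();
                return arr;
            }

            public static void main(String[] args) throws IOException {
                ByteArrayOutputStream pipe = new ByteArrayOutputStream();  // stand-in for the real pipe
                writeArray(new DataOutputStream(pipe), new int[] {10, 10, 10, 10, 10});
                int[] received = readArray(new DataInputStream(new ByteArrayInputStream(pipe.toByteArray())));
                System.out.println(java.util.Arrays.toString(received));   // [10, 10, 10, 10, 10]
            }
        }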

  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project that sends serial data to control an animation of LED lights, which needs to stay in sync with a sound engine. There seems to be a large serial write buffer (OS X (POSIX) + FTDI-chipset USB serial device), so without manually restricting the transmission rate the animation system can get several seconds ahead of the serial transmission. Currently I'm manually restricting the serial write speed to the baud rate (8N1 = a 10-bit serial frame per 8 data bits, so at 19200 bps at most 1920 bytes per second), but the sound drifts out of sync over time: it starts fine, but after 10 minutes there's a noticeable (100ms+) lag between the sound and the lights. This is the code that restricts the serial write speed (called once per animation frame; 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;
            if ( bytes_written > 1024 )
            {
                // maintain baudrate
                float time_should_have_taken = (float(bytes_written)*10)/float(baudrate);
                float time_actually_took = baud_timer;
                // sleep if we have > 20ms lag between serial transmit and our write calls
                if ( time_should_have_taken-time_actually_took > 0.02f )
                {
                    float sleep_time = time_should_have_taken - time_actually_took;
                    int sleep_time_us = sleep_time*1000.0f*1000.0f;
                    //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 );
                    delayUs( sleep_time_us );
                    // subtract 128 bytes
                    bytes_written -= 128;
                    // subtract the time it should have taken to write 128 bytes
                    baud_timer -= (float(128)*10)/float(baudrate);
                }
            }
        }

    Clearly there's something wrong somewhere. A much better approach would be to determine the number of bytes currently in the transmit queue and try to keep that below a fixed threshold. Any advice appreciated.

    Read the article
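
    One likely source of the drift in that routine is the incremental bookkeeping: repeatedly subtracting 128 bytes and small float time slices accumulates rounding error. A hedged sketch of pacing against absolute totals instead: track total bytes written and total wall-clock time since the start, and sleep by exactly the difference (Java used here for illustration; the arithmetic is what matters):

        public class SerialPacer {
            private final double bytesPerSecond;
            private final long startNanos = System.nanoTime();
            private long totalBytesWritten = 0;

            public SerialPacer(int baudRate) {
                this.bytesPerSecond = baudRate / 10.0;   // 8N1: start + 8 data + stop = 10 bits per byte
            }

            /** Call after writing 'n' bytes; sleeps just long enough to match the line rate. */
            public void paceAfterWrite(int n) throws InterruptedException {
                totalBytesWritten += n;
                double shouldHaveTakenSec = totalBytesWritten / bytesPerSecond;
                double actuallyTookSec = (System.nanoTime() - startNanos) / 1e9;
                double aheadBySec = shouldHaveTakenSec - actuallyTookSec;
                if (aheadBySec > 0.02) {                  // only sleep when more than 20 ms ahead
                    Thread.sleep((long) (aheadBySec * 1000));
                }
            }
        }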

  • Cannot write to SD card -- canWrite is returning false

    - by Fizz
    Sorry for the ambiguous title, but I'm doing the following to write a simple string to a file:

        try {
            File root = Environment.getExternalStorageDirectory();
            if (root.canWrite()){
                System.out.println("Can write.");
                File def_file = new File(root, "default.txt");
                FileWriter fw = new FileWriter(def_file);
                BufferedWriter out = new BufferedWriter(fw);
                String defbuf = "default";
                out.write(defbuf);
                out.flush();
                out.close();
            } else
                System.out.println("Can't write.");
        } catch (IOException e) {
            e.printStackTrace();
        }

    But root.canWrite() seems to return false every time. I am not running this on an emulator; my Android Eris is plugged into my computer via USB and I'm running the app on the phone via Eclipse. Is there a way of giving my app permission so this doesn't happen? Also, this code seems to create the file default.txt, but what if it already exists? Will the creation be skipped and the file just opened for writing, or do I have to catch something like FileAlreadyExists (if such an exception exists) and then open and write it myself? Thanks for any help, guys.

    Read the article
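
    Two things commonly cause this on devices of that era: the manifest is missing the external-storage write permission, and a phone mounted to the PC as USB mass storage takes the SD card away from the running app, so it should be unmounted from the computer (or USB debugging used without mounting) while testing. A hedged sketch combining the permission (shown as a comment, since it lives in AndroidManifest.xml) with a mounted-state check; note also that FileWriter truncates an existing file unless the append constructor is used:

        // In AndroidManifest.xml (outside this class):
        //   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

        import java.io.BufferedWriter;
        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;
        import android.os.Environment;

        public class SdCardWriter {
            public static void appendLine(String text) throws IOException {
                // The card is only writable when it is mounted on the device, not on the PC.
                if (!Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())) {
                    throw new IOException("External storage not mounted for writing");
                }
                File file = new File(Environment.getExternalStorageDirectory(), "default.txt");
                // 'true' = append: an existing file is kept and extended rather than truncated.
                BufferedWriter out = new BufferedWriter(new FileWriter(file, true));
                try {
                    out.write(text);
                    out.newLine();
                } finally {
                    out.close();
                }
            }
        }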

  • How to check the read/write status of storage media in Python

    - by mukul sharma
    Hi all, how can I check the read/write permission of the media a file is stored on? For example, assume I have to write a file inside a directory, and that directory may be on read-only media (like a CD or DVD). How can I check whether the storage media (CD, hard disk, etc.) is read-only or allows both reading and writing? I am using the Windows XP OS. Thanks.

    Read the article

  • Read and write the same file from different processes

    - by muruga
    I have written two programs. One program continuously writes content to a text file; the other program reads that content at the same time, and both programs run simultaneously. The writing program works correctly, but the reading program cannot read the file. I know that normally the data only reaches the file once the writing process has finished, and only then can another process read it. But I want to read and write the same single file from two different processes at the same time. How can I do that? Please help me.

    Read the article
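
    Assuming the programs are Java (the question doesn't say), the usual pattern is: the writer flushes after every record, and the reader keeps the file open and polls for bytes appended past its last read position, tail -f style. A hedged sketch of such a reader:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class FileTailer {
            // Follows 'path', printing lines as another process appends them (the writer must flush).
            public static void follow(String path) throws IOException, InterruptedException {
                long position = 0;
                RandomAccessFile file = new RandomAccessFile(path, "r");
                try {
                    while (true) {
                        if (file.length() > position) {       // new data has been appended
                            file.seek(position);
                            String line;
                            while ((line = file.readLine()) != null) {
                                System.out.println(line);
                            }
                            position = file.getFilePointer(); // remember where we stopped
                        } else {
                            Thread.sleep(200);                // nothing new yet; poll again shortly
                        }
                    }
                } finally {
                    file.close();
                }
            }
        }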

  • Abnormally disconnected TCP sockets and write timeout

    - by James
    Hello, I will try to explain the problem in the fewest possible words. I am using C++ Builder 2010 with TIdTCPServer, sending voice packets to a list of connected clients. Everything works fine until a client is disconnected abnormally, for example by a power failure; I can reproduce a similar disconnect by cutting the Ethernet connection of a connected client. So now we have a disconnected socket, but as you know it is not yet detected on the server side, so the server keeps trying to send data to that client too. When the server tries to write data to that disconnected client, Write() or WriteLn() hangs there, as if waiting for some kind of write timeout. This stalls the whole packet-distribution process, creating a lag in the data transmission to all the other clients. After a few seconds a "Socket Connection Closed" exception is raised and the data flow continues. Here is the code:

        try
        {
            EnterCriticalSection(&SlotListenersCriticalSection);
            for(int i=0;i<SlotListeners->Count;i++)
            {
                try
                {
                    //Here the process will HANG for several seconds on a disconnected socket
                    ((TIdContext*) SlotListeners->Objects[i])->Connection->IOHandler->WriteLn("Some DATA");
                }catch(Exception &e)
                {
                    SlotListeners->Delete(i);
                }
            }
        }__finally
        {
            LeaveCriticalSection(&SlotListenersCriticalSection);
        }

    I already have a keep-alive mechanism that disconnects a socket after n seconds of inactivity, but as you can imagine it can't sync exactly with this broadcasting loop, because the broadcasting loop is running almost all the time. So is there some write timeout I can specify, maybe through the IOHandler or something? I have seen many threads about detecting disconnected TCP sockets, but my problem is a little different: I need to avoid that hang of a few seconds during the write attempt. Is there a solution? Or should I consider a different mechanism for this kind of broadcasting, for example having the broadcasting loop put each data packet into some kind of FIFO buffer while client threads continuously check for available data and deliver it to themselves? That way, if one thread hangs it will not stop or delay the overall distribution. Any ideas? Thanks for your time and help. Regards, Jams

    Read the article

  • Extra new lines with several outputStream.write calls

    - by Sam
    Hi all, I am writing a JSP to export data in Excel format to the user; an Excel file is received on the client side. However, since there is a large amount of data, I don't want to keep it all in server memory and write it at the end, so I try to divide it up and write several times. However, each extra write(..) causes extra new lines at the top of the Excel worksheet, and the extra data is placed after those new lines. Does anyone know the reason? The code is something like this:

        response.setHeader("Content-disposition","attachment;filename=DocuShareSearch.xls");
        response.setHeader("Content-Type", "application/octet-stream");
        responseContent = "<table><tr><td>12131</td></tr>.......";
        byte[] responseByte1 = responseContent.getBytes("utf-16");
        outputStream.write(responseByte1, 0, responseByte1.length );
        responseContent = ".....<tr><td>12131</td></tr></table>";
        byte[] responseByte2 = responseContent.getBytes("utf-16");
        outputStream.write(responseByte2, 0, responseByte2.length );
        outputStream.close();

    Read the article
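
    A likely culprit: in Java, getBytes("utf-16") prepends a byte-order mark to every chunk it encodes, so each extra write drops two stray bytes into the middle of the output, which Excel tends to render as blank cells or blank lines. A hedged sketch of writing all the chunks through a single OutputStreamWriter, so the charset is applied once to one continuous stream (encoding with "UTF-16LE" or "UTF-16BE" instead avoids the BOM entirely):

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.io.OutputStreamWriter;

        public class ChunkedExport {
            // One continuous UTF-16 stream: at most a single BOM at the start, no stray bytes between chunks.
            public static void export(OutputStream outputStream, Iterable<String> htmlChunks) throws IOException {
                BufferedWriter out = new BufferedWriter(new OutputStreamWriter(outputStream, "UTF-16"));
                for (String chunk : htmlChunks) {
                    out.write(chunk);
                    out.flush();      // push each chunk to the client instead of holding it in memory
                }
                out.close();
            }
        }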

  • How often should we write unit tests?

    - by Midnight Blue
    Hi, I was recently introduced to the test-driven approach to development by my mentor at work, and he encourages me to write a unit test whenever "it makes sense." I understand some of the benefits of having a thorough unit-test suite for both regression testing and refactoring, but I do wonder how often and how thoroughly we should write unit tests. My mentor/development lead asks me to write a new unit test case for a newly written control flow in a method that is already covered by the existing test class, and I think that is overkill. How often do you write your unit tests, and how detailed do you think your unit tests should be? Thanks!

    Read the article

  • Java: file write on finalize method

    - by sowrov
    In my understanding, a singleton object is destroyed only when the application is about to terminate. So in C++ I wrote a singleton class to log my application, and in that singleton logger's destructor I log the time when the application terminated. That worked perfectly in C++. Now I want the same logger in Java, and since Java has no destructors I implemented the finalize method on that singleton logger. But it seems the finalize method never actually gets called. So I added System.runFinalizersOnExit(true) somewhere in my code (though I know it is deprecated), and now the finalize method does get called every time before the app terminates. But there is still a problem: if I try to write anything to a file in that finalize method, it does not work, although System.out works without any problem. Can you help me with this? Here is a sample of what I am trying to do.

    Singleton logger class:

        public class MyLogger {
            FileWriter writer;

            private MyLogger() {
                try {
                    this.writer = new FileWriter("log.txt");
                } catch (IOException ex) {
                }
            }

            public static MyLogger getInstance() {
                return MyLoggerHolder.INSTANCE;
            }

            private static class MyLoggerHolder {
                private static final MyLogger INSTANCE = new MyLogger();
            }

            @Override
            protected void finalize() {
                try {
                    super.finalize();
                    System.out.println("Here"); // worked correctly.
                    this.writer.write(new Date().toString()+System.getProperty("line.separator"));
                    this.writer.write("End");
                    this.writer.flush(); // does not work!
                    this.writer.close();
                } catch (Throwable ex) {
                }
            }

            public synchronized void log(String str) {
                try {
                    this.writer.write(new Date().toString()+System.getProperty("line.separator"));
                    this.writer.write(str+"\n");
                    this.writer.flush();
                } catch (IOException ex) {
                }
            }
        }

    Main:

        public class Main {
            public static void main(String[] args) {
                System.runFinalizersOnExit(true);
                MyLogger logger = MyLogger.getInstance();
                logger.log("test");
            }
        }

    Read the article
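
    Finalizers are a poor fit for "run this at exit" logic: the JVM gives no guarantee they run, and by the time they do, other objects (like the FileWriter) may already have been finalized and closed. A hedged sketch of the usual alternative, a shutdown hook registered by the singleton, which runs on normal JVM termination:

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.Date;

        public final class ShutdownLogger {
            private static final ShutdownLogger INSTANCE = new ShutdownLogger();
            private final PrintWriter writer;

            private ShutdownLogger() {
                try {
                    writer = new PrintWriter(new FileWriter("log.txt"), true); // autoflush on println
                } catch (IOException ex) {
                    throw new RuntimeException("cannot open log file", ex);
                }
                // Runs on normal termination (System.exit or the last non-daemon thread ending).
                Runtime.getRuntime().addShutdownHook(new Thread() {
                    public void run() {
                        writer.println("End " + new Date());
                        writer.close();
                    }
                });
            }

            public static ShutdownLogger getInstance() { return INSTANCE; }

            public synchronized void log(String msg) {
                writer.println(new Date() + " " + msg);
            }
        }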

  • How to buffer stdout in memory and write it from a dedicated thread

    - by NickB
    I have a C application with many worker threads. It is essential that these do not block, so where the worker threads need to write to a file on disk, I have them write to a circular buffer in memory, and a dedicated thread writes that buffer to disk. The worker threads no longer block; the dedicated thread can safely block while writing to disk without affecting them (it does not hold a lock while writing). My memory buffer is tuned to be sufficiently large that the writer thread can keep up. This all works great. My question is: how do I implement something similar for stdout? I could wrap printf() in a macro that writes into a memory buffer, but I don't have control over all the code that might write to stdout (some of it is in third-party libraries). Thoughts? NickB

    Read the article
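
    The same trick can be applied to the output stream itself rather than to the printf call sites: repoint the process-wide stdout at something that only enqueues, and let the dedicated thread do the real blocking writes (in C the analogous move is to point the stdout file descriptor at a pipe and have the writer thread drain that pipe). A hedged sketch of the idea in Java, where the global stream can be swapped with System.setOut:

        import java.io.OutputStream;
        import java.io.PrintStream;
        import java.util.Arrays;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class NonBlockingStdout {
            public static void install() {
                final BlockingQueue<byte[]> buffer = new LinkedBlockingQueue<byte[]>(65536);
                final PrintStream realOut = System.out;

                // Dedicated drain thread: the only code that touches the real, possibly blocking stream.
                Thread drain = new Thread(new Runnable() {
                    public void run() {
                        try {
                            while (true) {
                                byte[] chunk = buffer.take();
                                realOut.write(chunk, 0, chunk.length);
                            }
                        } catch (InterruptedException stop) {
                            realOut.flush();
                        }
                    }
                }, "stdout-drain");
                drain.setDaemon(true);
                drain.start();

                // Everything else (including third-party code) now writes into the in-memory queue.
                System.setOut(new PrintStream(new OutputStream() {
                    @Override public void write(int b) {
                        buffer.offer(new byte[] { (byte) b });          // drop rather than block when full
                    }
                    @Override public void write(byte[] b, int off, int len) {
                        buffer.offer(Arrays.copyOfRange(b, off, off + len));
                    }
                }, true));
            }
        }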

  • Rapid Opening and Closing System.IO.StreamWriter in C#

    - by ccomet
    Suppose you have a file into which you are programmatically logging information about a process: much like typical Console.WriteLine debugging, but due to the nature of the code you're testing you don't have a console to write to, so you have to write somewhere like a file. My current program uses System.IO.StreamWriter for this task. My question is about the approach to using the StreamWriter. Is it better to open just one StreamWriter instance, do all of the writes, and close it when the entire process is done? Or is it better to open a new StreamWriter instance to write a line into the file, immediately close it, and do this every time something needs to be written? In the latter approach this would probably be facilitated by a method that does just that for a given message, rather than bloating the main process code with excessive numbers of lines; but having a method to aid that implementation doesn't necessarily make it the better choice. Are there significant advantages to picking one approach or the other, or are they functionally equivalent, leaving the choice on the shoulders of the programmer?

    Read the article
