Search Results

Search found 30085 results on 1204 pages for 'read only'.

Page 220/1204

  • double_t in C99

    - by yCalleecharan
    Hi, I just read that C99 has double_t, which should be at least as wide as double. Does this imply that it gives more precision digits after the decimal place? More than the usual 15 digits for double? Secondly, how do I use it: is including #include <math.h> enough? I read that one has to set FLT_EVAL_METHOD to 2 for long double. How do I do this? As I work with numerical methods, I would like maximum precision without using an arbitrary-precision library. Thanks a lot...

    Read the article

  • Using Multiple File Handles for Single File

    - by Ryan Rosario
    I have an O(n^2) operation that requires me to read line i from a file and then compare line i to every line in the file. This repeats for all i. I wrote the following code to do this with two file handles, but it does not yield the result I am looking for. I imagine this is a simple error on my part.

        IN1 = open("myfile.dat", "r")
        IN2 = open("myfile.dat", "r")
        for line1 in IN1:
            for line2 in IN2:
                print line1.strip(), line2.strip()
        IN1.close()
        IN2.close()

    The result:

        Hello Hello
        Hello World
        Hello This
        Hello is
        Hello an
        Hello Example
        Hello of
        Hello Using
        Hello Two
        Hello File
        Hello Pointers
        Hello to
        Hello Read
        Hello One
        Hello File

    The output should contain 15^2 lines.
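
    A likely cause is that the inner handle is exhausted after the first pass: once IN2 hits end-of-file, the inner loop never runs again. A minimal sketch of one fix (Python 3 syntax, file name illustrative) is to rewind the second handle before each inner pass:

        with open("myfile.dat") as f1, open("myfile.dat") as f2:
            for line1 in f1:
                f2.seek(0)  # rewind so the inner loop sees every line again
                for line2 in f2:
                    print(line1.strip(), line2.strip())

    For 15 input lines this prints the expected 15^2 = 225 pairs; for small files, reading all lines into a list once and looping over the list twice avoids the reopen/seek entirely.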

    Read the article

  • How Should I Print Documentation from Google Code?

    - by peter.newhook
    Google does a decent job of documenting their APIs (like Closure, http://code.google.com/closure/compiler/docs/overview.html), but I find them hard to read because they're broken into such short pages. I like to leaf through my docs and read them on paper. Has anyone found a good way to print the documentation on Google Code? It could be a PDF, or even just a long page with lots of content. Please note, I'm not talking about the wikis on the open-source side of Google Code; I'm referring to the API docs published by Google.

    Read the article

  • shutil.rmtree fails on Windows with 'Access is denied'

    - by Sridhar Ratnakumar
    In Python, when running shutil.rmtree over a folder that contains a read-only file, the following exception is printed:

        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
          rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 221, in rmtree
          onerror(os.remove, fullname, sys.exc_info())
        File "C:\ActivePython32Python26\lib\shutil.py", line 219, in rmtree
          os.remove(fullname)
        WindowsError: [Error 5] Access is denied: 'build\\pyhg_trunk-win32-x86-hgtip27\\image\\feature-core\\INSTALLDIR\\tcl\\tcl8.5\\msgs\\af.msg'

    Looking in the File Properties dialog, I noticed that the af.msg file is set to read-only. So the question is: what is the simplest workaround/fix for this problem, given that my intention is to do the equivalent of rm -rf build/ but on Windows (without having to use unxutils or cygwin)?
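
    A common workaround, sketched below: give rmtree an onerror callback that clears the read-only attribute and retries the failed operation. (The directory name is illustrative; on Python 3.12+ the hook is spelled onexc, but onerror works on older versions.)

        import os
        import stat
        import shutil

        def handle_remove_readonly(func, path, exc_info):
            # Clear the read-only bit, then retry the operation that just failed.
            os.chmod(path, stat.S_IWRITE)
            func(path)

        shutil.rmtree("build", onerror=handle_remove_readonly)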

    Read the article

  • Experiences with "language converters"?

    - by Friedrich
    I have read a few articles mentioning converters from one language to another. I'm more than a bit skeptical about the usefulness of such tools. Does anyone have experience with, say, Visual Basic to Java converters or the like? Just one example to pick: http://www.tvobjects.com/products/products.html claims to be the "world leader" or so in this area. However, if you read this: http://dev.mysql.com/tech-resources/articles/active-grid.html, the author states: "The consensus of MySQL users is that automated conversion tools for MS Access do not work. For example, tools that translate existing Access applications to Java often result in 80% complete solutions where finishing the last 20% of the work takes longer than starting from scratch." Well, we know we need 80% of the time to implement the first 80% of the functionality and another 80% of the time for the other 20%... So, has anyone tried such tools and found them to be worthwhile?

    Read the article

  • How do I memoize expensive calculations on Django model objects?

    - by David Eyk
    I have several TextField columns on my UserProfile object which contain JSON objects. I've also defined a setter/getter property for each column which encapsulates the logic for serializing and deserializing the JSON into Python data structures. The nature of this data ensures that it will be accessed many times by view and template logic within a single request. To save on deserialization costs, I would like to memoize the Python data structures on read, invalidating on direct write to the property or on a save signal from the model object.

    Where/how do I store the memo? I'm nervous about using instance variables, as I don't understand the magic behind how any particular UserProfile is instantiated by a query. Is __init__ safe to use, or do I need to check the existence of the memo attribute via hasattr() at each read?

    Here's an example of my current implementation:

        class UserProfile(Model):
            text_json = models.TextField(default=text_defaults)

            @property
            def text(self):
                if not hasattr(self, "text_memo"):
                    self.text_memo = None
                self.text_memo = self.text_memo or simplejson.loads(self.text_json)
                return self.text_memo

            @text.setter
            def text(self, value=None):
                self.text_memo = None
                self.text_json = simplejson.dumps(value)
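
    One way to sidestep the __init__ question entirely is to read the memo with getattr() and a default, so nothing has to be set up at construction time. A minimal plain-Python sketch of that pattern (not a Django model; class and field names are illustrative, and a real model would also clear the memo from a post-save handler):

        import json

        class Profile:
            def __init__(self, text_json="{}"):
                self.text_json = text_json

            @property
            def text(self):
                # getattr with a default avoids caring how the instance was built
                memo = getattr(self, "_text_memo", None)
                if memo is None:
                    memo = json.loads(self.text_json)
                    self._text_memo = memo   # parsed at most once per instance
                return memo

            @text.setter
            def text(self, value):
                self._text_memo = None       # invalidate on direct write
                self.text_json = json.dumps(value)

        p = Profile('{"greeting": "hello"}')
        p.text        # deserializes the JSON
        p.text        # served from the memo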

    Read the article

  • [Java] Flood fill using a stack

    - by dafero
    Hello everyone :) I am using the recursive flood fill algorithm in Java to fill some areas of an image. With very small images it works fine, but when the image becomes larger the JVM gives me a StackOverflowError. That's the reason why I have to reimplement the method as a flood fill with my own stack. (I read that's the best way to do it in this kind of case.) Can anyone explain how to code it? (If you don't have the code at hand, the pseudo-code of the algorithm will be fine.) I've read a lot on the Internet but I haven't understood it very well. Thanks!
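
    A language-neutral sketch of the iterative version, written here in Python: the explicit stack simply holds the pixels that the recursive version would have recursed into, so translating it to Java with a Deque of coordinates is mostly mechanical.

        def flood_fill(image, x, y, new_color):
            old_color = image[y][x]
            if old_color == new_color:
                return
            stack = [(x, y)]
            while stack:
                cx, cy = stack.pop()
                # skip pixels outside the image or not part of the region
                if not (0 <= cy < len(image) and 0 <= cx < len(image[0])):
                    continue
                if image[cy][cx] != old_color:
                    continue
                image[cy][cx] = new_color
                # push the four neighbours instead of recursing into them
                stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

        img = [[0, 0, 1],
               [0, 1, 1],
               [0, 0, 0]]
        flood_fill(img, 0, 0, 7)   # fills the connected region of 0s around (0, 0)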

    Read the article

  • Can't get max_post_size PHP variable set in Lunar Pages

    - by Behrooz Karjooravary
    I need to increase the max post size and upload size for PHP in order to use the audio module of Drupal. I read this has to be set in php.ini; however, I don't think I have access to that file on Lunar Pages. I also read it can be set in .htaccess, but that doesn't change anything. I tried:

        php_value post_max_size "40M"
        php_value upload_max_filesize "40M"

    I also tried:

        php_value post_max_size 40M
        php_value upload_max_filesize 40M

    On localhost it says to restart the web server, but this is not possible on a shared host. Could that be the problem?

    Read the article

  • Get last n lines of a file with Python, similar to tail

    - by Armin Ronacher
    I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item at the bottom. So I need a tail() method that can read n lines from the bottom and supports an offset. What I came up with looks like this:

        def tail(f, n, offset=0):
            """Reads a n lines from f with an offset of offset lines."""
            avg_line_length = 74
            to_read = n + offset
            while 1:
                try:
                    f.seek(-(avg_line_length * to_read), 2)
                except IOError:
                    # woops. apparently file is smaller than what we want
                    # to step back, go to the beginning instead
                    f.seek(0)
                pos = f.tell()
                lines = f.read().splitlines()
                if len(lines) >= to_read or pos == 0:
                    return lines[-to_read:offset and -offset or None]
                avg_line_length *= 1.3

    Is this a reasonable approach? What is the recommended way to tail log files with offsets?
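
    For moderate files there is a much shorter alternative worth knowing, sketched below: stream the file once through a bounded deque, which keeps only the final n + offset lines in memory. It is O(file size) rather than seek-from-the-end, so it trades speed on huge logs for simplicity.

        from collections import deque

        def tail(path, n, offset=0):
            """Return the n lines that sit just above the last `offset` lines."""
            with open(path) as f:
                last = deque(f, maxlen=n + offset)   # only the tail is retained
            return [line.rstrip("\n") for line in list(last)[:n]]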

    Read the article

  • FoxPro to C#: What best method between ODBC, OLE DB or another?

    - by Martin Labelle
    We need to read data from FoxPro 8 with C#. I'm going to do some operations and will push some of that data to a SQL Server database. We are not sure what the best method is to read the data. I have seen OLE DB and ODBC; which is best? Requirements:

    1. The export program will run each night, but my company runs 24h a day.
    2. The DBF files can sometimes be huge.
    3. We DON'T need to modify the data.
    4. Our system, which uses FoxPro, is quite unstable: I need a way that ABSOLUTELY does not corrupt data and, ideally, does not lock the DBF files while reading.
    5. Speed is a minor requirement: it must be quick, but requirement #4 is most important.

    Read the article

  • Processing incoming email

    - by Mike Schots
    How do I programmatically read an incoming email with .NET? I need a method to take the content of an email message (in this case XML) on a POP server and read it into my application. Ideally this could be solved by:

    1. .NET code that I can run as a background task on my web server to process the email.
    2. A service that can POST the contents of the email to my web server.

    Although I'm open to other options. EDIT: Copied from the "non-answer": Currently, we're looking at using the email accounts our hosting company provides. Our own mail server could potentially be an option, but it's something we'd need to contact the host about.

    Read the article

  • Unable to retrieve the complete description string of the event log record

    - by Santosh Pillai
    Hi all, I have an MFC application that reads and displays event log records using the ::ReadEventLog() API. The problem is with reading the "Description" message string of an event log record: the MFC application is unable to read the complete "Description" string and displays only part of it, whereas the Windows Event Viewer reads and displays the complete string correctly. I have ensured that my application reads the entire "Description" string by retrieving all the strings indicated by the "NumStrings" and "StringOffset" members of the EVENTLOGRECORD structure and merging them. Also, as mentioned in MSDN, my application loads the source-specific message library file (whose path is specified in the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\[SourceName]) that contains additional message string information and merges it with the strings read earlier. I am still unable to get the entire "Description" message string. Please provide any help towards resolving the issue. Regards, Santosh.

    Read the article

  • Java: how to get a string representation of a compressed byte array?

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that takes the name of the resource and its content as a String (like data.txt + "hello world"). The repository is mocking a filesystem but is not one, so I cannot use File directly. I want to be able to do the following:

    1. client sends a file 'data.txt' to the server
    2. server compresses 'data.txt' into a compressed file 'data.zip'
    3. server sends a string representation of data.zip to the repository
    4. repository stores data.zip
    5. client downloads data.zip from the repository and is able to open it with its favorite zip tool

    The problem arises at step 3, when I try to get a string representation of my compressed file. Here is a sample class, using the zip*stream classes, that emulates the repository and showcases my problem. The created zip file works, but after its 'serialization' it gets corrupted. (The sample class uses jakarta commons-io.) Many thanks for your help.

        package zip;

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;
        import java.util.zip.ZipOutputStream;

        import org.apache.commons.io.FileUtils;

        /**
         * Date: May 19, 2010 - 6:13:07 PM
         *
         * @author Guillaume AME.
         */
        public class ZipMe {

            public static void addOrUpdate(File zipFile, File... files) throws IOException {
                File tempFile = File.createTempFile(zipFile.getName(), null);
                // delete it, otherwise you cannot rename your existing zip to it.
                tempFile.delete();
                boolean renameOk = zipFile.renameTo(tempFile);
                if (!renameOk) {
                    throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath()
                            + " to " + tempFile.getAbsolutePath());
                }
                byte[] buf = new byte[1024];
                ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile));
                ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile));
                ZipEntry entry = zin.getNextEntry();
                while (entry != null) {
                    String name = entry.getName();
                    boolean notInFiles = true;
                    for (File f : files) {
                        if (f.getName().equals(name)) {
                            notInFiles = false;
                            break;
                        }
                    }
                    if (notInFiles) {
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(name));
                        // Transfer bytes from the ZIP file to the output file
                        int len;
                        while ((len = zin.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                    }
                    entry = zin.getNextEntry();
                }
                // Close the streams
                zin.close();
                // Compress the files
                if (files != null) {
                    for (File file : files) {
                        InputStream in = new FileInputStream(file);
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(file.getName()));
                        // Transfer bytes from the file to the ZIP file
                        int len;
                        while ((len = in.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                        // Complete the entry
                        out.closeEntry();
                        in.close();
                    }
                    // Complete the ZIP file
                }
                tempFile.delete();
                out.close();
            }

            public static void main(String[] args) throws IOException {
                final String zipArchivePath = "c:/temp/archive.zip";
                final String tempFilePath = "c:/temp/data.txt";
                final String resultZipFile = "c:/temp/resultingArchive.zip";

                File zipArchive = new File(zipArchivePath);
                FileUtils.touch(zipArchive);
                File tempFile = new File(tempFilePath);
                FileUtils.writeStringToFile(tempFile, "hello world");
                addOrUpdate(zipArchive, tempFile);
                // archive.zip exists and contains a compressed data.txt that can be read using winrar

                // now simulate writing of the zip into an in-memory cache
                String archiveText = FileUtils.readFileToString(zipArchive);
                FileUtils.writeStringToFile(new File(resultZipFile), archiveText);
                // resultingArchive.zip exists and contains a compressed data.txt, but it can not
                // be read using winrar: CRC failed in data.txt. The file is corrupt
            }
        }

    Read the article

  • Dealing with UIImagePickerController to minimize memory useage

    - by Gordon Fontenot
    So, I have read the SO post on UIImagePickerController, UIImage, Memory and More, and I read the post on Memory Leak Problems with UIImagePickerController in iPhone. I have VASTLY increased my memory efficiency between these two posts, and I thank the OPs and the people that provided the answers. I just have a question on the answer provided in the memory leak question, which was (essentially): only have one instance of the controller throughout the program's runtime. What would be the best way to go about this without causing memory leaks? Right now I am creating it and releasing it on every use from within the view, and I am seeing exactly what the answer describes (memory warnings and a crash after about 20 uses). Should I create the UIImagePickerController when I need it, but use a separate class unrelated to the view to control it? How should I deal with releasing the controller if I do it this way?

    Read the article

  • Domain Driven Design And The Entity Framework

    - by Hossein
    Hi, I'm new to DDD and I want to use Entity Framework v4.0 (shipped with .NET 4.0) in my new project. Since I have little time to learn DDD and Entity Framework, which books are good for me to read first? I'm going to read Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans first, and next Pro Entity Framework 4.0 (Apress, Scott Klein)... The problem here is that Eric Evans's book is quite abstract; I want to know whether Pro Entity Framework 4.0 is complete enough that I can just skip the first book. Finally, please recommend some good books on DDD and Entity Framework. Thanks

    Read the article

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations on audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox', '-t', 'mp3', '-', 'test.mp3', 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3', 'rb').read())
        if errors:
            raise RuntimeError(errors)

    This will cause problems on large files however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox', 'test.mp3', tmp, 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?
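
    One middle ground worth noting, sketched here with illustrative file names: hand the open file object to Popen as stdin, so the operating system streams the audio into sox without the whole file ever being read into Python.

        from subprocess import Popen, PIPE

        with open('test.mp3', 'rb') as src:
            # sox reads its input from stdin ('-'), which is wired to the file
            pipe = Popen(['sox', '-t', 'mp3', '-', 'out.mp3', 'trim', '0', '15'],
                         stdin=src, stdout=PIPE, stderr=PIPE)
            output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)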

    Read the article

  • Printing elements out of list

    - by chavanak
    Hi, I have a certain check to be done and, if the check is satisfied, I want the result to be printed. Below is the code:

        import string
        import codecs
        import sys

        y = sys.argv[1]
        list_1 = []
        f = 1.0
        x = 0.05
        write_in = open("new_file.txt", "w")
        write_in_1 = open("new_file_1.txt", "w")

        ligand_file = open(y, "r")                          # Open the receptor.txt file
        ligand_lines = ligand_file.readlines()              # Read all the lines into the array
        ligand_lines = map(string.strip, ligand_lines)      # Remove the newline character from all the pdb file names
        ligand_file.close()

        ligand_file = open("unique_count_c_from_ac.txt", "r")   # Open the receptor.txt file
        ligand_lines_1 = ligand_file.readlines()                # Read all the lines into the array
        ligand_lines_1 = map(string.strip, ligand_lines_1)      # Remove the newline character from all the pdb file names
        ligand_file.close()

        s = []
        for i in ligand_lines:
            for j in ligand_lines_1:
                j = j.split()
                if i == j[1]:
                    print j

    The above code works great, but when I print j it prints like ['351', '342'], whereas I am expecting to get 351 342 (with one space in between). Since it is more of a Python question, I have not included the input files (basically they are just numbers). Can anyone help me? Cheers, Chavanak
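
    The usual fix, as a small sketch: join the fields with a space before printing instead of printing the list object itself (print-function syntax shown; with Python 2's print statement the join expression is the same).

        row = ['351', '342']
        print(" ".join(row))    # -> 351 342

    In the loop above that would be print " ".join(j) in place of print j.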

    Read the article

  • Processing potentially large STDIN data, more than once

    - by d11wtq
    I'd like to provide an accessor on a class that provides an NSInputStream for STDIN, which may be several hundred megabytes (or gigabytes, though unlikely) of data. When a caller gets this NSInputStream it should be able to read from it without worrying about exhausting the data it contains. In other words, another block of code may request the NSInputStream and will expect to be able to read from it. Without first copying all of the data into an NSData object, which (I assume) would cause memory exhaustion, what are my options for handling this? The returned NSInputStream does not have to be the same instance; it simply needs to provide the same data. The best I can come up with right now is to copy STDIN to a temporary file and then return NSInputStream instances backed by that file. Is this pretty much the only way to handle it? Is there anything I should be cautious of if I go the temporary-file route?

    Read the article

  • Outlook VSTO AddIn Configuration

    - by Deepak N
    I'm working on a VSTO add-in for Outlook 2003. Outlook can read the startup section from Outlook.exe.config:

        <startup>
          <supportedRuntime version="v1.0.3705" />
          <supportedRuntime version="v1.1.4322" />
          <supportedRuntime version="v2.0.50727" />
        </startup>

    But it is not able to read the system.diagnostics section of the config file. Basically I'm trying to add trace listeners, as I have explained here. Am I missing anything here?

    Read the article

  • How can I install ruby on rails with rvm?

    - by yeahthatguyrightthere
    I've been searching around for how to install it through the terminal on my Mac. I'm using Snow Leopard. When I use the command:

        rvm install 1.9.3

    (I've also followed the other procedures that led me up to this install; right now the current version is 1.8.3) I get:

        Error running './configure --prefix="/Users/jose/.rvm/usr" ',
        please read /Users/jose/.rvm/log/ruby-1.9.3-p125/yaml/configure.log

    Then it mentions something about Xcode and that autoreconf was not found in the PATH:

        Error running 'patch -F 25 -p1 -N -f <"/Users/jose/.rvm/patches/ruby/1.9.3/p125/xcode-debugopt-fix-e34840.diff"',
        please read /Users/jose/.rvm/log/ruby-1.9.3-p125/patch.apply.xcode-debugopt-fix-r34840.log
        rvm requires autoreconf to install the selected ruby interpreter
        however autoreconf was not found in the PATH

    I've been trying for a while now, and found out I need Xcode for Snow Leopard, which I cannot find. So my last option will be to upgrade to Lion, but I don't know about upgrading. I'm kind of scared to upgrade and have everything become buggy.

    Read the article

  • python duration of a file object in an argument list

    - by msw
    In the pickle module documentation there is a snippet of example code:

        reader = pickle.load(open('save.p', 'rb'))

    which upon first read looked like it would allocate a system file descriptor, read its contents and then "leak" the open descriptor, since there isn't any handle accessible to call close() upon. This got me wondering if there was any hidden magic that takes care of this case. Diving into the source, I found in Modules/_fileio.c that file descriptors are closed by the fileio_dealloc() destructor, which led to the real question. What is the duration of the file object returned by the example code above? After that statement executes, does the object indeed become unreferenced, and will the fd therefore be subject to a real close(2) call at some future garbage collection sweep? If so, is the example line good practice, or should one not count on the fd being released, thus risking kernel per-process descriptor table exhaustion?
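
    For reference, a short sketch of the style that avoids the question altogether: scope the handle explicitly so the descriptor is closed deterministically rather than whenever the anonymous file object happens to be collected.

        import pickle

        with open('save.p', 'rb') as f:   # descriptor closed when the block exits
            reader = pickle.load(f)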

    Read the article

  • Reading Binary data from a Serial Port.

    - by rross
    I previously have been reading NMEA data from a GPS via a serial port using C#. Now I'm doing something similar, but instead of GPS data from a serial port I'm attempting to read a KISS statement from a TNC. I'm using this event handler:

        comport.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);

    Here is port_DataReceived:

        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            string data = comport.ReadExisting();
            sBuffer = data;
            try
            {
                this.Invoke(new EventHandler(delegate { ProcessBuffer(sBuffer); }));
            }
            catch { }
        }

    The problem I'm having is that the method is being called several times per statement, so the ProcessBuffer method is being called with only a partial statement. How can I read the whole statement?

    Read the article

  • IE form input data disappear after browser refresh

    - by RWW
    Hi, I'm trying to achieve sticky forms without PHP. My setup is AJAX-like JavaScript. Back/forward work fine on both IE and FF, but refresh only works on FF, not IE. It doesn't matter what cache options I use; I've even set IE's temporary files option to never check for updates, and the input value is still gone after a page refresh (the refresh button or F5). I've read many posts where people have the opposite problem and do not want form data to persist across a page refresh or to ever be read from the browser cache, but I do. Any help is appreciated, thanks!

    Read the article

  • dynamic array pointer to binary file

    - by Yijinsei
    Hi guys, I know this might be rather basic, but I've been trying to figure out how a dynamic array such as

        double* data = new double[size];

    can be used as a source of data to be kept in a binary file, such as

        ofstream fs("data.bin", ios::binary);
        fs.write(reinterpret_cast<const char *>(data), size * sizeof(double));

    When I finish writing, I attempt to read the file back with

        double* data = new double[size];
        ifstream fs("data.bin", ios::binary);
        fs.read(reinterpret_cast<char*>(data), size * sizeof(double));

    However, I seem to encounter a run-time error when reading the data. Do you have any advice on how I should write a dynamic array, using pointers passed from other methods, so that it can be stored in and read back from binary files?

    Read the article
