Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 11/67 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Writing to EEPROM on PIC

    - by JB
    Are there any PIC microcontroller programmers here? I'm learning some PIC microcontroller programming using a PICkit 2 and the 16F690 chip that came with it, and I'm working through trying out the various facilities at the moment. I can successfully read a byte from the EEPROM in code if I set the EEPROM value in MPLAB, but I don't seem to be able to modify the value using the PIC itself. Simply nothing happens and I don't read back the modified value; I always get the original, which implies to me that the write isn't working. This is my code for that section, am I missing something? I know I'm doing a lot of unnecessary bank switches; I added most of them to ensure that being on the wrong bank wasn't the issue.

        ; ------------------------------------------------------
        ; Now SET the EEPROM location ZERO to 0x08
        ; ------------------------------------------------------
                BANKSEL EEADR
                CLRF    EEADR           ; Set EE Address to zero
                BANKSEL EEDAT
                MOVLW   0x08            ; Store the value 0x08 in the EEPROM
                MOVWF   EEDAT
                BANKSEL EECON1
                BSF     EECON1, WREN    ; Enable writes to the EEPROM
                BANKSEL EECON2
                MOVLW   0x55            ; Do the thing we have to do so
                MOVWF   EECON2          ; that writes can work
                MOVLW   0xAA
                MOVWF   EECON2
                BANKSEL EECON1
                BSF     EECON1, WR      ; And finally perform the write
        WAIT    BTFSC   EECON1, WR      ; Wait for write to finish
                GOTO    WAIT
                BANKSEL PORTC           ; Just to make sure we are on the right bank

    Read the article

  • Synchronizing ASP.NET MVC action methods with ReaderWriterLockSlim

    - by James D
    Any obvious issues/problems/gotchas with synchronizing access (in an ASP.NET MVC blogging engine) to a shared object model (NHibernate, but it could be anything) at the Controller/Action level via ReaderWriterLockSlim? (Assume the object model is very large and expensive to build per-request, so we need to share it among requests.) Here's how a typical "Read Post" action would look: enter the read lock, do some work, exit the read lock.

        public ActionResult ReadPost(int id)
        {
            // ReaderWriterLockSlim allows multiple concurrent reads; this method
            // only blocks in the unlikely event that some other client is currently
            // writing to the model, which would only happen if a comment were being
            // submitted or a new post were being saved.
            _lock.EnterReadLock();
            try
            {
                // Access the model, fetch the post with the specified id
                // Pseudocode, etc.
                Post p = TheObjectModel.GetPostByID(id);
                ActionResult ar = View(p);
                return ar;
            }
            finally
            {
                // Under all code paths, we must release the read lock
                _lock.ExitReadLock();
            }
        }

    Meanwhile, if a user submits a comment or an author writes a new post, they're going to need write access to the model, which is done roughly like so:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult SaveComment(/* some posted data */)
        {
            // try/finally omitted for brevity
            _lock.EnterWriteLock();
            // Save the comment to the DB, update the model to include the comment, etc.
            _lock.ExitWriteLock();
        }

    Of course, this could also be done by tagging those action methods with some sort of "synchronized" attribute... but however you do it, my question is: is this a bad idea? P.S. ReaderWriterLockSlim is optimized for multiple concurrent reads and only blocks if the write lock is held. Since writes are so infrequent (1,000s or 10,000s or 100,000s of reads for every write), and since they're of such a short duration, the effect is that the model stays synchronized, almost nobody ever blocks on the lock, and if they do, it's not for very long.

    Read the article

  • PHP sessions causing Apache to hang indefinitely

    - by Kmaid
    The problem is that every so often a page that writes to a session will cause Apache to hang forever for a particular session. Once this error occurs for one user, any further modifications to any session of any user will cause the website to hang for that user. This problem has been my sole focus for days. I have a development VPS running Windows 2003 and the default latest version of XAMPP, using the standard PHP session handler. The code in question actually runs on two other machines perfectly normally, so my common sense says it's a web server configuration issue, but at this point I am willing to try anything. On further investigation there are no errors in the Apache, PHP or System event logs. Resources are abundant and there is no "AJAX shit storm" or more than a couple of writes to a session per page. I have also implemented session_write_close() wherever possible to try and help alleviate the problem. I have checked the session directory, which is set to "C:\windows\Temp", and found that once a user enters this hanging phase the corresponding session file is exclusively locked, and the only way to resolve this is to stop Apache, wait a few moments for the files to become unlocked, and delete them. I am now wondering if deletion is even required. The sessions themselves only contain four pieces of information (ShoppingCartID, UserID, UserLevel and referring URL) and are alphanumerical with an occasional slash. My PHP.INI's session section is configured like this:

        session.save_handler = files
        session.save_path = "C:\WINDOWS\Temp"
        session.use_cookies = 1
        session.name = PHPSESSID
        session.auto_start = 0
        session.cookie_lifetime = 0
        session.cookie_path = /
        session.cookie_domain =
        session.cookie_httponly =
        session.serialize_handler = php
        session.gc_probability = 1
        session.gc_divisor = 100
        session.gc_maxlifetime = 1440
        session.bug_compat_42 = 1
        session.bug_compat_warn = 1
        session.referer_check =
        session.entropy_length = 0
        session.entropy_file =
        session.cache_limiter = nocache
        session.cache_expire = 180
        session.use_trans_sid = 0
        session.hash_function = 0
        session.hash_bits_per_character = 4

    I have tried everything I can think of and the whole problem is now a blur to me. Any ideas would be appreciated, and thanks for your time reading this :)

    Read the article

  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere around 256^32 subdirectories, with 256 files at each end, which are only a few bytes long. I did this so I would have fast access to these files (since I'm not searching; I'm just directly accessing them via a known file path). I have a Python script that builds this filesystem and reads & writes those files. The problem is that when I reach more than 1 GB of total file size, the read and write methods become extremely slow. Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing Linux console commands but that was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than directly accessing the file. I think the cause of the slowdown is that there are tons of files: performing this function 1000 times sometimes takes a minute now, whereas before (when there were only a few files) 1000 calls completed in less than a second. How do you suggest I fix this?
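
    One obvious thing to measure is dropping the copy to /tmp and the readlines() call entirely. Here is a minimal sketch of the same read-modify-write done in place (assuming, as in the original function, that the file either does not exist yet or holds a single integer string); each call is then two small I/O operations on one path instead of four operations plus two extra files.

        def add_in_file(path, score_to_add):
            num = score_to_add
            try:
                with open(path, 'r') as fp:              # read the current value in place
                    num += int(fp.read())
            except (IOError, OSError, ValueError):       # missing or empty file: start from score_to_add
                pass
            with open(path, 'w') as fp:                  # overwrite with the new value
                fp.write(str(num))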

    Read the article

  • Sorting a list in PHP

    - by Andy
    Alright, so I've just (almost) finished my first little PHP script. Basically all it does is handle forms. It writes whatever the user puts into the fields of the form to a text file, then I include that text file in the index of the little page I have set up. Currently it writes to the beginning of the text file (but doesn't overwrite previous data). But several users want this list to be alphabetically sorted instead, so what I want to do is make whatever they submit fall into the list in alphabetical order. The thing here is also that all I use in the text file are divs. The list is basically 'divided' into 3 parts: 'Title', 'Link', and 'Poster'. All I have done is position them with CSS. So, can I sort this list (by the titles, in this case) alphabetically and still have the 'Link' and 'Poster' information assigned the way they already are, but just with the titles sorted? I don't use databases at all on my site, so there is no database handling used in this script (mainly because I'm not experienced at all in this).
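
    Not PHP, but as a language-neutral sketch of the usual approach: keep each entry as a (title, link, poster) record, re-sort all records by title on every submission, and rewrite the whole file. The file name and tab separator here are assumptions purely for illustration; the same idea carries over to the div-based file.

        # records.txt holds one entry per line: title<TAB>link<TAB>poster
        def add_entry(path, title, link, poster):
            try:
                with open(path) as f:
                    rows = [line.rstrip('\n').split('\t') for line in f if line.strip()]
            except IOError:                              # first submission: no file yet
                rows = []
            rows.append([title, link, poster])
            rows.sort(key=lambda r: r[0].lower())        # alphabetical by title only
            with open(path, 'w') as f:
                for r in rows:
                    f.write('\t'.join(r) + '\n')

        add_entry('records.txt', 'Some Title', 'http://example.com', 'Andy')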

    Read the article

  • How to get the output of an XslCompiledTransform into an XmlReader?

    - by Graham Clark
    I have an XslCompiledTransform object, and I want the output in an XmlReader object, as I need to pass it through a second stylesheet. I'm getting a bit confused: I can successfully transform some XML and read it using either a StreamReader or an XmlDocument, but when I try an XmlReader, I get nothing. In the example below, stylesheet is my XslCompiledTransform object. The first two Console.WriteLine calls output the correct transformed XML, but the third call gives no XML. I'm guessing it might be that the XmlTextReader is expecting text, so maybe I need to wrap this in a StreamReader..? What am I doing wrong?

        MemoryStream transformed = new MemoryStream();
        stylesheet.Transform(input, args, transformed);
        transformed.Position = 0;

        StreamReader s = new StreamReader(transformed);
        Console.WriteLine("s = " + s.ReadToEnd());               // writes XML

        transformed.Position = 0;
        XmlDocument doc = new XmlDocument();
        doc.Load(transformed);
        Console.WriteLine("doc = " + doc.OuterXml);              // writes XML

        transformed.Position = 0;
        XmlReader reader = new XmlTextReader(transformed);
        Console.WriteLine("reader = " + reader.ReadOuterXml());  // no XML written

    Read the article

  • Can shared memory be read and validated without mutexes?

    - by Bribles
    On Linux I'm using shmget and shmat to set up a shared memory segment that one process will write to and one or more processes will read from. The data that is being shared is a few megabytes in size and, when updated, is completely rewritten; it's never partially updated. I have my shared memory segment laid out as follows:

        -------------------------
        | t0 | actual data | t1 |
        -------------------------

    where t0 and t1 are copies of the time when the writer began its update (with enough precision such that successive updates are guaranteed to have differing times). The writer first writes to t1, then copies in the data, then writes to t0. The reader on the other hand reads t0, then the data, then t1. If the reader gets the same value for t0 and t1 then it considers the data consistent and valid; if not, it tries again. Does this procedure ensure that if the reader thinks the data is valid then it actually is? Do I need to worry about out-of-order execution (OOE)? If so, would the reader using memcpy to get the entire shared memory segment overcome the OOE issues on the reader side? (This assumes that memcpy performs its copy linearly and ascending through the address space. Is that assumption valid?)
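
    For reference, the reader side of that protocol is just a retry loop. Below is a minimal sketch of that loop only (in Python over an mmap'd segment, assuming a layout of an 8-byte t0, a fixed-size payload, and an 8-byte t1); it illustrates the retry logic and deliberately says nothing about the memory-ordering question, which is the hard part being asked about.

        import mmap
        import struct

        T_FMT = "<d"                              # assumed: timestamp stored as a little-endian double
        T_SIZE = struct.calcsize(T_FMT)
        PAYLOAD_SIZE = 4 * 1024 * 1024            # assumed payload size

        def read_snapshot(shm: mmap.mmap) -> bytes:
            """Read t0, then the payload, then t1; retry until both timestamps agree."""
            while True:
                t0 = struct.unpack_from(T_FMT, shm, 0)[0]
                data = bytes(shm[T_SIZE:T_SIZE + PAYLOAD_SIZE])
                t1 = struct.unpack_from(T_FMT, shm, T_SIZE + PAYLOAD_SIZE)[0]
                if t0 == t1:                      # by this scheme's logic, no writer ran during the copy
                    return data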

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see what that achieves, though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
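
    For what the batching half of the question looks like with client-generated keys: a minimal sketch, assuming a hypothetical table events(id CHAR(36) PRIMARY KEY, payload VARCHAR(255)) and the mysql-connector-python driver. Because the GUIDs are generated before the INSERT runs, a batched insert can still hand every row's primary key back to the caller without a per-row round trip.

        import uuid
        import mysql.connector

        def insert_batch(conn, payloads):
            rows = [(str(uuid.uuid4()), p) for p in payloads]    # keys are known up front
            cur = conn.cursor()
            cur.executemany(
                "INSERT INTO events (id, payload) VALUES (%s, %s)", rows)
            conn.commit()
            return [row_id for row_id, _ in rows]                # IDs without asking the server

        conn = mysql.connector.connect(
            host="localhost", user="app", password="secret", database="test")
        ids = insert_batch(conn, ["a", "b", "c"])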

    Read the article

  • Bluetooth to arduino

    - by user1833709
    My current project includes an Arduino Uno attached via USB to a laptop. The laptop is there to receive messages via Bluetooth (I realize I could connect a Bluetooth modem directly to the Arduino, but this project has to come in under budget). What I'd ideally like to do is write a program that continually checks whether the laptop has been paired to a Bluetooth transmitter (my phone) and sends this to the Arduino. Do you think I should write a program that checks the Bluetooth state and writes a 0 or 1 to a text document, and then use something like Gobetwino to read from the text document? I could probably figure out how to continually read from the file, but I don't know enough about Bluetooth to know how to check whether it's paired or not, computer-side. Something that writes directly to the Arduino instead of a text document would be my preference, but I don't know where to start. I'm using an Android phone and Windows 7. If you need more info about the setup, just ask. Does anyone have any experience with this?
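
    The "write directly to the Arduino" half is the easier part: the Uno's USB connection shows up as a serial port, so a small script can push the state over it. A minimal sketch, assuming the pyserial package, that the board appears as COM3, and that is_phone_paired() is a placeholder for whatever Bluetooth pairing check ends up being used (that check is the part still to be solved).

        import time
        import serial   # pyserial

        def is_phone_paired():
            # placeholder: replace with a real check of the laptop's Bluetooth pairing state
            return False

        ser = serial.Serial("COM3", 9600, timeout=1)             # assumed port and baud rate
        while True:
            ser.write(b"1" if is_phone_paired() else b"0")       # send the state straight to the Arduino
            time.sleep(5)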

    Read the article

  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old that is exhibiting some odd behavior on our latest deployment. The application uses nHibernate for all inserts / updates / selects, etc. We are currently using .NET 2.0, and nHibernate 1.2 (I know, we need to upgrade) This deployment is on Windows 2008 Server x64, IIS 7.5 - what I have seen so far is that the application runs, but is unable to insert or update records in the DB - reads seem fine so far, but writes are a problem. SOME writes actually work, inserts into some small tables, but most never even make it to the DB. Using SQL Profiler, the insert / updates never make it to the server, and turning log4net up to DEBUG, and show_sql true - the select statements appear, but the insert / update statements never make it into the log at all, and never show up at the server. What's even more odd is that the application seems to be oblivious to this - the commandandclose runs without exception (open session in view with an httpmodule), the domain objects come back with uuid's generated, etc. but never get persisted. Certainly an upgrade is due, but I would hate to try it during a deployment, and without time to accurately test the app. Any ideas?

    Read the article

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. I.e., I have an input queue of 2048 jobs j_0..j_n, each of which produces a chunk of data o_i. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks; the output file has to be in the order o_0 o_1 o_2 ... The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
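
    Whatever the pattern turns out to be called, the mechanism itself is small. Below is a minimal sketch of such a buffer, assuming each worker tags its output block with the index of the job that produced it; completed blocks sit in a heap and are flushed to the writer only when they become contiguous with what has already been written.

        import heapq
        import threading

        class ReorderBuffer:
            def __init__(self, sink):
                self.sink = sink           # callable that appends one block to the output file
                self.next_index = 0        # index the output is currently waiting for
                self.pending = []          # min-heap of (index, block) finished out of order
                self.lock = threading.Lock()

            def submit(self, index, block):
                with self.lock:
                    heapq.heappush(self.pending, (index, block))
                    # drain everything that is now contiguous with the written prefix
                    while self.pending and self.pending[0][0] == self.next_index:
                        _, ready = heapq.heappop(self.pending)
                        self.sink(ready)
                        self.next_index += 1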

    Read the article

  • Trouble with StreamReader

    - by John
    Alright, so here is what I have so far:

        List<string> lines = new List<string>();
        using (StreamReader r = new StreamReader(f))
        {
            string line;
            while ((line = r.ReadLine()) != null)
            {
                lines.Add(line);
            }
        }

        foreach (string s in lines)
        {
            NetworkStream stream = irc.GetStream();
            writer.WriteLine(USER);
            writer.Flush();
            writer.WriteLine("NICK " + NICK);
            writer.Flush();
            writer.WriteLine("JOIN " + s);
            writer.Flush();
            string trimmedString = string.Empty;
            CHANNEL = s;
        }

    Unfortunately, when my IRC dummy enters a room with a password set, it writes out the password. If I make it change channel with a command such as "#lol test" (test being the password), then since CHANNEL = s; it writes out the password with the command:

        writer.WriteLine("PRIVMSG " + CHANNEL + " :" + "Hello");

    That is the only way to write out to IRC, so is there a way for CHANNEL to be only the start of the text, just "#lol", so that it doesn't write out the password? I hope you understand my problem.

    Read the article

  • Wordpress paths issue

    - by Martin
    I have set up a crawler in WordPress which grabs stock data and writes it to file. When a user enters a symbol/ticker, the variable is read and, if it matches the data of a previous crawl for that particular company, the text file is echoed on the page; if no data is found, the crawler sets off, grabs it and writes it to file to save for the next time that symbol is used. The problem I'm having is that everything works groovy apart from one thing: when the content is written to file, it is saved in the WP root and not inside a subfolder of the theme. Basically this means that the root becomes untidy very quickly, and also, should the theme be used on another site, it's not practical as some important info is missing. I have tried bloginfo and an absolute path; both return the same failure. This is the code I am using to write to file; like I say, it works apart from writing the file into the root.

        <?php
        $CompDetails = "http://www.devserverurl.com/mattv1/wp-content/themes/stocks/tools/modules/Stock_Quote/company_details/$Symbol.txt";
        if (file_exists($CompDetails)) {} else {
            include ('crawler_file.php');
            $html = file_get_html("http://targeturl.com/research/stocks/snapshot/snapshot.asp?ticker=$Symbol:US");
            $es = $html->find('div[class="detailsDataContainerLt"]');
            $tickerdetails = ("$es[0]");
            $FileHandle2 = fopen($CompDetails, 'w') or die("can't open file");
            fwrite($FileHandle2, $tickerdetails);
            fclose($FileHandle2);
        }
        ?>

    Edit: I have also tried the version below, and the same happens as above.

        <?php
        if (file_exists($_SERVER['DOCUMENT_ROOT'] . "/wp-content/themes/stocks/tools/modules/Stock_Quote/company_details/$Symbol.txt")) {} else {
            include ('crawler_file.php');
            $html = file_get_html("http://targeturl.com/research/stocks/snapshot/snapshot.asp?ticker=$Symbol:US");
            $es = $html->find('div[class="detailsDataContainerLt"]');
            $tickerdetails = ("$es[0]");
            $FileHandle2 = fopen($_SERVER['DOCUMENT_ROOT'] . "/wp-content/themes/stocks/tools/modules/Stock_Quote/company_details/$Symbol.txt", 'w') or die("can't open file");
            fwrite($FileHandle2, $tickerdetails);
            fclose($FileHandle2);
        }
        ?>

    Read the article

  • Efficient (basic) regular expression implementation for streaming data

    - by Brendan Dolan-Gavitt
    I'm looking for an implementation of regular expression matching that operates on a stream of data -- i.e., it has an API that allows a user to pass in one character at a time and report when a match is found on the stream of characters seen so far. Only very basic (classic) regular expressions are needed, so a DFA/NFA based implementation seems like it would be well-suited to the problem. Based on the fact that it's possible to do regular expression matching using a DFA/NFA in a single linear sweep, it seems like a streaming implementation should be possible. Requirements:

    - The library should not wait until the full string has been read before performing the match. The data I have really is streaming; there is no way to know how much data will arrive, and it's not possible to seek forward or backward.
    - Implementing specific stream matching for a couple of special cases is not an option, as I don't know in advance what patterns a user might want to look for.

    For the curious, my use case is the following: I have a system which intercepts memory writes inside a full system emulator, and I would like to have a way to identify memory writes that match a regular expression (e.g., one could use this to find the point in the system where a URL is written to memory). I have found (links de-linkified because I don't have enough reputation):

        stackoverflow.com/questions/1962220/apply-a-regex-on-stream
        stackoverflow.com/questions/716927/applying-a-regular-expression-to-a-java-i-o-stream
        www.codeguru.com/csharp/csharp/cs_data/searching/article.php/c14689/Building-a-Regular-Expression-Stream-Search-with-the-NET-Framework.htm

    But all of these attempt to convert the stream to a string first and then use a stock regular expression library. Another thought I had was to modify the RE2 library, but according to the author it is architected around the assumption that the entire string is in memory at the same time. If nothing's available, then I can start down the unhappy path of reinventing this wheel to fit my own needs, but I'd really rather not if I can avoid it. Any help would be greatly appreciated!
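
    As a sanity check that the streaming side really is this simple once a DFA exists, here is a minimal sketch: the pattern is assumed to have already been compiled into a transition table, characters are pushed in one at a time, and a match is reported the moment an accepting state is reached. Compiling a regex into that table is the part a real library would have to supply.

        class StreamingDFA:
            def __init__(self, transitions, start, accepting):
                self.transitions = transitions   # dict: (state, char) -> next state
                self.start = start
                self.state = start
                self.accepting = accepting

            def push(self, ch):
                # missing transitions fall back to the start state, which is enough for this toy pattern
                self.state = self.transitions.get((self.state, ch), self.start)
                return self.state in self.accepting

        # toy DFA recognising the pattern "ab" anywhere in the stream
        dfa = StreamingDFA({(0, "a"): 1, (1, "a"): 1, (1, "b"): 2}, start=0, accepting={2})
        for i, c in enumerate("xxabyy"):
            if dfa.push(c):
                print("match ending at offset", i)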

    Read the article

  • Writing a generic function that can take a Writer as well as an OutputStream

    - by ebruchez
    I wrote a couple of functions that look like this:

        def myWrite(os: OutputStream) = {}
        def myWrite(w: Writer) = {}

    Both are very similar, and I thought I would try to write a single parametrized version of the function. I started with a type having the two methods that are common to the Java OutputStream and Writer:

        type Writable[T] = {
          def close(): Unit
          def write(cbuf: Array[T], off: Int, len: Int): Unit
        }

    One issue is that OutputStream writes Byte and Writer writes Char, so I parametrized the type with T. Then I write my function:

        def myWrite[T, A[T] <: Writable[T]](out: A[T]) = {}

    and try to use it:

        val w = new java.io.StringWriter()
        myWrite(w)

    Result:

        <console>:9: error: type mismatch;
         found   : java.io.StringWriter
         required: ?A[ ?T ]
        Note that implicit conversions are not applicable because they are ambiguous:
         both method any2ArrowAssoc in object Predef of type [A](x: A)ArrowAssoc[A]
         and method any2Ensuring in object Predef of type [A](x: A)Ensuring[A]
         are possible conversion functions from java.io.StringWriter to ?A[ ?T ]
              myWrite(w)

    I tried a few other combinations of types and parameters, to no avail so far. My question is whether there is a way of achieving this at all, and if so, how. (Note that the implementation of myWrite will need, internally, to know the type T that parametrizes the write() method, because it needs to create a buffer, as in new Array[T](...).)

    Read the article

  • Picture.writeToStream() not writing out all bitmaps

    - by quickdraw mcgraw
    I'm using webview.capturePicture() to create a Picture object that contains all the drawing objects for a webpage. I can successfully render this Picture object to a bitmap using canvas.drawPicture(picture, dst) with no problems. However, when I use picture.writeToStream(fos) to serialize the picture object out to file, and then Picture.createFromStream(fis) to read the data back in and create a new picture object, the resultant bitmap, when rendered as above, is missing any larger images (anything over around 20KB, by observation). This occurs on all the Android OS platforms that I have tested: 1.5, 1.6 and 2.1. Looking at the native code for Skia, which is the underlying Android graphics library, and at the output file produced by picture.writeToStream(), I can see how the file format is constructed. I can see that some of the images in this Skia spool file are not being written out (the larger ones). The code that appears to be the problem is in SkBitmap.cpp, in the method void SkBitmap::flatten(SkFlattenableWriteBuffer& buffer) const; it writes out the bitmap's fWidth, fHeight, fRowBytes, fConfig and isOpaque values but then just writes out SERIALIZE_PIXELTYPE_NONE (0). This means that the spool file does not contain any pixel information about the actual image and therefore cannot restore the picture object correctly. Effectively this renders the writeToStream and createFromStream() APIs useless, as they do not reliably store and recreate the picture data. Has anybody else seen this behaviour and, if so, am I using the API incorrectly? Can it be worked around? Is there an explanation (i.e. incomplete API / bug), and if so, are there any plans for a fix in a future release of Android? Thanks in advance.

    Read the article

  • Why doesn't `stdin.read()` read entire buffer?

    - by Shookie
    I've got the following code:

        def get_input(self):
            """ Reads command from stdin, returns its JSON form """
            json_string = sys.stdin.read()
            print("json string is: " + json_string)
            json_data = json.loads(json_string)
            return json_data

    It reads a JSON string that was sent to it from another process. The JSON is read from stdin. For some reason I get the following output:

        json string is: <Some json here>
        json string is:
        Traceback (most recent call last):
          File "/Users/Matan/Documents/workspace/ProjectSH/addonmanager/addon_manager.py", line 63, in <module>
            manager.accept_commands()
          File "/Users/Matan/Documents/workspace/ProjectSH/addonmanager/addon_manager.py", line 49, in accept_commands
            json_data = self.get_input()
          File "/Users/Matan/Documents/workspace/ProjectSH/addonmanager/addon_manager.py", line 42, in get_input
            json_data = json.loads(json_string)
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
            return _default_decoder.decode(s)
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 365, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 383, in raw_decode
            raise ValueError("No JSON object could be decoded")

    So for some reason it reads an empty string from stdin instead of reading only the JSON. I've checked, and the code that writes to this process's stdin writes to it only once. What's wrong here?
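
    One thing worth keeping in mind here: sys.stdin.read() consumes everything up to end-of-file, so it only returns when the writer closes its end of the pipe, and any later call returns an empty string, which json.loads then rejects. A minimal sketch of one workaround, assuming the writing process can be made to send exactly one JSON document per line:

        import json
        import sys

        def get_input():
            line = sys.stdin.readline()   # returns after one full line, without waiting for EOF
            if not line:                  # empty string means the writer closed the pipe
                return None
            return json.loads(line)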

    Read the article

  • How to drop/restart a client connection to a flaky socket server

    - by Steve Prior
    I've got Java code which makes requests via sockets from a server written in C++. The communication is simply that the Java code writes a line for a request and the server then writes a line in response, but sometimes it seems that the server doesn't respond, so I've implemented an ExecutorService-based timeout which tries to drop and restart the connection. To work with the connection I keep:

        Socket server;
        PrintWriter out;
        BufferedReader in;

    so setting up the connection is a matter of:

        server = new Socket(hostname, port);
        out = new PrintWriter(new OutputStreamWriter(server.getOutputStream()));
        in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

    If it weren't for the timeout code, the request/receive steps are simply:

        String request = "some request";
        out.println(request);
        out.flush();
        String response = in.readLine();

    When I received a timeout I was trying to close the connection with:

        in.close();
        out.close();
        socket.close();

    but one of these closes seems to block and never return. Can anyone give me a clue why one of these would block, and what would be the proper way to abandon the connection and not end up with resource leak issues?

    Read the article

  • How to conduct an interview for a development position remotely?

    - by sharptooth
    Usually we run interviews in the office. We have a room with a table; the interviewee and one or two interviewers sit at the table, the interviewers ask questions, often accompanied by code snippets on paper, and the interviewee (hopefully) answers them and writes code snippets to illustrate his point. Usually it's something like an interviewer writing about five lines of C++ code and asking some specific question - quite a little code. Now we need to do the same remotely. We will be in our office and the interviewee will be far away - we are asked to help hire a person for another office located abroad. Of course we can use some technology for voice calls, but I'm afraid that's the most we can count on. I see a whole set of obstacles here:

    - How do we write illustrative code snippets and exchange them efficiently?
    - What do we do to compensate for the fact that we're not native English speakers, and the interviewees may or may not be native English speakers either (I'm afraid this can make conversation significantly harder)?

    Are there any best practices for this situation? How could we address the obstacles listed? What other things should we consider to run the interview most efficiently?

    Read the article

  • Java: GatheringByteChannel advantages?

    - by Jason S
    I'm wondering when the GatheringByteChannel's write methods (taking in an array of ByteBuffers) have advantages over the "regular" WritableByteChannel write methods. I tried a test where I could use the regular vs. the gathering write method on a FileChannel, with approx 400KB/sec total in ByteBuffers of between 23-27 bytes in length in both cases. Gathering writes used an array of 64. The regular method used up approx 12% of my CPU, and the gathering method used up approx 16% of my CPU (worse than the regular method!) This tells me it's NOT useful to use gathering writes on a FileChannel around this range of operating parameters. Why would this be the case, and when would you ever use GatheringByteChannel? (on network I/O?) Relevant differences here:

        public void log(Queue<Packet> packets) throws IOException {
            if (this.gather) {
                int Nbuf = 64;
                ByteBuffer[] bbufs = new ByteBuffer[Nbuf];
                int i = 0;
                Packet p;
                while ((p = packets.poll()) != null) {
                    bbufs[i++] = p.getBuffer();
                    if (i == Nbuf) {
                        this.fc.write(bbufs);
                        i = 0;
                    }
                }
                if (i > 0) {
                    this.fc.write(bbufs, 0, i);
                }
            } else {
                Packet p;
                while ((p = packets.poll()) != null) {
                    this.fc.write(p.getBuffer());
                }
            }
        }
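
    For a sense of what a gathering write buys at the system-call level (independent of the Java API in question), here is a small Python sketch using the POSIX writev wrapper, os.writev: many small buffers go to the kernel in one call instead of one write per buffer, which is mainly where the win comes from. This is only an analogue of the mechanism, not a claim about how FileChannel behaves internally.

        import os

        chunks = [bytes([i]) * 25 for i in range(64)]            # 64 small buffers, 25 bytes each

        fd = os.open("gathered.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        written = os.writev(fd, chunks)    # one syscall for all 64 buffers (may write fewer bytes)
        os.close(fd)

        fd = os.open("sequential.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        for c in chunks:                   # the alternative: one write() syscall per buffer
            os.write(fd, c)
        os.close(fd)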

    Read the article

  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16G of RAM. I use it primarily for compiling and video encoding (and occasional web/db). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached on tmpfs but writes safely go through to the SSD. I want files (or blocks) that haven't been read lately on the SSD to then be written back to a HDD using a compressed FS or block layer. So basically:

    Reads:
    - Check tmpfs
    - Check SSD
    - Check HD

    Writes:
    - Straight to SSD (for safety), then tmpfs (for speed)

    And periodically, or when space gets low:
    - Move least frequently accessed files down one layer.

    I've seen a few projects of interest. CacheFS, cachefsd and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption), and cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (USB device over a DVD usually), but both are distributed as a patch and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than a FS. I know the kernel has a built-in disk cache but it doesn't seem to work well with compiling. I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached. I've read tmpfs can use virtual memory. In that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem; I can load grub, the kernel and the initrd from elsewhere if needed. So that's the background. The question has several components, I guess:

    - Recommended FS and/or block layer for the SSD and compressed HDD.
    - Recommended mkfs parameters (block size, options etc.)
    - Recommended cache/mount technology to bind the layers transparently
    - Required mount parameters
    - Required kernel options / patches, etc.

    Read the article

  • Monitoring physical RAM errors on Linux

    - by user40157
    I would like to monitor the RAM of two Linux systems (Ubuntu and Red Hat). I realize I can run memtest86 from boot to diagnose bad RAM, but are there any solutions to monitor RAM while the system is still running? I'm sort of thinking of a daemon that writes and reads back from random unused memory. Has anybody seen something like this before?
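
    For what the daemon idea amounts to in practice, here is a minimal sketch. Note the big assumption: a userspace script can only exercise memory it allocates itself, not arbitrary "unused" physical RAM, and on hardware with ECC the kernel's EDAC counters are a more reliable signal. This simply fills a buffer with a random pattern, lets it sit, and reads it back.

        import os
        import time

        CHUNK = 64 * 1024 * 1024          # exercise 64 MiB per pass

        def test_pass():
            pattern = os.urandom(4096)
            buf = bytearray(pattern * (CHUNK // len(pattern)))   # fill the buffer with the pattern
            time.sleep(1)                                        # let it sit in RAM briefly
            for off in range(0, len(buf), len(pattern)):
                if buf[off:off + len(pattern)] != pattern:
                    return False                                 # mismatch: possible bad RAM
            return True

        while True:
            if not test_pass():
                print("memory verification failed")
            time.sleep(600)                                      # repeat every 10 minutes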

    Read the article

  • alternative filesystems for SSD

    - by freedrull
    I am tired of watching fsck check my filesystem when my eeepc 901 shuts down abruptly due to a crash. I know that with a journaling filesystem I won't have to wait for a check. However, I am well aware of the poor I/O performance of the SSD, so I can imagine using a journaling filesystem being even more frustrating, since there will be constant writes to the journal. I will buy a new laptop without such a crummy SSD someday, but is there anything I can do now, on the software side of things?

    Read the article

  • Adaptec 5805 after reboot don't starting

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not included. It only works if the computer is shut down and turn off. Late i update firmware "Adaptec RAID 5805 Firmware Build 18948" How to fix the problem? add Log Configuration summary Server name.....................raid_test Adaptec Storage Manager agent...7.31.00 (18856) Adaptec Storage Manager console.7.31.00 (18856) Number of controllers...........1 Operating system................Windows Configuration information for controller 1 ------------------------------------------------------- Type............................Controller Model...........................Adaptec 5805 Controller number...............1 Physical slot...................2 Installed memory size...........512 MB Serial number...................8C4510C6C9E Boot ROM........................5.2-0 (18948) Firmware........................5.2-0 (18948) Device driver...................5.2-0 (16119) Controller status...............Optimal Battery status..................Charging Battery temperature.............Normal Battery charge amount (%).......37 Estimated charge remaining......0 days, 16 hours, 12 minutes Background consistency check....Disabled Copy back.......................Disabled Controller temperature..........Normal (40C / 104F) Default logical drive task priorityHigh Performance mode................Dynamic Number of logical devices.......1 Number of hot-spare drives......0 Number of ready drives..........0 Number of drive(s) assigned to MaxCache cache0 Maximum drives allowed for MaxCache cache8 MaxCache Read Cache Pool Size...0 GB NCQ status......................Enabled Stay awake status...............Disabled Internal drive spinup limit.....0 External drive spinup limit.....0 Phy 0...........................No device attached Phy 1...........................No device attached Phy 2...........................No device attached Phy 3...........................1.50 Gb/s Phy 4...........................No device attached Phy 5...........................No device attached Phy 6...........................No device attached Phy 7...........................No device attached Statistics version..............2.0 SSD Cache size..................0 Pages on fetch list.............0 Fetch list candidates...........0 Candidate replacements..........0 69319...........................31293 Logical device..................0 Logical device name............. 
RAID level......................Simple volume Data space......................148,916 GB Date created....................09/19/2012 Interface type..................Serial ATA State...........................Optimal Read-cache mode.................Enabled Preferred MaxCache read cache settingEnabled Actual MaxCache read cache setting Disabled Write-cache mode................Enabled (write-back) Write-cache setting.............Enabled (write-back) Partitioned.....................Yes Protected by hot spare..........No Bootable........................Yes Bad stripes.....................No Power Status....................Disabled Power State.....................Active Reduce RPM timer................Never Power off timer.................Never Verify timer....................Never Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT Overall host IOs................99075 Overall MB......................4411203 DRAM cache hits.................71929 SSD cache hits..................0 Uncached IOs....................29239 Overall disk failures...........0 DRAM cache full hits............71929 DRAM cache fetch / flush wait...0 DRAM cache hybrid reads.........3476 DRAM cache flushes..............-- Read hits.......................0 Write hits......................0 Valid Pages.....................0 Updates on writes...............0 Invalidations by large writes...0 Invalidations by R/W balance....0 Invalidations by replacement....0 Invalidations by other..........0 Page Fetches....................0 0...............................0 73..............................10822 8...............................3 46138...........................4916 27184...........................15226 20875...........................323 16982...........................1771 1563............................5317 1948............................2969 Serial attached SCSI ----------------------- Type............................Disk drive Vendor..........................Unknown Model...........................ST3160815AS Serial Number...................9RX3KZMT Firmware level..................3.AAD Reported channel................0 Reported SCSI device ID.........0 Interface type..................Serial ATA Size............................149,05 GB Negotiated transfer speed.......1.50 Gb/s State...........................Optimal S.M.A.R.T. error................No Write-cache mode................Write back Hardware errors.................0 Medium errors...................0 Parity errors...................0 Link failures...................0 Aborted commands................0 S.M.A.R.T. 
warnings.............0 Solid-state disk (non-spinning).false MaxCache cache capable..........false MaxCache cache assigned.........false NCQ status......................Enabled Phy 0...........................1.50 Gb/s Power State.....................Full rpm Supported power states..........Full rpm, Powered off 0x01............................113 0x03............................98 0x04............................99 0x05............................100 0x07............................83 0x09............................75 0x0A............................100 0x0C............................99 0xBB............................100 0xBD............................100 0xBE............................61 0xC2............................39 0xC3............................69 0xC5............................100 0xC6............................100 0xC7............................200 0xC8............................100 0xCA............................100 Aborted commands................0 Link failures...................0 Medium errors...................0 Parity errors...................0 Hardware errors.................0 SMART errors....................0 End of the configuration information for controller 1 List item

    Read the article
