Search Results

Search found 19975 results on 799 pages for 'disk queue length'.


  • Multiple Unpacking Assignment in Python when you don't know the sequence length

    - by doug
    The textbook examples of multiple unpacking assignment are something like:

        import numpy as NP
        M = NP.arange(5)
        a, b, c, d, e = M   # so of course, a = 0, b = 1, etc.

        M = NP.arange(20).reshape(5, 4)   # numpy 5x4 array
        a, b, c, d, e = M   # here, a = M[0,:], b = M[1,:], etc. (i.e., one row of M is assigned to each of a through e)

    (My question is not numpy specific; indeed, I would prefer a pure Python solution.) With respect to the piece of code I'm looking at now, I see two complications to that straightforward scenario: I usually won't know the shape of M, and I want to unpack a certain number of items (definitely fewer than all of them) and put the remainder into a single container. So, back to the 5x4 array above, what I would very much like to be able to do is, for instance, assign the first three rows of M to a, b, and c respectively (exactly as above) and the rest of the rows (I have no idea how many there will be, just some positive integer) to a single container, all_the_rest = []. I'm not sure if I have explained this clearly; in any event, if I get feedback I'll promptly edit my question.
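
    A minimal sketch of one way to do this (Python 3's extended iterable unpacking; the starred name collects whatever is left over):

        # assuming M is any sequence: a list, a tuple, or a numpy array of rows
        M = list(range(20))

        a, b, c, *all_the_rest = M   # Python 3; raises ValueError if M has fewer than 3 items
        # a == 0, b == 1, c == 2, all_the_rest == [3, 4, ..., 19]

        # a Python 2 friendly equivalent, using slicing
        a, b, c = M[0], M[1], M[2]
        all_the_rest = list(M[3:])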

    Read the article

  • Possible to migrate from non-RAID to RAID 1 and then RAID 5?

    - by stueng
    Using software RAID only: Is it possible to start with a 2TB disk full of data and safely add it to a RAID 1 array? Is it then possible to add a third disk and migrate the RAID 1 array into a RAID 5 array? Or, is it possible to start with a two-disk degraded RAID 5 array and then add the third disk later to create a healthy RAID 5 array? Backstory: I wish to migrate from a 2-disk NAS (RAID 1) to a 3-disk NAS, and to only purchase one new disk in doing so.

    Read the article

  • Emulate disk IO speeds in Android SDK

    - by Ben L.
    I don't have a physical Android device to work with, so all I can use is the emulator. I'm wondering if there's something I could use to make the IO speeds more realistic: how do I slow down disk access to the speed it would be on a physical device? Also, this may be unrelated, but when I change the speed and latency options in the Eclipse ADT DDMS view, I don't notice any change in internet speed on the emulator. Is this a bug?

    Read the article

  • How to LIMIT the text box length?

    - by sam
    I am working in Eclipse with Java, using SWT/JFace with RCP. How can I limit the number of characters in my text box? For example, if I want only 4 characters in my text box, what should I do? And what if I want an alphanumeric combination, again within a certain limit?

    Read the article

  • Optimal password salt length

    - by Juliusz Gonera
    I tried to find the answer to this question on Stack Overflow without any success. Let's say I store passwords using a SHA-1 hash (so it's 160 bits), and let's assume that SHA-1 is enough for my application. How long should the salt used to generate the password's hash be? The only answer I found was that there's no point in making it longer than the hash itself (160 bits in this case), which sounds logical, but should I make it that long? E.g. Ubuntu uses an 8-byte salt with SHA-512 (I guess), so would 8 bytes be enough for SHA-1 too, or would that be too much?
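
    A minimal sketch of the usual pattern (a fresh random salt per password; the 16-byte length below is illustrative, not a recommendation from the question):

        import hashlib
        import os

        def hash_password(password):
            salt = os.urandom(16)   # unpredictable, unique per password
            digest = hashlib.sha1(salt + password.encode("utf-8")).hexdigest()
            return salt.hex(), digest   # store both next to the user record

        stored_salt, stored_hash = hash_password("hunter2")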

    Read the article

  • substitution cypher with different alphabet length

    - by seanizer
    I would like to implement a simple substitution cypher to mask private ids in URLs. I know what my IDs will look like (a combination of uppercase ASCII, digits and underscores), and they will be rather long, as they are composite keys. I would like to use a longer alphabet to shorten the resulting codes (I'd like to use upper and lower case ASCII letters, digits and nothing else). So my incoming alphabet would be [A-Z0-9_] (37 chars) and my outgoing alphabet would be [A-Za-z0-9] (62 chars), so a compression of almost 50% would be available. Let's say my URLs look like this: /my/page/GFZHFFFZFZTFZTF_24_F34, and I want them to look like this instead: /my/page/Ft32zfegZFV5. Obviously both arrays would be shuffled to bring in some random order. This does not have to be secure; if someone figures it out, fine, but I don't want the scheme to be obvious. My desired solution would be to convert the string to an integer representation in radix 37, convert the radix to 62, and use the second alphabet to write out that number. Is there any sample code available that does something similar? Integer.parseInt ( http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#parseInt%28java.lang.String,%20int%29 ) has some similar logic, but it is hard-coded to use standard digit behavior. Any hints? I am using Java to implement this, but code or pseudo-code in any other language is of course also helpful.
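
    A sketch of the radix-conversion idea the question describes (in Python, since the asker welcomes any language; the alphabets here are unshuffled placeholders, and leading zero-characters are lost in the round trip, which real code would need to handle):

        import string

        SRC = string.ascii_uppercase + string.digits + "_"                      # 37 chars
        DST = string.ascii_uppercase + string.ascii_lowercase + string.digits   # 62 chars

        def recode(s, src, dst):
            # read s as a base-len(src) integer, then write it out in base len(dst)
            n = 0
            for ch in s:
                n = n * len(src) + src.index(ch)
            out = []
            while n:
                n, r = divmod(n, len(dst))
                out.append(dst[r])
            return "".join(reversed(out)) or dst[0]

        short = recode("GFZHFFFZFZTFZTF_24_F34", SRC, DST)   # the masked id
        back = recode(short, DST, SRC)                       # the original again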

    Read the article

  • split a result list from mysql into separate lists based on list length

    - by liz
    I have a list of rows returned from MySQL that I am outputting using PHP:

        echo '<ul class="mylist">';
        foreach ($rows as $row) {
            echo '<li><a href="' . $row->url . '" target="_blank">' . $row->title . '</a></li>';
        }
        echo "</ul>";

    This works fine, but it's a long list, and I would like to split it into separate ul chunks so that I can make columns, maybe 5 results per ul instead of one. I tried wrapping it in a for statement, but then I just wound up outputting the whole result set 5 times... oops...
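
    One way to frame it: split the rows into groups of five, then emit one ul per group. A sketch of the grouping (illustrative Python; in PHP, array_chunk() performs the same grouping):

        rows = ["result %d" % i for i in range(12)]   # stand-in for the query results

        def chunks(items, size=5):
            # yield successive groups of at most `size` items
            for i in range(0, len(items), size):
                yield items[i:i + size]

        for group in chunks(rows):
            print('<ul class="mylist">')
            for row in group:
                print("<li>%s</li>" % row)
            print("</ul>")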

    Read the article

  • Linq to sql Incorrect varchar length

    - by scott
    I have a table with a nullable varchar(50) column in it. When I update the value through LINQ to SQL and trace the call in Profiler, it defines the parameter as varchar(36). This is obviously causing some minor issues when we try to insert data that is between 37 and 50 characters long. I have tried removing the table and re-adding it to the design surface, but the same thing happens. I also tried removing that property and adding it manually; same issue. When I look at the designer.cs code, it shows the attribute properly:

        [Column(Storage="_Name", DbType="VarChar(50)")]

    I am out of ideas. Has anybody seen this before? Every other column is correct.

    Read the article

  • Mathematica: get number of arguments passed to a function?

    - by dbjohn
    How do I get the number of arguments passed to a function? For example, Plus[2,3,4,5] has 4 arguments passed to it. I was thinking it may involve the use of the Length function and getting the arguments into a list. The intention is to iterate an operation based on the number of arguments to a function. There is probably a simple solution or built-in function, but I haven't come across it yet. Any other approaches or suggestions are welcome as well.
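
    The generic idea the question gestures at, sketched in Python rather than Mathematica (purely an illustration of "collect the arguments into a list and take its length"):

        def count_args(*args):
            # args is a tuple holding every positional argument passed in
            return len(args)

        print(count_args(2, 3, 4, 5))   # prints 4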

    Read the article

  • Writing direct to disk with php

    - by Jurander
    I would like to create an upload script that doesn't fall under the PHP upload limit. There might be an occasion where I need to upload a 2GB or larger file, and I don't want to have to raise the server-wide limit above 32MB. Is there a way to write directly to disk from PHP? What method would you propose for accomplishing this? I have read around Stack Overflow but haven't quite found what I am looking to do.
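
    The general pattern being asked about is streaming: read the incoming request body one chunk at a time and append each chunk to a file, so the whole upload never has to fit in memory. A sketch of that pattern in Python (in PHP, the raw request body is exposed as the php://input stream and can be copied the same way):

        CHUNK = 8192

        def stream_to_disk(source, path):
            # `source` is any file-like object wrapping the request body
            with open(path, "wb") as out:
                while True:
                    chunk = source.read(CHUNK)
                    if not chunk:
                        break
                    out.write(chunk)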

    Read the article

  • Using Pisa to write a pdf to disk

    - by phoebebright
    I have Pisa producing PDFs in Django in the browser fine, but what if I want to automatically write the file to disk? What I want is to be able to generate a PDF file at specified points in time and save it to an uploads directory, so there is no browser interaction. Is this possible?
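
    A minimal sketch, assuming Pisa's CreatePDF call (the same call used when writing to an HTTP response; the uploads path is illustrative):

        from xhtml2pdf import pisa   # older installs import this as "ho.pisa"

        def render_to_disk(html, path):
            # dest accepts any writable binary file object, not just a response
            with open(path, "w+b") as output:
                status = pisa.CreatePDF(html, dest=output)
            return not status.err   # True on success

        render_to_disk("<h1>Report</h1>", "uploads/report.pdf")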

    Read the article

  • How to sort TBB concurrent_vector or concurrent_queue?

    - by Jackie
    I have a solver in which I need to keep a set of objects of a self-defined data type in a concurrent_vector or concurrent_queue. It has to be a concurrent container because the objects come from different threads. With this concurrent container, I hope to sort the objects, eliminate duplicates, and send them back when other threads need them. I know TBB offers concurrent_vector and concurrent_queue, which can be read and written concurrently from different threads, but how do I sort the objects inside such a container? Does anyone know how to do that? Thanks.

    Read the article

  • How to query MySQL for exact length and exact UTF-8 characters

    - by oskarae
    I have a table with a dictionary of words in my language (Latvian):

        CREATE TABLE words (
          value varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    And let's say it has 3 words inside:

        INSERT INTO words (value) VALUES ('teja');
        INSERT INTO words (value) VALUES ('vejš');
        INSERT INTO words (value) VALUES ('feja');

    What I want to do is find all words that are exactly 4 characters long, where the second character is 'e' and the third character is 'j'. To me it feels like the correct query would be:

        SELECT * FROM words WHERE value LIKE '_ej_';

    But the problem with this query is that it returns not 2 entries ('teja', 'vejš') but all three. As I understand it, this is because internally MySQL converts strings to some ASCII representation? Then there is the BINARY addition for LIKE:

        SELECT * FROM words WHERE value LIKE BINARY '_ej_';

    But this also does not return 2 entries ('teja', 'vejš'), only one ('teja'). I believe this has something to do with UTF-8 using 2 bytes for non-ASCII chars? So the question: what MySQL query would return exactly my two words ('teja', 'vejš')? Thank you in advance.

    Read the article

  • Posting Static & Variable length data to the Server with JQUERY & Coldfusion

    - by nobosh
    I'm looking to post the following type of data to a server using jQuery & ColdFusion:

        foodID   - int
        foodDESC - text (HTML text from a WYSIWYG editor, CKEditor)

    There will always be just 1 foodID and foodDESC per POST to the server, but there can be a variable number of:

        locationID   - int
        LocationDesc - text

    There can be 0-8+ of these. Should I send this in one POST or multiple POSTs? If one POST, what's the smartest way to do this? I've never had to deal with this much data before. Thank you.

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database almost free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought:

    - Since all writes are done asynchronously, and using a persistent data structure makes it possible not to invalidate the previous (and currently valid) structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that those techniques add more read overhead to achieve faster writes, and I think exactly the opposite trade-off is preferable. Also, many of those techniques really end up with multi-versioned trees that aren't strictly immutable, which is something crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collection logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta-compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, though I don't think that is the case with web applications most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could lead to some commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can also occur in a normal RDBMS if two threads are working with the same data, right?
    - The overhead of an immutable interface like this could grow exponentially, and everything would be doomed to fail soon, so this whole idea may be bad.

    Any thoughts? Thanks!

    Edit: There seems to be a misunderstanding about what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
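
    To make the "persistent" terminology concrete, here is a toy sketch of path copying in an immutable binary search tree (a plain BST rather than a red-black tree, in Python, and purely illustrative): every insert returns a new root while sharing all untouched subtrees with the previous version.

        class Node:
            __slots__ = ("key", "left", "right")
            def __init__(self, key, left=None, right=None):
                self.key, self.left, self.right = key, left, right

        def insert(root, key):
            # returns a NEW root; only the nodes on the search path are copied
            if root is None:
                return Node(key)
            if key < root.key:
                return Node(root.key, insert(root.left, key), root.right)
            if key > root.key:
                return Node(root.key, root.left, insert(root.right, key))
            return root  # key already present; reuse the old version as-is

        v1 = insert(insert(insert(None, 5), 3), 8)
        v2 = insert(v1, 4)   # v1 is still fully intact and queryable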

    Read the article

  • C++: sizeof for array length

    - by Rosarch
    Let's say I have a macro called LengthOf(array):

        #define LengthOf(array) (sizeof array / sizeof array[0])

    When I make a new array of size 23, shouldn't I get 23 back from LengthOf?

        WCHAR* str = new WCHAR[23];
        str[22] = '\0';
        size_t len = LengthOf(str); // len == 4

    Why does len == 4? UPDATE: I made a typo: it's a WCHAR*, not a WCHAR**.

    Read the article

  • read length of string from stdin

    - by teoz
    I want to take a string from stdin, but I don't want a static array of fixed size. I know that scanf needs somewhere to save the stdin input, but I can't do something like this:

        char string[10];
        scanf("%s", string);

    because I need to know beforehand how long the string will be in order to allocate the right amount of memory. Can you help me resolve this problem?

    Read the article

  • Change paper size in the middle of a latex document?

    - by Usagi
    Does anyone know how to change these length parameters in the middle of a LaTeX document?

        \paperwidth
        \paperheight

    I would like to define a different page size for a single page (possibly two or three). I tried v5.3 of the geometry package, which just added some new features like \newgeometry. Unfortunately, \newgeometry cannot be used to redefine \paperheight and \paperwidth. Any help would be very much appreciated.
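
    A hedged sketch of one common workaround (assumes pdflatex; \pdfpagewidth and \pdfpageheight are pdfTeX primitives that can be changed mid-document, and the \setlength calls keep LaTeX's own layout lengths in sync):

        \clearpage
        \pdfpagewidth=30cm \pdfpageheight=15cm
        \setlength{\paperwidth}{30cm}
        \setlength{\paperheight}{15cm}
        % ... the oversized page(s) go here ...
        \clearpage
        \pdfpagewidth=21cm \pdfpageheight=29.7cm   % back to A4
        \setlength{\paperwidth}{21cm}
        \setlength{\paperheight}{29.7cm}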

    Read the article

  • Google App Engine - About how much quota does a single datastore put use?

    - by Spines
    The latency for a datastore put is about 150ms (http://code.google.com/status/appengine/detail/datastore/2010/03/11#ae-trust-detail-datastore-put-latency). About how much CPUTime is used by a single datastore put with a data size of 100 bytes, into an entity that has only 2 columns and no indexes? I plan to do some testing later today to figure it out, but if anyone already knows, that would help me out :). Also, does anyone know roughly how much extra CPUTime overhead doing this datastore put through the task queue would add? Note: this is kind of a follow-up to this question: http://stackoverflow.com/questions/2421075/google-app-engine-how-reliable-are-the-logs.

    Read the article

  • Determine an elements position in a variable length grid of elements

    - by gaoshan88
    I have a grid with a variable number of elements, say 5 images per row and a variable number of images. I need to determine which column (for lack of a better word) each image is in, i.e. 1, 2, 3, 4 or 5. In this grid, images 1, 6, 12 and 17 would be in column 1, while 4, 9 and 15 would be in column 4:

         1  2  3  4  5
         6  7  8  9 10
        12 13 14 15 16
        17 18 19

    What I am trying to do is apply a background image to each element based on its column position. Here is an example of this, hard-coded and inflexible (and if I'm barking up the wrong tree here, by all means tell me how you'd ideally accomplish this; it always bugs me when I see someone ask "How do I build a gold plated, solar powered jet pack to get to the top of this building?" when they really should be asking "Where's the elevator?"):

        switch (imgnum) {
            case "1": case "6": case "11":
                value = "1";
                break;
            case "2": case "7": case "12":
                value = "2";
                break;
            case "3": case "8": case "13":
                value = "3";
                break;
            case "4": case "9": case "14":
                value = "4";
                break;
            case "5": case "10": case "15":
                value = "5";
                break;
            default:
                value = "";
        }
        $('.someclass > ul').css('background', 'url("/img/' + value + '.png") no-repeat');
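
    If the column really is just the element's position modulo the row width, a sketch of the arithmetic (illustrative Python; `position` here is the zero-based index of the image within the grid, not the number printed on it):

        def column_of(position, per_row=5):
            # zero-based position in the grid -> 1-based column number
            return position % per_row + 1

        for position in range(18):
            print(position, column_of(position))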

    Read the article
