Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

  • What is the most efficient way to solve a system of equations containing the digamma function?

    - by Neil G
    What is the most efficient way to solve a system of equations involving the digamma function? I have a vector v and I want to solve for a vector w such that, for all i: digamma(sum(w)) - digamma(w_i) = v_i, with w_i > 0. I found the GSL function gsl_sf_psi, which is the digamma function (calculated using some kind of series). Is there an identity I can use to reduce the equations? Is my best bet to use a solver? I am using C++0x; which solver is easiest to use and fast?
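
    One possible approach (an editor's sketch, not from the question): these equations have the same shape as the fixed-point step in Dirichlet maximum-likelihood estimation, so a fixed-point iteration with a Newton-based inverse digamma may converge quickly. The sketch below uses GSL's gsl_sf_psi (digamma) and gsl_sf_psi_1 (trigamma) and assumes, without proof, that the iteration converges for the given v:

        // Sketch only: fixed-point iteration w_i = psi^{-1}(psi(sum(w)) - v_i).
        #include <gsl/gsl_sf_psi.h>
        #include <cmath>
        #include <vector>

        // Invert digamma by Newton's method; psi is monotone increasing for x > 0.
        double inv_digamma(double y) {
            // Common initial guess: exp(y) + 0.5 for y >= -2.22, else -1/(y + gamma).
            double x = (y >= -2.22) ? std::exp(y) + 0.5
                                    : -1.0 / (y + 0.5772156649015329);
            for (int i = 0; i < 20; ++i)
                x -= (gsl_sf_psi(x) - y) / gsl_sf_psi_1(x);  // psi' = trigamma
            return x;
        }

        std::vector<double> solve(const std::vector<double>& v, int iters = 200) {
            std::vector<double> w(v.size(), 1.0);            // any positive start
            for (int it = 0; it < iters; ++it) {
                double s = 0.0;
                for (double wi : w) s += wi;
                double ps = gsl_sf_psi(s);
                for (std::size_t i = 0; i < v.size(); ++i)
                    w[i] = inv_digamma(ps - v[i]);           // keeps each w_i > 0
            }
            return w;
        }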

  • Most Efficient Way of calling an external webservice in Java?

    - by Sudheer
    In one of our applications we need to call the Yahoo SOAP web service to get weather and other related info. I used the wsdl2java tool from Axis 1.4, generated the required stubs, and wrote a client. I use JSP's useBean to include the client bean and call methods defined in the client, which call the Yahoo web service in turn. Now the problem: when users make calls to the JSP, the response time of the web service differs greatly; for one user it took less than 10 seconds, and another on the same network took more than a minute. I was just wondering if Axis 1.4 queues the requests even though the JSPs are multithreaded. And finally, is there an efficient way of calling the web service (Yahoo weather)? Typically I get around 200 simultaneous requests from my users.
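
    One common mitigation (an editor's suggestion, not from the question): weather data changes slowly, so caching the upstream response per location means 200 concurrent JSP requests cost at most one Yahoo call per location per TTL. A minimal sketch; WeatherCache and Fetcher are hypothetical names:

        import java.util.concurrent.ConcurrentHashMap;

        // Sketch of a TTL cache in front of the web-service client.
        public class WeatherCache {
            private static final long TTL_MS = 5 * 60 * 1000; // refresh every 5 min

            private static class Entry {
                final String payload;
                final long fetchedAt;
                Entry(String p, long t) { payload = p; fetchedAt = t; }
            }

            public interface Fetcher { String fetch(String locationId); }

            private final ConcurrentHashMap<String, Entry> cache =
                new ConcurrentHashMap<String, Entry>();

            public String get(String locationId, Fetcher fetcher) {
                Entry e = cache.get(locationId);
                long now = System.currentTimeMillis();
                if (e == null || now - e.fetchedAt > TTL_MS) {
                    // A duplicate fetch under contention is possible but harmless.
                    e = new Entry(fetcher.fetch(locationId), now);
                    cache.put(locationId, e);
                }
                return e.payload;
            }
        }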

  • What kind of storage with two-way replication for a multi-site C# application?

    - by twk
    Hi, I have a web-based system written in ASP.NET, backed by MSSQL. A synchronized replica of this system is to be run in mobile locations and must be available regardless of the state of the connection to the main system (interruptions of a few hours happen). For now I am using a copy of the main web application and a copy of the MSSQL server with merge replication to the main system. This works unreliably, and setting up the replication is a pain. The amount of data the system contains is not huge, so I can migrate to a different storage type. For the new version of this system I would like to implement a new replication scheme. I am considering migrating to db4o for storage, with its replication support. I am also thinking about other possible solutions, like CouchDB, which has native replication support. I would like to stay with C#. Could you recommend a way to go for such a distributed environment? PS. Master-slave replication is not an option: any side must be allowed to add/update data.
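
    For reference, a sketch of what the CouchDB option looks like from C# (an editor's illustration, not from the question; server URLs and database names are placeholders): two-way sync is two POSTs to CouchDB's _replicate endpoint, one per direction:

        using System.Net;

        // Sketch: trigger bidirectional CouchDB replication over HTTP.
        class CouchReplicator
        {
            static void Replicate(string server, string source, string target)
            {
                using (var wc = new WebClient())
                {
                    wc.Headers[HttpRequestHeader.ContentType] = "application/json";
                    string body = "{\"source\":\"" + source + "\",\"target\":\"" + target + "\"}";
                    wc.UploadString(server + "/_replicate", "POST", body);
                }
            }

            static void Main()
            {
                // Push local changes to the main site, then pull its changes back.
                Replicate("http://localhost:5984", "appdb", "http://main-site:5984/appdb");
                Replicate("http://localhost:5984", "http://main-site:5984/appdb", "appdb");
            }
        }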

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000s of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

    push(key, timestamp, event) - pushes event onto the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost-sorted order.

    tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately or to keep tails strictly in sync with pushes), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value store, or something else?
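
    One way to frame the comparison (an editor's sketch, not from the question): in a relational store the whole workload is a composite primary key plus one range scan per tail(), and an ordered key-value store can mimic the same layout with composite keys:

        -- Sketch: events clustered by (stream key, timestamp).
        -- Assumes timestamps are unique per key; otherwise add a sequence column.
        CREATE TABLE event_stream (
            key_id  BIGINT NOT NULL,
            ts      BIGINT NOT NULL,
            event   BLOB   NOT NULL,
            PRIMARY KEY (key_id, ts)   -- clustered index serves tail() directly
        );

        -- push(key, timestamp, event)
        INSERT INTO event_stream (key_id, ts, event) VALUES (?, ?, ?);

        -- tail(key, timestamp): one index range scan, already in timestamp order
        SELECT ts, event
        FROM event_stream
        WHERE key_id = ? AND ts >= ?
        ORDER BY ts;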

  • Is there an efficient way to do multiple test cases in C?

    - by Ahmed Abdelaal
    I use MS Visual Studio and I am new to C++, so I am just wondering if there is a faster, more efficient way to run multiple test cases, instead of repeatedly pressing CTRL+F5 and re-opening the console. For example, if I have this code:

        #include <iostream>
        using namespace std;

        int main()
        {
            int x;
            cout << "Enter a number" << endl;
            cin >> x;
            cout << x * 2 << endl;
        }

    Is there a way I could try different values of x at once and get the results together? Thanks
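
    One standard workflow (an editor's suggestion, not from the question): loop until end-of-input and redirect a file of test values into the program, e.g. program.exe < tests.txt from a console window. A sketch:

        #include <iostream>
        using namespace std;

        int main()
        {
            int x;
            // Reads values until end-of-file, so a whole file of test cases
            // can be piped in and all results printed together.
            while (cin >> x)
                cout << x * 2 << endl;
            return 0;
        }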

  • Efficient multiple services exposed over C# .NET Remoting: more channels or more endpoints?

    - by wb
    I am using remoting over TCP for a prototype distributed server application where I want to have varying multiple services exposed from each remoting server process. In some cases I want the services running in the same process, but I don't want whatever is using a service to care about that. I am wondering: is it more efficient to have multiple services in the same process going over the same remoting channel, distinguished by endpoint URI/URL, or should I be creating new channels on different ports for each service in the same process? Using up ports isn't much of a problem, as the number of services will be low and the network and machine configuration is completely controlled. Also, it's not clear to me whether remoting sends the URI string for every single message or just at connection time, and whether the remoting framework is intelligent enough to reduce work when calls are made on the same machine or even in the same process. Thanks in advance.
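
    For the single-channel option, a sketch of the registration (an editor's illustration, not from the question; FooService and BarService are hypothetical): one TcpChannel can host any number of well-known objects distinguished only by their object URI:

        using System;
        using System.Runtime.Remoting;
        using System.Runtime.Remoting.Channels;
        using System.Runtime.Remoting.Channels.Tcp;

        class FooService : MarshalByRefObject { /* ... */ }
        class BarService : MarshalByRefObject { /* ... */ }

        class Host
        {
            static void Main()
            {
                // One channel, one port; services are split by URI.
                ChannelServices.RegisterChannel(new TcpChannel(9000), false);

                RemotingConfiguration.RegisterWellKnownServiceType(
                    typeof(FooService), "foo", WellKnownObjectMode.Singleton);
                RemotingConfiguration.RegisterWellKnownServiceType(
                    typeof(BarService), "bar", WellKnownObjectMode.Singleton);

                // Clients use tcp://host:9000/foo and tcp://host:9000/bar.
                Console.ReadLine();
            }
        }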

  • Why are difference lists more efficient than regular concatenation?

    - by Craig Innes
    I am currently working my way through the Learn You a Haskell book online, and have come to a chapter where the author explains that some list concatenations can be inefficient: for example

        ((((a ++ b) ++ c) ++ d) ++ e) ++ f

    is supposedly inefficient. The solution the author comes up with is to use 'difference lists', defined as

        newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }

        instance Monoid (DiffList a) where
            mempty = DiffList (\xs -> [] ++ xs)
            (DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs))

    I am struggling to understand why DiffList is more computationally efficient than simple concatenation in some cases. Could someone explain to me in simple terms why the above example is so inefficient, and in what way DiffList solves this problem?
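
    A quick way to see it (an editor's sketch following the book's definitions): each ++ walks its entire left operand, so the left-nested chain re-walks a once per append, giving quadratic work overall; a DiffList is just a partially applied (++), so mappend is O(1) function composition and the list is walked once, when the composed function is finally applied to []:

        -- Sketch; uses the DiffList newtype and Monoid instance quoted above
        -- (toDiffList/fromDiffList are the book's own helpers).
        toDiffList :: [a] -> DiffList a
        toDiffList xs = DiffList (xs ++)     -- appending becomes composition

        fromDiffList :: DiffList a -> [a]
        fromDiffList (DiffList f) = f []     -- materialise the result once

        -- ((((a ++ b) ++ c) ++ d) ++ e) ++ f re-traverses a four extra times;
        -- the difference-list version only composes closures until the end:
        concatAll :: [[a]] -> [a]
        concatAll = fromDiffList . mconcat . map toDiffList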

  • Is converting this ArrayList to a Generic List efficient?

    - by Greg
    The code I'm writing receives an ArrayList from unmanaged code, and this ArrayList will always contain one or more objects of type Grid_Heading_Blk. I've considered changing this ArrayList to a generic List, but I'm unsure whether the conversion operation will be so expensive as to nullify the benefits of working with the generic list. Currently, I'm just running a foreach (Grid_Heading_Blk blk in myArrayList) loop to work with the ArrayList contents after passing the ArrayList to the class that will use it. Should I convert the ArrayList to a generically typed list? And if so, what is the most efficient way of doing so?
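
    For what it's worth (an editor's sketch, assuming .NET 3.5+ and System.Linq): the conversion is a single O(n) pass, i.e. the same cost as one foreach, so it rarely outweighs the typed access gained afterwards:

        using System.Collections;
        using System.Collections.Generic;
        using System.Linq;

        static class ArrayListConversion
        {
            // One O(n) pass; throws InvalidCastException on a foreign element.
            public static List<Grid_Heading_Blk> ToTypedList(ArrayList raw)
            {
                return raw.Cast<Grid_Heading_Blk>().ToList();
            }

            // Variant that silently skips elements of other types.
            public static List<Grid_Heading_Blk> ToTypedListLenient(ArrayList raw)
            {
                return raw.OfType<Grid_Heading_Blk>().ToList();
            }
        }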

  • How to design data storage for partitioned tagging system?

    - by Morgan Cheng
    How should one design data storage for a huge tagging system (like Digg or Delicious)? There is already discussion about it, but it is about a centralized database. Since the data is supposed to grow, we'll need to partition the data into multiple shards sooner or later. So, the question becomes: how to design data storage for a partitioned tagging system? The tagging system basically has 3 tables:

        Item (item_id, item_content)
        Tag (tag_id, tag_title)
        TagMapping (map_id, tag_id, item_id)

    That works fine for finding all items for a given tag and finding all tags for a given item, if the table is stored in one database instance. If we need to partition the data into multiple database instances, it is not that easy. For table Item, we can partition its content by its key item_id. For table Tag, we can partition its content by its key tag_id. For example, if we want to partition table Tag into K databases, we can simply choose database number (tag_id % K) to store a given tag. But how do we partition table TagMapping? The TagMapping table represents the many-to-many relationship. I can only imagine having duplication. That is, the same content of TagMapping has two copies: one partitioned by tag_id and the other partitioned by item_id. To find tags for a given item, we use the copy partitioned by item_id; to find items for a given tag, we use the copy partitioned by tag_id. As a result, there is data redundancy, and the application level has to keep all tables consistent. It looks hard. Is there any better solution to solve this many-to-many partition problem?
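
    To make the duplication scheme concrete (an editor's sketch of the layout the question itself proposes):

        -- Copy A lives on shard (item_id % K): answers "all tags for an item".
        CREATE TABLE TagMappingByItem (
            item_id BIGINT NOT NULL,
            tag_id  BIGINT NOT NULL,
            PRIMARY KEY (item_id, tag_id)
        );

        -- Copy B lives on shard (tag_id % K): answers "all items for a tag".
        CREATE TABLE TagMappingByTag (
            tag_id  BIGINT NOT NULL,
            item_id BIGINT NOT NULL,
            PRIMARY KEY (tag_id, item_id)
        );

        -- Each write inserts into both copies (usually two different shards);
        -- a queued, retried double-write lets the copies converge if one fails.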

  • Most efficient way to fetch and output Content with 2-Level Comments?

    - by awegawef
    I have some content with up to two levels of replies. I am wondering what the most efficient way to fetch and output the replies is. I should note that I am planning on storing the comments with fields content_id and reply_to, where reply_to refers to the comment it is in reply to (if any). Any criticism of this design is welcome. In pseudo-code (ish), my first attempt would be:

        # in outputting content CONTENT_ID
        all_comments = fetch all comments where content_id == CONTENT_ID
        root_comments = filter all_comments with reply_to == None
        children_comments = filter all_comments with reply_to != None

        output_comments = list()
        for each root_comment:
            children = filter children_comments with reply_to == root_comment.id
            output_comments.append( (root_comment, children) )

        send output_comments to template

    Is this the best way to do this? Thanks in advance. Edit: On second thought, I'll want to preserve date order on the comments, so I'll have to do this a bit differently, or at least sort the comments afterward.
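
    One refinement (an editor's sketch, not from the question; field names follow the question's design): grouping children by reply_to in a single pass avoids the per-root filter, and date order is preserved if the fetch is ordered by date:

        from collections import defaultdict

        def build_comment_tree(all_comments):
            """all_comments: comment objects already ordered by date, each with
            .id and .reply_to (None for a root comment)."""
            children = defaultdict(list)
            roots = []
            for c in all_comments:
                if c.reply_to is None:
                    roots.append(c)
                else:
                    children[c.reply_to].append(c)  # O(1) per comment
            # Pair each root with its (still date-ordered) replies.
            return [(root, children[root.id]) for root in roots]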

  • Are C++ exceptions sufficient to implement thread-local storage?

    - by Potatoswatter
    I was commenting on an answer that thread-local storage is nice and recalled another informative discussion about exceptions where I supposed:

        The only special thing about the execution environment within the throw block is that the exception object is referenced by rethrow.

    Putting two and two together, wouldn't executing an entire thread inside a function-catch-block of its main function imbue it with thread-local storage? It seems to work fine:

        #include <iostream>
        #include <pthread.h>
        using namespace std;

        struct thlocal {
            string name;
            thlocal( string const &n ) : name(n) {}
        };

        thlocal &get_thread() {
            try {
                throw;
            } catch( thlocal &local ) {
                return local;
            }
        }

        void print_thread() {
            cerr << get_thread().name << endl;
        }

        void *kid( void *local_v ) try {
            thlocal &local = * static_cast< thlocal * >( local_v );
            throw local;
        } catch( thlocal & ) {
            print_thread();
            return NULL;
        }

        int main() try {
            thlocal local( "main" );
            throw local;
        } catch( thlocal & ) {
            print_thread();

            pthread_t th;
            thlocal kid_local( "kid" );
            pthread_create( &th, NULL, &kid, &kid_local );
            pthread_join( th, NULL );

            print_thread();
            return 0;
        }

    Is this novel or well-characterized? Was my initial premise correct? What kind of overhead does get_thread incur in, say, GCC and VC++? It would require throwing only exceptions derived from struct thlocal, but altogether this doesn't feel like an unproductive insomnia-ridden Sunday morning…

  • What is the most efficient way to list all of the files in a directory (including sub-directories)?

    - by prometheus
    I am writing a servlet which will examine a directory on the server (external to the web container) and recursively search for certain files (by which I mean files with a certain extension and a certain naming convention). Once these files are found, the servlet responds with a long list of all the found files (including the full paths to the files). My problem is that there are so many files and directories that my servlet runs extremely slowly. I was wondering if there is a best practice or existing servlet for this type of problem. Would it be more efficient to simply compile the entire list of files and do the filtering via JS/jQuery on the client side?
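
    A baseline worth measuring first (an editor's sketch, not from the question; the naming-convention test is simplified to a prefix check): one recursive walk over java.io.File that filters as it goes. If that is still too slow, the usual next step is to cache the result and refresh it on a timer instead of rescanning per request:

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        public class FileScanner {
            // Collect matching files in one pass, filtering during the walk.
            static void scan(File dir, String prefix, String suffix, List<String> out) {
                File[] entries = dir.listFiles();
                if (entries == null) return;               // unreadable directory
                for (File f : entries) {
                    if (f.isDirectory()) {
                        scan(f, prefix, suffix, out);
                    } else if (f.getName().startsWith(prefix)
                            && f.getName().endsWith(suffix)) {
                        out.add(f.getAbsolutePath());
                    }
                }
            }

            public static List<String> find(File root, String prefix, String suffix) {
                List<String> out = new ArrayList<String>();
                scan(root, prefix, suffix, out);
                return out;
            }
        }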

  • How do you make your Java application memory efficient?

    - by Boune
    How do you optimize the heap-size usage of an application that has a lot (millions) of long-lived objects? (A big cache, loading lots of records from a DB.)

        Use the right data type
        Avoid java.lang.String to represent other data types
        Avoid duplicated objects
        Use enums if the values are known in advance
        Use object pools
        String.intern() (good idea?)
        Load/keep only the objects you need

    I am looking for general programming or Java-specific answers. No funky compiler switches. Edit: Optimize the memory representation of a POJO that can appear millions of times in the heap. Use cases:

        Load a huge CSV file into memory (converted into POJOs)
        Use Hibernate to retrieve millions of records from a database

    Summary of answers:

        Use the flyweight pattern
        Copy on write
        Instead of loading 10M objects with 3 properties, is it more efficient to have 3 arrays (or another data structure) of size 10M? (Could be a pain to manipulate data, but if you are really short on memory...)
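
    To illustrate that last point (an editor's sketch): a "structure of arrays" layout replaces ~10M object headers and references with three flat arrays:

        // Sketch: column-oriented storage for 10M records with 3 properties.
        // One object header per array instead of one per record.
        public class RecordColumns {
            final int[] ids;
            final long[] timestamps;
            final double[] values;

            public RecordColumns(int capacity) {
                ids = new int[capacity];
                timestamps = new long[capacity];
                values = new double[capacity];
            }

            // Row i is addressed by index; no per-record object is allocated.
            double valueAt(int i) { return values[i]; }
        }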

  • What strategies are efficient to handle concurrent reads on heterogeneous multi-core architectures?

    - by fabrizioM
    I am tackling the challenge of using both the capabilities of an 8-core machine and a high-end GPU (Tesla 10). I have one big input file, one thread for each core, and one for the GPU handling. The GPU thread, to be efficient, needs a big number of lines from the input, while the CPU thread needs only one line to proceed (storing multiple lines in a temp buffer was slower). The file doesn't need to be read sequentially. I am using Boost. My strategy is to have a mutex on the input stream, and each thread locks and unlocks it. This is not optimal, because the GPU thread should have a higher precedence when locking the mutex, being the fastest and the most demanding one. I can come up with different solutions, but before rushing into implementation I would like to have some guidelines. What approach do you use / recommend?
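
    One pattern to consider (an editor's sketch, not from the question): a dedicated reader thread feeding two bounded queues, large batches for the GPU and single lines for the CPU workers, replaces mutex precedence with explicit scheduling. Sketched with C++11 primitives:

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <string>

        // Sketch: the reader owns the stream; consumers never touch the file.
        struct LineQueue {
            std::queue<std::string> q;
            std::mutex m;
            std::condition_variable cv;
            const std::size_t capacity;

            explicit LineQueue(std::size_t cap) : capacity(cap) {}

            bool try_push(std::string s) {
                std::lock_guard<std::mutex> lk(m);
                if (q.size() >= capacity) return false;   // queue full
                q.push(std::move(s));
                cv.notify_one();
                return true;
            }
            std::string pop() {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return !q.empty(); });
                std::string s = std::move(q.front());
                q.pop();
                return s;
            }
        };

        // Reader loop (sketch): the GPU queue is tried first, so the GPU gets
        // priority by scheduling rather than by any special mutex semantics.
        // while (std::getline(in, line)) {
        //     while (!gpu_queue.try_push(line) && !cpu_queue.try_push(line))
        //         ;  // both full: spin here (a real version would block)
        // }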

  • Send data to a web server from C#: what's the most efficient way?

    - by Brian
    I am sending GPS coordinates from a Windows Mobile phone to a web server using a basic program I wrote in C#. The problem is the data plan on the phone only allows 4 MB per month. I was planning on updating the location every 10 seconds. Currently I am just creating a web request every 10 seconds to a PHP page on the server, and the coordinates are passed over in the URL; the PHP page saves them to the database. This generates about 1 KB of data per request; at this rate I will hit my data limit in less than a day. Is there a more efficient way to do this?
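
    One way to shrink the traffic (an editor's suggestion, not from the question; assumes the device's framework profile includes UdpClient, and the endpoint name is hypothetical): most of the 1 KB is HTTP and header overhead, so batching fixes and sending them in compact binary form, e.g. one UDP datagram per batch, reduces the cost to roughly 12 bytes per fix:

        using System.IO;
        using System.Net.Sockets;

        // Sketch: pack N fixes into one datagram (12 bytes per fix).
        // UDP is lossy; that is usually acceptable for periodic position fixes.
        static void SendBatch(uint[] timestamps, float[] lats, float[] lons)
        {
            using (var ms = new MemoryStream())
            using (var w = new BinaryWriter(ms))
            {
                for (int i = 0; i < lats.Length; i++)
                {
                    w.Write(timestamps[i]);  // 4 bytes
                    w.Write(lats[i]);        // 4 bytes
                    w.Write(lons[i]);        // 4 bytes
                }
                byte[] payload = ms.ToArray();
                using (var udp = new UdpClient("tracker.example.com", 9000))
                {
                    udp.Send(payload, payload.Length);
                }
            }
        }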

  • Python - Is there a better/efficient way to find a node in a tree?

    - by Sej P
    I have a node data structure, defined below, and I am not sure whether the find_matching_node method is Pythonic or efficient. I am not well versed with generators, but I think there might be a better solution using them. Any ideas?

        class HierarchyNode():
            def __init__(self, nodeId):
                self.nodeId = nodeId
                self.children = {}  # opted for dictionary to help reduce lookup time

            def addOrGetChild(self, childNode):
                return self.children.setdefault(childNode.nodeId, childNode)

            def find_matching_node(self, node):
                ''' look for the node in the immediate children of the current node.
                    if not found, recursively look for it in the children nodes
                    until gone through all nodes '''
                matching_node = self.children.get(node.nodeId)
                if matching_node:
                    return matching_node
                else:
                    for child in self.children.itervalues():
                        matching_node = child.find_matching_node(node)
                        if matching_node:
                            return matching_node
                return None
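
    For comparison, the generator version the question hints at (an editor's sketch): a depth-first generator separates traversal from matching and stops at the first hit:

        def walk(node):
            """Yield every node in the subtree, depth-first (Python 2 style,
            matching the question's itervalues())."""
            for child in node.children.itervalues():
                yield child
                for descendant in walk(child):
                    yield descendant

        def find_matching_node(root, node):
            # next() with a default stops the traversal at the first match.
            return next((n for n in walk(root) if n.nodeId == node.nodeId), None)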

  • How to code an efficient blacklist filter function in PHP?

    - by achairapart
    So, I have three arrays like this:

        [items] => Array (
            [0] => Array (
                [id] => someid
                [title] => sometitle
                [author] => someauthor
                ...
            )
            ...
        )

    and also a string with comma-separated words to blacklist:

        $blacklist = "some,words,to,blacklist";

    Now I need to match these words against id, title, and author (as they can appear in any of them) and show results accordingly. I was thinking of a function like this:

        $pattern = '(' . strtr($blacklist, ",", "|") . ')'; // should return (some|words|etc)

        foreach ($items as $item) {
            if ( !preg_match($pattern, $item['id']) &&
                 !preg_match($pattern, $item['title']) &&
                 !preg_match($pattern, $item['author']) ) {
                // show item
            }
        }

    and I wonder if this is the most efficient way to filter the arrays, or whether I should use something with strpos() or filter_var() with FILTER_VALIDATE_REGEXP... Note that this function is repeated for 3 arrays. However, each array will not contain more than 50 items.
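
    An alternative sketch (an editor's illustration, not from the question): with at most 50 items per array, one regex per item is already cheap; joining the three fields means one preg_match call instead of three, and preg_quote guards against special characters in the blacklist:

        <?php
        // Sketch: one regex test per item across all three fields.
        function filter_items(array $items, $blacklist) {
            $words = array_map(function ($w) { return preg_quote(trim($w), '/'); },
                               explode(',', $blacklist));
            $pattern = '/\b(' . implode('|', $words) . ')\b/i';

            $kept = array();
            foreach ($items as $item) {
                $haystack = $item['id'] . ' ' . $item['title'] . ' ' . $item['author'];
                if (!preg_match($pattern, $haystack)) {
                    $kept[] = $item;    // no blacklisted word in any field
                }
            }
            return $kept;
        }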

  • What is the most efficient functional version of the following imperative code?

    - by justin.r.s.
    I'm learning Scala and I want to know the best way of expressing this imperative pattern using Scala's functional programming capabilities:

        def f(l: List[Int]): Boolean = {
          for (e <- l) {
            if (test(e)) return true
          }
          return false
        }

    The best I can come up with is along the lines of:

        l map { e => test(e) } contains true

    But this is less efficient, since it calls test() on each element, whereas the imperative version stops at the first element that satisfies test(). Is there a more idiomatic functional programming technique I can use to the same effect? The imperative version seems awkward in Scala.
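
    For the record (an editor's note): the standard library already expresses this short-circuiting search; List.exists stops at the first element that satisfies the predicate, exactly like the imperative loop:

        // exists short-circuits on the first match.
        def f(l: List[Int]): Boolean = l.exists(test)

        // Related short-circuiting combinators:
        //   l.find(test)   // Option[Int] with the first match
        //   l.forall(p)    // stops at the first failure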

  • .NET: efficient way to produce a string from a Dictionary<K,V>?

    - by Cheeso
    Suppose I have a Dictionary<String,String>, and I want to produce a string representation of it. The "stone tools" way of doing it would be:

        private static string DictionaryToString(Dictionary<String,String> hash)
        {
            var list = new List<String>();
            foreach (var kvp in hash)
            {
                list.Add(kvp.Key + ":" + kvp.Value);
            }
            var result = String.Join(", ", list.ToArray());
            return result;
        }

    Is there an efficient way to do this in C# using existing extension methods? I know about the ConvertAll() and ForEach() methods on List, that can be used to eliminate foreach loops. Is there a similar method I can use on Dictionary to iterate through the items and accomplish what I want?
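
    A LINQ version for comparison (an editor's sketch, assuming .NET 3.5+): Enumerable.Select works over the dictionary's KeyValuePair sequence directly:

        using System.Collections.Generic;
        using System.Linq;

        static string DictionaryToString(Dictionary<string, string> hash)
        {
            // ToArray() keeps this compatible with the .NET 3.5 String.Join.
            return string.Join(", ",
                hash.Select(kvp => kvp.Key + ":" + kvp.Value).ToArray());
        }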

  • Which method of adding items to the ASP.NET Dictionary class is more efficient?

    - by ahmd0
    I'm converting a comma-separated list of strings into a dictionary using C# in ASP.NET (omitting any duplicates):

        string str = "1,2, 4, 2, 4, item 3,item2, item 3"; // just a random string for the sake of this example

    and I was wondering which method is more efficient?

    1 - Using a try/catch block:

        Dictionary<string, string> dic = new Dictionary<string, string>();
        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                try
                {
                    string s2 = s.Trim();
                    dic.Add(s2, s2);
                }
                catch { }
            }
        }

    2 - Or using the ContainsKey() method:

        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                string s2 = s.Trim();
                if (!dic.ContainsKey(s2))
                    dic.Add(s2, s2);
            }
        }
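
    A third option worth considering (an editor's addition, not from the question): a duplicate here is expected data rather than an exceptional condition, and the indexer inserts or overwrites with a single lookup and no exception machinery:

        var dic = new Dictionary<string, string>();
        foreach (string s in str.Split(','))
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                string s2 = s.Trim();
                dic[s2] = s2;   // insert-or-overwrite; never throws on duplicates
            }
        }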

  • Efficient determination of which strings in an array are substrings of the others?

    - by byte
    In C#, say you have an array of strings which contain only the characters '0' and '1':

        string[] input = { "0101", "101", "11", "010101011" };

    and you'd like to build a function

        public void IdentifySubstrings(string[] input) { ... }

    that will produce the following:

        "0101 is a substring of 010101011"
        "101 is a substring of 0101"
        "101 is a substring of 010101011"
        "11 is a substring of 010101011"

    And you are NOT able to use built-in string functionality (such as String.Substring). How would one efficiently solve this problem? Of course you could plow through it via brute force, but it just feels like there ought to be a way to accomplish it with a tree (since the only values are 0s and 1s, it feels like a binary tree ought to fit somehow). I've read a little bit about things like suffix trees, but I'm uncertain whether that's the right path to go down. Any efficient solutions you can think of?
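
    A sketch of the trie idea (an editor's illustration, not a tuned implementation): s is a substring of t exactly when s is a prefix of some suffix of t, so inserting every suffix of every string into a binary trie answers all queries with one walk per string:

        using System;
        using System.Collections.Generic;

        // Binary suffix trie: each node records which input strings pass through it.
        class Node
        {
            public Node[] Next = new Node[2];
            public HashSet<int> Sources = new HashSet<int>();
        }

        class SuffixTrie
        {
            readonly Node root = new Node();

            public void AddAllSuffixes(string t, int id)
            {
                for (int start = 0; start < t.Length; start++)
                {
                    Node cur = root;
                    for (int i = start; i < t.Length; i++)
                    {
                        int bit = t[i] - '0';
                        if (cur.Next[bit] == null) cur.Next[bit] = new Node();
                        cur = cur.Next[bit];
                        cur.Sources.Add(id);
                    }
                }
            }

            // Walk s from the root; the final node's sources contain s.
            public IEnumerable<int> ContainersOf(string s)
            {
                Node cur = root;
                foreach (char c in s)
                {
                    cur = cur.Next[c - '0'];
                    if (cur == null) return new HashSet<int>();
                }
                return cur.Sources;
            }
        }

        class Program
        {
            static void Main()
            {
                string[] input = { "0101", "101", "11", "010101011" };
                var trie = new SuffixTrie();
                for (int i = 0; i < input.Length; i++)
                    trie.AddAllSuffixes(input[i], i);

                for (int i = 0; i < input.Length; i++)
                    foreach (int j in trie.ContainersOf(input[i]))
                        if (j != i && input[j] != input[i])  // skip self/duplicates
                            Console.WriteLine(input[i] + " is a substring of " + input[j]);
            }
        }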

  • Most efficient way to maintain a 'set' in SQL Server?

    - by SEVEN YEAR LIBERAL ARTS DEGREE
    I have ~2 million rows or so of data, each row with an artificial PK and two ID fields (so: PK, ID1, ID2). I have a unique constraint (and index) on ID1+ID2. I get two sorts of updates, both with a distinct ID1 per update:

        100-1000 rows of all-new data (ID1 is new)
        100-1000 rows of largely, but not necessarily completely, overlapping data (ID1 already exists, maybe new ID1+ID2 pairs)

    What's the most efficient way to maintain this 'set'? Here are the options as I see them:

        Delete all the rows with ID1, insert all the new rows (yikes)
        Query all the existing rows from the set of new data ID1+ID2, only insert the new rows
        Insert all the new rows, ignore inserts that trigger unique constraint violations

    Any thoughts?
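
    The second option as a single set-based statement (an editor's sketch; #incoming is a hypothetical staging table loaded with the update batch):

        INSERT INTO TargetTable (ID1, ID2)
        SELECT i.ID1, i.ID2
        FROM #incoming AS i
        WHERE NOT EXISTS (
            SELECT 1
            FROM TargetTable AS t
            WHERE t.ID1 = i.ID1 AND t.ID2 = i.ID2
        );
        -- On SQL Server 2008+, a single MERGE statement can express the same
        -- insert-if-missing logic against the staging table.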

  • What's the most efficient way to repeatedly remove leading text using Vim?

    - by John Topley
    What's the most efficient way to remove the text

        2010-04-07 14:25:50,773 DEBUG This is a debug log statement - 

    from a log file like the extract below using Vim?

        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 9,8
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 1,11
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 5,2
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 8,4

    This is what the result should look like:

        9,8
        1,11
        5,2
        8,4

    Note that on this occasion I'm using gVim on Windows, so please don't suggest any UNIX programs which may be better suited to the task; I have to do it using Vim.
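
    For reference (an editor's note): one substitute over the whole buffer handles the varying timestamps in a single command:

        :%s/^.*DEBUG This is a debug log statement - //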

  • In the context of an ASP.NET website, what's the most efficient way to check whether a user has access to a record?

    - by scaramouch
    I have a webpage that is passed an id parameter (via a query string), which it then uses to fetch data from a database. Typically, a user would navigate to this page from another page that lists only those records the user has access to. However, if they go directly to the page by typing the URL into the address bar, they can effectively view any record they like. E.g., if they were to type something like http://localhost/TestSite/ClientAdmin/ManageLocation.aspx?LocationID=5 into their address bar, they can access the database record with LocationID equal to five, even though they shouldn't have access to it. Now, I could solve this by doing a database check every time the page is loaded to see whether the current user has access to the record they're trying to view. However, this doesn't seem very efficient, given that in most cases a user won't be trying to access a record that isn't theirs. Does anyone have a better suggestion? Thanks.
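
    One common pattern (an editor's suggestion, not from the question): fold the ownership check into the fetch itself, so authorization costs no extra round trip and an empty result covers both "not found" and "not yours". A sketch with hypothetical table and column names:

        -- The ownership predicate rides along with the data fetch.
        SELECT l.LocationID, l.Name
        FROM Locations AS l
        WHERE l.LocationID = @LocationID
          AND l.OwnerUserID = @CurrentUserID;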

  • What is the most efficient way to pass data (list of pairs of [Integer + Double]) between two Google App Engine instances?

    - by ruslan
    What is the most efficient way to pass data (a list of pairs of [Integer, Double]) between two Google App Engine instances? Currently I use Java binary serialization. The frontend servlet receives data from the client in JSON format. I convert it to byte[] using ObjectOutput.writeObject and then send it to the backend servlet via HTTP POST. It's not in production yet. Should I just pass the client's JSON as-is to the backend? It seems more logical, but it's bigger in size. Or should I use Google Protocol Buffers, as stated in this benchmark article? Thank you!!!
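
    For scale (an editor's sketch, not from the question): a hand-rolled DataOutputStream encoding costs a fixed 12 bytes per pair plus a 4-byte count, with none of Java serialization's class-descriptor overhead; Protocol Buffers would be in the same ballpark using varints:

        import java.io.ByteArrayOutputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;

        class PairCodec {
            // Parallel arrays standing in for the list of [Integer, Double] pairs.
            static byte[] encode(int[] keys, double[] values) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                DataOutputStream out = new DataOutputStream(bos);
                out.writeInt(keys.length);          // 4-byte pair count
                for (int i = 0; i < keys.length; i++) {
                    out.writeInt(keys[i]);          // 4 bytes
                    out.writeDouble(values[i]);     // 8 bytes
                }
                out.flush();
                return bos.toByteArray();           // 4 + 12*n bytes total
            }
        }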
