Search Results

Search found 15637 results on 626 pages for 'memory efficient'.

Page 97/626 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • Most efficient method to query a Young Tableau

    - by Matthieu M.
    A Young Tableau is a 2D matrix A of dimensions M*N such that, for all (i,j) in [0,M)x[0,N):

        for each p in (i,M): A[i,j] <= A[p,j]
        for each q in (j,N): A[i,j] <= A[i,q]

    That is, it is sorted both row-wise and column-wise. Since it may contain fewer than M*N numbers, the bottom-right values may be represented either as missing or (in algorithm theory) as infinity to denote their absence. Now the (elementary) question: how do you check whether a given number is contained in the Young Tableau? It is of course trivial to produce an algorithm in O(M*N) time, but what is interesting is that it is very easy to provide an algorithm in O(M+N) time.

    Bottom-left search: let x be the number we look for, and initialize i,j as M-1, 0 (the bottom-left corner).

    1. If x == A[i,j], return true.
    2. If x < A[i,j]: if i is 0, return false; else decrement i and go to 1.
    3. Else: if j is N-1, return false; else increment j and go to 1.

    This algorithm makes no more than M+N moves. The correctness is left as an exercise. It is possible, though, to obtain a better asymptotic runtime.

    Pivot search: let x be the number we look for, and initialize i,j as floor(M/2), floor(N/2).

    1. If x == A[i,j], return true.
    2. If x < A[i,j], search (recursively) in A[0:i-1, 0:j-1], A[i:M-1, 0:j-1] and A[0:i-1, j:N-1].
    3. Else, search (recursively) in A[i+1:M-1, 0:j], A[i+1:M-1, j+1:N-1] and A[0:i, j+1:N-1].

    This algorithm proceeds by discarding one of the 4 quadrants at each step and recursing on the 3 remaining (divide and conquer); the master theorem yields a complexity of O((N+M)**(log 3 / log 4)), which is better asymptotically. However, this is only a big-O estimate... So, here are the questions:

    1. Do you know of (or can you think of) an algorithm with a better asymptotic runtime?
    2. As introsort demonstrates, it is sometimes worth switching algorithms depending on the input size or input topology... do you think that would be possible here? For 2., I am notably thinking that for small inputs the bottom-left search should be faster because of its O(1) space requirement / lower constant term.
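
    A minimal C# sketch of the bottom-left search, assuming a rectangular int[,] where int.MaxValue stands in for the missing "infinity" entries (the method and parameter names are illustrative, not from the question):

        // Staircase search in a Young tableau: at most M+N moves, O(1) space.
        static bool Contains(int[,] a, int x)
        {
            int m = a.GetLength(0), n = a.GetLength(1);
            int i = m - 1, j = 0;              // start at the bottom-left corner
            while (i >= 0 && j < n)
            {
                if (a[i, j] == x) return true;
                if (x < a[i, j]) i--;          // everything from column j rightward in row i is even larger: discard row i
                else j++;                      // everything from row i upward in column j is even smaller: discard column j
            }
            return false;
        }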

    Read the article

  • SAN cache memory upgrade

    - by Scott Lundberg
    We currently have an IBM DS4300 Dual Controller Fibre SAN. It is a good box, but getting pretty old. It came with 256MB of cache per controller. Recently we replaced the batteries in one of the controllers and noticed that the cache is a DDR PC2100 ECC DIMM. Naturally, this got us thinking about how cheap that RAM is now, and whether there is any good reason we can't upgrade it. IBM used to offer a "Turbo" upgrade for this box that doubled the cache and added a bunch of software features for about 10K USD. Since that product has been end-of-lifed, I don't think we can get that upgrade, and we don't need the software features (FlashCopy, StorageCopy, etc). Besides the obvious potential warranty issue, what issues, if any, would we expect to see if we attempted to put two 1GB DIMMs in this unit? Is there anything else I am missing here? EDIT: Memory label: Samsung CN 0433 PC2100U-25331-A1 M381L3223ETM-CB0 256MB DDR PC2100 CL2.5 ECC

    Read the article

  • Most efficient sorting of a calculation on a DataTable column

    - by byte
    Let's say you have a DataTable that has columns "id", "cost", and "qty":

        DataTable dt = new DataTable();
        dt.Columns.Add("id", typeof(int));
        dt.Columns.Add("cost", typeof(double));
        dt.Columns.Add("qty", typeof(int));

    And it's keyed on "id":

        dt.PrimaryKey = new DataColumn[1] { dt.Columns["id"] };

    Now what we are interested in is the cost per quantity. In other words, if you had a row of:

        id | cost  | qty
        -----------------
        42 | 10.00 | 2

    the cost per quantity is 5.00. My question then is: given the preceding table, assume it's constructed with many thousands of rows, and you're interested in the top 3 cost-per-quantity rows. The information needed is the id and the cost per quantity. You cannot use LINQ. In SQL it would be trivial; how best (most efficiently) would you accomplish it in C# without LINQ?
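
    A minimal sketch of one non-LINQ approach, assuming the dt built above: a single O(n) pass that keeps only the three best (id, cost-per-qty) pairs in a tiny sorted buffer, instead of sorting every row. The zero-quantity guard is an assumption.

        using System;
        using System.Collections.Generic;
        using System.Data;

        // One pass over the rows; the buffer never holds more than three entries,
        // so this is O(n) time versus O(n log n) for a full sort.
        static List<KeyValuePair<int, double>> TopThreeCostPerQty(DataTable dt)
        {
            var top = new List<KeyValuePair<int, double>>(4);
            foreach (DataRow row in dt.Rows)
            {
                int qty = (int)row["qty"];
                if (qty == 0) continue;                         // assumption: ignore zero quantities
                double perQty = (double)row["cost"] / qty;

                int i = top.Count;
                while (i > 0 && top[i - 1].Value < perQty) i--; // insertion point, descending order
                if (i < 3)
                {
                    top.Insert(i, new KeyValuePair<int, double>((int)row["id"], perQty));
                    if (top.Count > 3) top.RemoveAt(3);         // never keep more than three
                }
            }
            return top;                                         // highest cost-per-qty first
        }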

    Read the article

  • Efficient cron job utilizing Zend_Mail_Storage_Imap

    - by fireeyedboy
    I'm new to the IMAP protocol and Zend_Mail_Storage, and I'm writing a small PHP script for a cron job that should regularly poll an IMAP account, check for new messages, and send an e-mail if new messages have arrived. As you can imagine, I want to poll the IMAP account only for relevant messages, and I only want to send a new e-mail if new messages have arrived since the last polled message. So I thought of keeping track of the last message I polled with some unique identifier for a message. But I'm a bit uncertain about whether the methods I want to utilize for this actually do what I expect them to do. So my questions are:

    1. Does the iterator position of Zend_Mail_Storage_Imap actually resemble some IMAP unique identifier for messages, or is it simply an internal position of Zend_Mail_Storage_Abstract? For instance, if I tell it to seek() to message 5 (which I stored from an earlier session), will it indeed seek to the appropriate message on the IMAP server, even if messages have been deleted since the last session?
    2. Would keeping track of this latest polled message id in a file suffice for a cron job that, say, polls the account every 5 or 10 minutes? Or is this too naive, and should I be using a database, for instance? Or is there maybe a much easier way to keep track of such state with Zend_Mail_Storage_Abstract?
    3. Also, do I need to poll every IMAP folder, or is everything accumulated when I poll INBOX?

    If you could shed some light on any of these matters, I'd appreciate it. Thanks in advance.

    Read the article

  • Efficient code to avoid circular references in c# object model

    - by Kumar
    I have an Excel-like grid where typed values can reference other rows. To check for circular references when a new value is entered, I traverse the tree and build a list of the values referenced so far; if the current value is found in this list, I return an error, thus avoiding a circular reference. This happens infrequently enough that extreme performance is not an issue, but... Question: is there a better way? I'm told this is not the most optimal approach, but no answer was provided, so on to the experts @ SO :)
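
    A minimal sketch of the visited-set traversal the question describes, in C#; Cell and its References list are hypothetical stand-ins for the real object model:

        using System.Collections.Generic;

        // Hypothetical stand-in for a row/cell in the grid's object model.
        class Cell
        {
            public List<Cell> References = new List<Cell>();
        }

        // True if making `start` reference `candidate` would close a cycle,
        // i.e. if `start` is reachable from `candidate`. Iterative DFS with a
        // HashSet, so each cell is visited at most once.
        static bool WouldCreateCycle(Cell start, Cell candidate)
        {
            var seen = new HashSet<Cell>();
            var stack = new Stack<Cell>();
            stack.Push(candidate);
            while (stack.Count > 0)
            {
                Cell cell = stack.Pop();
                if (cell == start) return true;   // found a path back to the origin
                if (!seen.Add(cell)) continue;    // already explored this cell
                foreach (Cell r in cell.References)
                    stack.Push(r);
            }
            return false;
        }

    Using a HashSet instead of a plain list makes each "have I seen this value" test O(1), which is the usual first improvement over re-scanning a list of referenced values.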

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods to retrieve data, and this has already proven much better than what we had. When it comes to updating existing data, my initial thought was to query only for data that was updated. There is a field for "Modified" that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest constantly running "batch" processes that walk through all listings regardless of updated status. Is that the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

    Read the article

  • Algorithm: efficient way to remove duplicate integers from an array

    - by ejel
    I got this problem from an interview with Microsoft. Given an array of random integers, write an algorithm in C that removes the duplicated numbers and returns the unique numbers in the original array. E.g. Input: {4, 8, 4, 1, 1, 2, 9} Output: {4, 8, 1, 2, 9, ?, ?} One caveat is that the expected algorithm should not require the array to be sorted first. And when an element has been removed, the following elements must be shifted forward as well. In any case, the values at the tail of the array, where elements were shifted forward from, are irrelevant. Update: The result must be returned in the original array, and a helper data structure (e.g. a hashtable) may not be used. However, I guess order preservation is not necessary. Update 2: For those who wonder why these constraints are so impractical: this was an interview question, and all these constraints were discussed during the thinking process to see how I could come up with different ideas.
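
    A sketch of the classic in-place answer, shown here in C# rather than the C the interviewer asked for (the idea translates directly): O(n^2) comparisons, no helper structure, and order preserved.

        // `count` tracks the length of the unique prefix; each element is checked
        // against that prefix and kept only if unseen, shifting unique values forward.
        static int RemoveDuplicates(int[] a)
        {
            int count = 0;                            // length of the unique prefix
            for (int i = 0; i < a.Length; i++)
            {
                bool seen = false;
                for (int j = 0; j < count; j++)
                    if (a[j] == a[i]) { seen = true; break; }
                if (!seen) a[count++] = a[i];         // shift the unique element forward
            }
            return count;                             // values past `count` are leftovers
        }

    On the question's example {4, 8, 4, 1, 1, 2, 9} this leaves {4, 8, 1, 2, 9, ...} with a returned count of 5, matching the expected output.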

    Read the article

  • Most efficient way to store list structure in XML

    - by Mike
    Starting a new project, and I was planning on storing all of my web content in XML. I do not have access to a database, so this seemed like the next best thing. One thing I'm struggling with is how to structure the XML for links (which will later be transformed using XSLT). It needs to be fairly flexible as well. Below is what I started with, but I'm starting to question it.

        <links>
          <link>
            <url>http://google.com</url>
            <description>Google</description>
          </link>
          <link>
            <url>http://yahoo.com</url>
            <description>Yahoo</description>
            <links>
              <link>
                <url>http://yahoo.com/search</url>
                <description>Search</description>
              </link>
            </links>
          </link>
        </links>

    That should get transformed into:

        Google
        Yahoo
            Search

    Perhaps something like this might work better:

        <links>
          <link href="http://google.com">Google</link>
          <link href="http://yahoo.com">Yahoo
            <link href="http://yahoo.com/search">Search</link>
          </link>
        </links>

    Does anyone perhaps have a link that talks about structuring web content properly in XML? Thank you. :)

    Read the article

  • Efficient algorithm to generate all solutions of a linear Diophantine equation with a_i = 1

    - by Ben
    I am trying to generate all the solutions to the following equations for a given H. With H = 4:

    1) ALL solutions for x_1 + x_2 + x_3 + x_4 = 4
    2) ALL solutions for x_1 + x_2 + x_3 = 4
    3) ALL solutions for x_1 + x_2 = 4
    4) ALL solutions for x_1 = 4

    For my problem, there are always H equations to solve (here 4), each independently of the others. There are a total of 2^(H-1) solutions. For the example above, here are the solutions:

    1) 1 1 1 1
    2) 1 1 2 and 1 2 1 and 2 1 1
    3) 1 3 and 3 1 and 2 2
    4) 4

    Here is an R algorithm which solves the problem:

        library(gtools)
        H <- 4
        solutions <- NULL
        for(i in seq(H)) {
          res <- permutations(H-i+1, i, repeats.allowed=T)
          resum <- apply(res, 1, sum)
          id <- which(resum == H)
          print(paste("solutions with ", i, " variables", sep=""))
          print(res[id,])
        }

    However, this algorithm makes more calculations than needed: it generates many permutations whose sum is not H, only to discard them. I am sure it is possible to go faster by not generating those in the first place. Any idea of a better algorithm for a given H?
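
    A minimal sketch of the direct approach, in C# (names are illustrative): build the compositions recursively so that every generated sequence already sums to H, and nothing is discarded.

        using System.Collections.Generic;

        // Enumerates all compositions of h: ordered sequences of positive integers
        // summing to h. Exactly 2^(h-1) results are produced for h >= 1, and every
        // one of them is a solution -- nothing is generated and then thrown away.
        static IEnumerable<int[]> Compositions(int h)
        {
            if (h == 0)
            {
                yield return new int[0];   // the empty composition completes a sum
                yield break;
            }
            for (int first = 1; first <= h; first++)
            {
                foreach (int[] rest in Compositions(h - first))
                {
                    var solution = new int[rest.Length + 1];
                    solution[0] = first;
                    rest.CopyTo(solution, 1);
                    yield return solution;
                }
            }
        }

    Grouping the output by number of variables (the separate equations above) is then just a filter on each solution's length.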

    Read the article

  • Convert in-memory POCO objects to C# initializer code

    - by sidesinger
    Is there a library or code sample for converting an in-memory POCO C# object to a .cs code file that creates that object? An example: an object of type Car in memory becomes:

        Car c = new Car
        {
            Name = "mazda",
            Id = 5,
            Passengers = new List<string> { "Bob", "Sally" }
            // etc... recursing to the bottom
        };

    It can be assumed that only public properties need to be set.
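
    There may well be a library for this, but a minimal reflection-based sketch shows the idea. It handles only public, writable, primitive- and string-typed properties; collections and nested objects (the "recursing to the bottom" part) and proper string escaping would need more work, as the comments note.

        using System;
        using System.Linq;
        using System.Reflection;

        // Emits a C# object-initializer expression for the public, writable,
        // primitive- and string-typed properties of an object. A starting point
        // only: no recursion into nested objects, and string escaping is naive.
        static string ToInitializer(object obj)
        {
            Type t = obj.GetType();
            var parts = t.GetProperties(BindingFlags.Public | BindingFlags.Instance)
                .Where(p => p.CanRead && p.CanWrite)
                .Where(p => p.PropertyType.IsPrimitive || p.PropertyType == typeof(string))
                .Select(p =>
                {
                    object v = p.GetValue(obj, null);
                    string literal = v == null ? "null"
                        : v is string ? "\"" + v + "\""
                        : Convert.ToString(v);
                    return p.Name + " = " + literal;
                });
            return "new " + t.Name + " { " + string.Join(", ", parts) + " }";
        }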

    Read the article

  • Which is more efficient: HTML DOM or jQuery?

    - by Quasar the space thing
    I am adding new elements to an HTML page body using document.createElement in JavaScript, with a few if/else cases and function calls. It all works fine. Recently I learned that I can do this with jQuery, too. I have not done much coding yet, so I was wondering which way is best in terms of efficiency: using native DOM methods, or using jQuery to add elements dynamically to the page?

    Read the article

  • C#: What is the best collection class to store very similar string items for efficient serialization

    - by Gregor
    Hi, I would like to store a list of entityIDs of Outlook emails in a file. The entityIDs are strings like:

        "000000005F776F08B736B442BCF7B6A7060B509A64002000"
        "000000005F776F08B736B442BCF7B6A7060B509A84002000"
        "000000005F776F08B736B442BCF7B6A7060B509AA4002000"

    As you can see, the strings are very similar. I would like to save these strings in a collection class that would be stored as efficiently as possible when I serialize it to a file. Do you know of any collection class that could be used for this? Thank you in advance for any information... Gregor
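
    One option, sketched below under the assumption that the IDs stay as similar as the samples above: keep an ordinary List<string> in memory, but front-code the entries when serializing so that shared prefixes are written only once. The "shared:suffix" line format is made up for illustration.

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Front coding: each ID is written as the number of leading characters it
        // shares with the previous ID, a colon, and the differing suffix. With IDs
        // this similar, most of each 48-character string collapses away on disk.
        static void SaveFrontCoded(List<string> ids, string path)
        {
            using (var writer = new StreamWriter(path))
            {
                string prev = "";
                foreach (string id in ids)
                {
                    int shared = 0;
                    int max = Math.Min(prev.Length, id.Length);
                    while (shared < max && prev[shared] == id[shared]) shared++;
                    writer.WriteLine(shared + ":" + id.Substring(shared));
                    prev = id;
                }
            }
        }

    Reading the file back is the mirror image: keep the first `shared` characters of the previously decoded ID and append the stored suffix. Wrapping the writer in a GZipStream would squeeze the remaining repetition further still.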

    Read the article

  • Efficient AutoSuggest with jQuery?

    - by nobosh
    I'm working to build a jQuery AutoSuggest plugin, inspired by Apple's Spotlight. Here is the general code:

        $(document).ready(function() {
            $('#q').bind('keyup', function() {
                if ($(this).val().length == 0) {
                    // Hide the q-suggestions box
                    $('#q-suggestions').fadeOut();
                } else {
                    // Show the AJAX spinner
                    $('#q').css('background-image', 'url(/images/ajax-loader.gif)');
                    $.ajax({
                        url: '/search/spotlight/',
                        data: { 'q': $(this).val() },
                        success: function(data) {
                            $('#q-suggestions').fadeIn();   // Show the q-suggestions box
                            $('#q-suggestions').html(data); // Fill the q-suggestions box
                            // Hide the AJAX spinner
                            $('#q').css('background-image', 'url(/images/icon-search.gif)');
                        }
                    });
                }
            });
        });

    The issue I want to solve well and elegantly is not killing the server. Right now the code above hits the server on every keystroke rather than waiting for you to finish typing. What's the best way to solve this?

    A. Kill the previous AJAX request?
    B. Some type of AJAX caching?
    C. Adding some type of delay so that .ajax() is only submitted when the person has stopped typing for 300ms or so?

    Read the article

  • Building an *efficient* if/then interface for non-technical users to build flow-control in PHP

    - by Brendan
    I am currently building an internal tool to be used by our management to control the flow of traffic. I have built an if/then interface allowing the user to set conditions for certain outcomes; however, using switch statements to control the flow is inefficient. How can I improve the efficiency of my code? Example of the code:

        if($previous['route_id'] == $condition['route_id'] && $failed == 0) // if we have not moved on to a new set of rules and we haven't failed yet
        {
            switch($condition['type'])
            {
                case 0 : $type = $user['hour']; break;
                case 1 : $type = $user['location']['region_abv']; break;
                case 2 : $type = $user['referrer_domain']; break;
                case 3 : $type = $user['affiliate']; break;
                case 4 : $type = $user['location']['country_code']; break;
                case 5 : $type = $user['location']['city']; break;
            }

            $type = strtolower($type);
            $condition['value'] = strtolower($condition['value']);

            switch($condition['operator'])
            {
                case 0 : if($type == $condition['value']); else $failed = '1'; break;
                case 1 : if($type != $condition['value']); else $failed = '1'; break;
                case 2 : if($type >  $condition['value']); else $failed = '1'; break;
                case 3 : if($type >= $condition['value']); else $failed = '1'; break;
                case 4 : if($type <  $condition['value']); else $failed = '1'; break;
                case 5 : if($type <= $condition['value']); else $failed = '1'; break;
            }
        }
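
    The usual replacement for a pair of switch statements is a pair of lookup tables. A minimal sketch of the pattern in C# (the question's code is PHP, but the idea is language-independent; the User class and the simplified ordinal string comparison are assumptions):

        using System;
        using System.Collections.Generic;

        // Hypothetical stand-in for the question's $user array.
        class User
        {
            public string Hour, RegionAbv, ReferrerDomain, Affiliate, CountryCode, City;
        }

        static class Rules
        {
            // One table per switch: condition type -> value selector,
            // operator code -> predicate on a CompareTo-style result.
            static readonly Dictionary<int, Func<User, string>> Selectors =
                new Dictionary<int, Func<User, string>>
            {
                { 0, u => u.Hour },
                { 1, u => u.RegionAbv },
                { 2, u => u.ReferrerDomain },
                { 3, u => u.Affiliate },
                { 4, u => u.CountryCode },
                { 5, u => u.City },
            };

            static readonly Dictionary<int, Func<int, bool>> Operators =
                new Dictionary<int, Func<int, bool>>
            {
                { 0, c => c == 0 },  // ==
                { 1, c => c != 0 },  // !=
                { 2, c => c > 0 },   // >
                { 3, c => c >= 0 },  // >=
                { 4, c => c < 0 },   // <
                { 5, c => c <= 0 },  // <=
            };

            public static bool Passes(User user, int type, int op, string value)
            {
                string actual = Selectors[type](user).ToLowerInvariant();
                int c = string.Compare(actual, value.ToLowerInvariant(), StringComparison.Ordinal);
                return Operators[op](c);
            }
        }

    Adding a new condition type or operator then means adding one table entry rather than another case in the flow-control code.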

    Read the article

  • What is an efficient method for partitioning and aggregating intervals from timestamped rows in a data frame?

    - by mattrepl
    From a data frame with timestamped rows (strptime results), what is the best method for aggregating statistics for intervals? Intervals could be an hour, a day, etc. I've found the aggregate function, but that doesn't help with assigning each row to an interval. I'm planning on adding a column to the data frame that denotes interval and using that with aggregate, but if there's a better solution it'd be great to hear it. Thanks for any pointers!

    Read the article

  • Most efficient way to extract bit flags

    - by Hallik
    I have these possible bit flags: 1, 2, 4, 8, 16, 64, 128, 256, 512, 2048, 4096, 16384, 32768, 65536. Each number acts as a true/false statement on the server side. So if the first 3 items, and only the first 3 items, are marked "true" on the server side, the web service will return a 7. Or, if all 14 items above are true, I would still get a single number back from the web service, which is the sum of all those numbers. What is the best way to handle the number I get back to find out which items are marked as "true"?
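
    A minimal sketch of the standard bitwise test, in C#, using the flag values from the question:

        using System.Collections.Generic;

        // The flag values from the question (note: not every power of two appears).
        static readonly int[] AllFlags =
            { 1, 2, 4, 8, 16, 64, 128, 256, 512, 2048, 4096, 16384, 32768, 65536 };

        // A flag is "true" exactly when its bit survives a bitwise AND with the
        // number the web service returned.
        static List<int> ExtractFlags(int value)
        {
            var set = new List<int>();
            foreach (int flag in AllFlags)
                if ((value & flag) == flag)
                    set.Add(flag);
            return set;
        }

        // Example: ExtractFlags(7) -> { 1, 2, 4 }, i.e. the first three items.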

    Read the article

  • Datawindow opening error "The memory could not be read"

    - by Ambut bhath
    I'm using PowerBuilder version 7.0.3. While opening/modifying a DataWindow from the library list, I got the following error message:

        The instruction at "0x104a985c" referenced memory at "0x00000000". The memory could not be "read".
        Click on OK to terminate the program.
        Click on CANCEL to debug the program.

    Is there any solution for this issue? How can I resolve it? Thanks in advance! Regards, Ambutbhath

    Read the article

  • AVCam memory low warning

    - by Red Nightingale
    This is less a question and more a record of what I've found around the AVCam sample code provided by Apple for iOS 4 and 5 camera manipulation. The symptom of the problem, for me, was that my app would crash on launching the AVCamViewController after taking around 5-10 photos. I ran the app through the memory leak profiler and there were no apparent leaks, but on inspection with the Activity Monitor I discovered that something called mediaserverd was growing by 17MB every time the camera was launched, and when it reached ~100MB the app crashed with multiple low memory warnings.

    Read the article

  • Efficient splitting of elements in a field

    - by Gary
    I have a field in a text file exported from a database. The field contains addresses, but sometimes they are quite long, and the database allows them to span multiple lines. When exported, each newline character gets replaced with a dollar sign, like this:

        first part of very long address$second part of very long address$third part of very long address

    Not every address has multiple lines, and no address contains more than three lines. The length of each line is variable. I'm massaging the data for import into MS Access, which is used for a mail merge. I want to split the field on the $ sign if it's there, but if the field only contains one line, I want to set my two extra output fields to a zero-length string so that I don't wind up with blank lines in the address when it gets printed. I have an awk file that works correctly on all the other data in the text file, but I need to get this last bit working. I tried the code below. Aside from the fact that I get a syntax error at the else, I'm not sure this is a good way to do what I want. This is being done with gawk on Windows.

        BEGIN { FS = "|" }

        $1 != "HEADER" {
            if ($6 ~ /\$/)
                split($6, arr, "$")
                address = arr[1]
                addresstwo = arr[2]
                addressthree = arr[3]
                addressLength = length(address)
                addressTwoLength = length(addresstwo)
                addressThreeLength = length(addressthree)
            else {
                address = $6
                addressLength = length($6)
                addresstwo = ""
                addressTwoLength = length(addresstwo)
                addressthree = ""
                addressThreeLength = length(addressthree)
            }
            printf("%*s\t%*s\t%*s\n", addressLength, address, addressTwoLength, addresstwo, addressThreeLength, addressthree)
        }
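
    For what it's worth, the syntax error comes from the if branch: in awk, as in C, a branch holding more than one statement must be wrapped in braces before an else can follow. A sketch of the corrected control structure, using the same variable names as the code above:

        if ($6 ~ /\$/) {
            split($6, arr, "$")
            address = arr[1]
            addresstwo = arr[2]
            addressthree = arr[3]
        } else {
            address = $6
            addresstwo = ""
            addressthree = ""
        }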

    Read the article
