Search Results

Search found 3758 results on 151 pages for 'efficient'.

Page 21/151 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Python - Is there a better/more efficient way to find a node in a tree?

    - by Sej P
    I have a node data structure defined as below and was not sure whether the find_matching_node method is Pythonic or efficient. I am not well versed with generators, but I think there might be a better solution using them. Any ideas?

        class HierarchyNode():
            def __init__(self, nodeId):
                self.nodeId = nodeId
                self.children = {}  # opted for a dictionary to help reduce lookup time

            def addOrGetChild(self, childNode):
                return self.children.setdefault(childNode.nodeId, childNode)

            def find_matching_node(self, node):
                '''
                Look for the node in the immediate children of the current node.
                If not found, recursively look for it in the children nodes until all nodes have been searched.
                '''
                matching_node = self.children.get(node.nodeId)
                if matching_node:
                    return matching_node
                else:
                    for child in self.children.itervalues():
                        matching_node = child.find_matching_node(node)
                        if matching_node:
                            return matching_node
                return None
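
    A generator-based variant along the lines the poster is asking about might look like this (a minimal sketch, not from the original post; Python 2 is assumed because the class above uses itervalues()). Walking the tree lazily keeps the early-exit behaviour, since next() stops consuming the generator at the first match:

        # Methods that could be added to HierarchyNode:

        def iter_descendants(self):
            # yield every descendant of this node, depth-first
            for child in self.children.itervalues():
                yield child
                for descendant in child.iter_descendants():
                    yield descendant

        def find_matching_node(self, node):
            # the generator expression is only consumed until the first matching node
            return next((n for n in self.iter_descendants() if n.nodeId == node.nodeId), None)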

    Read the article

  • How to code an efficient blacklist filter function in php?

    - by achairapart
    So, I have three arrays like this:

        [items] => Array (
            [0] => Array (
                [id] => someid
                [title] => sometitle
                [author] => someauthor
                ...
            )
            ...
        )

    and also a string with comma-separated words to blacklist:

        $blacklist = "some,words,to,blacklist";

    Now I need to match these words against (as they can be any of) id, title and author, and show results accordingly. I was thinking of a function like this:

        $pattern = '('.strtr($blacklist, ",", "|").')'; // should return (some|words|etc)
        foreach ($items as $item) {
            if ( !preg_match($pattern, $item['id']) ||
                 !preg_match($pattern, $item['title']) ||
                 !preg_match($pattern, $item['author']) ) {
                // show item
            }
        }

    I wonder if this is the most efficient way to filter the arrays, or whether I should use something with strpos() or filter_var() with FILTER_VALIDATE_REGEXP ... Note that this function is repeated for each of the 3 arrays. However, each array will not contain more than 50 items.
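
    One commonly suggested refinement (a hedged sketch, not from the original question) is to build the alternation once with preg_quote(), so blacklist entries containing regex metacharacters cannot break the pattern, and to test the three fields in a single pass:

        $words   = array_filter(array_map('trim', explode(',', $blacklist)));
        $pattern = '~' . implode('|', array_map('preg_quote', $words)) . '~i';

        foreach ($items as $item) {
            // one haystack per item instead of three separate preg_match() calls
            $haystack = $item['id'] . ' ' . $item['title'] . ' ' . $item['author'];
            if (!preg_match($pattern, $haystack)) {
                // no blacklisted word found: show the item
            }
        }

    With at most 50 items per array, either version will be fast; preg_quote() mostly buys safety rather than speed.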

    Read the article

  • What is the most efficient functional version of the following imperative code?

    - by justin.r.s.
    I'm learning Scala and I want to know the best way of expressing this imperative pattern using Scala's functional programming capabilities:

        def f(l: List[Int]): Boolean = {
          for (e <- l) {
            if (test(e)) return true
          }
          return false
        }

    The best I can come up with is along the lines of:

        l map { e => test(e) } contains true

    But this is less efficient, since it calls test() on each element, whereas the imperative version stops at the first element that satisfies test(). Is there a more idiomatic functional programming technique I can use to the same effect? The imperative version seems awkward in Scala.
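
    For what it's worth, the short-circuiting combinator the poster seems to be after is exists, which stops at the first element that satisfies the predicate (a minimal sketch, assuming test has type Int => Boolean):

        def f(l: List[Int]): Boolean = l.exists(test)

        // e.g. List(1, 2, 3).exists(_ > 2) evaluates the predicate only until the first hit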

    Read the article

  • .NET: efficient way to produce a string from a Dictionary<K,V> ?

    - by Cheeso
    Suppose I have a Dictionary<String,String>, and I want to produce a string representation of it. The "stone tools" way of doing it would be:

        private static string DictionaryToString(Dictionary<String,String> hash)
        {
            var list = new List<String>();
            foreach (var kvp in hash)
            {
                list.Add(kvp.Key + ":" + kvp.Value);
            }
            var result = String.Join(", ", list.ToArray());
            return result;
        }

    Is there an efficient way to do this in C# using existing extension methods? I know about the ConvertAll() and ForEach() methods on List, that can be used to eliminate foreach loops. Is there a similar method I can use on Dictionary to iterate through the items and accomplish what I want?
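
    A LINQ-based sketch (not from the original post; it assumes a using System.Linq directive) replaces the intermediate list with a projection over the key/value pairs:

        using System.Linq;

        private static string DictionaryToString(Dictionary<string, string> hash)
        {
            // Select projects each pair to "key:value"; String.Join then concatenates them
            return String.Join(", ", hash.Select(kvp => kvp.Key + ":" + kvp.Value).ToArray());
        }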

    Read the article

  • Which method of adding items to the ASP.NET Dictionary class is more efficient?

    - by ahmd0
    I'm converting a comma-separated list of strings into a dictionary using C# in ASP.NET (omitting any duplicates):

        string str = "1,2, 4, 2, 4, item 3,item2, item 3"; // Just a random string for the sake of this example

    and I was wondering which method is more efficient?

    1 - Using a try/catch block:

        Dictionary<string, string> dic = new Dictionary<string, string>();
        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                try
                {
                    string s2 = s.Trim();
                    dic.Add(s2, s2);
                }
                catch { }
            }
        }

    2 - Or using the ContainsKey() method:

        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                string s2 = s.Trim();
                if (!dic.ContainsKey(s2))
                    dic.Add(s2, s2);
            }
        }
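
    A third shape worth comparing (a sketch, not from the original question; it also assumes System.Linq) sidesteps both the exception and the explicit key check by de-duplicating before the dictionary is built:

        var dic = str.Split(',')
                     .Select(s => s.Trim())
                     .Where(s => s.Length > 0)
                     .Distinct()                 // drops duplicate keys up front
                     .ToDictionary(s => s);

    As a general point, throwing and catching an exception is far more expensive than a ContainsKey() lookup, so option 1 is normally the slower of the two whenever duplicates are common.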

    Read the article

  • Efficient determination of which strings in an array are substrings of the others?

    - by byte
    In C#, say you have an array of strings, which contain only the characters '0' and '1':

        string[] input = { "0101", "101", "11", "010101011" };

    And you'd like to build a function:

        public void IdentifySubstrings(string[] input) { ... }

    That will produce the following:

        "0101 is a substring of 010101011"
        "101 is a substring of 0101"
        "101 is a substring of 010101011"
        "11 is a substring of 010101011"

    And you are NOT able to use built-in string functionality (such as String.Substring). How would one efficiently solve this problem? Of course you could plow through it via brute force, but it just feels like there ought to be a way to accomplish it with a tree (since the only values are 0's and 1's, it feels like a binary tree ought to fit somehow). I've read a little bit about things like suffix trees, but I'm uncertain if that's the right path to be going down. Any efficient solutions you can think of?
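
    For reference, the brute-force baseline the poster mentions can be written against only the indexer and Length (a sketch, not from the original post); it costs O(n*m) per pair of strings, which is exactly what suffix-tree or trie-based approaches improve on:

        static bool IsProperSubstring(string needle, string haystack)
        {
            if (needle.Length >= haystack.Length) return false;          // proper substrings only
            for (int start = 0; start + needle.Length <= haystack.Length; start++)
            {
                int i = 0;
                while (i < needle.Length && haystack[start + i] == needle[i]) i++;
                if (i == needle.Length) return true;                     // matched at this offset
            }
            return false;
        }

        // then, for every ordered pair (a, b) with a != b,
        // report when IsProperSubstring(input[a], input[b]) is true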

    Read the article

  • Most efficient way to maintain a 'set' in SQL Server?

    - by SEVEN YEAR LIBERAL ARTS DEGREE
    I have ~2 million rows or so of data, each row with an artificial PK, and two Id fields (so: PK, ID1, ID2). I have a unique constraint (and index) on ID1+ID2. I get two sorts of updates, both with a distinct ID1 per update:

    1. 100-1000 rows of all-new data (ID1 is new)
    2. 100-1000 rows of largely, but not necessarily completely overlapping data (ID1 already exists, maybe new ID1+ID2 pairs)

    What's the most efficient way to maintain this 'set'? Here are the options as I see them:

    1. Delete all the rows with ID1, insert all the new rows (yikes)
    2. Query all the existing rows from the set of new data ID1+ID2, only insert the new rows
    3. Insert all the new rows, ignore inserts that trigger unique constraint violations

    Any thoughts?
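
    Option 2 can be expressed as a single set-based statement (a sketch with assumed table and staging-table names, not from the original question), which avoids checking existence row by row:

        -- #incoming holds the 100-1000 (ID1, ID2) pairs for this update
        INSERT INTO dbo.PairSet (ID1, ID2)
        SELECT i.ID1, i.ID2
        FROM #incoming AS i
        WHERE NOT EXISTS (
            SELECT 1
            FROM dbo.PairSet AS p
            WHERE p.ID1 = i.ID1
              AND p.ID2 = i.ID2
        );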

    Read the article

  • What's the most efficient way to repeatedly remove leading text using Vim?

    - by John Topley
    What's the most efficient way to remove the text

        2010-04-07 14:25:50,773 DEBUG This is a debug log statement -

    from a log file like the extract below using Vim?

        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 9,8
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 1,11
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 5,2
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 8,4

    This is what the result should look like:

        9,8
        1,11
        5,2
        8,4

    Note that on this occasion I'm using gVim on Windows, so please don't suggest any UNIX programs which may be better suited to the task; I have to do it using Vim.
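
    One possibility (a sketch, not from the original question) is a single substitution; the greedy .* means everything up to the last '- ' on each line is removed, which works here because the payload after the dash never contains '- ' itself:

        " strip everything up to and including the final '- ' on every line
        :%s/^.*- //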

    Read the article

  • In the context of an asp.net website, what's the most efficient way to check whether a User has access to a record?

    - by scaramouch
    I have a webpage that you pass in an id parameter (via a querystring), which it then uses to fetch data from a database. Typically, a user would navigate to this page from another page that lists only those records that the user has access to. However, if they go directly to the page by typing the URL into the Address Bar, they can effectively view any record they like. E.g. if they were to type something like http://localhost/TestSite/ClientAdmin/ManageLocation.aspx?LocationID=5 into their Address Bar, they can access the database record with LocationID equal to five, even though they shouldn't have access to it. Now, I could solve this by doing a database check every time the page is loaded to see whether the current user has access to the record they're trying to view. However, this doesn't seem very efficient given that in most cases a user won't be trying to access a record that isn't theirs. Does anyone have a better suggestion? Thanks.
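
    One pattern that keeps the cost down (a sketch with assumed helper and page names, not from the original question) is to fold the ownership check into the query that loads the record anyway, so the authorisation check costs no extra round trip:

        protected void Page_Load(object sender, EventArgs e)
        {
            int locationId;
            if (!int.TryParse(Request.QueryString["LocationID"], out locationId))
            {
                Response.Redirect("~/AccessDenied.aspx");   // assumed error page
                return;
            }

            // Hypothetical data-access helper: returns null when the record
            // doesn't exist OR doesn't belong to the current user.
            var location = LocationRepository.GetForUser(locationId, User.Identity.Name);
            if (location == null)
            {
                Response.Redirect("~/AccessDenied.aspx");
                return;
            }

            // ... bind the location to the page as before ...
        }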

    Read the article

  • What is the most efficient way to pass data (list of pairs of [Integer + Double]) between two Google App Engine instances?

    - by ruslan
    What is the most efficient way to pass data (list of pairs of [Integer, Double]) between two Google App Engine instances ? Currently I use Java binary serialization. Frontend servlet receives data from the client in JSON format. I convert it to byte[] using ObjectOutput.writeObject and then send it to backend servlet via HTTP POST. It's not in production yet. Should I just pass client's JSON as it is to backend? It seems more logical. But it's bigger in size. Or should I use Google Protocol Buffers as stated in this benchmark article ? Thank you!!!

    Read the article

  • Why isn't INT more efficient than UNIQUEIDENTIFIER (according to the execution plan)?

    - by ck
    I have a parent table and a child table where the columns that join them together are of the UNIQUEIDENTIFIER type. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to be INTs instead, and have rebuilt the indexes so that they are essentially the same structure and can be queried in the same way. When I query for 20 known records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. a 50/50 cost split between the batches. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation? EDIT: The question is not about which is more efficient, but why the query execution plan shows both queries as having the same cost.

    Read the article

  • VIM is great for text. Is there, too, an efficient, mouseless way to manipulate files on OS X?

    - by Dokkat
    I am having trouble navigating through files on OS X. That is, creating files, copying, moving, and so on. I am currently using the Finder, but the act of clicking with the mouse is not very efficient. Accessing a deep folder takes a considerable amount of time and you have to know its entire path. When I try to use the command line, it is even worse. Going to a folder requires at least typing its entire path with the 'cd' command; and, when you are there, you don't have full control over it. For example, how would you move 3 specific files to another folder? Some text editors offer a 'fuzzy search' function that allows a very fast form of jumping through files. What is a fast, efficient way to navigate through files on OS X?
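
    For the specific example in the question, the shell form is a single command (a sketch; the file and folder names are placeholders):

        mv notes.txt draft-v2.txt scan.jpg ~/Documents/archive/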

    Read the article

  • Alternative, more efficient scraping method for a noncoder, than Google doc's importxml and xpath?

    - by binarybunny
    I've searched throughout the net for a simple solution, but it seems everyone has their own unique method (coding language) of achieving this. I'm only just beginning to learn Linux, and my coding skills are thoroughly lacking (non-existent). I love the simplicity of using importxml and xpath, but copying and pasting values after reaching the spreadsheet limit of 50 is getting old. Now that I've seen the light, I would really just like to know of a simple, yet scalable solution to get more data into more spreadsheets/databases. Before I really start getting my hands dirty, I would love to know some of the ways you guys go about accomplishing this?
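
    Since the poster already thinks in XPath terms, a Python sketch using the requests and lxml libraries gives a feel for the jump from importxml (an illustration only, not from the original post; the URL, the XPath expression and the output filename are placeholders):

        import csv
        import requests
        from lxml import html

        page = requests.get("http://example.com/listing")        # placeholder URL
        tree = html.fromstring(page.content)
        values = tree.xpath('//table[@id="data"]//td/text()')    # placeholder XPath, like importxml's second argument

        with open("output.csv", "w", newline="") as f:
            csv.writer(f).writerows([v] for v in values)

    There is no 50-result cap here; the limit becomes whatever the site itself will serve.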

    Read the article

  • What's the most efficient method for moving and transcoding HD video from my Tivo to iTunes on OS X

    - by Bryan Schuetz
    I've got a Tivo with some HD recordings on it. I'd like to move those files over to my Mac and add them into iTunes. I'd like the move and transcoding to be as painless as possible, and I'd like to preserve the quality of the original HD recording. I've got a network connection to the Tivo and can move the files over but the real problem seems to be transcoding. I tried using MEncoder to transcode to H.264 but the quality really suffered. I was doing the conversion at 10mbps so I'm not sure why the quality was so bad, lots of artifacting, etc.
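
    If MEncoder is the sticking point, one commonly used alternative is ffmpeg with a quality-targeted (CRF) x264 encode rather than a fixed bitrate (a sketch; the filenames are placeholders, and it assumes the recording has already been pulled off the Tivo as an ordinary MPEG file):

        # lower -crf means higher quality; values around 18 are usually hard to tell from the source
        ffmpeg -i recording.mpg -c:v libx264 -preset slow -crf 18 -c:a aac output.mp4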

    Read the article

  • Most efficient way to use a laptop like a desktop? [closed]

    - by user74757
    When I'm at home (which is the vast majority of the time now in Summer), I rarely use my laptop away from my desk. When it is at the desk, I plug in a monitor through HDMI, a power cable, and a mouse and keyboard that always stay there. I was wondering if anyone had any recommendations as to a good dock or product that does a good job of transforming a laptop into a true "desktop replacement." Ideally, it would allow all the connections to remain in place, with minimal effort to take the laptop in and out of the fixture. Thanks for any suggestions!

    Read the article

  • Does Ubuntu 12.04.1 come with everything I need for using virtual servers and are the tools efficient?

    - by orokusaki
    I noticed that Ubuntu 12.04.1 comes with Xen, OpenStack, KVM and other virtualization-related tools. I have used VMware in the past. If I were to use Xen for virtualization, would I see considerable performance loss, since Xen is run on the host OS? Is it even run on the host OS, or is it like VMware where it's installed below any Linux OS on the machine (embedded, I guess, is the word)? Do you have any recommendations on what sort of setup to use with these built-in tools? I have 2 physical servers, side by side. Each will need a VM used for Postgres and a VM used as an app server. One will be a failover for the other.

    Read the article

  • Most cost efficient way to backup Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repo for my Subversion database. When I dump my SVN database, it's about 10 gigabytes. I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new changes to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I will have to upload ten gigs every time I instantiate a backup after doing a simple submit to Subversion. Here are the options as I see them:

    Option 1: I am looking at duplicity, which has --volsize, which splits data over an amount of megs. Is it possible to split the Subversion dumps using this so further incremental backups are measured in megabytes?

    Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of writing a submit. However, I have the option of taking the repo offline between the hours of midnight and 4am. Each revision in my Berkeley DB uses a file as its record.
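
    Worth noting alongside both options: svnadmin can emit incremental dumps, so only the revisions added since the last backup need to travel to S3 (a sketch; the repository path, revision range and output name are placeholders):

        # dump only revisions 1201 through HEAD, as deltas against earlier revisions
        svnadmin dump /var/svn/repo -r 1201:HEAD --incremental --deltas | gzip > svn-1201-HEAD.dump.gz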

    Read the article

  • Is a Hyperthreaded CPU more powerful and more efficient than a Dual-core CPU? [closed]

    - by user1811864
    Hello. They are getting rid of the old computer equipment in the office and I have to choose which computer to take home; I get first pick. The two machines are:

    - 15 inch LCD screen, 4 GB of RAM, Core 2 Duo E8400 3.00 GHz (dual core), DVD writer, Windows Vista / Linux
    - 15 inch CRT monitor, 2 GB of RAM, Pentium 4 2 GHz single core with HT technology, Windows XP

    Both hard disks are 250 GB. My friend is telling me to choose the second one, the Pentium single core with HT, because he told me it runs faster because of HT technology, runs cooler, and consumes less electricity so it won't get overheated; and because it has HT technology it's "high definition" for encoding and watching HD movies and HD sound, and is like a gaming PC for playing internet games. He also said the dual core E8400 runs at 3 GHz compared to the 2 GHz, so it heats up very much because of the two extra cores, takes more current (raising electricity bills), and is not good for gaming and watching HD movies and internet flash animations and games because it gets heated every time. And he wants to choose and take the E8400 himself because he has air conditioning at home, so it will be safe from heating. So which computer should I take? Is it really faster because of the HT "high definition" technology, and will I be able to play internet flash card games better, watch good HD movies (YouTube etc.), and play all the music and songs?

    Read the article

  • A space-efficient guest filesystem for grow-as-needed virtual disks ?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, this makes them perfect for fast backup, overallocation and creation speed. Since file systems are usually based on physical disks, they have a tendency to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, by aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general. I have a very specific goal: space-efficiency.

    1. Like page caching uses all the free physical memory
    2. Canonical example: online defragmentation
    3. Canonical example: snapshotting

    Read the article

  • What is the most efficient way to scan in thousands of pictures? [closed]

    - by leora
    My parents have a number of really large albums, and in a lot of cases the pictures are starting to fade, so we thought it would be a good idea to scan in all the pictures and move them to online albums. The issue is that the task is daunting given that there are thousands of pictures. Are there any services or ideas on how to scan in albums of pictures that won't take up hundreds of man-hours for me?

    Read the article

  • A space-efficient filesystem for grow-as-needed virtual disks ?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, this makes them perfect for fast backup, overallocation and creation speed. Since file systems are usually based on physical disks, they have a tendency to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, by aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general. I have a very specific goal: space-efficiency.

    1. Like page caching uses all the free physical memory
    2. Canonical example: online defragmentation
    3. Canonical example: snapshotting

    Read the article

  • More efficient way to find media item in WMP media library?

    - by RoseOfJericho
    Hello, all. I am messing around with the WMPLib component provided by Windows Media Player 12 (wmp.dll) in VB.NET with .NET Framework 3.5 SP1. I am trying to retrieve a media item from my media library based on its name (assuming there are no duplicate names). At the moment, I'm grabbing the entire media library and looping through every media item, quitting the loop when I've found the correct media item. This works well (except for when a media item with that name cannot be found), but I was hoping there was a more efficient way of doing this. Here is my code so far:

        Public Class WMPTest

            Private myWMP As WMPLib.IWMPCore
            Private myMediaCollection As WMPLib.IWMPMediaCollection
            Private myTrack As WMPLib.IWMPMedia
            Private allTracks As WMPLib.IWMPPlaylist

            Public Sub New()
                ' This call is required by the Windows Form Designer.
                InitializeComponent()
                ' Add any initialization after the InitializeComponent() call.
                myWMP = New WMPLib.WindowsMediaPlayer
                myMediaCollection = myWMP.mediaCollection
                allTracks = myMediaCollection.getAll
                Dim theTrack As WMPLib.IWMPMedia = findTrack("Yellow Submarine")
                MessageBox.Show(theTrack.name)
            End Sub

            Public Function findTrack(ByVal strTrackName As String) As WMPLib.IWMPMedia
                For i As Integer = 0 To (allTracks.count - 1)
                    If allTracks.Item(i).name = strTrackName Then
                        myTrack = allTracks.Item(i)
                        Exit For
                    End If
                Next
                'myTrack is now the track that we wanted to retrieve
                Return myTrack
            End Function

        End Class

    So what I really want is a way to optimize findTrack() to do its thing without looping through the entire media library (which could be huge). Anyone have a clue?
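
    For what it's worth, IWMPMediaCollection also exposes getByName, which asks the library itself for items with a given name (a sketch, not from the original post; it returns a playlist because several items can share a name, which also makes the no-match case easy to handle):

        Public Function findTrack(ByVal strTrackName As String) As WMPLib.IWMPMedia
            ' Let the media library do the lookup instead of scanning every item.
            Dim matches As WMPLib.IWMPPlaylist = myMediaCollection.getByName(strTrackName)
            If matches.count > 0 Then
                Return matches.Item(0)
            End If
            Return Nothing ' no media item with that name
        End Function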

    Read the article

  • Advice needed on best and most efficient practices with developing google apps application...

    - by Ali
    Hi guys, I'm getting my feet wet with developing my order management application for integration with Google Apps. However, there are certain aspects I need to take into consideration before proceeding any further. My application is such that it would upload documents to Google Documents and store contacts in Google Contacts. It requires that a single order can have a number of uploaded documents associated with it, as well as some contacts associated with it. My question, however, is what would be the most efficient way to implement this. I could keep key tables for both contacts and documents which would contain just an ID and a link to the documents/contacts, or their respective identification IDs on Google. Or I could maintain an exact replica of the information in my own database as well as a link to the contact on Google. However, won't that be too redundant? I don't want my application to be really slow, as I'm afraid that every time I make a call to Google Docs to retrieve a list of documents or Google Contacts it will slow my application down - or am I getting worried for no reason? Any advice would be most appreciated.

    Read the article

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records:

        require("RJSONIO")

        people_data <- data.frame(person_id = numeric(0))
        json_data <- fromJSON(json_file)
        n_people <- length(json_data)

        for (person in 1:n_people) {
            person_dataframe <- as.data.frame(t(unlist(json_data[[person]])))
            people_data <- merge(people_data, person_dataframe, all = TRUE)
        }

        output_file <- paste("people_data", ".csv")
        write.csv(people_data, file = output_file)

    I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example:

        [[1]]
        person_id  name  gender  hair_color

        [[2]]
        person_id  name  location  gender  height

        [[...]]

        structure(list(person_id = "Amy123", name = "Amy", gender = "F",
            hair_color = "brown"),
            .Names = c("person_id", "name", "gender", "hair_color"))

        structure(list(person_id = "matt53", name = "Matt",
            location = structure(c(47231, "IN"), .Names = c("zip_code", "state")),
            gender = "M", height = 172),
            .Names = c("person_id", "name", "location", "gender", "height"))

    The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see, though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components, state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan
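
    A common way to avoid the repeated merge() (a sketch, not from the original post; it assumes the plyr package is available) is to build one small data frame per person and combine them once at the end, letting rbind.fill insert NA for the columns a given person is missing:

        require("RJSONIO")
        require("plyr")

        json_data <- fromJSON(json_file)

        # one single-row data frame per person; unlist() still flattens nested fields
        # such as location into location.zip_code and location.state
        person_frames <- lapply(json_data, function(person) {
            as.data.frame(t(unlist(person)), stringsAsFactors = FALSE)
        })

        # combine once; columns absent for a person become NA
        people_data <- do.call(rbind.fill, person_frames)
        write.csv(people_data, file = "people_data.csv")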

    Read the article

  • What is the fastest, most efficient way to get up to speed on a new technology?

    - by SLC
    My current job involves working with a huge number of technologies, most of which are very niche and unheard of. In some cases I have to write something about the technology, or with the technology, such as some lessons, examples, or tutorials, on behalf of the developer of that technology or someone that is backing it. When I get told to learn about a new technology, my first port of call is to check our internal library, and then look on Amazon for a book on the subject. Failing that, or if the project is too small to warrant a purchase, I hit up Google and YouTube. However, the results of randomly googling what I want to learn are hit and miss. Some days, I can find everything I want to know in a series of lessons or videos, and it's no problem. Other times, I can find almost nothing, and I really have to piece together things from sites. The result is that there are various resources out there (videos, interactive lessons, tutorials, books etc.), but when I need to learn something fast, I often don't know the best way to go about it. It's not about fun, because I don't always have the luxury of working my way through a 600-page textbook named "A Complete Guide To Technology X"; I have to deliver results quickly. One of the examples I'd like to use is ASP.NET MVC 2, which is something I have been told to learn. I grabbed a book on MVC 1 to refresh my knowledge, but googling it doesn't produce much useful information. I've seen a ton of ScottGu's tutorials on it, but they are mostly feature presentations, and some date back almost a year. The same applies to Channel 9, and there are no books out yet on Amazon. My question therefore has two parts. The first asks, "Where are the best places to look to get the information needed to learn a new technology?" and the second asks, "What is the most efficient way to use such resources to learn a new technology?"

    Read the article
