Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Generating an authentication header for Azure Table Storage in Objective-C

    - by user923370
    I'm fetching data from iCloud, and for that I need to generate an authorization header (Azure Table
    Storage). I used the code below and it does generate headers, but when I use them in my project the
    service responds with "Make sure that the value of Authorization header is formed correctly including
    the signature." I have searched a lot and tried many variations, in vain. Can anyone point out where
    I'm going wrong in this code?

        -(id)generate {
            NSString *messageToSign = [NSString stringWithFormat:@"%@/%@/%@",
                                       dateString, AZURE_ACCOUNT_NAME, tableName];
            NSString *key = @"asasasasasasasasasasasasasasasasasasasasas==";
            const char *cKey  = [key cStringUsingEncoding:NSUTF8StringEncoding];
            const char *cData = [messageToSign cStringUsingEncoding:NSUTF8StringEncoding];
            unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
            CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
            NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)];
            NSString *hash = [Base64 encode:HMAC];
            NSLog(@"Encoded hash: %@", hash);

            NSURL *url = [NSURL URLWithString:@"http://my url"];
            NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
            [request addValue:[NSString stringWithFormat:@"SharedKeyLite %@:%@",
                                        AZURE_ACCOUNT_NAME, hash]
                forHTTPHeaderField:@"Authorization"];
            [request addValue:dateString forHTTPHeaderField:@"x-ms-date"];
            [request addValue:@"application/atom+xml, application/xml"
                forHTTPHeaderField:@"Accept"];
            [request addValue:@"UTF-8" forHTTPHeaderField:@"Accept-Charset"];
            NSLog(@"Headers: %@", [request allHTTPHeaderFields]);
            NSLog(@"URL: %@", [[request URL] absoluteString]);
            return request;
        }

        -(NSString *)rfc1123String:(NSDate *)date {
            static NSDateFormatter *df = nil;
            if (df == nil) {
                df = [[NSDateFormatter alloc] init];
                df.locale = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease];
                df.timeZone = [NSTimeZone timeZoneWithAbbreviation:@"GMT"];
                df.dateFormat = @"EEE',' dd MMM yyyy HH':'mm':'ss 'GMT'";
            }
            return [df stringFromDate:date];
        }
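
    One likely culprit, assuming the Table-service SharedKeyLite scheme applies here: the string-to-sign
    is the x-ms-date value and the canonicalized resource separated by a newline, with the resource
    beginning with a slash, not the slash-joined triple built above. A sketch of the corrected line
    (account and table names as in the question):

        // Hypothetical fix: the SharedKeyLite string-to-sign for the Table service
        // is "<x-ms-date>\n/<account>/<table>" (date first, then the resource).
        NSString *messageToSign = [NSString stringWithFormat:@"%@\n/%@/%@",
                                   dateString, AZURE_ACCOUNT_NAME, tableName];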


  • Replacing words in a string

    - by abkai
    Okay, so I have the following little function:

        def swap(inp):
            inp = inp.split()
            out = ""
            for item in inp:
                ind = inp.index(item)
                item = item.replace("i am", "you are")
                item = item.replace("you are", "I am")
                item = item.replace("i'm", "you're")
                item = item.replace("you're", "I'm")
                item = item.replace("my", "your")
                item = item.replace("your", "my")
                item = item.replace("you", "I")
                item = item.replace("my", "your")
                item = item.replace("i", "you")
                inp[ind] = item
            for item in inp:
                ind = inp.index(item)
                item = item + " "
                inp[ind] = item
            return out.join(inp)

    While it's not particularly efficient, it gets the job done for shorter sentences. All it does is
    swap pronoun perspectives. It's fine when I throw a string like "I love you" at it (it returns
    "you love me"), but when I throw something like:

        you love your version of my couch because I love you, and you're a couch-lover.

    I get:

        I love your versyouon of your couch because I love I, and I'm a couch-lover.

    I'm confused as to why this is happening. I explicitly split the string into a list to avoid this.
    Why is "i" detected as part of a list item rather than requiring an exact match? Also, slightly
    deviating to avoid posting another very similar question: if a solution to this breaks the function,
    what will happen to commas, full stops and other punctuation? It made some very surprising mistakes.
    My expected output is:

        I love my version of your couch because you love I, and I'm a couch-lover.

    The reason I formatted it like this is that I eventually hope to replace the item.replace(x, y)
    pairs with words from a database.
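
    Two things are biting here: str.replace substitutes substrings, not whole words (so the "i" rule
    rewrites the middle of "version"), and the chained replacements overwrite each other ("i am" becomes
    "you are", which the very next line turns back into "I am"). A sketch of a single-pass version with
    word boundaries, applying one direction per word simultaneously:

        import re

        # One-directional mapping applied in a single pass, so an earlier
        # substitution can never be re-replaced by a later rule.
        SWAPS = {
            "i am": "you are", "you are": "I am",
            "i'm": "you're",   "you're": "I'm",
            "my": "your",      "your": "my",
            "you": "I",        "i": "you",
        }

        # Longest alternatives first, so "you are" wins over "you",
        # and \b keeps "i" from matching inside "version".
        _PATTERN = re.compile(
            r"\b(" + "|".join(sorted(map(re.escape, SWAPS), key=len, reverse=True)) + r")\b",
            re.IGNORECASE,
        )

        def swap(text):
            return _PATTERN.sub(lambda m: SWAPS[m.group(0).lower()], text)

        print(swap("you love your version of my couch because I love you, and you're a couch-lover."))
        # -> I love my version of your couch because you love I, and I'm a couch-lover.

    On the sample sentence this produces exactly the expected output above, and because \b is a
    zero-width boundary, commas and full stops pass through untouched. The replace-pairs could equally
    be loaded into SWAPS from a database.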


  • How do I get the total number of rows from my MSSQL paging procedure?

    - by The_AlienCoder
    OK, I have a table in my MSSQL database that stores comments. I want to be able to page through the
    records using [Back], [Next], page-number and [Last] buttons in my DataList. I figured the most
    efficient way was a stored procedure that only returns the rows within a particular range. Here is
    what I came up with:

        @PageIndex INT,
        @PageSize INT,
        @postid INT
        AS
        SET NOCOUNT ON
        BEGIN
            WITH tmp AS (
                SELECT comments.*,
                       ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
                FROM comments
                WHERE comments.postid = @postid
            )
            SELECT tmp.*
            FROM tmp
            WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1
                          AND @PageIndex * @PageSize
        END
        RETURN

    Everything works fine, and I have been able to implement the [Next] and [Back] buttons in my
    DataList pager. Now I need the total number of all comments (not just those on the current page) so
    that I can implement the page numbers and the [Last] button. In other words, I want to return the
    total number of rows matched by the first SELECT statement, i.e.:

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
            FROM comments
            WHERE comments.postid = @postid
        )
        SET @TotalRows = @@ROWCOUNT

    @@ROWCOUNT doesn't work here and raises an error, and I can't get COUNT(*) to work either. Is there
    another way to get the total number of rows, or is my approach doomed?
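
    One common fix, sketched on the assumption that SQL Server 2005 or later is available: a windowed
    COUNT(*) OVER () inside the CTE attaches the total matched-row count to every row that the page
    returns, so no second query is needed:

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
                   COUNT(*)     OVER ()                        AS TotalRows
            FROM comments
            WHERE comments.postid = @postid
        )
        SELECT tmp.*
        FROM tmp
        WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1
                      AND @PageIndex * @PageSize;

    The caller then reads TotalRows from any returned row and computes the last page as
    CEILING(TotalRows / @PageSize).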


  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary
    problem is that we don't currently have a distributed cache as such in our app; that's my major
    concern, and that's what I want to implement. So, say I take a machine and start a new (third)
    reading process: it would connect to the cache, listen to it, and hold a full copy of the cache,
    now triplicated (as of now it's duplicated). That's wasteful from a common-sense standpoint too.
    The size of the cache is 2 GB, and without going distributed it's limiting us. That brings me to
    Coherence. But we don't have a database as a persistent store either; our archival processes are
    our persistent store (90 days' worth of data). So multiply that by somewhere around 2 GB * 90
    (the bare minimum we want to keep). That's my preliminary/intermediate analysis of Coherence as a
    solution. Then a (supposedly) brilliant thought crossed my mind: why not have this as persistent
    storage behind my distributed cache? Does Oracle Coherence support that? I would get rid of the
    archiving infrastructure too (I hate daemon archiving processes). For some strange reason I don't
    want to go to a DB to replace those flat files. What say you, can Coherence be my savior? Any other
    stable alternatives are welcome too. (Coherence is imposed on me by the big guys, FYI.)
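
    On the narrow question: Coherence can load from and write through to a custom persistent store via
    its CacheStore interface, wired in through a read-write backing map in the cache configuration. A
    minimal sketch against that SPI; the flat-file layout (one serialized file per key) is purely
    illustrative, not a recommendation:

        import com.tangosol.net.cache.CacheStore;
        import java.io.*;
        import java.util.*;

        public class FlatFileCacheStore implements CacheStore {
            private final File dir;   // directory holding one file per cache key

            public FlatFileCacheStore(String dirName) { this.dir = new File(dirName); }

            public Object load(Object key) {
                try (ObjectInputStream in = new ObjectInputStream(
                        new FileInputStream(new File(dir, key.toString())))) {
                    return in.readObject();
                } catch (Exception e) { return null; }   // absent key -> null
            }

            public Map loadAll(Collection keys) {
                Map result = new HashMap();
                for (Object k : keys) {
                    Object v = load(k);
                    if (v != null) result.put(k, v);
                }
                return result;
            }

            public void store(Object key, Object value) {
                try (ObjectOutputStream out = new ObjectOutputStream(
                        new FileOutputStream(new File(dir, key.toString())))) {
                    out.writeObject(value);
                } catch (IOException e) { throw new RuntimeException(e); }
            }

            public void storeAll(Map entries) {
                for (Object o : entries.entrySet()) {
                    Map.Entry e = (Map.Entry) o;
                    store(e.getKey(), e.getValue());
                }
            }

            public void erase(Object key) { new File(dir, key.toString()).delete(); }

            public void eraseAll(Collection keys) { for (Object k : keys) erase(k); }
        }

    With write-behind configured on the backing map, the 90-day retention then becomes an eviction/expiry
    policy rather than a separate archiving daemon.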


  • rand() with exclusion of already-generated random numbers?

    - by Stefan
    Hey, I have a function that fetches a user's associated users from a table. The function then uses
    rand() to choose five random user IDs from the array. However! In cases where a user doesn't have
    many associated users but is above the minimum (if the count is below 5 it just returns the array
    as-is), it gives bad results due to repeated random numbers. How can I overcome this, or exclude a
    previously selected random number from the next rand() call? Here is the section of code doing the
    work. Bear in mind this must be highly efficient, as this script is used everywhere.

        $size = sizeof($users) - 1;
        $nusers[0] = $users[rand(0, $size)];
        $nusers[1] = $users[rand(0, $size)];
        $nusers[2] = $users[rand(0, $size)];
        $nusers[3] = $users[rand(0, $size)];
        $nusers[4] = $users[rand(0, $size)];
        return $nusers;

    Thanks in advance! Stefan
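
    A sketch of one standard approach: rather than excluding values from successive rand() calls, draw
    all five at once. Shuffling the array once guarantees five distinct entries:

        // Five distinct users with no repeated rand() draws.
        if (count($users) <= 5) {
            return $users;              // below the minimum: return as-is
        }
        shuffle($users);                // one uniform random permutation
        return array_slice($users, 0, 5);

    If $users must not be reordered in place, array_rand($users, 5) returns five distinct keys instead
    and leaves the array untouched.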


  • How to implement Administrator rights in Java Application?

    - by Yatendra Goel
    I am developing data-modeling software implemented in Java. The application converts textual data
    (stored in a database) to graphical form so that users can interpret the data more efficiently.
    The application will be accessed by three kinds of users:

        1. Managers (who can fill the database with data and can also view the visual form of the
           data after entering it)
        2. Viewers (who can only view the visual form of data that has been entered by managers)
        3. Administrators (who can create and manage other administrators, managers and viewers)

    Now, how do I implement three different views of the same application? Note: managers, viewers and
    administrators can be located in any part of the world and should access the application through
    the internet. One idea that came to mind, with the questions after it, is as follows:

        Step 1: Code all the business logic in EJBs so that it can be used in a distributed
                environment (i.e. accessed by several users through the internet).
        Step 2: Code three Swing GUI clients: one for administrators, one for managers and one for
                viewers. These three GUI clients access the business logic written in the EJBs.
        Step 3: Distribute the clients to their corresponding users; for instance, the manager client
                to managers.

    Q1. Is the above approach correct?
    Q2. This is very common functionality in software. Is it implemented this way, or some other way?
    Q3. If another approach would be better, what is that approach?
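
    For Q1/Q2: a common variation keeps a single deployable back end and lets the container enforce the
    roles, rather than baking the distinction into three separate codebases; the clients then simply
    hide the functions a role cannot use. A minimal EJB 3 sketch of that idea (all names below are
    illustrative):

        import javax.annotation.security.RolesAllowed;
        import javax.ejb.Remote;
        import javax.ejb.Stateless;

        @Remote
        interface ModelingService {
            void storeData(String data);          // managers fill the database
            String fetchDiagram(long id);         // everyone views the graphics
            void createUser(String name, String role);   // admins manage users
        }

        @Stateless
        @RolesAllowed("ADMIN")                    // default unless overridden below
        public class ModelingServiceBean implements ModelingService {

            @RolesAllowed({"ADMIN", "MANAGER"})
            public void storeData(String data) { /* persist to the database */ }

            @RolesAllowed({"ADMIN", "MANAGER", "VIEWER"})
            public String fetchDiagram(long id) { return "..."; }

            public void createUser(String name, String role) { /* admin only */ }
        }

    A single Swing client that adapts its menus to the authenticated role is then often easier to
    distribute and update than three separate clients, though both arrangements work against this
    back end.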


  • How slow are bit fields in C++?

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields,
    something like:

        #define FLAG1 0x0001
        #define FLAG2 0x0002
        #define FLAG3 0x0004

        class MyClass {
            // ...
            unsigned Flags;
            int  IsFlag1Set() { return Flags & FLAG1; }
            void SetFlag1()   { Flags |= FLAG1; }
            void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; }
            // ...
        };

    For obvious reasons I'd like to change this to use bit fields, something like:

        class MyClass {
            // ...
            struct Flags {
                unsigned Flag1 : 1;
                unsigned Flag2 : 1;
                unsigned Flag3 : 1;
            };
            // ...
        };

    The one concern I have with making this switch is that I've come across a number of references on
    this site stating how slow bit fields are in C++. My assumption is that they are still faster than
    the manual code shown above, but is there any hard reference material covering the speed
    implications of using bit fields on various platforms, specifically 32-bit and 64-bit Windows? The
    application deals with huge amounts of data in memory and must be both fast and memory-efficient,
    which could well be why it was written this way in the first place.
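
    Hard reference material is scarce because the answer is compiler-dependent; on x86/x64 both styles
    typically compile to the same AND/OR/shift instructions. A rough micro-benchmark sketch for
    measuring on your own platform (illustrative only; check the generated assembly as well):

        #include <chrono>
        #include <cstdio>

        struct Manual { unsigned flags = 0; };            // mask-based flags
        struct Packed { unsigned f1 : 1, f2 : 1, f3 : 1; }; // bit-field flags

        int main() {
            const long N = 100000000L;
            Manual m;
            Packed p{};
            volatile unsigned sink = 0;   // keeps the optimizer from deleting the loops

            auto t0 = std::chrono::steady_clock::now();
            for (long i = 0; i < N; ++i) { m.flags |= 1u; sink = sink + (m.flags & 1u); }
            auto t1 = std::chrono::steady_clock::now();
            for (long i = 0; i < N; ++i) { p.f1 = 1; sink = sink + p.f1; }
            auto t2 = std::chrono::steady_clock::now();

            auto ms = [](auto a, auto b) {
                return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
            };
            std::printf("manual: %lld ms, bit fields: %lld ms\n",
                        (long long)ms(t0, t1), (long long)ms(t1, t2));
            return (int)sink;
        }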


  • Reading a binary file in Perl: Bad file descriptor

    - by Magicked
    I'm trying to read a binary file 40 bytes at a time, check whether all of those bytes are 0x00, and
    if so ignore them; if not, write them back out to another file (basically just cutting out large
    blocks of null bytes). This may not be the most efficient way to do this, but I'm not worried about
    that. However, right now I'm getting a "Bad file descriptor" error and I cannot figure out why.

        my $comp = "\x00" * 40;
        my $byte_count = 0;
        my $infile  = "/home/magicked/image1";
        my $outfile = "/home/magicked/image1_short";

        open IN,  "<$infile";
        open OUT, ">$outfile";
        binmode IN;
        binmode OUT;

        my ($buf, $data, $n);
        while (read (IN, $buf, 40)) {    ### Problem is here ###
            $boo = 1;
            for ($i = 0; $i < 40; $i++) {
                if ($comp[$i] != $buf[$i]) {
                    $i = 40;
                    print OUT $buf;
                    $byte_count += 40;
                }
            }
        }
        die "Problems! $!\n" if $!;
        close OUT;
        close IN;

    I marked with a comment where it is breaking. Thanks for any help!
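
    Three bugs stand out: "\x00" * 40 is numeric multiplication (the string-repetition operator in Perl
    is x, not *); $comp[$i] reads from an unrelated array @comp rather than indexing the string; and the
    trailing die ... if $! fires on any stale value of $!, even after a successful loop, which is a
    likely source of the spurious "Bad file descriptor". A sketch of the same loop with those addressed:

        # Compare each block against a run of NUL bytes with eq; check opens.
        my $infile  = "/home/magicked/image1";
        my $outfile = "/home/magicked/image1_short";

        open my $in,  '<', $infile  or die "Can't read $infile: $!";
        open my $out, '>', $outfile or die "Can't write $outfile: $!";
        binmode $in;
        binmode $out;

        my $buf;
        while (my $n = read($in, $buf, 40)) {
            # Keep the block unless it is all NUL bytes ($n handles a short
            # final block correctly).
            print {$out} $buf unless $buf eq "\x00" x $n;
        }
        close $out;
        close $in;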


  • Checking a MongoDB database for data

    - by user1647484
    I'm playing around with a tutorial that uses Sinatra, Backbone.js, and MongoDB for the database.
    It's my first time using Mongo. As far as I understand it, the app uses both local storage and a
    database. It has these routes for the database, for example:

        get '/api/:thing' do
          DB.collection(params[:thing]).find.to_a.map{|t| from_bson_id(t)}.to_json
        end

        get '/api/:thing/:id' do
          from_bson_id(DB.collection(params[:thing]).find_one(to_bson_id(params[:id]))).to_json
        end

        post '/api/:thing' do
          oid = DB.collection(params[:thing]).insert(JSON.parse(request.body.read.to_s))
          "{\"_id\": \"#{oid.to_s}\"}"
        end

    After turning the server off and then on, I could see in the server log that data was fetched from
    the database routes:

        127.0.0.1 - - [17/Sep/2012 08:21:58] "GET /api/todos HTTP/1.1" 200 430 0.0033

    My question is: how can I check from within the mongo shell whether the data is in the database? I
    started the mongo shell (./bin/mongo), selected the database (use mydb), and then, following the
    docs (http://www.mongodb.org/display/DOCS/Tutorial), I tried commands such as:

        > var cursor = db.things.find();
        > while (cursor.hasNext()) printjson(cursor.next());

    but they didn't return anything.
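
    One thing to check first: the POST route inserts into a collection named after params[:thing], so
    the data behind GET /api/todos lives in a collection called "todos", not "things". A sketch of a
    shell session to confirm (assuming the database really is mydb):

        use mydb
        // list the collections in the current database; expect "todos" here
        show collections
        // count and print the documents in that collection
        db.todos.count()
        db.todos.find()

    If "show collections" comes back empty, the app may be writing to a different database; "show dbs"
    lists every database the server knows about.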


  • MVVM and avoiding a monolithic God object

    - by bufferz
    I am in the completion stage of a large project that has several major components: image
    acquisition, image processing, data storage, factory I/O (it's an automation project) and several
    others. Each of these components is reasonably independent, but for the project to run as a whole I
    need at least one instance of each. Each component also has a ViewModel and View (WPF) for
    monitoring status and changing things. My question concerns the safest, most efficient and most
    maintainable method of instantiating all of these objects, subscribing one class to an event in
    another, and having a common ViewModel and View over all of this. Would it be best to have a class
    called God that holds a private instance of each of these objects? I've done this in the past and
    regretted it. Or would it be better if God relied on singleton instances of these objects to get
    the ball rolling? Alternatively, should Program.cs (or wherever Main(...) is) instantiate all of
    these components and pass them to God as parameters, then let Him (snicker) and His ViewModel deal
    with the particulars of running the project? Any other suggestions I would love to hear. Thank you!
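
    One common arrangement is the third option taken to its conclusion: a plain composition root, where
    Main builds each component exactly once, wires the cross-component events in one visible place, and
    hands the references to a shell view model, so no God object and no singletons are needed. A sketch
    (every type and event name below is illustrative):

        public static class Program
        {
            [STAThread]
            public static void Main()
            {
                // Build each component once, at the edge of the application.
                var acquisition = new ImageAcquisition();
                var processing  = new ImageProcessing();
                var storage     = new DataStorage();
                var factoryIO   = new FactoryIO();

                // Wire cross-component events explicitly, in one place.
                acquisition.FrameCaptured += processing.OnFrameCaptured;
                processing.ResultReady    += storage.OnResultReady;

                // The shell view model only composes the per-component
                // view models; it owns no business logic itself.
                var shell = new ShellViewModel(acquisition, processing,
                                               storage, factoryIO);
                new App().Run(new ShellView { DataContext = shell });
            }
        }

    The shell then stays a thin aggregator of the per-component ViewModels rather than a second God
    object.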


  • How to query the SPView object

    - by Hugo Migneron
    I have an SPView object that contains a lot of SPListItem objects (there are many fields in the
    view). I am only interested in one of these fields; let's call it specialField. Given that view and
    specialField, I want to know whether a value is contained in specialField. Here is one way of doing
    what I want:

        String specialField = "Special Field";
        String specialValue = "value";
        SPList list = SPContext.Current.Site.RootWeb.Lists["My List"];
        SPView view = list.Views["My View"];   // This is the view I want to query

        SPQuery query = new SPQuery();
        query.Query = view.Query;
        SPListItemCollection items = list.GetItems(query);

        foreach (SPListItem item in items)
        {
            var value = item[specialField];
            if (value != null && value.ToString() == specialValue)
            {
                // My value is found. This is what I was looking for.
                // Break out of the loop or return.
            }
        }
        // My value is not found.

    However, iterating through every SPListItem hardly seems optimal, especially as there might be
    hundreds of items. This query will be executed often, so I am looking for an efficient way to do
    this. EDIT: I will not always be working with the same view, so my solution cannot be hard-coded
    (it has to be generic enough that the list, view and specialField can be changed).
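
    A sketch of a server-side alternative: put the equality test into the CAML so SharePoint evaluates
    it, with list, view and field left as parameters. (Note the assumption that replacing the view's
    own filter is acceptable; if the view's <Where> clause must also apply, the two fragments need to
    be merged with an <And>.)

        // Scope the query to the view, then filter on the one field.
        SPQuery query = new SPQuery(view);
        query.Query = string.Format(
            "<Where><Eq><FieldRef Name='{0}' /><Value Type='Text'>{1}</Value></Eq></Where>",
            list.Fields[specialField].InternalName,   // CAML needs the internal name
            specialValue);
        query.RowLimit = 1;                           // only existence matters
        bool found = list.GetItems(query).Count > 0;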


  • Splitting a set of objects into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain
    data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes
    large, to the point of being unusable (i.e. not fitting in the allotted memory). To overcome this, I
    split S into several non-intersecting subsets, S = S1 + S2 + ... + Sn, and build a Di for each
    subset. Using n structures is less efficient than using one, but at least this way I can fit into
    the memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the
    Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of
    subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that
    each Si contains "similar" objects, because then f will produce a smaller output structure if the
    input objects are "similar enough" to each other. The problem is that while "similarity" of objects
    in S and the size of f(S) do correlate, there is no way to compute the latter other than just
    evaluating f(S), and f is not particularly fast. The algorithm I currently have is to iteratively
    add each next object from S into one of the Si, choosing the one that gives the least possible (at
    this stage) increase in combined Di size:

        for x in S:
            i = the i for which size(f(Si + {x})) - size(f(Si)) is minimal
            Si = Si + {x}

    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the
    minimal possible combined size). It is also slow. To speed it up somewhat, I compute
    size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already
    in Si. Is there any standard approach to such problems? I know of the branch-and-bound family of
    algorithms, but it cannot be applied here because it would be prohibitively slow. My guess is that
    it is simply not possible to compute the optimal distribution of S into the Si in reasonable time.
    But is there some common iteratively-improving algorithm?
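
    For reference, here is the greedy pass written out with one practically important touch: caching
    size(f(Si)) so each candidate subset costs a single evaluation of f. The names size_f and
    similar_enough stand for your expensive size estimate and your pruning predicate, and
    new_subset_cost (the threshold for opening a fresh subset) is an added assumption:

        def distribute(S, size_f, similar_enough, new_subset_cost):
            subsets, sizes = [], []          # sizes[i] caches size_f(subsets[i])
            for x in S:
                best_i, best_delta = None, new_subset_cost
                for i, s in enumerate(subsets):
                    if not similar_enough(x, s):
                        continue             # prune: skip the expensive call
                    delta = size_f(s | {x}) - sizes[i]
                    if delta < best_delta:
                        best_i, best_delta = i, delta
                if best_i is None:           # cheaper to start a new subset
                    subsets.append({x})
                    sizes.append(size_f({x}))
                else:
                    subsets[best_i] |= {x}
                    sizes[best_i] += best_delta
            return subsets

    A common iteratively-improving follow-up on top of this is local search: repeatedly pick an object,
    tentatively move it to the subset where the cached combined size drops the most, and stop when no
    move improves anything.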


  • JavaScript - Efficiently find all elements containing one of a large set of strings

    - by noah
    I have a set of strings and I need to find all of their occurrences in an HTML document. Where a
    string occurs is important, because I need to handle each case differently:

        1. The string is all or part of an attribute. E.g., for the string foo:
           <input value="foo"> - add class ATTR to the element.
        2. The string is the full text of an element. E.g., <button>foo</button> - add class TEXT
           to the element.
        3. The string is inline in the text of an element. E.g., <p>I love foo</p> - wrap the text
           in a span tag with class TEXT.

    Also, I need to match the longest string first. E.g., if I have foo and foobar, then
    <p>I love foobar</p> should become <p>I love <span class="TEXT">foobar</span></p>, not
    <p>I love <span class="TEXT">foo</span>bar</p>. The inline text is easy enough: sort the strings
    descending by length, then find and replace each in document.body.innerHTML with
    <span class="TEXT">$1</span>, although I'm not sure that is the most efficient way to go. For the
    attributes, I can do something like this:

        sortedStrings.each(function(it) {
            document.body.innerHTML.replace(
                new RegExp('(\S+?)="[^"]*' + escapeRegExChars(it) + '[^"]*"', 'g'),
                function(s, attr) {
                    $('[' + attr + '*=' + it + ']').addClass('ATTR');
                });
        });

    Again, that seems inefficient. Lastly, for the full-text elements, a depth-first search of the
    document that compares innerHTML to each string will work, but for a large number of strings it
    seems very inefficient. Any answer that offers performance improvements gets an upvote :)
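
    For the full-text case, a single walk over the text nodes avoids an innerHTML comparison at every
    element (and, unlike rewriting document.body.innerHTML, it does not destroy existing event
    handlers). A sketch; stringSet is a plain lookup object built from the already-sorted strings:

        // One TreeWalker pass over the document's text nodes.
        var stringSet = {};
        for (var i = 0; i < sortedStrings.length; i++) {
            stringSet[sortedStrings[i]] = true;
        }

        var walker = document.createTreeWalker(
            document.body, NodeFilter.SHOW_TEXT, null, false);
        var matches = [];
        var node;
        while ((node = walker.nextNode())) {
            // "Full text of an element" = this text node is the element's
            // only child and its trimmed content is one of the strings.
            if (node.parentNode.childNodes.length === 1 &&
                stringSet[$.trim(node.nodeValue)]) {
                matches.push(node.parentNode);
            }
        }
        // Mutate after the walk so the walker never sees its own edits.
        for (var j = 0; j < matches.length; j++) {
            $(matches[j]).addClass('TEXT');
        }

    The same walk can also handle the inline case (test each text node against the length-sorted list
    and split the node), so the document is traversed once regardless of how many strings there are.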


  • Quickest way to compare a large set of arrays or lists of values

    - by zapping
    Can you please suggest the quickest and most efficient way to compare a large set of values? There
    is a list of parent codes (strings), and each code has a series of child values (strings). The
    child lists have to be compared with each other to find duplicates and count how many times they
    repeat:

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        ...
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge; say there are 100 parent codes and each has about 250 values. There will not be
    duplicates within a single code's list. I'm doing this in Java, and the solution I could figure out
    is: store the values of the first code list in a map as codeMap.put(codeValue, duplicateCount), with
    the count initialized to 0, then compare the rest of the values with this. If a value is in the map,
    increment the count; otherwise add it to the map. The downfall of this is getting the duplicates
    out: another iteration over a very large map is needed. An alternative is to maintain a second hash
    map for duplicates, like duplicateCodeMap.put(codeValue, duplicateCount), and change the initial map
    to codeMap.put(codeValue, codeValue). Speed is the requirement. I hope one of you can help me with
    it.
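
    A sketch of the single-map version: one pass over all the child lists, then prune the counts below
    two. At roughly 100 x 250 = 25,000 strings this is small for a HashMap, so the second iteration you
    were worried about is cheap (parentCodes as a Map<String, List<String>> is an assumption about your
    data layout):

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Returns value -> number of occurrences, for values seen in
        // at least two of the child lists.
        static Map<String, Integer> countDuplicates(Map<String, List<String>> parentCodes) {
            Map<String, Integer> counts = new HashMap<>();
            for (List<String> childValues : parentCodes.values()) {
                for (String value : childValues) {
                    counts.merge(value, 1, Integer::sum);   // one pass, O(total values)
                }
            }
            counts.values().removeIf(c -> c < 2);           // keep only duplicates
            return counts;
        }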


  • Faster quadrature decoder loops with Python code

    - by Kelei
    I'm working with a BeagleBone Black and using Adafruit's BeagleBone IO Python library. I wrote a
    simple quadrature-decoding function, and it works perfectly fine when the motor runs at about
    1800 RPM. But when the motor runs at higher speeds, the code starts missing some of the interrupts
    and the encoder counts accumulate errors. Do you have any suggestions on how I can make the code
    more efficient, or on functions that can service the interrupts at a higher frequency? Thanks, Kel.
    Here's the code:

        # Define encoder count function
        def encodercount(term):
            global counts
            global Encoder_A
            global Encoder_A_old
            global Encoder_B
            global Encoder_B_old
            global error

            Encoder_A = GPIO.input('P8_7')  # encoder values at time of interrupt
            Encoder_B = GPIO.input('P8_8')

            if Encoder_A == Encoder_A_old and Encoder_B == Encoder_B_old:
                # no change: this is an error
                error += 1
                print 'Error count is %s' % error
            elif (Encoder_A == 1 and Encoder_B_old == 0) or (Encoder_A == 0 and Encoder_B_old == 1):
                # clockwise rotation
                counts += 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            elif (Encoder_A == 1 and Encoder_B_old == 1) or (Encoder_A == 0 and Encoder_B_old == 0):
                # counter-clockwise rotation
                counts -= 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            else:
                # this is an error as well
                error += 1
                print 'Error count is %s' % error

            Encoder_A_old = Encoder_A  # store the current values as old values
            Encoder_B_old = Encoder_B  # for comparison in the next interrupt

        # Initialize the interrupts - these trigger on both rising and falling edges
        GPIO.add_event_detect('P8_7', GPIO.BOTH, callback=encodercount)  # Encoder A
        GPIO.add_event_detect('P8_8', GPIO.BOTH, callback=encodercount)  # Encoder B

        # This part of the code runs normally in the background
        while True:
            time.sleep(1)
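
    At higher speeds the bottleneck is usually servicing GPIO interrupts from Python at all (the print
    statements inside the callback make it worse), not the decoding logic. The AM335x on the BeagleBone
    Black has hardware quadrature decoders (eQEP modules) that count edges with no CPU involvement, so
    one option is to enable one via a device-tree overlay and simply read the position register. A
    sketch; the sysfs path below is illustrative and varies by kernel and overlay:

        # Assumes a kernel with the eQEP driver and an overlay enabling it.
        EQEP_POSITION = "/sys/devices/ocp.3/48302000.epwmss/48302180.eqep/position"  # hypothetical path

        def read_position():
            # The hardware has already decoded and counted every edge;
            # Python only reads the current count on its own schedule.
            with open(EQEP_POSITION) as f:
                return int(f.read())

    If staying with GPIO interrupts, removing the prints from the callback and tracking only the A/B
    transitions (a state-table lookup instead of the if/elif chain) also shortens the time spent per
    interrupt.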


  • Sorting arrays in Java

    - by user360706
    Write a static method in Java:

        public static void sortByFour(int[] arr)

    that receives as a parameter an array of non-negative numbers (zero or positive) and sorts the
    array in the following way: at the beginning of the array appear all the numbers that divide by
    four without a remainder; after them, all the numbers that divide by 4 with a remainder of 1; after
    them, all the numbers that divide by 4 with a remainder of 2; and at the end of the array, all the
    rest (those that divide by 4 with a remainder of 3). The order of the numbers within each group
    doesn't matter. The method must be as efficient as possible. This is what I wrote, but
    unfortunately it doesn't work well... :(

        public static void swap(int[] arr, int left, int right) {
            int temp = arr[left];
            arr[left] = arr[right];
            arr[right] = temp;
        }

        public static void sortByFour(int[] arr) {
            int left = 0;
            int right = arr.length - 1;
            int mid = arr.length / 2;

            while (left < right) {
                if ((arr[left] % 4) > (arr[right] % 4)) {
                    swap(arr, left, right);
                    right--;
                }
                if ((arr[left] % 4) == (arr[right] % 4))
                    left++;
                else
                    left++;
            }
        }

    Can someone please help me by fixing my code so that it works, or by rewriting it?
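
    Since only the remainder group matters and the order within a group does not, a two-pass counting
    sort runs in O(n) time, which is as efficient as this can get. A sketch:

        // Count each remainder group, compute where each group starts,
        // then place every value into its slot.
        public static void sortByFour(int[] arr) {
            int[] counts = new int[4];
            for (int v : arr) {
                counts[v % 4]++;                       // non-negative, so v % 4 >= 0
            }
            int[] next = new int[4];                   // first free slot per group
            for (int r = 1; r < 4; r++) {
                next[r] = next[r - 1] + counts[r - 1];
            }
            int[] out = new int[arr.length];
            for (int v : arr) {
                out[next[v % 4]++] = v;
            }
            System.arraycopy(out, 0, arr, 0, arr.length);
        }

    An in-place variant exists (a Dutch-national-flag style pass with four regions), but at the cost of
    clarity; the version above trades O(n) extra space for two simple passes.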


  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually
    document formats) provided by other applications. It also needs to take commands like "file 395
    needs a copy in X format", at which point some work is offloaded to a third-party application. I'm
    having trouble coming up with a strategy for this. I'd like to keep the design as simple as
    possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it
    makes sense. The clients are expected to be web applications (for example, one is a Django
    application that receives files from our customers; the others are not yet implemented). The
    platform it will be running on is likely Python on Linux, unless I have a strong argument to use
    something else. In the beginning I thought I could fit the information I wanted to communicate into
    the filenames and let my application parse each filename to figure out what it needed to do, but
    this is proving too inflexible for the amount of information I now realize I need to make
    available. Another idea is to pair FTP with a database used as a communication medium (the client
    uploads a file and records a command as a row in a table), but I don't like this idea because
    adding commands (a known, expected change) looks like it will require adding code as well as
    changing database schemas. It will also muddy up the interface my clients have to use. I looked
    into Pyro to let the applications communicate more directly, but I don't like the idea of running
    an extra nameserver for this one purpose, and I also don't see a good way to do file transfer
    within that framework. What I'm looking for is techniques and/or technologies applicable to my
    problem. At the simplest level, I need the ability to accept files along with messages about them.
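
    On the command format specifically: a structured envelope (e.g. JSON posted over HTTP, or placed on
    a queue) avoids both filename parsing and schema changes, because a new command is just a new
    action value and unknown fields are simply data. A sketch; every field name here is illustrative,
    not a fixed protocol:

        import json

        # "file 395 needs a copy in X format", as a structured message.
        command = {
            "action": "convert",                        # what the server should do
            "file_id": 395,                             # which stored file
            "target_format": "X",                       # requested output format
            "reply_to": "https://client.example/done",  # hypothetical callback
        }

        message = json.dumps(command)   # send over HTTP or a queue alongside the file

    The Django client can then upload the file and this message in one multipart HTTP request, which
    also answers the "accept files and messages with them" requirement without FTP.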


  • LINQ to XML: retrieving a generic interface-based list

    - by Rita
    I have an XML document that looks like this:

        <Elements>
          <Element>
            <DisplayName />
            <Type />
          </Element>
        </Elements>

    I have an interface:

        interface IElement
        {
            string DisplayName { get; }
        }

    and a couple of derived classes:

        public class AElement : IElement
        public class BElement : IElement

    What I want to do is write the most efficient query to iterate through the XML and create a
    List<IElement> containing AElement or BElement instances, based on the 'Type' property in the XML.
    So far I have this:

        IEnumerable<AElement> elements =
            from xmlElement in XElement.Load(path).Elements("Element")
            where xmlElement.Element("type").Value == "AElement"
            select new AElement(xmlElement.Element("DisplayName").Value);

        return elements.Cast<IElement>().ToList();

    But this is only for AElement. Is there a way to add BElement in the same query and make it a
    generic IEnumerable<IElement>? Or would I have to run this query once for each derived type?
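
    A sketch of one way to keep it a single pass, choosing the concrete type per element. Note that
    XName comparisons are case-sensitive, so Element("type") in the original query will never match the
    <Type> tag shown above:

        // One pass; the conditional picks the concrete type per element.
        List<IElement> elements =
            (from x in XElement.Load(path).Elements("Element")
             let name = (string)x.Element("DisplayName")
             let type = (string)x.Element("Type")        // case matters
             select type == "AElement"
                 ? (IElement)new AElement(name)
                 : new BElement(name))
            .ToList();

    With more than two derived types, the conditional is usually swapped for a
    Dictionary<string, Func<string, IElement>> mapping type names to factories.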


  • Heroku: Postgres type operator error after migrating DB from MySQL

    - by sevennineteen
    This is a follow-up to a question I asked earlier, which phrased this as more of a programming
    problem than a database problem:
    http://stackoverflow.com/questions/2935985/postgres-error-with-sinatra-haml-datamapper-on-heroku
    I believe the problem has been isolated to the storage of the ID column in Heroku's Postgres
    database after running db:push. In short, my app runs properly on my original MySQL database but
    throws Postgres errors on Heroku when executing any query on the ID column, which seems to have
    been stored in Postgres as TEXT even though it is stored as INT in MySQL. My question is why the ID
    column is being created as TEXT in Postgres on the data transfer to Heroku, and whether there's any
    way for me to prevent this. Here's the output from a heroku console session which demonstrates the
    issue:

        Ruby console for myapp.heroku.com
        >> Post.first.title
        => "Welcome to First!"
        >> Post.first.title.class
        => String
        >> Post.first.id
        => 1
        >> Post.first.id.class
        => Fixnum
        >> Post[1]
        PostgresError: ERROR:  operator does not exist: text = integer
        LINE 1: ...", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER...
        HINT:  No operator matches the given name and argument type(s).
        You might need to add explicit type casts.
        Query: SELECT "id", "name", "email", "url", "title", "created_at"
               FROM "posts" WHERE ("id" = 1) ORDER BY "id" LIMIT 1

    Thanks!
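
    If the column really did arrive as TEXT, a one-time repair in the database itself is possible
    alongside whatever fix db:push needs; a sketch (the table name "posts" is taken from the error
    message):

        -- Convert the migrated column back to an integer in place.
        ALTER TABLE posts ALTER COLUMN id TYPE integer USING id::integer;

    Checking the migrated schema first (\d posts in psql) confirms which columns were affected before
    altering anything.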


  • How to create an ID for a newly added node?

    - by marknt15
    Hi, I can normally get the ID of the default tree nodes, but my problem is that on create, jsTree
    adds a new node that doesn't have an ID. My question is: how can I add an ID to the newly created
    tree node? What I'm thinking of doing is adding the ID HTML attribute to the newly created node,
    but how? I need the ID of every node, because it serves as the reference to each node's respective
    div storage. HTML code:

        <div class="demo" id="demo_1">
          <ul>
            <li id="phtml_1" class="file"><a href="#"><ins>&nbsp;</ins>Root node 1</a></li>
            <li id="phtml_2" class="file"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li>
          </ul>
        </div>

    JS code:

        $("#demo_1").tree({
            ui : {
                theme_name : "apple"
            },
            callback : {
                onrename : function (NODE, TREE_OBJ) {
                    alert(TREE_OBJ.get_text(NODE));
                    alert($(NODE).attr('id'));
                }
            }
        });

    Cheers, Mark
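
    A sketch using a create callback alongside onrename, per the pre-1.0 jsTree API this code is built
    on (verify the callback name and its arguments against your version's documentation; the id scheme
    itself is hypothetical):

        callback : {
            oncreate : function (NODE, REF_NODE, TYPE, TREE_OBJ, RB) {
                // Assign an id the moment the node exists, e.g. one issued
                // by the server, or a timestamp as a placeholder:
                $(NODE).attr('id', 'phtml_' + new Date().getTime());
            },
            onrename : function (NODE, TREE_OBJ) {
                alert(TREE_OBJ.get_text(NODE));
                alert($(NODE).attr('id'));
            }
        }

    If the node's div storage lives server-side, the more robust pattern is to POST on create, let the
    server return the canonical id, and set it on NODE in the response handler.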


  • ODP.NET SQL query: retrieving a set of rows from two input arrays

    - by Karl Trumstedt
    I have a table with a primary key consisting of two columns. I want to retrieve a set of rows based
    on two input arrays, each corresponding to one primary-key column:

        select pkt1.id, pkt1.id2, ...
          from PrimaryKeyTable pkt1, table(:1) t1, table(:2) t2
         where pkt1.id = t1.column_value
           and pkt1.id2 = t2.column_value

    I then bind the values with two int[] in ODP.NET. This returns all combinations of my resulting
    rows, so if I am expecting 13 rows I receive 169 (13 * 13). The problem is that each value in t1
    and t2 should be linked: value t1[4] should be used with t2[4], not with all the different values
    in t2. Using DISTINCT appeared to solve my problem, but I'm wondering if my approach is wrong. Does
    anyone have any pointers on how best to solve this? One way might be a for-loop accessing each
    index in t1 and t2 sequentially, but I wonder what would be more efficient. Edit: actually DISTINCT
    won't solve my problem; it only seemed to because of my input values (all values in t2 were 0).
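
    One workaround is to bind a single array whose elements carry both halves of the key, so the pairs
    stay aligned by construction instead of relying on positional correlation between two collections.
    A sketch (note the concatenation defeats index use on the two columns, so check the execution plan
    on your data):

        -- Bind one array of 'id|id2' strings built client-side in ODP.NET.
        select pkt1.id, pkt1.id2
          from PrimaryKeyTable pkt1,
               table(:1) t
         where pkt1.id || '|' || pkt1.id2 = t.column_value

    An index-friendly alternative is binding an array of a user-defined object type with two numeric
    attributes and joining on both columns, at the cost of declaring the type in the schema.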


  • Best Functional Approach

    - by dbyrne
    I have some mutable Scala code that I am trying to rewrite in a more functional style. It is a
    fairly intricate piece of code, so I am refactoring it in pieces. My first thought was this:

        def iterate(count: Int, d: MyComplexType) = {
          // generate next value n
          // process n, causing some side effects
          iterate(count - 1, n)
        }

    This didn't seem functional at all to me, since I still have side effects mixed throughout my code.
    My second thought was this:

        def generateStream(d: MyComplexType): Stream[MyComplexType] = {
          // generate next value n
          Stream.cons(n, generateStream(n))
        }

        for (n <- generateStream(initialValue).take(2000000)) {
          // process n, causing some side effects
        }

    This seemed like a better solution, because at least I've isolated my functional value-generation
    code from the mutable value-processing code. However, it is much less memory-efficient, because I
    am generating a large list that I don't really need to store. This leaves me with three choices:

        1. Write a tail-recursive function, bite the bullet, and refactor the value-processing code.
        2. Use a lazy list. (This is not a memory-sensitive app, although it is performance-sensitive.)
        3. Come up with a new approach.

    I guess what I really want is a lazily evaluated sequence where I can discard the values after I've
    processed them. Any suggestions?
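
    A middle path between choices 1 and 2: Iterator.iterate is evaluated lazily like Stream, but it
    keeps no reference to values already consumed, so nothing accumulates in memory. A sketch, with
    generate standing for the "next value" step from the source:

        // Lazily produce values; each one is garbage-collectable as soon
        // as the foreach body is done with it.
        def generate(d: MyComplexType): MyComplexType = {
          ??? // produce the next value n from d
        }

        Iterator.iterate(initialValue)(generate)
          .take(2000000)
          .foreach { n =>
            () // process n here, causing the side effects
          }

    The memoization that makes Stream grow only bites when a reference to the head is held (e.g. a val
    bound to generateStream(initialValue)); Iterator sidesteps the issue entirely.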


  • "Most popular" GROUP BY in LINQ?

    - by tags2k
    Assuming a table of tags like the Stack Overflow question tags (TagID bigint, QuestionID bigint,
    Tag varchar), what is the most efficient way to get the 25 most-used tags using LINQ? In SQL, a
    simple GROUP BY will do:

        SELECT Tag, COUNT(Tag) FROM Tags GROUP BY Tag

    I've written some LINQ that works:

        var groups = from t in DataContext.Tags
                     group t by t.Tag into g
                     select new { Tag = g.Key, Frequency = g.Count() };

        return groups.OrderByDescending(g => g.Frequency).Take(25);

    Like, really? Isn't this mega-verbose? The sad thing is that I'm doing this to save a massive
    number of queries, as my Tag objects already contain a Frequency property that would otherwise need
    to check back with the database for every Tag if I actually used the property. So I then parse
    these anonymous types back into Tag objects:

        groups.OrderByDescending(g => g.Frequency).Take(25).ToList()
              .ForEach(t => tags.Add(new Tag() { Tag = t.Tag, Frequency = t.Frequency }));

    I'm a LINQ newbie, and this doesn't seem right. Please show me how it's really done.
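
    A sketch of the same query in method syntax, ending in Tag objects in one chain. One caveat worth
    knowing: LINQ to SQL will not construct a mapped entity type inside a query, hence the
    AsEnumerable() boundary that moves the final projection into memory:

        // Grouping, ordering and Take(25) run in the database; only the
        // 25 surviving rows are materialized into Tag objects.
        List<Tag> tags = DataContext.Tags
            .GroupBy(t => t.Tag)
            .Select(g => new { Tag = g.Key, Frequency = g.Count() })
            .OrderByDescending(x => x.Frequency)
            .Take(25)
            .AsEnumerable()
            .Select(x => new Tag { Tag = x.Tag, Frequency = x.Frequency })
            .ToList();

    The verbosity of the original is mostly the intermediate variable; semantically it was already the
    right shape.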


  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest: After a day of running (against nearly 1 GB of data), a set of statements slows to 40
    inserts per second. I am looking to increase that by an order of magnitude or two.

    SQL Code: The code to insert the information comes in two parts, a master record and its detail
    records. The master record:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0, ' ', 1);

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0.5, ' ', 2);

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0, 'T', 3);

    Proposed Solution:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

        SET @month_ref_id := (SELECT LAST_INSERT_ID());

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, 'T', 3);

    Constraints: The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY
    table has no index and no primary key. A primary key can be added to the DAILY table if it would
    help.

    Question: Is there a more efficient way to execute the (billion or so) insert statements than the
    proposed solution? Thank you!
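
    Beyond replacing the correlated subqueries with LAST_INSERT_ID(), the usual next gains are batching
    each master's detail rows into one multi-row INSERT and wrapping groups in explicit transactions,
    since one autocommit (and thus one fsync) per statement is often what caps throughput at this
    scale. A sketch:

        -- One transaction per master record, one statement for its details.
        START TRANSACTION;

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

        SET @month_ref_id := LAST_INSERT_ID();

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES
            (@month_ref_id, 0,   ' ', 1),
            (@month_ref_id, 0.5, ' ', 2),
            (@month_ref_id, 0,   'T', 3);

        COMMIT;

    Committing every few hundred master records rather than every one, and loading the detail rows with
    LOAD DATA INFILE where the data can be staged as a file, are the standard further steps.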


  • Does .Net use Device Dependent or Device Independent Bitmaps?

    - by Brian
    When loading an image into memory, does .NET use DDBs (device-dependent bitmaps), DIBs
    (device-independent bitmaps), or something else entirely? If possible, please cite your sources.
    I'm wondering because we currently have a classic ASP application that uses a third-party component
    to load images, and it occasionally produces a "Not enough storage is available to process this
    command." error. The error is very inconsistent, but it tends to happen on larger images (not
    always, but often). After resetting IIS, processing the same file again typically works just fine.
    After much research, I have found that DDBs tend to have this problem when processing large images,
    because they are backed by video memory. Considering that we are running on a web server with an
    integrated video card and limited shared memory, this could certainly be our problem. We are in the
    early stages of converting our app to .NET, and I am wondering whether using .NET for the image
    handling might be a viable alternative to our current method, which is why I am asking. Any advice
    is welcome :) but out of curiosity if nothing else, I am really hoping for an answer to the
    question: does .NET use DDBs or DIBs?

