Search Results

Search found 58465 results on 2339 pages for 'ephemeral data'.

  • Core Data + Core Animation/CALayer together??

    - by ivanTheTerrible
    I am making a Cocoa app with custom interfaces. So far I have implemented one version of the app using CALayer for the rendering, which has been great given the hierarchical structure of CALayers and their [hitTest:] method for handling mouse events. In this early version, the model of the app consists of my custom classes. However, as the program grows I feel the urge to use Core Data for the model, not just for the ease of binding/undo management, but also to try out the new technology. My approach so far: in Core Data, I create a Block entity with attributes xPos, yPos, width, height, etc. Then I create a BlockView : CALayer class for drawing, which uses code such as self.position.x = [self valueForKey:@"xPos"] to fetch values from the model. In this case, every BlockView object also has to keep a local copy of xPos, which is NOT good. Do any of you have better suggestions? Edit: This app is an information visualization tool, so the positions and dimensions of the blocks are important and should be persisted for later analysis.
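
    A sketch of one direction for avoiding the duplicated state: let the layer observe the managed object and derive its geometry on demand. Swift is used here for illustration, and the Block attribute names are taken from the question; treat it as a hedged sketch rather than working app code.

      import QuartzCore
      import CoreData

      // The layer keeps only a weak reference to the model object; position and
      // bounds are derived from it, so no xPos/yPos copy lives in the view layer.
      final class BlockLayer: CALayer {
          private weak var block: NSManagedObject?
          private let keys = ["xPos", "yPos", "width", "height"]

          func bind(to block: NSManagedObject) {
              self.block = block
              // .initial fires each observer once right away, so the layer
              // picks up the current model values at bind time.
              for key in keys {
                  block.addObserver(self, forKeyPath: key, options: [.initial], context: nil)
              }
          }

          override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                                     change: [NSKeyValueChangeKey: Any]?,
                                     context: UnsafeMutableRawPointer?) {
              guard let block = block else { return }
              let x = block.value(forKey: "xPos") as? CGFloat ?? 0
              let y = block.value(forKey: "yPos") as? CGFloat ?? 0
              let w = block.value(forKey: "width") as? CGFloat ?? 0
              let h = block.value(forKey: "height") as? CGFloat ?? 0
              position = CGPoint(x: x, y: y)
              bounds = CGRect(x: 0, y: 0, width: w, height: h)
          }

          deinit {
              if let block = block {
                  keys.forEach { block.removeObserver(self, forKeyPath: $0) }
              }
          }
      }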

  • Appending data to NSFetchedResultsController during find or create loop

    - by Justin Williams
    I have a table view that is managed by an NSFetchedResultsController, and I am having an issue with a find-or-create operation. When the user hits the bottom of my table view, I query my server for another batch of content. If an item doesn't exist in the local cache, we create it and store it. If it does exist, however, I want to append that data to the fetched results controller and display it, and I can't quite figure that part out. Here's what I'm doing thus far:

    1. Pass the returned array of values from my server to an NSOperation to process.
    2. In the operation, create a new managed object context to work with.
    3. In the operation, iterate through the array and execute a fetch request to see if each object exists (based on its server id).
    4. If the object doesn't exist, create it and insert it into the operation's managed object context.
    5. After the iteration completes, save the managed object context, which triggers a merge notification on my main thread.

    At this point, any objects that weren't locally cached in my Core Data store before will appear, but the ones that previously existed do not come along for the ride. I feel like it's something simple I'm missing, and could use a nudge in the right direction.
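
    For reference, a compact Swift sketch of the find-or-create-and-merge wiring described above, using a modern NSPersistentContainer; the "Record" entity and "serverID" attribute are placeholder names, not the asker's model. A merge only carries over inserted and changed objects, so when previously cached rows go missing it is also worth confirming that the controller's fetch predicate actually matches them.

      import CoreData

      func findOrCreate(payload: [[String: Any]], container: NSPersistentContainer) {
          // Saves made on the background context flow into the main-queue
          // context that backs the fetched results controller.
          container.viewContext.automaticallyMergesChangesFromParent = true
          let context = container.newBackgroundContext()
          context.perform {
              for item in payload {
                  guard let serverID = item["id"] as? Int64 else { continue }
                  let request = NSFetchRequest<NSManagedObject>(entityName: "Record")
                  request.predicate = NSPredicate(format: "serverID == %lld", serverID)
                  request.fetchLimit = 1
                  // Find or create: reuse the cached object if the fetch hits.
                  let record = (try? context.fetch(request))?.first
                      ?? NSEntityDescription.insertNewObject(forEntityName: "Record",
                                                             into: context)
                  record.setValue(serverID, forKey: "serverID")
                  // ...copy the remaining server fields onto `record` here...
              }
              try? context.save()
          }
      }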

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up:

    Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots.

    Option #2: Each Contact has its own array of 2,016 time slots. Each array entry would simply be a string indicating which phone number to dial.

    Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead.

    I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.
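
    To make option #3 concrete, here is a small Swift sketch of the slot arithmetic (all names are hypothetical). With Monday as day 0, Friday 12:15pm works out to (4 * 24 + 12) * 12 + 15 / 5 = 1,299, matching the example above.

      struct ContactRules {
          // Only slots with an active rule are stored; everything else falls
          // through to the default number.
          var numberBySlot: [Int: String] = [:]
          var defaultNumber: String

          // 7 days * 24 hours * 12 five-minute slots per hour = 2,016 slots.
          static func slot(day: Int, hour: Int, minute: Int) -> Int {
              (day * 24 + hour) * 12 + minute / 5
          }

          func number(day: Int, hour: Int, minute: Int) -> String {
              numberBySlot[Self.slot(day: day, hour: hour, minute: minute)]
                  ?? defaultNumber
          }
      }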

  • Data Mappers, Models and Images

    - by James
    Hi, I've seen and read plenty of blog posts and forum topics giving examples of Data Mapper / Model implementations in PHP, but I've not seen any that also deal with saving files/images. I'm currently working on a Zend Framework based project, doing some image manipulation in the model (which is passed a file path) and then leaving it to the mapper to save that file to the appropriate location - is this common practice? But then, how do you deal with creating, say, 3 different size images from the one passed in? At the moment I have a setImage($path_to_tmp_name) which checks the image type, resizes, and then saves back to the original filename. A call to getImagePath() then returns the current file path, which the data mapper can use and then change with a call to setImagePath($path) once it has saved the file to the appropriate location, say /content/my_images. Does this sound practical to you? Also, how would you deal with getting the URL to that image? Do you see that as being something the model should provide? It seems to me that the model shouldn't worry about where the images are stored or ultimately how they're accessed through a browser, so I'm inclined to put that in the ini file and just pass the URL prefix to the view through the controller. Does that sound reasonable? I'm using GD for image manipulation - not that that's of any relevance. UPDATE: I've been wondering whether the image resizing should be done in the model at all. The model could require that it's provided a "main" image and a "thumb" image, both of certain dimensions. I've thought about creating a getImageSpecs() function in the model that would return something defining the required sizes; a separate image manipulation class could then carry out the resizing (perhaps in the controller?) and just pass the final paths into the model using something like setImagePaths($images). Any thoughts much appreciated :) James.

  • Finding Common Phrases in MS SQL TEXT Column

    - by regex
    Hello all. Short description: I'm curious to see whether I can use SQL Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset. Long description: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs used as a notes column in an issue-tracking (ticketing) application. I would like to use something out of the box (without having to build anything) that can parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so 9-20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow. Closing note: I'd really rather not build an application for this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.

  • Let multiple highcharts charts appear automatically from mysql data

    - by martini1993
    I have the following problem: I want to make multiple Highcharts web charts appear automatically, based on the data from the database. Let's say we have the following table:

    | Year | Month | ID | Name User   | Wins | Losses |
    |------|-------|----|-------------|------|--------|
    | 2013 |   1   | 21 | Tony Stark  |   3  |   12   |
    | 2013 |   1   | 52 | Bruce Wayne |   5  |    4   |
    | 2013 |   1   | 76 | Clark Kent  |   9  |    5   |

    (This table is an example; there are a lot more rows in the real database.) And I have the following query:

      SELECT a.year AS year1, a.month AS month1, a.id AS id, a.name AS nameuser, a.wins AS wins, a.losses AS losses
      FROM Sales a
      WHERE a.month = 1 AND a.year = YEAR(NOW())

    With this, it is very easy to hardcode a chart with Highcharts. But what I want is a web chart per user. So instead of a single web chart with all the users in it, I want multiple charts next to each other, based on the data from the database. So instead of this: http://jsfiddle.net/CWSb6/ I want this (but then next to each other): http://jsfiddle.net/DReMD/ It has to be generated automatically with PHP and MySQL, so if a new user starts this month and is saved in the database, the page automatically displays the new user with the related web chart. I find this very hard to accomplish and need some help getting in the right direction. Many thanks in advance!

  • Modifying existing object attributes in Core Data after the fact

    - by glorifiedHacker
    In a previous question, I was looking for an alternative to modifying how "no date" was stored in the date attribute of my NSManagedObject subclass. Previously, I assigned nil to that attribute when a user didn't assign a date. In order to address sorting issues when using NSFetchedResultsController, I have decided to assign [NSDate distantFuture] to the date attribute when a user doesn't assign a date. However, given that this app is already in the wild, I need to update the Core Data store so that any existing nil date values are changed to [NSDate distantFuture]. What is the best way to make this change? The first thing that comes to mind is to fetch all of the objects in the store into an array and change any nil values found. This could be limited to a one-time event by checking a user defaults key that indicates whether the upgrade has been performed. Is there a way I can do this with Core Data versioning instead? Or another method that doesn't involve me writing throw-away code?
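
    If the one-time pass ends up being the route taken, it can stay very small; a Swift sketch (entity and attribute names are hypothetical, not from the original app):

      import CoreData

      func migrateNilDatesIfNeeded(in context: NSManagedObjectContext) {
          let flag = "didMigrateNilDates"
          guard !UserDefaults.standard.bool(forKey: flag) else { return }

          // Fetch only the rows that actually need fixing.
          let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
          request.predicate = NSPredicate(format: "dueDate == nil")
          if let matches = try? context.fetch(request) {
              for object in matches {
                  object.setValue(Date.distantFuture, forKey: "dueDate")
              }
              try? context.save()
              // Record completion so the pass never runs again.
              UserDefaults.standard.set(true, forKey: flag)
          }
      }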

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse: certain small regions have relatively high density, and there are relatively few isolated nonempty cells. The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that). The structure needs to be easy to persist. Here are the operations I may want to perform (reasonably efficiently) on this grid:

    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant number of empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction while searching)
    - Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process). What advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that approach if they were? Thanks a lot, everyone!
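
    For illustration, a sketch of the usual sparse representation this maps to: fixed-size chunks keyed by chunk coordinate, which also persists naturally as one relational row per chunk (chunkX, chunkY, blob). Swift is used here only to show the shape of the idea; the 16x16 chunk size and all names are assumptions.

      import Foundation

      struct ChunkKey: Hashable { let x: Int; let y: Int }

      struct SparseGrid<Cell> {
          static var chunkSize: Int { 16 }
          private var chunks: [ChunkKey: [Cell?]] = [:]

          // Map a global cell coordinate to (chunk, index-within-chunk),
          // using floor division so negative coordinates work too.
          private func split(_ x: Int, _ y: Int) -> (ChunkKey, Int) {
              let n = Self.chunkSize
              let key = ChunkKey(x: Int(floor(Double(x) / Double(n))),
                                 y: Int(floor(Double(y) / Double(n))))
              let local = (y - key.y * n) * n + (x - key.x * n)
              return (key, local)
          }

          subscript(x: Int, y: Int) -> Cell? {
              get {
                  let (key, i) = split(x, y)
                  return chunks[key]?[i]
              }
              set {
                  let (key, i) = split(x, y)
                  var chunk = chunks[key] ?? Array(repeating: nil,
                                                   count: Self.chunkSize * Self.chunkSize)
                  chunk[i] = newValue
                  chunks[key] = chunk   // absent chunks cost nothing
              }
          }
      }

    Rectangular neighborhood queries then touch only the few chunks overlapping the rectangle, and density or outline queries can be answered from small per-chunk summaries.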

  • Prevent two users from editing the same data

    - by Industrial
    Hi everyone, I have seen a feature in different web applications, including Wordpress (not sure?), that warns a user if he/she opens an article/post/page/whatever from the database while someone else is editing the same data simultaneously. I would like to implement the same feature in my own application and I have given this a bit of thought. Is the following example a good practice for doing this? It goes a little something like this:

    1) User A enters the editing page for the mysterious article X. The database table Events is queried to make sure that no one else is editing the same page at the moment, which no one is by then. A token is then randomly generated and inserted into the Events table.

    2) User B also wants to make updates to article X. Since User A is already editing the article, the Events table is queried and looks like this:

    | timestamp  | owner  | Origin    | token      |
    |------------|--------|-----------|------------|
    | 1273226321 | User A | article-x | uniqueid## |

    3) The timestamp is checked. If it's valid and less than, say, 100 seconds old, a message appears and the user cannot make any changes to the requested article X: "Warning: User A is currently working with this article. In the meantime, editing cannot be done. Please do something else with your life."

    4) If User A decides to go on and save his changes, the token is posted along with all the other data to update the database, and triggers a query to delete the row with token uniqueid##. If he decides to do something else instead of committing his changes, article X will become available to User B for editing after 100 seconds.

    Let me know what you think about this approach! Wish everyone a great weekend!
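
    The locking protocol itself is small enough to sketch. Swift is used here purely for illustration (the real implementation would live in PHP/SQL against the Events table), and all names are hypothetical.

      import Foundation

      struct EditLock { let timestamp: TimeInterval; let owner: String; let token: String }

      final class LockService {
          private var locks: [String: EditLock] = [:]   // keyed by origin, e.g. "article-x"
          let timeout: TimeInterval = 100

          // Steps 1-3: returns a token if the caller may edit, or nil when
          // someone else holds a lock younger than `timeout`.
          func acquire(origin: String, user: String) -> String? {
              let now = Date().timeIntervalSince1970
              if let lock = locks[origin], lock.owner != user,
                 now - lock.timestamp < timeout {
                  return nil   // "User A is currently working with this article."
              }
              let token = UUID().uuidString
              locks[origin] = EditLock(timestamp: now, owner: user, token: token)
              return token
          }

          // Step 4: the owner saves, posting the token back; the row is deleted.
          func release(origin: String, token: String) {
              if locks[origin]?.token == token { locks[origin] = nil }
          }
      }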

  • Filter large amounts of data in a table w/ jQuery

    - by Bry4n
    I work for a transit agency and I have large amounts of data (mostly times), and I need a way to filter the data using two textboxes (To and From). I found jQuery quick search, but it seems to only work with one textbox. If anyone has any ideas via jQuery or some other client-side library, that would be fantastic. Ideal example:

    To: [Textbox] From: [Textbox]

    <table>
      <tr>
        <td>69th street</td><td>5:00pm</td><td>5:06pm</td><td>5:10pm</td><td>5:20pm</td>
      </tr>
      <tr>
        <td>Millbourne</td><td>5:09pm</td><td>5:15pm</td><td>5:20pm</td><td>5:25pm</td>
      </tr>
      <tr>
        <td>Spring Garden</td><td>6:00pm</td><td>6:15pm</td><td>6:20pm</td><td>6:25pm</td>
      </tr>
    </table>

    So if I start typing one of the stations in the To: textbox, it either filters dynamically like the quick search or I have to press a button (either way), and then I do the same in the From: textbox. Lastly, it shows me the To: station and all its times on the left and the From: station and all its times on the right.

  • Performing calculations by subsets of data in R

    - by Vivi
    I want to perform calculations for each company number in the column PERMNO of my data frame, a summary of which can be seen here:

      > summary(companydataRETS)
           PERMNO           RET
       Min.   :10000   Min.   :-0.971698
       1st Qu.:32716   1st Qu.:-0.011905
       Median :61735   Median : 0.000000
       Mean   :56788   Mean   : 0.000799
       3rd Qu.:80280   3rd Qu.: 0.010989
       Max.   :93436   Max.   :19.000000

    My solution so far was to create a variable with all possible company numbers:

      compns <- companydataRETS[!duplicated(companydataRETS[,"PERMNO"]),"PERMNO"]

    and then use a foreach loop with parallel computing, which calls my function get.rho(), which in turn performs the desired calculations:

      rhos <- foreach (i=1:length(compns), .combine=rbind) %dopar%
          get.rho(subset(companydataRETS[,"RET"], companydataRETS$PERMNO == compns[i]))

    I tested it on a subset of my data and it all works. The problem is that I have 72 million observations, and even after leaving the computer working overnight, it still didn't finish. I am new to R, so I imagine my code structure can be improved upon and that there is a better (quicker, less computationally intensive) way to perform this same task (perhaps using apply or with, both of which I don't understand). Any suggestions?

  • Parsing Chunk of Data into Hash of Array With Perl

    - by neversaint
    I have data that looks like this:

      #info
      #info2
      1:SRX004541
      Submitter: UT-MGS, UT-MGS
      Study: Glossina morsitans transcript sequencing project(SRP000741)
      Sample: Glossina morsitans(SRS002835)
      Instrument: Illumina Genome Analyzer
      Total: 1 run, 8.3M spots, 299.9M bases
      Run #1: SRR016086, 8330172 spots, 299886192 bases

      2:SRX004540
      Submitter: UT-MGS
      Study: Anopheles stephensi transcript sequencing project(SRP000747)
      Sample: Anopheles stephensi(SRS002864)
      Instrument: Solexa 1G Genome Analyzer
      Total: 1 run, 8.4M spots, 401M bases
      Run #1: SRR017875, 8354743 spots, 401027664 bases

      3:SRX002521
      Submitter: UT-MGS
      Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403)
      Sample: Human DLD-1 tissue culture cell line(SRS001843)
      Instrument: Solexa 1G Genome Analyzer
      Total: 6 runs, 27.1M spots, 977M bases
      Run #1: SRR013356, 4801519 spots, 172854684 bases
      Run #2: SRR013357, 3603355 spots, 129720780 bases
      Run #3: SRR013358, 3459692 spots, 124548912 bases
      Run #4: SRR013360, 5219342 spots, 187896312 bases
      Run #5: SRR013361, 5140152 spots, 185045472 bases
      Run #6: SRR013370, 4916054 spots, 176977944 bases

    What I want to do is create a hash of arrays, with the first line of each chunk as the keys and the SR## part of the lines matching "^Run" as the array members:

      $VAR = {
          'SRX004541' => ['SRR016086'],
          # etc
      }

    But my construct below doesn't work - why? And there must be a better way to do it.

      use Data::Dumper;

      my %bighash;
      my $head = "";
      my @temp = ();

      while ( <> ) {
          chomp;
          next if (/^\#/);

          if ( /^\d{1,2}:(\w+)/ ) {
              print "$1\n";
              $head = $1;
          }
          elsif (/^Run \#\d+: (\w+),.*/){
              print "\t$1\n";
              push @temp, $1;
          }
          elsif (/^$/) {
              push @{$bighash{$head}}, [@temp];
              @temp = ();
          }
      }

      print Dumper \%bighash;

  • Filter large amounts of data from a HTML table w/ jQuery

    - by Bry4n
    I work for a transit agency and I have large amounts of data (mostly times), and I need a way to filter the data using two textboxes (To and From). I found jQuery quick search, but it seems to only work with one textbox. If anyone has any ideas via jQuery or some other client-side library, that would be fantastic. Ideal example:

    To: [Textbox] From: [Textbox]

    <table>
      <tr>
        <td>69th street</td><td>5:00pm</td><td>5:06pm</td><td>5:10pm</td><td>5:20pm</td>
      </tr>
      <tr>
        <td>Millbourne</td><td>5:09pm</td><td>5:15pm</td><td>5:20pm</td><td>5:25pm</td>
      </tr>
      <tr>
        <td>Spring Garden</td><td>6:00pm</td><td>6:15pm</td><td>6:20pm</td><td>6:25pm</td>
      </tr>
    </table>

    I have an HTML page with a giant table on it listing the station names and each station's times. I want to be able to put my starting location in one box and my ending location in another box and have all the rows that don't relate to either of the two typed-in locations disappear, leaving only the two rows that match what was typed (even if it isn't spelled right or typed all the way), similar to the jQuery quick search plugin.

  • Good data structure for efficient insert/querying on arbitrary properties

    - by Juliet
    I'm working on a project where arrays are the default data structure for everything, and every query is a linear search in the form of:

    Need a customer with a particular name? customer.Find(x => x.Name == name)
    Need a customer with a particular unique id? customer.Find(x => x.Id == id)
    Need a customer of a particular type and age? customer.Find(x => x is PreferredCustomer && x.Age >= age)
    Need a customer of a particular name and age? customer.Find(x => x.Name == name && x.Age == age)

    In almost all instances, the criteria for lookups are well-defined. For example, we only search for customers by one or more of the properties Id, Type, Name, or Age; we rarely search by anything else. Is there a good data structure to support arbitrary queries of these types with lookup better than O(n)? Any out-of-the-box implementations for .NET?
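
    For illustration, a sketch of the standard shape of a solution - secondary indexes in front of the array - written in Swift; the same idea maps directly onto .NET's Dictionary<TKey, TValue>. All names are hypothetical.

      struct Customer {
          let id: Int
          let name: String
          let age: Int
      }

      final class CustomerStore {
          private var all: [Customer] = []
          private var byID: [Int: Customer] = [:]        // unique key
          private var byName: [String: [Customer]] = [:] // non-unique key

          func add(_ c: Customer) {
              all.append(c)
              byID[c.id] = c
              byName[c.name, default: []].append(c)
          }

          func find(id: Int) -> Customer? { byID[id] }                  // O(1)
          func find(name: String) -> [Customer] { byName[name] ?? [] }  // O(1)

          // Compound queries: hit the narrower index, then filter.
          func find(name: String, age: Int) -> [Customer] {
              find(name: name).filter { $0.age == age }
          }

          // Anything not covered by an index falls back to the linear scan.
          func find(where predicate: (Customer) -> Bool) -> [Customer] {
              all.filter(predicate)
          }
      }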

  • Windows 8 Data Binding Bug - OnPropertyChanged Updates Wrong Object

    - by Andrew
    I'm experiencing some really weird behavior with data binding in Windows 8. I have a combobox set up like this:

      <ComboBox VerticalAlignment="Center" Margin="0,18,0,0" HorizontalAlignment="Right"
                Height="Auto" Width="138" Background="{StaticResource DarkBackgroundBrush}"
                BorderThickness="0"
                ItemsSource="{Binding CurrentForum.SortValues}"
                SelectedItem="{Binding CurrentForum.CurrentSort, Mode=TwoWay}">
          <ComboBox.ItemTemplate>
              <DataTemplate>
                  <TextBlock HorizontalAlignment="Right" Text="{Binding Converter={StaticResource SortValueConverter}}"/>
              </DataTemplate>
          </ComboBox.ItemTemplate>
      </ComboBox>

    inside of a page with its DataContext set to a statically located ViewModel. When I change that ViewModel's CurrentForm property, whose setter is implemented like this...

      public FormViewModel CurrentForm
      {
          get { return _currentForm; }
          set
          {
              _currentForm = value;
              if (!_currentForm.IsLoaded)
              {
                  _currentSubreddit.Refresh.Execute(null);
              }
              RaisePropertyChanged("CurrentForm");
          }
      }

    ...something really strange happens. The previous FormViewModel's CurrentSort property is changed to the new FormViewModel's current sort property. This happens as RaisePropertyChanged is called: through a managed-to-native transition, native code invokes the setter of CurrentSort on the previous FormViewModel. Does that sound like a bug in Win8's data binding? Am I doing something wrong?

  • core data saving in textboxes in table view controller

    - by user1489709
    I have created a five-column (five text boxes) cell (row) in a table view controller with an add button. When a user taps the add button, a new row with five text boxes is added to the table view controller with null values. I want the data to be saved to the database when the user fills in the text boxes, and also when he changes any data in previously filled text boxes. This is my save code:

      -(void)textFieldDidEndEditing:(UITextField *)textField
      {
          row1 = textField.tag;
          NSLog(@"Row is %d", row1);
          path = [NSIndexPath indexPathForRow:row1 inSection:0];
          Input_Details *inputDetailsObject1 = [self.fetchedResultsController objectAtIndexPath:path];

          /* Update the Input values from the values in the text fields. */
          EditingTableViewCell *cell;
          cell = (EditingTableViewCell *)[self.tableView cellForRowAtIndexPath:[NSIndexPath indexPathForRow:row1 inSection:0]];
          inputDetailsObject1.mh_Up = cell.cell_MH_up.text;
          inputDetailsObject1.mh_down = cell.cell_MH_dn.text;
          inputDetailsObject1.sewer_No = [NSNumber numberWithInt:[cell.cell_Sewer_No.text intValue]];
          inputDetailsObject1.mhup_gl = [NSNumber numberWithFloat:[cell.cell_gl_MHUP.text floatValue]];
          inputDetailsObject1.mhdn_gl = [NSNumber numberWithFloat:[cell.cell_gl_MHDN.text floatValue]];
          inputDetailsObject1.pop_Ln = [NSNumber numberWithInt:[cell.cell_Line_pop.text intValue]];
          inputDetailsObject1.sew_len = [NSNumber numberWithFloat:[cell.cell_Sewer_len.text floatValue]];

          [self saveContext];
          NSLog(@"Saving the MH_up value %@ is saved in save at index path %d", inputDetailsObject1.mh_Up, row1);
          [self.tableView reloadData];
      }

  • Grouping Categorized Data In WPF.

    - by VoidDweller
    Here is what I am trying to do.

    Dynamic Category: columns can be 0 or more; must contain 1 or more Type columns; will only be displayed if any row contains Type column data associated with it.

    Data Rows: will be added asynchronously; will be grouped by a common Category column; will add a Dynamic Category if it does not yet exist; will add a Type column if it does not yet exist within its appropriate Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#, MVVM.

    I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? Envision this nicely styled. :-)

      --------------------------------------------------------------------------
      |[ Common Category ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]|
      --------------------------------------------------------------------------
      |[Header 1]|[Header 2]|[ Type 0 ]|[ Type N ]|[ Type 0 ]|[ Type N ]|
      --------------------------------------------------------------------------
      |[Data 2 Group]                                                          |
      --------------------------------------------------------------------------
      | Data A | Data 2 || Null   | Data 1 || Data 0 | Data 1 ||
      | Data B | Data 2 || Data 0 | Null   || Data 0 | Data 1 ||
      --------------------------------------------------------------------------
      |[Data 1 Group]                                                          |
      --------------------------------------------------------------------------
      | Data C | Data 1 || Null   | Data 1 || Data 0 | Data 1 ||
      | Data D | Data 1 || Null   | Null   || Data 0 | Null   ||
      --------------------------------------------------------------------------

    Edit: Sorting and paging are not necessary. I have looked at nested ListViews and DataGrids, and at dynamically building a Grid. Dynamically building a Grid and leveraging the SharedSizeGroup property seems the most promising strategy, but I am concerned about performance. Would a better approach be to consider this a dynamic report? If so, what should I be looking at? Thanks for your help.

  • Stopping Filter Display in Dynamic Data Entity Web App

    - by bert
    I'm currently experimenting with the Dynamic Data Entity Web App project type in VS2008 SP1, and after reading many tutorials that offer helpful advice for problems I have so far had no need to solve, I have fallen at the first hurdle. In the DB I have made my entity model from, I decided to start small with a table called "Companies", just to see if I could tweak the display into a satisfactory shape for this small table. The Companies table has a column called "contactid" which leads to a record filled with various contact information in a "contacts" table. The default created Entity Data Model has guessed that one company could have many contact records, so it tries to be helpful and adds a "Contact" filter to the page that allows you to see all the Companies that share a particular set of contact info, indexed by the "Contact Name" field. Unfortunately, the contact table is a multi-purpose one that also stores contact info for customers, and there are about 1000 times more customers than there are companies. So the dropdown makes the page load time increase enormously and produces no benefit, and I'd like to just stop the filter from appearing. The only problem is I don't have a clue how to switch it off, and Google is so far proving recalcitrant on the matter, so I wondered if anyone in here knew how to get rid of a useless filter.

  • Dynamic Data Extract Tools

    - by Kevin McGovern
    I've been searching around for a few weeks now for a tool - either fully built, or a starting point for something I could build myself - for dynamically extracting data via a web interface. Basically, what I'm looking for is a way to give users a list of all available data objects from our database, let them pick the ones they'd like to view, set parameters, and then export the results to an Excel file. Right now we're doing it purely with SQL statements, but we have hundreds of objects, so as you might imagine those statements are really complex and prone to errors. It would be great if there were a tool available to do this, or if someone had an idea of an easy way to organize it. Any help would be greatly appreciated. We've looked at BI tools like QlikView and Tableau, but they are probably overkill for what we're trying to do. The open-source BI tools we've looked at seemed really primitive in their functionality. The other thing we looked at was MSAS (our DB is SQL Server), but I'd prefer something more database-agnostic that lives on a web server instead of on the database.

  • Store data in tableview to NSUserDefaults

    - by Jozef Vrana
    Tricks.m file:

      #import "Tricks.h"

      @implementation Tricks

      static NSMutableArray *trickList = nil;

      +(NSMutableArray *)trickList {
          if (!trickList) {
              trickList = [[NSMutableArray alloc] init];
          }
          return trickList;
      }
      @end

    Tricks.h file:

      @interface Tricks : NSObject
      @property (strong, nonatomic) NSString *trickName;
      +(NSMutableArray *)trickList;
      @end

    Method for adding objects to the array:

      -(IBAction)saveAction:(id)sender {
          Tricks *trick = [[Tricks alloc] init];
          trick.trickName = self.trickLabel.text;
          [[Tricks trickList] insertObject:trick atIndex:0];
          [self.navigationController popViewControllerAnimated:YES];
      }

    In the .h file of my UITableView class I make a reference to the Tricks class, but I am sure there is an error on this line:

      @property (strong, nonatomic) Tricks *tricks;

    In the cellForRow method I store the data:

      _trick = [[NSMutableDictionary alloc] initWithObjectsAndKeys:trick, nil];
      NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
      [defaults setObject:_trick forKey:@"numberArray"];
      [defaults synchronize];
      NSLog(@"%@", _trick);

    In viewDidLoad of the UITableView class I want to retrieve the data:

      if ([[NSUserDefaults standardUserDefaults] objectForKey:@"numberArray"] != nil) {
          _tricks = [[NSUserDefaults standardUserDefaults] objectForKey:@"numberArray"];
      }

    Thanks for any advice.
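
    One thing worth knowing in this situation: NSUserDefaults only stores property-list types (strings, numbers, dates, data, and arrays/dictionaries of those), so a dictionary holding custom Trick objects will not round-trip. A hedged Swift sketch of the encode-first approach (the Codable struct is an assumption, not the asker's class):

      import Foundation

      struct Trick: Codable {
          var trickName: String
      }

      func saveTricks(_ tricks: [Trick]) {
          // Encode to Data, which *is* a property-list type.
          if let data = try? JSONEncoder().encode(tricks) {
              UserDefaults.standard.set(data, forKey: "numberArray")
          }
      }

      func loadTricks() -> [Trick] {
          guard let data = UserDefaults.standard.data(forKey: "numberArray"),
                let tricks = try? JSONDecoder().decode([Trick].self, from: data)
          else { return [] }
          return tricks
      }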

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes, and they require significant amounts of data to function correctly. This includes data about the objects that are involved in transactions, as well as the transaction data itself. For example, when a customer buys a product, the transaction is managed by a sales application. The objects of the transaction are the Customer and the Product. The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications; harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer, for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to', 'service at', etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross-application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate.

    Parties

    Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences. And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real-world operations. The model logically separates people and organizations from their relationships and accounts. This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence.

    Sites

    Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc. Fully understanding all attributes of a site is key to many business processes. Attributes such as the 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen, drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets. This leads to conflicting information and poor operational efficiency. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise.

    Products and Services

    The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide the ability to capture different views of a product, such as engineering, manufacturing, and service, based on a unified product model. As a result, designers, manufacturing engineers, purchasers, and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing a requirements BOM, a features BOM, and a packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes.

    Business Processes Utilizing Linked Data Entities

    Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data. When all the key business entities used by an organization's processes are so consolidated, the advantages are multiplied: the primary reason for business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated. I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product, and Supplier master data must all be correct and consistent across these applications. What's more, the data relationships between customer and product, and product and suppliers, must be right. This is the minimum quality needed to ensure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point-of-contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help ensure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and plays a significant role in optimizing the process. In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution requires a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model.

    Oracle, with its deep understanding of application data, is the logical choice for managing all your master data within the enterprise, whether or not your organization actually runs any Oracle Applications.

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base and /home directories. It started acting wonky, so I decided to do a reinstall with 12.10, intending just to reinstall to the base partition. After several seconds, I realized that the installer was repartitioning the drive and reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it seems that testdisk is finding 100 unique partitions when I run it - they mostly tend to be HFS+ or Solaris /home (which I think is just an ext4; I've never had Solaris on the machine). I've pasted an abbreviated version of the testdisk output below (first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then run data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory - I'd rather not use photorec since I don't have another 2 TB HD lying around to recover to. Thanks, Dan

      Mon Dec 10 06:03:00 2012
      Command line: TestDisk

      TestDisk 6.13, Data Recovery Utility, November 2011
      Christophe GRENIER <[email protected]>
      http://www.cgsecurity.org
      OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64
      Compiler: GCC 4.4
      Compilation date: 2012-11-27T22:44:52
      ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none
      /dev/sda: LBA, HPA, LBA48, DCO support
      /dev/sda: size 3907029168 sectors
      /dev/sda: user_max 3907029168 sectors
      /dev/sda: native_max 3907029168 sectors
      Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
      /dev/sr0 is not an ATA disk

      Hard disk list
      Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80
      Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07
      Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12

      Partition table type (auto): Intel
      Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0
      Partition table type: EFI GPT

      Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
      Current partition structure:
      Bad GPT partition, invalid signature.
      search_part()
      Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
      recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
      recover_EXT2: s_blocksize=4096
      recover_EXT2: s_blocks_count 487593984
      recover_EXT2: part_size 3900751872
      MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
      Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB

      Results
      P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
      P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB

      interface_write()
      1 P MS Data 2048 3900753919 3900751872
      2 P Linux Swap 3900755968 3907028975 6273008

      search_part()
      Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
      recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
      recover_EXT2: s_blocksize=4096
      recover_EXT2: s_blocks_count 487593984
      recover_EXT2: part_size 3900751872
      MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
      block_group_nr 1
      recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
      recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
      recover_EXT2: s_blocksize=4096
      recover_EXT2: s_blocks_count 487593984
      recover_EXT2: part_size 3900751872
      MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
      block_group_nr 1
      recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
      recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
      recover_EXT2: s_blocksize=4096
      recover_EXT2: s_blocks_count 487593984
      recover_EXT2: part_size 3900751872
      MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
      block_group_nr 1
      recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
      recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192
      recover_EXT2: s_blocksize=4096
      recover_EXT2: s_blocks_count 477915164
      recover_EXT2: part_size 3823321312
      MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
      block_group_nr 1
      ....snip......

      MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
      MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
      MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
      MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
      MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB
      Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
      Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB
      Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
      Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB
      Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB
      Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB
      Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
      Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB
      Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
      Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
      Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB
      Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB
      Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
      Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
      Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB
      Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
      Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB
      Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB
      Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB
      Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB
      Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB
      MS Data 174395467 178483851 4088385
      ..... snip (it keeps going on for quite a while)

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago, we at Tellago open-sourced the BizTalk Data Services. We were meanwhile working on other artifacts that come along with BizTalk Server, like the Business Rules Engine. We are happy to announce the first version of BRE Data Services.

    BRE Data Services follows the same concept we covered with BTS Data Services: it provides a RESTful OData-based API to interact with the Business Rules Engine via HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism.

    In this first release, we mainly focused on browsing, querying, and searching BRE artifacts via a RESTful interface. Along with that, we provide the functionality to execute business rules by inserting the facts for policies via the IUpdatable implementation of WCF Data Services.

    The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts, etc. The following examples detail some of the features available in the current version of the API.

    Basic querying:

    Querying BRE Policies: http://localhost/BREDataServices/BREMananagementService.svc/Policies
    Querying BRE Rules: http://localhost/BREDataServices/BREMananagementService.svc/Rules
    Querying BRE Vocabularies: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies

    Navigation:

    The BRE Data Services API also leverages WCF Data Services to enable navigation across related BRE objects.

    Querying a specific Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')
    Querying a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')
    Querying all Rules under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules
    Querying all Facts under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts
    Querying all Actions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions
    Querying all Conditions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions
    Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')

    Implementation:

    With the BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are a couple of ways you can do that through the API.

    - The first is through the Service Operations feature of WCF Data Services, in which you execute the facts by passing them in the URL itself. This is a very simple implementation of policy execution, owing to the current limitations and restrictions of WCF Data Services' Service Operations (only primitive types can be passed as input parameters). Below is a code sample, followed by a traced request/response message.

    - The second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule you want to execute, then insert facts for that particular rule, and finally, when you call SaveChanges() on the IUpdatable interface API, it executes the policy with the facts you inserted at runtime. Below is a sample of the client-side code.
    Due to the limitations of the current version of WCF Data Services, there is no way to return updates that happen on the service side back to the client via the SaveChanges() method. Here we are executing the rule by passing serialized XML as the facts, and no data is changed that we could query back to fetch the results. This is overcome through the first way of executing policies, i.e. as a Service Operation call.

    This actually generates an AtomPub message as shown below:

      POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
      User-Agent: Microsoft ADO.NET Data Services
      DataServiceVersion: 1.0;NetFx
      MaxDataServiceVersion: 2.0;NetFx
      Accept: application/atom+xml,application/xml
      Accept-Charset: UTF-8
      Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
      Host: localhost:8080
      Content-Length: 1481
      Expect: 100-continue

      --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
      Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

      --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
      Content-Type: application/http
      Content-Transfer-Encoding: binary

      MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
      Content-ID: 4
      Content-Type: application/atom+xml;type=entry
      Content-Length: 927

      <?xml version="1.0" encoding="utf-8" standalone="yes"?>
      <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
        <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
        <title />
        <author>
          <name />
        </author>
        <updated>2011-01-31T20:09:15.0023982Z</updated>
        <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
        <content type="application/xml">
          <m:properties>
            <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
            <d:FactType>TestSchema</d:FactType>
            <d:ID>TestPolicy</d:ID>
          </m:properties>
        </content>
      </entry>
      --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
      --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--

    Installation:

    The installation of the BRE Data Services is pretty straightforward.

    - Create a new IIS website, say BREDataServices.
    - Download the source code from Tellago Codeplex and copy the content from Tellago.BRE.REST.ServiceHost to the physical location of the above-created website.
    - The app pool account running the website should have admin access to the BizTalkRuleEngineDb database.
    - Right-click the BREManagementService.svc in the IIS content view for the website, and voila.

    Conclusion:

    The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server, and we have uploaded the source code to our Tellago DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases at Tellago Codeplex; we are working on various other BizTalk artifacts like BAM and the ESB Toolkit.

    Till then, happy BizzRuling…!!!

    Thanks,
    Vishal Mody

  • jQuery Templates, Data Link

    - by Renso
    jQuery Templates, Data Link, and Globalization

    I am sure you must have read Scott Guthrie's blog post about jQuery support and officially supporting jQuery's templating, data linking, and globalization; if not, here it is: jQuery Templating.

    Since we are an open source shop and use jQuery and jQuery plugins extensively, to say the least, I decided to look into the templating a bit and see what data linking is all about. For those not familiar with those terms, there is plenty of material out there on what they are, but here is what they mean in my experience:

    jQuery Templating: A templating engine that allows you to specify a client-side template in which you indicate which properties/tags you want dynamically updated. You in a sense specify which parts of the html are dynamic, and since it is pluggable, you are able to use tools like jQuery data linking and others to sync your template up with data. What makes it more powerful is that you can easily work with rows of data, adding and removing rows. Once the template has been generated, which you do dynamically on a client-side event, you append/inject the resulting markup somewhere in your DOM. For example, you would get a JSON object from the database, map it to your template, which populates the template with your data in the indicated places, and then, say, append it to a row in a table. I have not found it that useful for a single record of data, since you could easily just get a partial view from the server via an html-type ajax call. It really shines when you dynamically add/remove rows from a list in the DOM. I have not found an alternative that matches the functionality of the jQuery template, and it helps of course that Microsoft officially supports it. In future versions of the plug-in it may even ship as part of the standard jQuery library, and with future versions of Visual Studio.

    jQuery Data Linking: In short, I was initially fascinated by how, with one line of code, I can sync up my JSON object with my form elements. That's where my enthusiasm stopped. It takes one line to let it sync up your form with your JSON object, but it is not bidirectional as they state; I tried all the work-arounds they suggested, and none of them work. The problem is that when you update your JSON object, it DOES NOT sync it up with your form. In one example, accounts are edited client-side by selecting the account from a list by clicking on its row; this fetches the entire account JSON object via an ajax json-type call and then refreshes the form with the account's details from the new JSON object. What is the use of syncing up my JSON with the form if I still have to programmatically sync up my new JSON object with each DOM property?! So you may ask: "what is the alternative?"

    Good question, and the same one I was pondering. Maybe I can just use it for keeping my form in sync with my JSON object, so I can post that JSON object back to the server and update my database. That's when I discovered Knockout.

    Knockout: It addresses the issues mentioned above and also supports event handling through the observer pattern. Not wanting to go into detail here, Steve Sanderson, the creator of Knockout, has already done a terrific job of that; thanks, Steve, for a great plug-in! Best of all, it integrates perfectly with the jQuery templating engine as well. I have not found an alternative to this plug-in that supports the same depth and width of functionality, and I would recommend it to anyone. The only drawback is the embedded html attribute (data-bind="") tags that you have to add to the HTML, in my opinion tying your behavior to your HTML, whereas I like to separate behavior from HTML as well as CSS, so the HTML is purely there to define content, not styling or behavior. But there are plusses to this as well, and also a nifty work-around that I will mention shortly with an example. Instead of data binding an html tag with Knockout event handling like so:

      <%=Html.TextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

    do:

      <%=Html.DataBoundTextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

    The html extension above then takes care of the internals, and you could swap Knockout for something else inside the extension if you wanted to, keeping the HTML plug-in agnostic. Here is what the extension looks like; you can easily build a whole library supporting all kinds of data binding options from this:

      public static class HtmlExtensions
      {
          public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, object value, object htmlAttributes)
          {
              var dic = new RouteValueDictionary(htmlAttributes);
              dic.Add("data-bind", String.Format("value: {0}", name));
              return helper.TextBox(name, value, dic);
          }
      }

    Hope this helps in making a decision on when and where to consider jQuery templating, data linking, and Knockout.
