Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.


  • What sort of data should be sent for mouse-based movement in a multiplayer game?

    - by Daniel
    I'm new to the Multiplayer Rodeo here so please bear with me... I am just getting started and I'm trying to figure out how to deal with movement. I've looked at the question Best way to implement mouse-based movement in MMOG, which gives me a pretty good idea, but I'm still struggling with what kind of data should be sent to the server. If a player is at position [x:0, y:0] and I click with the mouse on [x:40, y:40] to start movement, what information should I send to the server? Should I calculate the position based on velocity on the client side and just send the expected location? Or should I send the current location plus velocity and direction? When the server is updating the clients on the players' whereabouts, should only the position be sent, with the clients expected to interpolate/predict movement, or can the direction sent from the client (instead of just coordinates) be used? My concern (or confusion) is regarding ping/lag, the frequency of data updates, and the use of a predictive algorithm, as I'd like the movement to be smooth even with high latency, and to prevent the ability to cheat (though that's not the top priority).
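
    For illustration only, one common approach is for the client to send just its movement intent (the click target plus a timestamp) and let the server simulate and validate the path, while server updates carry position plus velocity so clients can interpolate between them. A minimal sketch of such messages, with every type and field name hypothetical:

        // Hypothetical client-to-server movement intent: the server, not the client,
        // derives the actual path and speed, which also limits cheating.
        public sealed class MoveCommand
        {
            public int PlayerId { get; set; }
            public float TargetX { get; set; }      // mouse-click destination
            public float TargetY { get; set; }
            public long ClientTimeMs { get; set; }  // used for lag compensation
        }

        // Hypothetical server-to-client state update: position plus velocity lets
        // each client extrapolate smoothly until the next update arrives.
        public sealed class EntityStateUpdate
        {
            public int PlayerId { get; set; }
            public float X { get; set; }
            public float Y { get; set; }
            public float VelocityX { get; set; }
            public float VelocityY { get; set; }
            public long ServerTimeMs { get; set; }
        }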

    Read the article

  • Architecture Question

    - by katie77
    I am writing a rules/eligibility module. I have 2 sets of data: customer data and customer product data. Customer data to customer product data is one-to-many. Now I have to run a set of eligibility rules against each of these customer product records. For each customer product record, a rule can either say the customer is eligible for that product or decline eligibility and move on to the next product record. So in all the rules I need access to the customer and customer product data (the particular record that the rules are being executed against). Since all the rules can either approve a product or decline a product, I created an interface with those 2 methods and am implementing this interface for all the rules. I am passing the customer data and one product record to all the rules (because the rules should be executed on each row of customer product data). An ideal situation would be having the customer and customer product data available to the rule instead of passing them to each rule. What is the best way of doing this in terms of architecture?
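
    For reference, a minimal sketch of the kind of rule interface and shared evaluation context being described; all type and member names here are hypothetical, not the poster's actual code:

        // Hypothetical shared context, so each rule can reach the customer and the
        // product record under evaluation without having them passed to every call.
        public class Customer { /* customer fields */ }
        public class CustomerProduct { /* product fields */ }

        public class EligibilityContext
        {
            public Customer Customer { get; set; }
            public CustomerProduct Product { get; set; }   // record being evaluated
        }

        public interface IEligibilityRule
        {
            // Returns true to approve the product, false to decline and move on.
            bool Evaluate(EligibilityContext context);
        }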

    Read the article

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB and 1000GB (five storage devices in one system). My application will store any file to the storage system. Question: how can I build distributed storage with data redundancy and fail-over, storing documents, videos and any other type of file, while ensuring that should any one storage device fail, there is another copy of those files on another storage device? The concern is that the 50GB device can only hold a fraction of the number of files that the 70GB, 150GB, etc. devices can. Treating the five devices as one cloud-like storage pool, is there any logical way for my application to distribute or store the files? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage pool made up of multiple different storage sizes? I am opening this topic to discuss the best way to implement this idea, assuming simplicity, along with the issues of such an implementation, performance measurements and a discussion of the limitations.
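
    As one illustration of the placement question only (not a complete answer), replicas can be spread across devices weighted by remaining free capacity, so the smaller devices fill more slowly and a file never shares a device with its own copy. A rough sketch with hypothetical types:

        using System.Collections.Generic;
        using System.Linq;

        public class StorageDevice
        {
            public string Name { get; set; }
            public long FreeBytes { get; set; }   // remaining capacity on this device
        }

        public static class ReplicaPlacement
        {
            // Pick 'copies' distinct devices with the most free space that can
            // still hold the file, so redundant copies land on different devices.
            public static List<StorageDevice> ChooseTargets(
                IEnumerable<StorageDevice> devices, long fileSize, int copies = 2)
            {
                return devices
                    .Where(d => d.FreeBytes >= fileSize)
                    .OrderByDescending(d => d.FreeBytes)
                    .Take(copies)
                    .ToList();
            }
        }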

    Read the article

  • Architecture for a business objects / database access layer

    - by gregmac
    For various reasons, we are writing a new business objects/data storage library. One of the requirements of this layer is to separate the business rules from the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (e.g., it may not be possible to save/update the User object), but otherwise it is used by the application the same way. Another data storage type might be a web service, or an external database. There are two main ways we are looking at implementing this, and a co-worker and I disagree on a fundamental level about which is correct. I'd like some advice on which one is the best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective viewpoints here.

    Option 1: Business objects are base classes, and data storage objects inherit from business objects. Client code deals with data storage objects. In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This has the implication that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance of that type of object. Client code needs to explicitly know connection information for each data storage type it is using. If a data storage layer implements different functionality for a given object, client code knows about it explicitly at compile time because the object looks different. If the data storage method is changed, client code has to be updated.

    Option 2: Business objects encapsulate data storage objects. In this case, business objects are directly used by the client application. The client application passes along base connection information to the business layer. The decision about which data storage method a given object uses is made by the business object code. Connection information would be a chunk of data taken from a config file (the client app does not really know/care about the details of it), which may be a single connection string for a database, or several connection strings for various data storage types. Additional data storage connection types could also be read from another spot - e.g., a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be set at runtime to determine which method to use, and it is completely transparent to the client applications. Client apps do not need to be modified if the data storage method for a given object changes.

    Option 3: Business objects are base classes, data source objects inherit from business objects, and client code deals primarily with the base classes. This is similar to the first method, but client code declares variables of the base business object types, and Load()/Create()/etc. static methods on the business objects return the appropriate data source-typed objects. The architecture of this solution is similar to the first method, but the main difference is that the decision about which data storage object to use for a given business object is made by the business layer, not the client code.

    I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used here, other than that it is strongly typed. I'm looking for some general advice here on which method is better to use (or feel free to suggest something else), and why.
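
    To make the third option concrete, a minimal sketch of a factory-style approach along those lines; the class names, the Fetch method and the configuration-driven selection are illustrative assumptions, not the poster's actual design:

        using System;

        // Business-layer base class; client code declares variables of this type only.
        public abstract class User
        {
            public string UserName { get; set; } = "";

            // Shared business rule, inherited by every storage implementation.
            public bool IsValid() => !string.IsNullOrWhiteSpace(UserName);

            // The business layer decides which storage-backed subclass to return,
            // e.g. based on a configuration value the client never sees.
            public static User Load(string userName, string configuredSource)
            {
                User user = configuredSource == "ldap"
                    ? new LdapUser()
                    : (User)new DatabaseUser();
                user.UserName = userName;
                user.Fetch();                 // each subclass loads from its own store
                return user;
            }

            protected abstract void Fetch();
        }

        public sealed class DatabaseUser : User
        {
            protected override void Fetch() { /* read from the database here */ }
        }

        public sealed class LdapUser : User
        {
            protected override void Fetch() { /* read from LDAP here */ }
        }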

    Read the article

  • Survey Data Model - How to avoid EAV and excessive denormalization?

    - by AlexDPC
    Hi everyone, my database skills are mediocre at best and I have to design a data model for survey data. I have spent some thought on this and right now I feel that I am stuck between some kind of EAV model and a design involving hundreds of tables, each with hundreds of columns (and thousands of records). There must be a better way to do this and I hope that the wise folks on this forum can help me. I have already searched various forums, but I couldn't really find a solution. If it has already been given elsewhere, please excuse me and provide me with a link so I can read up on it. Some assumptions about the data I have to deal with:
    - Each survey consists of 1 to n questionnaires
    - Each questionnaire consists of 100-2,000 questions (please ignore that 2,000 questions really sounds like a lot to answer...)
    - Questions can be of various types: multiple-choice, free text, a number (like age, income, percentages, ...)
    - Each survey involves 10-200 countries (these are not the respondents; the respondents are actually people in the countries)
    - Depending on the type of questionnaire, each questionnaire is answered by 100-20,000 respondents per country
    - A country can adapt the questionnaires for a survey, i.e. add, remove or edit questions
    - The data for one country is gathered in a separate database in that country. There is no possibility for online integration from the start. The data for all countries has to be integrated later. This means, for example, that if a country has deleted a question, that data must somehow be derived from what they sent in order to achieve a uniform design across all countries
    - I will have to write the integration and cleaning software, which will need to work with every country's data
    - In the end the data needs to be exported to flat files, one rectangular grid per country and questionnaire
    I have already discussed this topic with people from various backgrounds and have not come to a good solution yet. I mainly got two kinds of opinions. The domain experts, who are used to working with flat files (spreadsheet-style) for data processing and analysis, vote for a denormalized structure with loads of tables and columns as I described above (one table per country and questionnaire). This sounds terrible to me, because I learned that wide tables are to be avoided, it will be annoying to determine which columns are actually in a table when working with it, the database will become cluttered with hundreds of tables (or I would even need to set up multiple databases, each with a similar yet slightly different design), etc. O-O programmers vote for a strongly "normalized" design, which would effectively lead to a central table containing all the answers from all respondents to all questions. This table would either need to contain a column of type sql_variant, or multiple answer columns with different types to store answers of different types (multiple choice, free text, ...). The former would essentially be an EAV model. I tend to follow Joe Celko here, who strongly discourages its use (he calls it OTLT or "One True Lookup Table"). The latter would imply that each row contains null cells for the not-applicable types by design. Another alternative I could think of would be to create one table per answer type, i.e. one for multiple-choice questions, one for free text questions, etc. That's not so generic, it would lead to a lot of union joins I think, and I would have to add a table if a new answer type is invented. Sorry for boring you with all this text and thank you for your input!
Cheers, Alex PS: I asked the same question here: http://www.eggheadcafe.com/community/aspnet/13/10242616/survey-data-model--how-to-avoid-eav-and-excessive-denormalization.aspx

    Read the article

  • What scalability problems have you solved using a NoSQL data store?

    - by knorv
    NoSQL refers to non-relational data stores that break with the history of relational databases and ACID guarantees. Popular open source NoSQL data stores include:
    - Cassandra (tabular, written in Java, used by Facebook, Twitter, Digg, Rackspace, Mahalo and Reddit)
    - CouchDB (document, written in Erlang, used by Engine Yard and BBC)
    - Dynomite (key-value, written in C++, used by Powerset)
    - HBase (key-value, written in Java, used by Bing)
    - Hypertable (tabular, written in C++, used by Baidu)
    - Kai (key-value, written in Erlang)
    - MemcacheDB (key-value, written in C, used by Reddit)
    - MongoDB (document, written in C++, used by Sourceforge, Github, Electronic Arts and NY Times)
    - Neo4j (graph, written in Java, used by Swedish Universities)
    - Project Voldemort (key-value, written in Java, used by LinkedIn)
    - Redis (key-value, written in C, used by Engine Yard, Github and Craigslist)
    - Riak (key-value, written in Erlang, used by Comcast and Mochi Media)
    - Ringo (key-value, written in Erlang, used by Nokia)
    - Scalaris (key-value, written in Erlang, used by OnScale)
    - ThruDB (document, written in C++, used by JunkDepot.com)
    - Tokyo Cabinet/Tokyo Tyrant (key-value, written in C, used by Mixi.jp (Japanese social networking site))
    I'd like to know about specific problems you - the SO reader - have solved using data stores and what NoSQL data store you used. Questions:
    - What scalability problems have you used NoSQL data stores to solve?
    - What NoSQL data store did you use?
    - What database did you use before switching to a NoSQL data store?
    I'm looking for first-hand experiences, so please do not answer unless you have that.

    Read the article

  • Before data is entered, is there a way to make a grouped table graphic placeholder?

    - by Matt Winters
    I have a grouped table with 3 sections, each section with a title. The 1st and 3rd sections always have only 1 row of information, so before any user data is entered I just put some words like "Enter Data Here..." as placeholder text. This text is edited (replaced) by the user with their own actual data. No problem. The 2nd section, however, will contain several rows of information entered by the user, and I'd prefer not to enter placeholder data in row 0, having the user edit the first row of data and then add subsequent rows. If numberOfRowsInSection is set to 0, the title for the 3rd section comes too close to the title for the 2nd section and it looks ugly. The best that I could come up with, and I don't know how to do it, is to have a fake graphic placeholder on the striped background (between the 2nd and 3rd titles) that looks like a single row in the 2nd section, put "Enter Data Here..." text in the graphic, and then have the first row of actual data entered and all subsequent rows cover it up. Can anyone tell me how to do this or offer a better suggestion? Thanks.

    Read the article

  • C# .NET serial DataReceived event response too slow for high-rate data

    - by Matthew
    Hi, I have set up a SerialDataReceivedEventHandler in a forms-based program in VS2008 Express. My serial port is set up as follows: 115200, 8N1, DTR and RTS enabled, ReceivedBytesThreshold = 1. I have a device I am interfacing with over a Bluetooth USB-to-serial adapter. HyperTerminal receives the data just fine at any data rate. The data is sent regularly in 22-byte-long packets. This device has an adjustable rate at which data is sent. At low data rates, 10-20Hz, the code below works great, no problems. However, when I increase the data rate past 25Hz, I start to receive multiple packets on one call. What I mean by this is that there should be an event trigger for every incoming packet. At higher output rates, I have tested the buffer size (the BytesToRead property) immediately when the event is called, and there are multiple packets in the buffer by then. I think that the event fires slowly and by the time it reaches my code, more packets have hit the buffer. One test I do is to see how many times the event is triggered per second. At 10Hz, I get 10 event triggers, awesome. At 100Hz, I get something like 40 event triggers, not good. For my data rate goal, 100Hz is acceptable, 200Hz preferred, and 300Hz optimum. This should work because even at 300Hz that is only 52800bps, less than half of the set 115200 baud rate. Is there anything I am overlooking?

        public Form1()
        {
            InitializeComponent();
            serialPort1.DataReceived += new SerialDataReceivedEventHandler(serialPort1_DataReceived);
        }

        private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            this.Invoke(new EventHandler(Display_Results));
        }

        private void Display_Results(object s, EventArgs e)
        {
            serialPort1.Read(IMU, 0, serialPort1.BytesToRead);
        }
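
    For illustration, one common pattern in this situation is to treat the serial stream as a byte stream rather than expecting one event per packet: read everything available into a buffer and then peel off complete 22-byte packets. A rough sketch (the buffer handling, class and event names are illustrative, not the poster's code):

        using System;
        using System.Collections.Generic;
        using System.IO.Ports;

        public class PacketReader
        {
            private const int PacketSize = 22;
            private readonly List<byte> _buffer = new List<byte>();
            private readonly SerialPort _port;

            public event Action<byte[]> PacketReceived;   // raised once per complete packet

            public PacketReader(SerialPort port)
            {
                _port = port;
                _port.DataReceived += OnDataReceived;
            }

            private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
            {
                // Drain whatever has arrived; it may hold several packets' worth of bytes.
                var chunk = new byte[_port.BytesToRead];
                int read = _port.Read(chunk, 0, chunk.Length);
                for (int i = 0; i < read; i++) _buffer.Add(chunk[i]);

                // Hand off every complete packet currently in the buffer.
                while (_buffer.Count >= PacketSize)
                {
                    byte[] packet = _buffer.GetRange(0, PacketSize).ToArray();
                    _buffer.RemoveRange(0, PacketSize);
                    var handler = PacketReceived;
                    if (handler != null) handler(packet);   // marshal to the UI thread only if needed
                }
            }
        }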

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read a lot about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and, as a result of this calculation, update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.
    - The Data Layer would access the database
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer
    - The GUI Layer would be a means of communication between the Logic Layer and the user
    Now, thinking of this design, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:
    Pros:
    - The business logic runs close to the data, meaning less network traffic
    - Process all data at once, possibly utilizing parallelism and an optimal execution plan
    Cons:
    - Scattering of the business logic: some part is here, some part is there
    - Questionable design solution, may encounter unknown problems
    - Difficult to implement a progress indicator for the processing task
    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
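
    For context, a SQL CLR stored procedure is essentially a static .NET method deployed into SQL Server. A minimal, hedged sketch of the shape such a procedure takes; the table and column names are made up purely for illustration:

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class ImportProcedures
        {
            // Deployed via CREATE ASSEMBLY / CREATE PROCEDURE; runs inside SQL Server,
            // so data access uses the in-process "context connection".
            [SqlProcedure]
            public static void ApplyBusinessRules()
            {
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    // Illustrative only: mark rows in a hypothetical staging table.
                    using (SqlCommand cmd = new SqlCommand(
                        "UPDATE dbo.RawImportLines SET Status = 'Processed' WHERE Status = 'New'",
                        conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }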

    Read the article

  • How to explain to someone that a data structure should not draw itself, explaining separation of concerns

    - by leeand00
    I have another programmer to whom I'm trying to explain why a UI component should not also be a data structure. For instance, say that you get a data structure that contains a record set from the "database", and you wish to display that record set in a UI component within your application. According to this programmer (who will remain nameless; he's young and I'm teaching him...), we should subclass the data structure into a class that will draw the UI component within our application! And thus, according to this logic, the record set should manage the drawing of the UI. *Head desk* I know that asking a record set to draw itself is wrong because, if you wish to render the same data structure in more than one type of component in your UI, you are going to have a real mess on your hands; you'll need to extend yet another class for each and every UI component that you render from the base class of your record set. I am well aware of the "cleanliness" of the MVC pattern (and by that what I really mean is that you don't confuse your data (the Model) with your UI (the View) or the actions that take place on the data (the Controller, more or less... okay, not really, the API should really handle that... and the Controller should just make as few calls to it as it can, telling it which view to render)). But it's certainly a lot cleaner than using data structures to render UI components! Is there any other advice I could send his way other than the example above? I understand that when you first learn OOP you go through "a stage" where you just want to extend everything, followed by a stage when you think that design patterns are the solution to every single problem... which isn't entirely correct either... thanks, Jeff. Is there a way that I can gently nudge this kid in the right direction? Do you have any more examples that might help explain my point to him?
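
    For illustration, a minimal sketch of the separation being argued for: the record set stays a plain data structure, and a separate renderer (one per UI component type) knows how to display it. All names are hypothetical:

        using System.Collections.Generic;

        // Plain data structure: no knowledge of any UI.
        public class RecordSet
        {
            public IList<IDictionary<string, object>> Rows { get; } =
                new List<IDictionary<string, object>>();
        }

        // Each UI component gets its own renderer instead of subclassing the data.
        public interface IRecordSetRenderer
        {
            void Render(RecordSet data);
        }

        public class GridRenderer : IRecordSetRenderer
        {
            public void Render(RecordSet data)
            {
                // Bind data.Rows to a grid control here.
            }
        }

        public class ChartRenderer : IRecordSetRenderer
        {
            public void Render(RecordSet data)
            {
                // Plot data.Rows in a chart control here.
            }
        }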

    Read the article

  • Database not updating after UPDATE SQL statement in ASP.net

    - by Ronnie
    I currently have a problem attempting to update a record within my database. I have a webpage that displays a user's details in text boxes; these details are taken from the session upon login. The aim is to update the details when the user overwrites the current text in the text boxes. I have a function that runs when the user clicks the 'Save Details' button, and it appears to work, as I have tested the number of rows affected and it outputs 1. However, when checking the database, the record has not been updated and I am unsure as to why. I've checked the SQL statement that is being processed by displaying it as a label and it looks like this:

        UPDATE [users] SET [email]=@email, [firstname]=@firstname, [lastname]=@lastname, [promo]=@promo WHERE ([users].[user_id] = 16)

    The function and other relevant code is:

        Sub Button1_Click(sender As Object, e As EventArgs)
            changeDetails(emailBox.text, firstBox.text, lastBox.text, promoBox.text)
        End Sub

        Function changeDetails(ByVal email As String, ByVal firstname As String, ByVal lastname As String, ByVal promo As String) As Integer
            Dim connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0; Ole DB Services=-4; Data Source=C:\Documents an"& _
                "d Settings\Paul Jarratt\My Documents\ticketoffice\datab\ticketoffice.mdb"
            Dim dbConnection As System.Data.IDbConnection = New System.Data.OleDb.OleDbConnection(connectionString)
            Dim queryString As String = "UPDATE [users] SET [email]=@email, [firstname]=@firstname, [lastname]=@lastname, "& _
                "[promo]=@promo WHERE ([users].[user_id] = " + session.contents.item("ID") + ")"
            Dim dbCommand As System.Data.IDbCommand = New System.Data.OleDb.OleDbCommand
            dbCommand.CommandText = queryString
            dbCommand.Connection = dbConnection
            Dim dbParam_email As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter
            dbParam_email.ParameterName = "@email"
            dbParam_email.Value = email
            dbParam_email.DbType = System.Data.DbType.[String]
            dbCommand.Parameters.Add(dbParam_email)
            Dim dbParam_firstname As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter
            dbParam_firstname.ParameterName = "@firstname"
            dbParam_firstname.Value = firstname
            dbParam_firstname.DbType = System.Data.DbType.[String]
            dbCommand.Parameters.Add(dbParam_firstname)
            Dim dbParam_lastname As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter
            dbParam_lastname.ParameterName = "@lastname"
            dbParam_lastname.Value = lastname
            dbParam_lastname.DbType = System.Data.DbType.[String]
            dbCommand.Parameters.Add(dbParam_lastname)
            Dim dbParam_promo As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter
            dbParam_promo.ParameterName = "@promo"
            dbParam_promo.Value = promo
            dbParam_promo.DbType = System.Data.DbType.[String]
            dbCommand.Parameters.Add(dbParam_promo)
            Dim rowsAffected As Integer = 0
            dbConnection.Open
            Try
                rowsAffected = dbCommand.ExecuteNonQuery
            Finally
                dbConnection.Close
            End Try
            labelTest.text = rowsAffected.ToString()
            if rowsAffected = 1 then
                labelSuccess.text = "* Your details have been updated and saved"
            else
                labelError.text = "* Your details could not be updated"
            end if
        End Function

    Any help would be greatly appreciated.

    Read the article

  • Ruby on Rails 2.3.5: Populating my prod and devel database with data (migration or fixture?)

    - by randombits
    I need to populate particular tables in my production application's database with data, before anyone ever even touches the application. This data would also be required in development mode, as it's needed for testing against. Fixtures are normally the way to go for test data, but what's the "best practice" in Ruby on Rails for shipping this data to the live database upon db creation as well? Ultimately this is a two-part question, I suppose.
    1) What's the best way to load test data into my database for development? This will be roughly 1,000 items. Is it through a migration or through fixtures? The reason this has a different answer than the question below is that in development there are certain fields in the tables that I'd like to make random; in production, these fields would all start with the same value of 0.
    2) What's the best way to bootstrap a production db with the live data I need in it? Is this also through a migration or a fixture? I think the answer is to seed as described here: http://lptf.blogspot.com/2009/09/seed-data-in-rails-234.html but I need a way to seed for development and seed for production. Also, why bother using fixtures if seeding is available? When does one seed and when does one use fixtures?

    Read the article

  • Adding data into tables that include foreign keys, but which new row was created (Entity Framework)?

    - by programmerist
    Adding data into the Kartlar table (RehberID, KampanyaID, BrimID) works fine. But which Kartlar ID was created? I need to find out which ID was created after adding the data (RehberID, KampanyaID, BrimID) into Kartlar.

        public static List<Kartlar> SaveKartlar(int RehberID, int KampanyaID, int BrimID, string Notlar)
        {
            using (GenSatisModuleEntities genSatisCtx = new GenSatisModuleEntities())
            {
                Kartlar kartlar = new Kartlar();
                kartlar.RehberReference.EntityKey = new System.Data.EntityKey("GenSatisModuleEntities.Rehber", "ID", RehberID);
                kartlar.KampanyaReference.EntityKey = new System.Data.EntityKey("GenSatisModuleEntities.Kampanya", "ID", KampanyaID);
                kartlar.BirimReference.EntityKey = new System.Data.EntityKey("GenSatisModuleEntities.Birim", "ID", BrimID);
                kartlar.Notlar = Notlar;
                genSatisCtx.AddToKartlar(kartlar);
                genSatisCtx.SaveChanges();
                List<Kartlar> kartAddedPatient;
                kartAddedPatient = (from k in genSatisCtx.Kartlar
                                    where k.RehberReference.EntityKey == RehberID
                                          && k.KampanyaReference.EntityKey == KampanyaID
                                          && k.BirimReference.EntityKey == BrimID
                                    select k);
                return kartAddedPatient;
            }
        }

    How can I do that? I want to get back from Kartlar the data that I just added.
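
    For what it's worth, Entity Framework normally writes a database-generated identity key back onto the entity once SaveChanges completes, so the new ID is usually available without re-querying. A hedged sketch continuing directly from the code above; the ID property name is an assumption, not taken from the original model:

        genSatisCtx.AddToKartlar(kartlar);
        genSatisCtx.SaveChanges();

        // Assuming Kartlar.ID is an identity column mapped with
        // StoreGeneratedPattern.Identity, EF populates it after SaveChanges.
        int newKartlarId = kartlar.ID;   // hypothetical property name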

    Read the article

  • Update database once every 30 minutes

    - by Ravi
    Hi, I am working on a C#.NET Windows application with SQL Server 2005. In this project I am using ADO.NET Data Services for database maintenance. I work in the industrial automation domain, where data keeps being read from a device for more than 8 hours at a stretch. After reading data from the device, I have to periodically update the database based on a trigger. For example, I start reading data at 9.00AM and the trigger fires at 9.50AM. Once the trigger fires, the last 30 minutes of data (9.20AM to 9.50AM) is stored in the database. After the trigger has fired, I keep reading data from the device and storing it in the database. At 10.00AM the trigger switches off, and at that point storing data to the database has to stop. The trigger fires again at 11.00AM; once it fires, the last 30 minutes of data (10.30AM to 11.00AM) is stored in the database, and after that I again keep reading data from the device and storing it in the database. While the trigger is not firing (after 10.00AM), the incoming data has to be kept locally. My problem: until the trigger fires, where and how do I keep reading and maintaining the data temporarily, and once the trigger fires, how do I bring the last 30 minutes of data across and store it in the database? I don't know how to achieve this. It would be great if anyone could suggest an idea. Thanks
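
    As one way to picture the "last 30 minutes" requirement (purely illustrative, not the poster's design): keep a rolling in-memory buffer of timestamped readings, prune anything older than 30 minutes, and take a snapshot to write to the database when the trigger fires. All names are hypothetical:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Reading
        {
            public DateTime Timestamp { get; set; }
            public double Value { get; set; }
        }

        public class RollingBuffer
        {
            private readonly TimeSpan _window = TimeSpan.FromMinutes(30);
            private readonly Queue<Reading> _readings = new Queue<Reading>();
            private readonly object _gate = new object();

            public void Add(Reading reading)
            {
                lock (_gate)
                {
                    _readings.Enqueue(reading);
                    Prune(reading.Timestamp);
                }
            }

            // Called when the trigger fires: returns the last 30 minutes of data
            // so it can be written to the database in one batch.
            public List<Reading> Snapshot(DateTime now)
            {
                lock (_gate)
                {
                    Prune(now);
                    return _readings.ToList();
                }
            }

            private void Prune(DateTime now)
            {
                while (_readings.Count > 0 && now - _readings.Peek().Timestamp > _window)
                    _readings.Dequeue();
            }
        }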

    Read the article

  • C#: Using one Data Table in order to fill 2 different comboboxes?

    - by odiseh
    Hi all. I have 2 comboboxes on a form (form1), called combobox1 and combobox2. Each combobox should be filled with data stored in 2 different tables in SQL Server 2005, table1 and table2; I mean: combobox1 -- table1, combobox2 -- table2. I fill a data table with the proper data and then bind the comboboxes to it separately. My problem is that after filling the 2 combos, both of them contain the same data, taken from table2. This is my code:

        DataTable tb1 = new DataTable();
        // Filling tb1 with data got from table1
        combobox1.Items.Clear();
        combobox1.DataSource = tb1;
        combobox1.DisplayMember = "Name";
        combobox1.ValueMember = "ID";
        combobox1.SelectedIndex = -1;

        // Filling tb1 with data got from table2
        combobox2.Items.Clear();
        combobox2.DataSource = tb1;
        combobox2.DisplayMember = "Name";
        combobox2.ValueMember = "ID";
        combobox2.SelectedIndex = -1;

    What's wrong? It seems that if I used 2 different data tables (tb1 and tb2), everything would be all right. Any suggestions please. Thank you
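
    For illustration, the poster's own suspicion (binding each combobox to its own DataTable) would look roughly like this; the LoadTable helper and table names are placeholders, not real code from the question:

        // Each combobox gets its own DataTable, so refilling one cannot affect the other.
        DataTable tb1 = LoadTable("table1");   // hypothetical helper that fills a DataTable
        DataTable tb2 = LoadTable("table2");

        combobox1.DataSource = tb1;
        combobox1.DisplayMember = "Name";
        combobox1.ValueMember = "ID";
        combobox1.SelectedIndex = -1;

        combobox2.DataSource = tb2;
        combobox2.DisplayMember = "Name";
        combobox2.ValueMember = "ID";
        combobox2.SelectedIndex = -1;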

    Read the article

  • What techniques do you use for emitting data from the server that will solely be used in client side scripts?

    - by chuck
    Hi all, I never found an optimal solution for this problem, so I am hoping that some of you out there have a few solutions. Let's say I need to render out a list of checkboxes and each checkbox has a set of additional data that goes with it. This data will be used purely in the context of JavaScript and jQuery. My usual strategy is to render this data in hidden fields that are grouped in the same container as the checkbox. My rendered HTML will look something like this:

        <div>
            <input type="checkbox" />
            <input type="hidden" class="genreId" />
            <input type="hidden" class="titleId" />
        </div>

    My only problem with this is that the data in the hidden fields gets posted to the server when the form is submitted. For small amounts of data this is fine. However, I frequently work with large datasets, and a large amount of data is needlessly transferred. UPDATE: Before submitting this post, I just saw that I can add a "DISABLED" attribute to my input elements to suppress the submission of data. Is this pretty much the best approach that I can take? Thanks

    Read the article

  • Is there a way to prevent SQL Server silently truncating data in local variables and stored procedure parameters?

    - by Luke Woodward
    I recently encountered an issue while porting an app to SQL Server. It turned out that this issue was caused by a stored procedure parameter being declared too short for the data being passed to it: the parameter was declared as VARCHAR(100) but in one case was being passed more than 100 characters of data. What surprised me was that SQL Server didn't report any errors or warnings -- it just silently truncated the data to 100 characters. The following SQLCMD session demonstrates this:

        create procedure WhereHasMyDataGone (@data varchar(5)) as
        begin
            print 'Your data is ''' + @data + '''.';
        end;
        go

        exec WhereHasMyDataGone '123456789';
        go

        Your data is '12345'.

    Local variables also exhibit the same behaviour:

        declare @s varchar(5) = '123456789';
        print @s;
        go

        12345

    Is there an option I can enable to have SQL Server report errors (or at least warnings) in such situations? Or should I just declare all local variables and stored procedure parameters as VARCHAR(MAX) or NVARCHAR(MAX)?

    Read the article

  • AngularJS: Using Shared Service(with $resource) to share data between controllers, but how to define callback functions?

    - by shaunlim
    Note: I also posted this question on the AngularJS mailing list here: https://groups.google.com/forum/#!topic/angular/UC8_pZsdn2U Hi all, I'm building my first AngularJS app and am not very familiar with JavaScript to begin with, so any guidance will be much appreciated :) My app has two controllers, ClientController and CountryController. In CountryController, I'm retrieving a list of countries from a CountryService that uses the $resource object. This works fine, but I want to be able to share the list of countries with the ClientController. After some research, I read that I should use the CountryService to store the data and inject that service into both controllers. This was the code I had before:

    CountryService:

        services.factory('CountryService', function($resource) {
            return $resource('http://localhost:port/restwrapper/client.json', {port: ':8080'});
        });

    CountryController:

        // Get list of countries
        // inherently async query using deferred promise
        $scope.countries = CountryService.query(function(result){
            // preselected first entry as default
            $scope.selected.country = $scope.countries[0];
        });

    And after my changes, they look like this:

    CountryService:

        services.factory('CountryService', function($resource) {
            var countryService = {};
            var data;
            var resource = $resource('http://localhost:port/restwrapper/country.json', {port: ':8080'});
            var countries = function() {
                data = resource.query();
                return data;
            }
            return {
                getCountries: function() {
                    if(data) {
                        console.log("returning cached data");
                        return data;
                    } else {
                        console.log("getting countries from server");
                        return countries();
                    }
                }
            };
        });

    CountryController:

        $scope.countries = CountryService.getCountries(function(result){
            console.log("i need a callback function here...");
        });

    The problem is that I used to be able to use the callback function in $resource.query() to preselect a default selection, but now that I've moved the query() call into my CountryService, I seem to have lost that ability. What's the best way to go about solving this problem? Thanks for your help, Shaun

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant down side consequences to poor operations than local, state and federal governments. One key challenge area is taxation. Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often sources of income including employers, investments and other sources of taxable income and deductions must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management. The Problem of Constituent Data Management The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again. Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle Solution for mastering Constituent Data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and – optionally – Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems. 
With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out of the box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service oriented applications, all built around a SOA governance model that encourages effective design and reuse. I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric. Also, don’t forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization Our Guest Speaker is Ruben Spekle, from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations including one that was executed for a Dutch Government Agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts n all approach to set up a web role on Azure. In this post I’ll describe how to add an SQL Azure database to the project. This will be described with an as minimal as possible amount of code and screen dumps. All questions are welcome in the comments area. Please don’t email since questions answered in the comments field is made available to other visitors. As an example we will add a comments section to the site we used in the previous post (Länk här). Steps: 1. Create a Comments entity and then use Scaffolding to set up controller and view, and add ConnectionString to web.config. 2. Create SQL Azure database in Management Portal and link the new database 3. Test it online!   1. Right click Models folder, choose add, choose “class…” . Name the Class Comment. 1.1 Replace the Code in the class with the following: using System.Data.Entity; namespace MvcWebRole1.Models { public class Comment {    public int CommentId { get; set; }    public string Name { get; set; }      public string Content { get; set; } } public class CommentsDb : DbContext { public DbSet<Comment> CommentEntries { get; set; } } } Now Entity Framework can create a database and a table named Comment. Build your project to assert there are no build errors.   1.2 Right click Controllers folder, choose add, choose “class…” . Name the Class CommentController and fill out the values as in the example below.     1.3 Click Add. Visual Studio now creates default View for CRUD operations and a Controller adhering to these and opens them. 1.3 Open Web.config and add the following connectionstring in <connectionStrings> node. <add name="CommentsDb” connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />   1.4 Save All and press F5 to start the application. 1.5 Go to http://127.0.0.1:81/Comments which will redirect you through CommentsController to the Index View which looks like this:     Click Create new. In the Create-view, add name and content and press Create.   1: // 2: // POST: /Comments/Create 3:  4: [HttpPost] 5: public ActionResult Create(Comment comment) 6: { 7: if (ModelState.IsValid) 8: { 9: db.CommentEntries.Add(comment); 10: db.SaveChanges(); 11: return RedirectToAction("Index"); 12: } 13:  14: return View(comment); 15: } 16:    The default View() is Index so that is the View you will come to. Looking like this: 1: // 2: // GET: /Comments/ 3: 4: public ActionResult Index() 5: { 6: return View(db.CommentEntries.ToList()); 7: } Resulting in the following screen dump(success!):   2. Now, go to the Management portal and Create a new db.   2.1 With the new database created. Click the DB icon in the left most menu. Then click the newly created database. Click DASHBOARD in the top menu. Finally click Connections strings in the right menu to get the connection string we need to add in our web.debug.config file.   2.2 Now, take a copy of the connection String earlier added to the web.config and paste in web.debug.conifg in the connectionstrings node. Replace everything within “ “ in the copied connectionstring with that you got from SQL Azure. You will have something like this:   2.3 Rebuild the application, right click the cloud project and choose “Package…” (if you haven’t set up publishing profile which we will do in our next blog post). 
Remember to choose the right config file, use debug for staging and release for production so your databases won’t collide. You should see something like this:   2.4 Go to Management Portal and click the Web Services menu, choose your service and click update in the bottom menu.   2.5 Link the newly created database to your application. Click the LINKED RESOURCES in the top menu and then click “Link” in the bottom menu. You should get something like this. 3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser and then go to ~/Comments to try it out just the way we did locally. Success and end of this story!

    Read the article

  • How do I de-duplicate a list of nodes in XSLT - and return the last node encountered?

    - by Broam
    I've seen lots of "de-duplicate this xml" questions but everyone wants the first node or the nodes are identical. I have a bit of a bigger puzzle. I have a list of articles in XML, a relevant snippet is shown: <item><key>Article1</key><stamp>100</stamp></item> <item><key>Article1</key><stamp>130</stamp></item> <item><key>Article2</key><stamp>800</stamp></item> <item><key>Article1</key><stamp>180</stamp></item> <item><key>Article3</key><stamp>900</stamp></item> <item><key>Article3</key><stamp>950</stamp></item> <item><key>Article4</key><stamp>990</stamp></item> <item><key>Article5</key><stamp>999</stamp></item> I'd like a list of nodes where the keys are unique and where the last instance is returned, not the first: Stamp (integer) is always increasing for elements of a particular key. Ideally I'd like "largest stamp" but they're always in order so the shortcut is ok. Desired result: (Order doesn't really matter.) <item><key>Article2</key><stamp>800</stamp></item> <item><key>Article1</key><stamp>180</stamp></item> <item><key>Article3</key><stamp>950</stamp></item> <item><key>Article4</key><stamp>990</stamp></item> <item><key>Article5</key><stamp>999</stamp></item> I'm somewhat confused on how to get this list. Any ideas? I'm using the Saxon processor if it matters.

    Read the article

  • Jquery mobile email form with php script

    - by Pablo Marino
    i'm working in a mobile website and i'm having problems with my email form The problem is that when I receive the mail sent through the form, it contains no data. All the data entered by the user is missing. Here is the form code <form action="correo.php" method="post" data-ajax="false"> <div> <p><strong>You can contact us by filling the following form</strong></p> </div> <div data-role="fieldcontain"> <fieldset data-role="controlgroup" data-mini="true"> <label for="first_name">First Name</label> <input id="first_name" placeholder="" type="text" /> </fieldset> </div> <div data-role="fieldcontain"> <fieldset data-role="controlgroup" data-mini="true"> <label for="last_name">Last name</label> <input id="last_name" placeholder="" type="text" /> </fieldset> </div> <div data-role="fieldcontain"> <fieldset data-role="controlgroup" data-mini="true"> <label for="email">Email</label> <input id="email" placeholder="" type="email" /> </fieldset> </div> <div data-role="fieldcontain"> <fieldset data-role="controlgroup" data-mini="true"> <label for="telephone">Phone</label> <input id="telephone" placeholder="" type="tel" /> </fieldset> </div> <div data-role="fieldcontain"> <fieldset data-role="controlgroup" data-mini="true"> <label for="age">Age</label> <input id="age" placeholder="" type="number" /> </fieldset> </div> <div data-role="fieldcontain"> <fieldset data-role="controlgroup" data-mini="true"> <label for="country">Country</label> <input id="country" placeholder="" type="text" /> </fieldset> </div> <div data-role="fieldcontain"> <fieldset data-role="controlgroup"> <label for="message">Your message</label> <textarea id="message" placeholder="" data-mini="true"></textarea> </fieldset> </div> <input data-inline="true" data-theme="e" value="Submit" data-mini="true" type="submit" /> </form> Here is the PHP code <? /* aqui se incializan variables de PHP */ if (phpversion() >= "4.2.0") { if ( ini_get('register_globals') != 1 ) { $supers = array('_REQUEST', '_ENV', '_SERVER', '_POST', '_GET', '_COOKIE', '_SESSION', '_FILES', '_GLOBALS' ); foreach( $supers as $__s) { if ( (isset($$__s) == true) && (is_array( $$__s ) == true) ) extract( $$__s, EXTR_OVERWRITE ); } unset($supers); } } else { if ( ini_get('register_globals') != 1 ) { $supers = array('HTTP_POST_VARS', 'HTTP_GET_VARS', 'HTTP_COOKIE_VARS', 'GLOBALS', 'HTTP_SESSION_VARS', 'HTTP_SERVER_VARS', 'HTTP_ENV_VARS' ); foreach( $supers as $__s) { if ( (isset($$__s) == true) && (is_array( $$__s ) == true) ) extract( $$__s, EXTR_OVERWRITE ); } unset($supers); } } /* DE AQUI EN ADELANTE PUEDES EDITAR EL ARCHIVO */ /* reclama si no se ha rellenado el campo email en el formulario */ /* aquí se especifica la pagina de respuesta en caso de envío exitoso */ $respuesta="index.html#respuesta"; // la respuesta puede ser otro archivo, en incluso estar en otro servidor /* AQUÍ ESPECIFICAS EL CORREO AL CUAL QUEIRES QUE SE ENVÍEN LOS DATOS DEL FORMULARIO, SI QUIERES ENVIAR LOS DATOS A MÁS DE UN CORREO, LOS PUEDES SEPARAR POR COMAS */ $para ="[email protected];" /* this is not my real email, of course */ /* AQUI ESPECIFICAS EL SUJETO (Asunto) DEL EMAIL */ $sujeto = "Mail de la página - Movil"; /* aquí se construye el encabezado del correo, en futuras versiones del script explicaré mejor esta parte */ $encabezado = "From: $first_name $last_name <$email>"; $encabezado .= "\nReply-To: $email"; $encabezado .= "\nX-Mailer: PHP/" . 
phpversion(); /* con esto se captura la IP del que envío el mensaje */ $ip=$REMOTE_ADDR; /* las siguientes líneas arman el mensaje */ $mensaje .= "NOMBRE: $first_name\n"; $mensaje .= "APELLIDO: $last_name\n"; $mensaje .= "EMAIL: $email\n"; $mensaje .= "TELEFONO: $telephone\n"; $mensaje .= "EDAD: $age\n"; $mensaje .= "PAIS: $country\n"; $mensaje .= "MENSAJE: $message\n"; /* aqui se intenta enviar el correo, si no se tiene éxito se da un mensaje de error */ if(!mail($para, $sujeto, $mensaje, $encabezado)) { echo "<h1>No se pudo enviar el Mensaje</h1>"; exit(); } else { /* aqui redireccionamos a la pagina de respuesta */ echo "<meta HTTP-EQUIV='refresh' content='1;url=$respuesta'>"; } ?> Any help please? Thanks in advance Pablo

    Read the article

  • How can I remove rows with unique values? As in only keeping rows with duplicate values?

    - by user1456405
    Here's the conundrum: I'm a complete and utter noob when it comes to programming. I understand the basics, but am still learning JavaScript. I have a spreadsheet of surveys, in which I need to see how particular users have varied over time. As such, I need to disregard all rows with unique values in a particular column. The data looks like this:

        Response Date             Response_ID    Account_ID    Q.1
        10/20/2011 12:03:43 PM    23655956       1168161       8
        10/20/2011 03:52:57 PM    23660161       1168152       0
        10/21/2011 10:55:54 AM    23672903       1166121       7
        10/23/2011 04:28:16 PM    23694471       1144756       9
        10/25/2011 06:30:52 AM    23732674       1167449       7
        10/25/2011 07:52:28 AM    23734597       1087618       5

    I've found a way to do this in VBA, which sucks as I have to use Excel, per below:

        Sub Del_Unique()
            Application.ScreenUpdating = False
            Columns("B:B").Insert Shift:=xlToRight
            Columns("A:A").Copy Destination:=Columns("B:B")
            i = Application.CountIf(Range("A:A"), "<>") + 50
            If i > 65536 Then i = 65536
            Do
                If Application.CountIf(Range("B:B"), Range("A" & i)) = 1 Then
                    Rows(i).Delete
                End If
                i = i - 1
            Loop Until i = 0
            Columns("B:B").Delete
            Application.ScreenUpdating = True
        End Sub

    But that requires mucking about. I'd really like to do it in Google Spreadsheets with a script that won't have to be changed. The closest I can get is retrieving all duplicate user IDs from the range, but I can't associate that with the row. That code follows:

        function findDuplicatesInSelection() {
          var activeRange = SpreadsheetApp.getActiveRange();
          var values = activeRange.getValues();
          // values that appear at least once
          var once = {};
          // values that appear at least twice
          var twice = {};
          // values that appear at least twice, stored in a pretty fashion!
          var final = [];
          for (var i = 0; i < values.length; i++) {
            var inner = values[i];
            for (var j = 0; j < inner.length; j++) {
              var cell = inner[j];
              if (cell == "") continue;
              if (once.hasOwnProperty(cell)) {
                if (!twice.hasOwnProperty(cell)) {
                  final.push(cell);
                }
                twice[cell] = 1;
              } else {
                once[cell] = 1;
              }
            }
          }
          if (final.length == 0) {
            Browser.msgBox("No duplicates found");
          } else {
            Browser.msgBox("Duplicates are: " + final);
          }
        }

    Anyhow, sorry if this is the wrong place or format, but half of what I've found so far has been from Stack, so I thought it was a good place to start. Thanks!

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
The routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in controller methods rather than HTML generation. There's much improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a small and obvious thing, but it's really important. Today's service back ends are often used by multiple clients/applications, and being able to serve the data format that fits each client best is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single, robust server side HTTP framework that intrinsically understands HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort.

No Brainers for Web API

There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API then Web API makes great sense.

HTTP Services
If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion about what should go where (MVC or API). Perfect fit.

Primary AJAX Backends
If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically very little HTML based logic is served other than bringing up a shell UI, with the data then filled in from the server via AJAX, which means the business logic required for data retrieval as well as data acceptance and validation also lives in the Web API. Perfect fit.

Generic HTTP Endpoints
Another good fit are generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you needed to implement an image server or an upload handler, in the past I'd have implemented that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services', one that makes it easy to add endpoints (via controller methods) or separate them out as more full featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and more well defined place to put generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses it nicely. Great fit.

Mixed HTML and AJAX Applications: Not a clear Choice

For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs.
Web API when dealing with a typical Web application that generates HTML and also uses AJAX for rich client functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion about where the 'trigger point' is for switching to Web API rather than just implementing the functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb, I think, is that if a large chunk of the application's functionality serves data, Web API is a good choice; but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box, it'd be overkill to separate that logic out into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there doesn't mean you have to use it. Web API obviously isn't a full replacement for MVC either, since there isn't the same level of support for serving HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if HTML is part of your API or your application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix the functionality of both into single controllers and not have to make any tradeoffs, but at the moment that's not the case.

Some Issues To think about

Web API is similar to MVC but not the Same
Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different from MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

Code Duplication
I already touched on this in the Mixed HTML and AJAX section, but if you build an MVC application that also exposes a Web API, it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not, though, the same logic is used and there's no easy way to share it. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

AJAX Data or AJAX HTML
In the comments of a recent post, David made some really good points regarding the commonality of MVC and Web API and where each fits. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that.
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While this definitely puts more bytes on the wire, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and the cost was often made up for by not having to render HTML on the client. Server rendered HTML for AJAX templating gives you much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break your HTML logic out into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this approach is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about, especially for AJAX or SPA style applications.

Summary

Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded, or that existing code should be upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

Resources

- ASP.NET Web API
- AspConf Ask the Experts Session (first 5 minutes)

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests, as well as why it's important to follow a goal based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

Test Results – I see some creepy worms!

In Part 2 we put together a web performance test and a load test; let's now run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

- Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.
- After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view, which provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.
- Analyse the load test results of a previously run load test – We'll see this in the section where I discuss comparison between two test runs.

The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart where you can hover over the data to see more details of the test run.

Generate Report => Test Run Comparisons

The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side by side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results

This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts

While load testing the application with an excessive load for a long duration, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might just escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, a garbage collection completes in a fraction of a second. To understand this better let's look at the .NET heap. It is divided into the large object heap and the small object heap: anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: short-lived objects are supposed to live in Gen-0, and long living objects eventually move to Gen-2 as garbage collections go through. As you can see in the picture below, all objects under 85 KB in size are first allocated in Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process is started: it checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, the dead objects in Gen-1 are collected, the rest are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0 - but by this time your garbage collection process has started to take much more time than it usually does. Now, as I mentioned earlier, while garbage collection is running, all page requests that come in during that period are queued up. That may explain why page requests are getting queued up; apart from this, it could also be the case that you are waiting for a long running database process to complete.

Let's explore the heap a bit more… The real crisis case is when objects live long enough to make it to Gen-2 and then die; that is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache, then run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to a state where you start running out of memory, IIS restarts the worker process as a mode of recovery. It is a great way to free up all the memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
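As a rough illustration of the generational behaviour described above, here is a small console sketch (a hypothetical example, not from the original post) that uses GC.GetGeneration and GC.CollectionCount to watch an object being promoted through the generations:

    using System;

    class GcGenerationsDemo
    {
        static void Main()
        {
            // A small object starts its life in Gen-0
            var smallObject = new byte[1024];
            Console.WriteLine("Small object starts in generation: " + GC.GetGeneration(smallObject));

            // A large object (> 85 KB) goes straight to the Large Object Heap,
            // which the runtime reports as the highest generation
            var largeObject = new byte[100 * 1024];
            Console.WriteLine("Large (>85 KB) object generation: " + GC.GetGeneration(largeObject));

            // Each collection the small object survives typically promotes it one generation
            GC.Collect();
            Console.WriteLine("After 1 collection: " + GC.GetGeneration(smallObject)); // typically 1
            GC.Collect();
            Console.WriteLine("After 2 collections: " + GC.GetGeneration(smallObject)); // typically 2

            // CollectionCount reports how many times each generation has been collected so far
            Console.WriteLine("Gen-0 collections so far: " + GC.CollectionCount(0));
            Console.WriteLine("Gen-2 collections so far: " + GC.CollectionCount(2));

            GC.KeepAlive(smallObject);
            GC.KeepAlive(largeObject);
        }
    }

Relating numbers like these to the Object Lifetime view in the profiler makes it easier to see whether your objects are dying young in Gen-0 (cheap) or surviving all the way to Gen-2 before dying (expensive).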
How can you address this? Well, there are two ways of addressing it.

1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'compile as any CPU' mode, the application is built such that it will run in x86 mode when run on an x86 processor and in x64 mode when run on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'compile as any CPU' selection to 'compile as x86' in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows). What that means is that the machine could have 8 GB+ worth of RAM, and even after you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, 'Performance Profiling' is a great starting point; it definitely gets you headed in the right direction. Let's have a look at how.

Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

Now if you fire off the profiling session on your pages you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your application's objects die right after making it to Gen-2, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing the IntelliTrace data, you can drill in and narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best of breed tools in the product.

Caching

One of the main ways to improve performance is caching, which basically means that instead of going to the database for each request, you keep the data on the web server and, when the user asks for it, you serve it from the web server itself. BUT that can have consequences!
Let's look at some code - trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache - a significant performance improvement - EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the sketch below. So, when a user comes in and finds that the cache is empty, the user takes the lock and starts populating the cache - no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users and so on would continue to get data from the cache. BUT if we add another null check after taking the lock and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. Didn't I say the code won't be very intuitive? Maybe you should leave a comment - you don't want another developer to come in, think you're a fresher, and wonder why you are checking the cache key for null twice!

The downside of caching is that you are storing the data outside of the database, and that data could be wrong, because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data - let's say only one entry point for data updates - you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time a user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will have immense performance gains, reducing the hits to the database for searching by 90% or more. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites and price comparison engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It's a perfect trade-off: by micro caching the results the site gains the full performance benefit, but every once in a while it annoys a customer because the fare has expired.
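The original snippets were published as images, so here is a minimal sketch of the pattern described above - the double null check plus lock, with an absolute expiration for micro caching. The SearchService class, the cache key format and the one minute expiration are illustrative assumptions, not code from the original post; HttpRuntime.Cache is the standard ASP.NET cache.

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public class SearchService
    {
        private static readonly object CacheLock = new object();

        // Hypothetical repository call - stands in for the real database query
        private readonly Func<string, List<string>> _queryDatabase;

        public SearchService(Func<string, List<string>> queryDatabase)
        {
            _queryDatabase = queryDatabase;
        }

        public List<string> GetSearchResults(string searchTerm)
        {
            string cacheKey = "search:" + searchTerm;

            // First check - most requests are served straight from the cache
            var results = HttpRuntime.Cache[cacheKey] as List<string>;
            if (results != null)
                return results;

            lock (CacheLock)
            {
                // Second check - another request may have populated the cache
                // while we were waiting for the lock, so don't hit the database again
                results = HttpRuntime.Cache[cacheKey] as List<string>;
                if (results != null)
                    return results;

                results = _queryDatabase(searchTerm);

                // Micro caching: an absolute expiration of 1 minute keeps the data
                // reasonably fresh while absorbing the vast majority of the load
                HttpRuntime.Cache.Insert(
                    cacheKey,
                    results,
                    null,                          // no cache dependency
                    DateTime.UtcNow.AddMinutes(1), // absolute expiration
                    Cache.NoSlidingExpiration);

                return results;
            }
        }
    }

Note that a single static lock like this serializes all cache misses; in a real application you might prefer a lock per cache key, but for a micro cached query that misses at most once per expiry window the simple version is usually good enough.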
But the trade-off works in favour of these sites, as they are still able to process 30+ page requests per second and cater to the site traffic, while maybe losing 1 customer every once in a while to a competitor who is also using a similar caching technique - and what are the odds that the user will not come back to their site sooner or later?

Recap

Resources

Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on Codeplex might also be of interest to you.

The Road Ahead

Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't done so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc. - please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running the test controller/agents in the cloud. See you on the other side! Thank you!

    Read the article
