Search Results

Search found 34093 results on 1364 pages for 'database architecture'.

Page 99/1364 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • Customers and suppliers database design issue

    - by hectorsq
    I am developing a web application in which I will have customers and suppliers. Initially I thought of using a Customers table and a Suppliers table. Then, when I was thinking about bank transactions, I noticed that each transaction needs to refer to a customer or a supplier, so I thought of using a single table named Businesses in which I would save both customers and suppliers. If I use Customers and Suppliers tables, then when I want to list the bank transactions I will have to search both tables to get the company name. If I use a Businesses table I will have to use a business-type column and have the union of possible fields for all business types. Any suggestions on the design?
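
    One commonly suggested design here is the supertype/subtype (or "party") pattern: a shared table that transactions reference, plus subtype tables for the type-specific fields. A minimal sketch, with all table and column names assumed for illustration:

        -- Shared supertype table; bank transactions reference businesses(id),
        -- so listing them needs no UNION across two tables.
        CREATE TABLE businesses (
          id   INT PRIMARY KEY,
          name VARCHAR(100) NOT NULL
        );
        -- Subtype tables carry only the customer- or supplier-specific columns.
        CREATE TABLE customers (
          business_id INT PRIMARY KEY REFERENCES businesses(id)
          -- customer-only fields here
        );
        CREATE TABLE suppliers (
          business_id INT PRIMARY KEY REFERENCES businesses(id)
          -- supplier-only fields here
        );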

    Read the article

  • Voting Script, Possibility of Simplifying Database Queries

    - by Sev
    I have a voting script which stores the post_id and the user_id in a table, to determine whether a particular user has already voted on a post and disallow them in the future. To do that, I am running the following three queries: SELECT user_id, post_id FROM votes_table WHERE post_id=? AND user_id=? If that returns no rows, then: UPDATE post_table SET votecount = votecount-1 WHERE post_id = ? Then: SELECT votecount FROM post_table WHERE post_id=? to display the new votecount on the web page. Is there a better way to do this? Three queries are seriously slowing down the user's voting experience.
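
    One way to cut this down is to let a uniqueness constraint do the duplicate check. A minimal sketch, assuming MySQL and a UNIQUE key on (post_id, user_id):

        -- The INSERT is silently rejected for repeat voters; run the UPDATE only
        -- if the INSERT reported one affected row. If the page already shows the
        -- old count, the new count can be computed client-side, so the final
        -- SELECT disappears as well.
        INSERT IGNORE INTO votes_table (post_id, user_id) VALUES (?, ?);
        UPDATE post_table SET votecount = votecount - 1 WHERE post_id = ?;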

    Read the article

  • Comma separated values in a database field

    - by John Doe
    I have a products table. Each row in that table corresponds to a single product and it's identified by a unique Id. Now each product can have multiple "codes" associated with it. For example: Id | Code ---------------------- 0001 | IN,ON,ME,OH 0002 | ON,VI,AC,ZO 0003 | QA,PS,OO,ME What I'm trying to do is create a stored procedure so that I can pass in codes like "ON,ME" and have it return every product that contains the "ON" or "ME" code. Since the codes are comma separated, I don't know how I can split and search them. Is this possible using only TSQL? Edit: It's a mission-critical table. I don't have the authority to change it.
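
    A sketch of one pure-TSQL approach, assuming SQL Server 2016 or later (where STRING_SPLIT is available; older versions need a hand-rolled split function) and with the table and procedure names assumed:

        -- Wrapping both sides in commas prevents partial matches,
        -- e.g. 'ON' will not match a hypothetical 'ONX' code.
        CREATE PROCEDURE FindProductsByCodes @codes VARCHAR(200) AS
        SELECT DISTINCT p.Id, p.Code
        FROM products p
        JOIN STRING_SPLIT(@codes, ',') s
          ON ',' + p.Code + ',' LIKE '%,' + s.value + ',%';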

    Read the article

  • Using Multiple Databases

    - by sergiuoala
    A company is hired by another company to help in a certain field. So I created the following tables: Companies: id, company name, company address. Administrators (in relation with companies): id, company_id, username, email, password, fullname. Each company has some workers in it, and I store data about the workers. Each worker has a profession, an Agreement Type signed, and some other common things. Now, the parent tables and the data in them for workers (Agreement Types, Professions, Other Common Things) are going to be the same for each company. Should I create one new database for each company, or store all data in the same database? Thanks.
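
    Because the lookup data is shared, the usual suggestion is a single multi-tenant database in which per-company rows carry a company_id. A minimal sketch, with column names assumed for illustration:

        -- Shared lookup tables (professions, agreement_types) stay global;
        -- each worker row is scoped to its company.
        CREATE TABLE workers (
          id                INT PRIMARY KEY,
          company_id        INT NOT NULL REFERENCES companies(id),
          profession_id     INT NOT NULL REFERENCES professions(id),
          agreement_type_id INT NOT NULL REFERENCES agreement_types(id),
          fullname          VARCHAR(100) NOT NULL
        );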

    Read the article

  • Simple question on a database query

    - by GK
    I have been asked in an interview to write a SQL query which fetches the first three records with the highest value in some column of a table. I had written a query which fetched all the records with the highest value, but didn't get how exactly I can get only the first three records of those. Could you help me with this? Thanks.
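
    A minimal sketch of the usual answers, assuming a table t whose relevant column is named score:

        SELECT TOP 3 * FROM t ORDER BY score DESC;    -- SQL Server
        SELECT * FROM t ORDER BY score DESC LIMIT 3;  -- MySQL / PostgreSQL

    If ties must be included, a ranking function such as DENSE_RANK is the more careful answer.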

    Read the article

  • When is BIG big enough for a database?

    - by David ???
    I'm developing a Java application that has performance at its core. I have a list of some 40,000 "final" objects, i.e., I have initialization input data of 40,000 vectors. This data is unchanged throughout the program's run. I am always performing lookups against a single ID property to retrieve the proper vectors. Currently I am using a HashMap over a sub-sample of 1,000 vectors, but I'm not sure it will scale to production. When is BIG actually big enough for the use of a DB? One more thing: an SQLite DB is a viable option, as no concurrency is involved, so I guess the "threshold" for DB use is perhaps lower.
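
    For scale, a sketch of what the SQLite fallback could look like (names assumed): a primary-key lookup uses the table's implicit index, so 40,000 rows is comfortably small.

        CREATE TABLE vectors (
          id      INTEGER PRIMARY KEY,  -- lookups use the implicit rowid index
          payload BLOB NOT NULL
        );
        SELECT payload FROM vectors WHERE id = ?;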

    Read the article

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data. The Set-up Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully. The Challenge I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are: You have to achieve a certain number of points in each category to move up. If you get the number of points needed in Cat 8, but still have other work to do to complete the level, any new Cat 8 points count toward your overall score, but don't "roll over" into the next level. The number of Categories is small (five currently) and unlikely to change often, but by no means absolutely fixed. The number of points needed to level-up will vary per level, probably by a formula, or perhaps a lookup table. So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches: Possible Solutions Add a column to the users table for each category and reset them all to zero each time a user levels-up. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a Many-to-Many version of #1.) Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the User table. Pros & Cons (1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care about that. (2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges. (3) Is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like: SELECT categoryId, SUM(points) FROM UserTasks WHERE userId = {user.id} AND countsTowardLevel = {user.level} GROUP BY categoryId Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas.

    Read the article

  • mysql database design: threads and replies

    - by ajsie
    In my forum I have threads and replies. One thread has multiple replies, but a reply can also be a reply to another reply (like Google Wave). Because of that, a reply has to have a column "reply_id" so it can point to the parent reply. But then the "top-level" replies (the replies directly under the thread) will have no parent reply. So how can I fix this? How should the columns be in the reply table (and the thread table)? At the moment it looks like this: threads: id, title, body. replies: id, thread_id (all replies will belong to a thread), reply_id (here lies the problem: the top-level replies won't have a parent reply), body. What could a smart design look like to enable replying to a reply?
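
    The standard fix is a nullable parent column (an adjacency list). A minimal sketch, with the parent column renamed parent_reply_id for clarity:

        CREATE TABLE replies (
          id              INT PRIMARY KEY,
          thread_id       INT NOT NULL REFERENCES threads(id),
          parent_reply_id INT NULL REFERENCES replies(id),  -- NULL = top-level reply
          body            TEXT
        );

    Top-level replies simply leave parent_reply_id NULL; retrieving deep trees then needs recursive queries or an extra scheme such as a path column or nested sets.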

    Read the article

  • How to store opening weekdays in a database

    - by JoaoPedro
    I have a group of checkboxes where the user selects some of the weekdays (the opening days of a store). How can I save the selected days? Should I save something like 0111111 (the zero means closed on Sunday) in a single field and split the result when reading the data? Or create a field for each day and store 0 or 1 in each (weird)?
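
    A third common option is a single integer bitmask: one field, still queryable. A sketch assuming MySQL, with bit 0 = Sunday through bit 6 = Saturday:

        CREATE TABLE stores (
          id        INT PRIMARY KEY,
          open_days TINYINT UNSIGNED NOT NULL  -- bit 0 = Sunday ... bit 6 = Saturday
        );
        -- 62 = binary 0111110: open Monday-Friday, closed Saturday and Sunday.
        INSERT INTO stores VALUES (1, 62);
        -- Which stores are open on Wednesday (bit 3, value 8)?
        SELECT id FROM stores WHERE (open_days & 8) <> 0;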

    Read the article

  • How to map one class against multiple tables with SQLAlchemy?

    - by tote
    Let's say that I have a database structure with three tables that look like this: items - item_id - item_handle attributes - attribute_id - attribute_name item_attributes - item_attribute_id - item_id - attribute_id - attribute_value I would like to be able to do this in SQLAlchemy: item = Item('item1') item.foo = 'bar' session.add(item) session.commit() item1 = session.query(Item).filter_by(handle='item1').one() print item1.foo # => 'bar' I'm new to SQLAlchemy and I found this in the documentation (http://www.sqlalchemy.org/docs/05/mappers.html#mapping-a-class-against-multiple-tables): j = join(items, item_attributes, items.c.item_id == item_attributes.c.item_id). \ join(attributes, item_attributes.c.attribute_id == attributes.c.attribute_id) mapper(Item, j, properties={ 'item_id': [items.c.item_id, item_attributes.c.item_id], 'attribute_id': [item_attributes.c.attribute_id, attributes.c.attribute_id], }) It only adds item_id and attribute_id to Item and it's not possible to add attributes to an Item object. Is what I'm trying to achieve possible with SQLAlchemy? Is there a better way to structure the database to get the same behaviour of "dynamic columns"?
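
    Underneath, reading item1.foo is the classic entity-attribute-value lookup; in raw SQL the mapping has to express something like this sketch:

        SELECT ia.attribute_value
        FROM items i
        JOIN item_attributes ia ON ia.item_id = i.item_id
        JOIN attributes a       ON a.attribute_id = ia.attribute_id
        WHERE i.item_handle = 'item1' AND a.attribute_name = 'foo';

    A plain multi-table join mapper cannot turn attribute rows into one column each, which is why only the key columns show up; a dictionary-style collection (e.g. SQLAlchemy's association proxy over a mapped collection) is the usual route to this "dynamic columns" behaviour.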

    Read the article

  • How to retrieve items from a database in C#

    - by Poppy
    I have 3 tables: "pics", "shows" and "showpics". I want to be able to edit the table "shows". In order to do this I need to retrieve the pictures that the show contains (the pictures are stored in the table "pics"; the "showpics" table acts as a link). Does anyone have any ideas? I'm completely lost and have no idea where to even start. Thanks
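
    The usual shape of the query is a join through the link table. A sketch, assuming showpics carries show_id and pic_id columns (the column names are guesses):

        SELECT p.*
        FROM pics p
        JOIN showpics sp ON sp.pic_id = p.id
        JOIN shows s     ON s.id = sp.show_id
        WHERE s.id = @showId;

    In C# this would typically run as a parameterized SqlCommand (or through an ORM), with @showId supplied as a parameter.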

    Read the article

  • What kind of database to use in C#

    - by Chris
    I'm writing a program in C# that will need to store a few Data Tables on the user's computer and load them back when he restarts the program: Up to about 10000 records consisting of text and integers. I don't want to use a CSV file, and I had some trouble with SQLite. Are there any other good options to try?

    Read the article

  • SQLite - executeUpdate exception not caught when database does not exist? (Java)

    - by giant91
    So I was purposely trying to break my program, and I've succeeded. I deleted the SQLite database the program uses, while the program was running, after I had already created the connection. Then I attempted to update the database as seen below. Statement stmt; try { stmt = Foo.con.createStatement(); stmt.executeUpdate("INSERT INTO "+table+" VALUES (\'" + itemToAdd + "\')"); } catch(SQLException e) { System.out.println("Error: " + e.toString()); } The problem is, it didn't catch the exception and continued to run as if the database had been updated successfully. Meanwhile the database didn't even exist at that point, since this was after I deleted it. Doesn't it check whether the database still exists when updating? Do I have to check the database connection manually every time I update, to ensure that the database wasn't corrupted/deleted? Is this the way it is normally done, or is there a simpler/more robust approach? Thank you.

    Read the article

  • when to introduce an application services tier in an n-tier application

    - by user20358
    I am developing a web-based application whose primary objective is to fetch data from the database, display it on the UI, take in user inputs and write them back to the database. The application is not going to be doing any industrial-strength algorithm crunching, but it will be receiving a very high number of hits at peak times (described below), and these will change through the day. The layers are your typical Presentation, Business, Data. The data layer is taken care of by the database server. The business layer will contain the DAL component to access the database server over TCP. The choices I have for separating these layers into tiers are: (1) keep the presentation and business layers on the same tier, or (2) put the presentation layer on a separate tier by itself and the business layer on a separate tier by itself. In the case of choice 2, the business layer will be accessed by the presentation layer using a WCF service, either over HTTP or TCP. I don't see any heavy processing being done in the business layer, so I am leaning towards option 1 above. For the same reason, I also feel that adding a new tier will only introduce network latency. However, in terms of scalability, in case I need to scale up or scale out, which is the better way to go? This application will need to be able to support up to 6 million users an hour. There will be a reasonable amount of data in each user session, storing the user's preferences and other details. I will be using page-level caching as well.

    Read the article

  • Advice on designing web application with a 40+ year lifetime

    - by user2708395
    Scenario: Currently, I am a part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms built by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are: workflow management of forms, schedule management of forms, security/role-based management, a reporting engine, and mobile/tablet support. Situation: Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated and there are jobs just to synchronize data since the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. After I presented my findings to the PM, it was decided that the architect will be removed when his contract ends. I have been given the task of re-architecting this application. My team consists of me and one junior programmer. We have no other resources. We have been granted a 6-month requirement freeze in which we can focus on rebuilding this system. I suggested using a CMS like Drupal, but for policy reasons at the client's organization, the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting. Questions: What design considerations will make the system more "future proof"? What experiences have you had in designing such systems - both failures and successes? What questions should be asked of the client/PM to make the system more "future proof"?

    Read the article

  • Example: Cross Cutting Concerns of an Application

    A little while ago I was given an opportunity to design and implement a new system that sent data via an HTTP POST method and then processed the results that were returned so that they could be inserted into a database. My system had eight core concerns that it needed to fulfill: database access, data entities, worker, result processing, process flow manager, email/notification, error handling, and logging. Of these eight, five were actually cross-cutting concerns: database access, data entities, email/notification, error handling, and logging. These five cross-cutting concerns were determined after I created an aspect-oriented model to help identify the system components that could be factored out into separate components. These separated components would then be included in the system so that they could be used by various other components. These five components allow all of the other components to access the database, store data, send notifications, handle errors, and log all system events. Thus, these components are used to share unique aspects of the system via their implementation. The use of aspect-oriented architecture greatly helped me define what components I needed to create and what each of those components could do. It also showed how all of the other aspects depended on each other, so that each component did not have to re-implement code that was already created in the existing system.

    Read the article

  • Need some suggestions on my software's architecture. [Code review]

    - by Sergio Tapia
    I'm making an open source C# library for other developers to use. My key concern is ease of use. This means using intuitive names, intuitive method usage and such. This is the first time I've done something with other people in mind, so I'm really concerned about the quality of the architecture. Plus, I wouldn't mind learning a thing or two. :) I have three classes: Downloader, Parser and Movie. I was thinking that it would be best to only expose the Movie class of my library and have Downloader and Parser remain hidden from invocation. Ultimately, I see my library being used like this. using FreeIMDB; public void Test() { var MyMovie = Movie.FindMovie("The Matrix"); //Now MyMovie would have all its fields set and ready for the big show. } Can you review how I'm planning this, and point out any wrong judgement calls I've made and where I could improve? Remember, my main concern is ease of use. Movie.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing; namespace FreeIMDB { public class Movie { public Image Poster { get; set; } public string Title { get; set; } public DateTime ReleaseDate { get; set; } public string Rating { get; set; } public string Director { get; set; } public List<string> Writers { get; set; } public List<string> Genres { get; set; } public string Tagline { get; set; } public string Plot { get; set; } public List<string> Cast { get; set; } public string Runtime { get; set; } public string Country { get; set; } public string Language { get; set; } public static Movie FindMovie(string Title) { Movie film = new Movie(); Parser parser = Parser.FromMovieTitle(Title); film.Poster = parser.Poster(); film.Title = parser.Title(); film.ReleaseDate = parser.ReleaseDate(); //And so on and so forth. return film; } public static Movie FindKnownMovie(string ID) { Movie film = new Movie(); Parser parser = Parser.FromMovieID(ID); film.Poster = parser.Poster(); film.Title = parser.Title(); film.ReleaseDate = parser.ReleaseDate(); //And so on and so forth. return film; } } } Parser.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using HtmlAgilityPack; namespace FreeIMDB { /// <summary> /// Provides a simple and intuitive way of searching for movies and actors on IMDB. /// </summary> class Parser { private Downloader downloader = new Downloader(); private HtmlDocument Page; #region "Page Loader Events" private Parser() { } public static Parser FromMovieTitle(string MovieTitle) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindMovie(MovieTitle); return newParser; } public static Parser FromActorName(string ActorName) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindActor(ActorName); return newParser; } public static Parser FromMovieID(string MovieID) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindKnownMovie(MovieID); return newParser; } public static Parser FromActorID(string ActorID) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindKnownActor(ActorID); return newParser; } #endregion #region "Page Parsing Methods" public string Poster() { //Logic to scrape the Poster URL from the Page element of this. return null; } public string Title() { return null; } public DateTime ReleaseDate() { return default(DateTime); } #endregion } } ----------------------------------------------- Do you guys think I'm heading towards a good path, or am I setting myself up for a world of hurt later on? 
    My original thought was to separate the downloading, the parsing and the actual populating to easily have an extensible library. Imagine if one day the website changed its HTML; I would then only have to modify the parsing class without touching the Downloader.cs or Movie.cs classes. Thanks for reading and for helping!

    Read the article

  • Is this distributed database server idea feasible?

    - by David
    I often use SQLite for creating simple programs in companies. The database is placed on a file server. This works fine as long as there are not more than about 50 users working against the database concurrently (though it depends on whether they are reads or writes). Once there are more than this, they will notice a slowdown if there is a lot of concurrent writing on the server, as lots of time is spent on locks, and there is nothing like a cache since there is no database server. The advantage of not needing a database server is that the time to set up something like a company Wiki or similar can be reduced from several months to just days. It often takes several months because some IT department needs to order the server, it needs to conform to the company policies and security rules, and it needs to be placed in the outsourced server hosting facility, which screws up and places it in the wrong location, etc. Therefore, I thought of an idea to create a distributed database server. The process would be as follows: A user on a company computer edits something on a Wiki page (which uses this database as its backend); to do this he reads a file on the local hard disk stating the IP address of the last desktop computer to act as the database server. He then tries to contact this computer directly via TCP/IP. If it does not answer, then he will read a file on the file server stating the IP address of the last desktop computer to act as the database server. If this server does not answer either, his own desktop computer will become the database server and register its IP address in the same file. The SQL update statement can then be executed, and other desktop computers can connect to it directly. The point of this architecture is that the higher the load, the better it will function, as each desktop computer will always know the IP address of the database server. Also, using this setup, I believe that a database placed on a file server could serve hundreds of desktop computers instead of the current 50 or so. I also do not believe that the load on the single desktop computer which has become the database server will ever be noticeable, as there will be no hard disk operations on this desktop, only on the file server. Is this idea feasible? Does it already exist? What kind of database could support such an architecture?

    Read the article

  • Using jQuery to Insert a New Database Record

    - by Stephen Walther
    The goal of this blog entry is to explore the easiest way of inserting a new record into a database using jQuery and .NET. I’m going to explore two approaches: using Generic Handlers and using a WCF service (In a future blog entry I’ll take a look at OData and WCF Data Services). Create the ASP.NET Project I’ll start by creating a new empty ASP.NET application with Visual Studio 2010. Select the menu option File, New Project and select the ASP.NET Empty Web Application project template. Setup the Database and Data Model I’ll use my standard MoviesDB.mdf movies database. This database contains one table named Movies that looks like this: I’ll use the ADO.NET Entity Framework to represent my database data: Select the menu option Project, Add New Item and select the ADO.NET Entity Data Model project item. Name the data model MoviesDB.edmx and click the Add button. In the Choose Model Contents step, select Generate from database and click the Next button. In the Choose Your Data Connection step, leave all of the defaults and click the Next button. In the Choose Your Data Objects step, select the Movies table and click the Finish button. Unfortunately, Visual Studio 2010 cannot spell movie correctly :) You need to click on Movy and change the name of the class to Movie. In the Properties window, change the Entity Set Name to Movies. Using a Generic Handler In this section, we’ll use jQuery with an ASP.NET generic handler to insert a new record into the database. A generic handler is similar to an ASP.NET page, but it does not have any of the overhead. It consists of one method named ProcessRequest(). Select the menu option Project, Add New Item and select the Generic Handler project item. Name your new generic handler InsertMovie.ashx and click the Add button. Modify your handler so it looks like Listing 1: Listing 1 – InsertMovie.ashx using System.Web; namespace WebApplication1 { /// <summary> /// Inserts a new movie into the database /// </summary> public class InsertMovie : IHttpHandler { private MoviesDBEntities _dataContext = new MoviesDBEntities(); public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; // Extract form fields var title = context.Request["title"]; var director = context.Request["director"]; // Create movie to insert var movieToInsert = new Movie { Title = title, Director = director }; // Save new movie to DB _dataContext.AddToMovies(movieToInsert); _dataContext.SaveChanges(); // Return success context.Response.Write("success"); } public bool IsReusable { get { return true; } } } } In Listing 1, the ProcessRequest() method is used to retrieve a title and director from form parameters. Next, a new Movie is created with the form values. Finally, the new movie is saved to the database and the string “success” is returned. Using jQuery with the Generic Handler We can call the InsertMovie.ashx generic handler from jQuery by using the standard jQuery post() method. 
The following HTML page illustrates how you can retrieve form field values and post the values to the generic handler: Listing 2 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Add Movie</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input name="title" /> <br /> <label>Director:</label> <input name="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { $.post("InsertMovie.ashx", $("form").serialize(), insertCallback); }); function insertCallback(result) { if (result == "success") { alert("Movie added!"); } else { alert("Could not add movie!"); } } </script> </body> </html>     When you open the page in Listing 2 in a web browser, you get a simple HTML form: Notice that the page in Listing 2 includes the jQuery library. The jQuery library is included with the following SCRIPT tag: <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> The jQuery library is included on the Microsoft Ajax CDN so you can always easily include the jQuery library in your applications. You can learn more about the CDN at this website: http://www.asp.net/ajaxLibrary/cdn.ashx When you click the Add Movie button, the jQuery post() method is called to post the form data to the InsertMovie.ashx generic handler. Notice that the form values are serialized into a URL encoded string by calling the jQuery serialize() method. The serialize() method uses the name attribute of form fields and not the id attribute. Notes on this Approach This is a very low-level approach to interacting with .NET through jQuery – but it is simple and it works! And, you don’t need to use any JavaScript libraries in addition to the jQuery library to use this approach. The signature for the jQuery post() callback method looks like this: callback(data, textStatus, XmlHttpRequest) The second parameter, textStatus, returns the HTTP status code from the server. I tried returning different status codes from the generic handler with an eye towards implementing server validation by returning a status code such as 400 Bad Request when validation fails (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html ). I finally figured out that the callback is not invoked when the textStatus has any value other than “success”. Using a WCF Service As an alternative to posting to a generic handler, you can create a WCF service. You create a new WCF service by selecting the menu option Project, Add New Item and selecting the Ajax-enabled WCF Service project item. Name your WCF service InsertMovie.svc and click the Add button. 
Modify the WCF service so that it looks like Listing 3: Listing 3 – InsertMovie.svc using System.ServiceModel; using System.ServiceModel.Activation; namespace WebApplication1 { [ServiceBehavior(IncludeExceptionDetailInFaults=true)] [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class MovieService { private MoviesDBEntities _dataContext = new MoviesDBEntities(); [OperationContract] public bool Insert(string title, string director) { // Create movie to insert var movieToInsert = new Movie { Title = title, Director = director }; // Save new movie to DB _dataContext.AddToMovies(movieToInsert); _dataContext.SaveChanges(); // Return movie (with primary key) return true; } } }   The WCF service in Listing 3 uses the Entity Framework to insert a record into the Movies database table. The service always returns the value true. Notice that the service in Listing 3 includes the following attribute: [ServiceBehavior(IncludeExceptionDetailInFaults=true)] You need to include this attribute if you want to get detailed error information back to the client. When you are building an application, you should always include this attribute. When you are ready to release your application, you should remove this attribute for security reasons. Using jQuery with the WCF Service Calling a WCF service from jQuery requires a little more work than calling a generic handler from jQuery. Here are some good blog posts on some of the issues with using jQuery with WCF: http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/ http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/ http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx http://www.west-wind.com/Weblog/posts/896411.aspx http://www.west-wind.com/weblog/posts/324917.aspx http://professionalaspnet.com/archive/tags/WCF/default.aspx The primary requirement when calling WCF from jQuery is that the request use JSON: The request must include a content-type:application/json header. Any parameters included with the request must be JSON encoded. Unfortunately, jQuery does not include a method for serializing JSON (Although, oddly, jQuery does include a parseJSON() method for deserializing JSON). Therefore, we need to use an additional library to handle the JSON serialization. The page in Listing 4 illustrates how you can call a WCF service from jQuery. 
Listing 4 – Default2.aspx <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Add Movie</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { title: $("#title").val(), director: $("#director").val() }; // JSONify the data data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Insert", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result result = result["d"]; if (result === true) { alert("Movie added!"); } else { alert("Could not add movie!"); } } </script> </body> </html> There are several things to notice about Listing 4. First, notice that the page includes both the jQuery library and Douglas Crockford’s JSON2 library: <script src="Scripts/json2.js" type="text/javascript"></script> You need to include the JSON2 library to serialize the form values into JSON. You can download the JSON2 library from the following location: http://www.json.org/js.html When you click the button to submit the form, the form data is converted into a JavaScript object: // Convert the form into an object var data = { title: $("#title").val(), director: $("#director").val() }; Next, the data is serialized into JSON using the JSON2 library: // JSONify the data var data = JSON.stringify(data); Finally, the form data is posted to the WCF service by calling the jQuery ajax() method: // Post it $.ajax({   type: "POST",   contentType: "application/json; charset=utf-8",   url: "MovieService.svc/Insert",   data: data,   dataType: "json",   success: insertCallback }); You can’t use the standard jQuery post() method because you must set the content-type of the request to be application/json. Otherwise, the WCF service will reject the request for security reasons. For details, see the Scott Guthrie blog post: http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx The insertCallback() method is called when the WCF service returns a response. This method looks like this: function insertCallback(result) {   // unwrap result   result = result["d"];   if (result === true) {       alert("Movie added!");   } else {     alert("Could not add movie!");   } } When we called the jQuery ajax() method, we set the dataType to JSON. That causes the jQuery ajax() method to deserialize the response from the WCF service from JSON into a JavaScript object automatically. The following value is passed to the insertCallback method: {"d":true} For security reasons, a WCF service always returns a response with a “d” wrapper. 
The following line of code removes the “d” wrapper: // unwrap result result = result["d"]; To learn more about the “d” wrapper, I recommend that you read the following blog posts: http://encosia.com/2009/02/10/a-breaking-change-between-versions-of-aspnet-ajax/ http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/ Summary In this blog entry, I explored two methods of inserting a database record using jQuery and .NET. First, we created a generic handler and called the handler from jQuery. This is a very low-level approach. However, it is a simple approach that works. Next, we looked at how you can call a WCF service using jQuery. This approach required a little more work because you need to serialize objects into JSON. We used the JSON2 library to perform the serialization. In the next blog post, I want to explore how you can use jQuery with OData and WCF Data Services.

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak’s new version (2.1) revealed to me a few additional interesting features. For those of you who haven’t read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance of SQL Server applications, as well as their scalability, without making code changes to the applications or to the databases. SafePeak performs database dynamic caching, by caching in memory the result sets of queries and stored procedures while keeping all those caches correct and up to date. Cached queries are retrieved from SafePeak's RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO) and the application or the infrastructure gets better scalability. The SafePeak solution is hosted either within your cloud servers, hosted servers or your enterprise servers, as part of the application architecture. Connection of the application is done via a change of connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more about SafePeak's architecture and how it works, I suggest reading this vendor’s webpage: SafePeak Architecture. More interesting new features in SafePeak 2.1 In my previous review I covered the first four things I noticed in the new SafePeak (check out my article “SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration”): cache setup and fine-tuning (a critical part of getting good caching results), database templates, choosing which database to cache, and monitoring and analysis options by SafePeak. Since then I have had a chance to play with SafePeak some more, and here is what I found. 5. Analysis of SQL performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and after it is used again there are statistics for that pattern. An important part of this product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis makes the above information available for: queries and procedures, tables, views, databases and even the instance level. The stats are now shown for all arriving queries: both read queries (that can be cached) and all types of updates like DMLs, DDLs, DCLs, and even session settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame. 6. 
    Logon trigger, for making sure nothing corrupts SafePeak cache data: If you have an application with many parts, many servers, and many possible locations that can actually update the database, or the SQL Server is accessible to many DBAs or software engineers, each of them can access the database directly and make changes without going through SafePeak – this can potentially corrupt the data stored in the SafePeak cache. To keep the SafePeak cache correct, all updates need to arrive through SafePeak; if a DBA accesses the database directly and makes some changes, for example, then SafePeak will simply not know about it and will not clean the SafePeak cache. In the new version, SafePeak introduced a new feature called “Logon Trigger” to solve the above challenge. With a special click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that actually monitors all connections and informs SafePeak of any connection that is not coming from SafePeak. In the SafePeak dashboard there is an interface that allows you to control which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and actually lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in the SafePeak dashboard can be done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab. Other features: 7. User management: the ability to grant permissions to someone without changing the configuration, so they can use SafePeak only as a performance analysis tool. 8. Better reports for performance analysis, using 15-minute resolution charts. 9. Caching of client cursors. 10. Support for IPv6. Summary SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges. This is especially true if your apps are packaged or 3rd party, since no code changes are made. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free fully functional trial www.safepeak.com/download and actually provides one-on-one assistance during the trial. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Survey: Which new database platforms are you adopting?

    Database technologies are always improving; which database platforms will you be using tomorrow? Red Gate wants to stay ahead to make sure you have the tools you need to do awesome work. Help us by completing this short survey. Compare and sync database schemas: Whether creating new databases or updating older ones, SQL Compare means no object gets left behind. It’s the gold standard, and you can try it free.

    Read the article

  • Database platform migration from Windows-32bit to Linux-64bit

    - by [email protected]
    We have a customer which has all its core business databases on RAC over Windows OS. Last year they were affected by a virus that destroyed the registry, and all their RAC environments were "OUT OF ORDER"; the result: a thousand people on vacation for a day. They were distrustful of Linux, but afterwards an agreement was reached to migrate their Enterprise Manager from Windows to Linux (OMS and Repository). Here is how we demonstrated how powerful and easy RMAN is for migrating databases across platforms. First, check that the target platform is available from the source: SQL> select platform_name from v$db_transportable_platform; PLATFORM_NAME: Microsoft Windows IA (32-bit), Linux IA (32-bit), HP Tru64 UNIX, Linux IA (64-bit), HP Open VMS, Microsoft Windows IA (64-bit), Linux 64-bit for AMD, Microsoft Windows 64-bit for AMD, Solaris Operating System (x86). Check database objects, such as directories, that can change across platforms; also check external tables. Start up the source database in read-only mode and run the following RMAN script: RMAN> connect target / RMAN> convert database on target platform convert script 'c:/temp/convert_grid.rman' transport script 'c:/TEMP/transporta_grid.sql' new database 'gridbd' format 'c:/temp/gridmydb%U' db_file_name_convert 'C:\oracle\oradata\grid','/oracle/gridbd/data2/data'; (Notice the path change in db_file_name_convert.) Move from source to target: the pfile, the new scripts, external table files, bfiles and data files. Check the pfile and ensure that the paths are OK. Create a temporary control file to connect RMAN, then execute the RMAN script: RMAN> connect target / RMAN> @/home/oracle/pboixeda/win2lnx.rman Shut down the instance and remove the temporary control files. Recreate the control file(s), taking care with the paths used. Execute the transport script, transporta_grid.sql. Because we were moving from a 32-bit architecture to a 64-bit architecture, there is a bug reported in note 386990.1 and we had to recreate OLAP; check the note for more details. Alter or recreate all necessary objects and launch utlrp. After this experience with Linux they are on the way to migrating all their RAC environments from 10gR2 on Windows to 11gR2 on Linux 64-bit. Hope it helps

    Read the article

  • Force.com presents Database.com; SQL Azure/Amazon RDS unfazed

    - by Sarang
    At the DreamForce 2010 event in San Francisco, Force.com unveiled their next big thing in the Fat SaaS portfolio: "Database.com". I am still wondering how much they would've shelled out for that domain name. Now why would an already established SaaS player foray into a key building block like the database? Potentially allowing enterprises to build apps that do not utilize the Force.com stack! One key reason is being seen as the Fat SaaS player with every trick in the SaaS space under its belt. You want CRM, come hither; want a custom-development PaaS-like solution, welcome home (VMForce); want all your apps to talk to a cloud DB and minimize latency by having it reside closer to your cloud apps? You've come to the right place, sire! The other is potentially killing the foray of smaller DB players, like Oracle (not surprisingly, the Database.com offering is a highly customized and scalable Oracle database), into the lucrative SaaS DB marketplace. The feature set promised looks great out of the box for someone who likes to visualize cool new architectures. The ground realities are certainly going to be a lot different considering the SOAP/REST-style access patterns in lieu of the comfortable old shoe of SQL. Microsoft suffered heavily with its SDS (SQL Data Services) offering in early 2009 and had to pull the plug on the product, only to reintroduce it as a simple SQL Server in the cloud, SQL Windows Azure. Though MSFT is playing it cool by providing OData semantics to work with SQL Windows Azure, satisfying at least some needs of web-style access to a DB. The other features, like social data models including profiles, status updates and feeds, seem interesting as well (although I believe social is just one of the aspects of large-scale collaborative computing). All these features start "free"; for devs that's good news, but the good news stops here. The overall pricing model of $ per user per transactions/month is highly disproportionate compared to Amazon RDS (based on MySQL) or SQL Windows Azure (based on MSSQL). Roger Jennings of OakLeaf did an interesting comparo based on 3, 10, 100 and 500 users, and it turns out that Database.com, going by the current understanding, is way too expensive for the services on offer. The offering may not impact the decision for DotNet shops mulling their cloud strategy, or even some Java/MySQL shops thinking about Amazon RDS; however, for enterprises that have already invested in other Force.com offerings this could be a very important piece in the cloud-strategy jigsaw. One which would address a key cloud DB issue, "latency", for them; at least it will help to have the DB in the neighborhood. The tooling and "SQL-like" access provider drivers (think ODBC/JDBC) will be available later this year. Progress Software has already announced their JDBC driver stack for Database.com. It remains to be seen how effective the overall solution proves to be in the longer run, but for starters it's an important decision towards consolidating Force.com's already strong positioning in the SaaS space. As always, contrasting views are welcome! :)

    Read the article
