Search Results

Search found 27337 results on 1094 pages for 't sql'.

Page 449 of 1094

  • Contiguous Time Periods

    It is always better, and more efficient, to maintain referential integrity by using constraints rather than triggers. Sometimes it is not at all obvious how to do this, and the history table, and other temporal data tables, presented problems for checking data that were difficult to solve with constraints. Then Alex Kuznetsov came up with a good solution, and now history tables can benefit from more effective integrity checking. Joe explains...
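
    A minimal sketch of this kind of constraint-only design (the table, columns and data here are hypothetical): each row carries the end date of the period that precedes it, and a CHECK constraint plus a self-referencing foreign key force the periods to chain together without gaps or overlaps.

        -- Hypothetical history table; periods must be contiguous per item.
        CREATE TABLE dbo.PriceHistory (
            ItemId      int   NOT NULL,
            StartDate   date  NOT NULL,
            EndDate     date  NULL,   -- NULL marks the current, open period
            PrevEndDate date  NULL,   -- NULL only for an item's first period
            Price       money NOT NULL,
            CONSTRAINT PK_PriceHistory PRIMARY KEY (ItemId, StartDate),
            -- at most one open period per item
            CONSTRAINT UQ_PriceHistory_End UNIQUE (ItemId, EndDate),
            CONSTRAINT CK_PriceHistory_Valid CHECK (EndDate IS NULL OR StartDate < EndDate),
            -- each period must begin exactly where the previous one ended
            CONSTRAINT CK_PriceHistory_Chain CHECK (PrevEndDate IS NULL OR PrevEndDate = StartDate),
            CONSTRAINT FK_PriceHistory_Prev FOREIGN KEY (ItemId, PrevEndDate)
                REFERENCES dbo.PriceHistory (ItemId, EndDate)
        );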

    Read the article

  • SQL SERVER – Create Primary Key with Specific Name when Creating Table

    It is interesting how sometimes the documentation of simple concepts is not available online. I received an email from one of my readers asking how to create a primary key with a specific name when creating the table itself. He said he knows the method where he can create the table and then [...]
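
    For reference, a quick sketch of the syntax in question, on a hypothetical table: a CONSTRAINT clause inside CREATE TABLE gives the primary key the name you choose instead of an auto-generated one.

        CREATE TABLE dbo.Employees (
            EmployeeId int           NOT NULL,
            FullName   nvarchar(100) NOT NULL,
            -- named explicitly, rather than getting PK__Employees__<hex> by default
            CONSTRAINT PK_Employees PRIMARY KEY CLUSTERED (EmployeeId)
        );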

    Read the article

  • Slow DB Performance. Seems to be memory related.

    - by David
    I am seeing a poorly performing web app with a SQL 2005 back end. The db is on a w2k3 machine with 4 GB RAM. When I run perfmon on it I see the following: page life expectancy is low, consistently under 300, while the buffer cache hit ratio is always 99%+. The target server memory is always 1618304 and the total server memory is always a number just below that. So it seems that it isn't grabbing enough of the available memory. I have AWE enabled, with the Lock Pages in Memory right for the SQL service account, and have set a maximum of 2.25 GB... but it doesn't go near that. When I restart the SQL service the page life expectancy goes much higher, 1000+, and the total server memory starts at 0 and slowly works its way back up to the previous limit. Then it hits the limit and the page life expectancy drops back massively to under 300. So I'm guessing there is something limiting the amount of memory. Any ideas on what that would be and how I can fix it?
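
    One place to start, sketched in T-SQL (the 2304 MB value is just the question's 2.25 GB cap expressed in MB): confirm what the instance thinks its memory cap and AWE settings actually are, since a 'max server memory' value lower than intended produces exactly this kind of ceiling.

        -- check the settings the instance is actually running with
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)';
        EXEC sp_configure 'awe enabled';

        -- raise the cap if it turns out to be lower than intended (value in MB)
        EXEC sp_configure 'max server memory (MB)', 2304;
        RECONFIGURE;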

    Read the article

  • SQL SERVER – Fastest Way to Restore the Database

    A few days ago, I received the following email: “Pinal, we are in an emergency situation. We have a large database of around 80+ GB and its backup is 50+ GB in size. We need to restore this database ASAP and use it; however, restoring the database takes forever. Do you think a compressed backup [...]
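
    As a sketch of what the email is weighing up (database name and paths are hypothetical, and native backup compression requires SQL Server 2008 or later):

        -- a compressed backup is smaller on disk, so the restore has far less I/O to read
        BACKUP DATABASE BigDb
            TO DISK = N'D:\Backup\BigDb.bak'
            WITH COMPRESSION, STATS = 10;

        -- restoring it needs no special option; STATS reports progress every 10 percent
        RESTORE DATABASE BigDb
            FROM DISK = N'D:\Backup\BigDb.bak'
            WITH RECOVERY, STATS = 10;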

    Read the article

  • Windows Azure Cloud Supports SUSE Linux

    The enterprise-level Linux distribution can now run in Windows Azure Virtual Machines. If you're interested in using Microsoft's cloud computing platform to run an open source operating system, Azure now supports openSUSE 12.1, CentOS 6.2, Ubuntu 12.04 and SUSE Linux Enterprise Server 11 SP2. Windows Azure now provides what Microsoft characterizes as Infrastructure as a Service (IaaS) capabilities, not only for the Linux distributions named above, but for Windows Server 2008 R2 and the Windows Server 2012 Release Candidate. If you're a fan of automation, you'll appreciate the ability to use...

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following example of a use case?

    I have a table containing orders. These orders have a lot of related information needed by my current queries (think of the products; the buyer information; the region, country and state of the sale point; and so on). Following a de-normalized approach, I shouldn't put identifiers for these related items in my main orders collection. Instead, I have to repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of his orders).

    Given that premise, I'm committing to maintaining all the data related to an order without a lot of updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and as MongoDB locks at the document level on updates, I would be blocking the entire order at the moment of the update).

    Will I have to replicate all the products' related data too? (i.e. category, maker and optional attributes like color, size…) What if a new feature is requested and I have to make a lot of queries with the products "as the entry point of the query"? (i.e. reports showing the products' sales performance grouped by region, country, or whatever.) Is it fair enough to apply the $unwind operation to my original orders collection? (And what about the performance?) Or should I create another collection with these queries in mind and replicate all the products' information (and their orders) again?

    Wouldn't it be better to store a product_id in the original orders collection, to be more tolerant of requirements changes? (What about emulating JOINs?) Or would the optimal approach be a mixed solution with an RDBMS like MySQL to retrieve the complete data? I mean: store product, user and location identifiers in the orders collection, and have queries in MySQL like getAllUsersDataByIds in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query )

    Read the article

  • SQL SERVER – Disabled Index and Update Statistics

    When we try to update statistics on a table whose clustered index is disabled, it throws an error. Now let us enable only the clustered index and attempt to update the statistics of the table right after that. Have you ever come across the situation where a conversation never gets over and it continues even [...]
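
    A small sketch of the scenario (object names are hypothetical): disabling the clustered index takes the whole table offline, so statistics cannot be updated until the index is rebuilt.

        -- disabling the clustered index makes the table inaccessible
        ALTER INDEX PK_Orders ON dbo.Orders DISABLE;

        UPDATE STATISTICS dbo.Orders;   -- fails while the clustered index is disabled

        -- a rebuild re-enables the index, and the statistics update succeeds again
        ALTER INDEX PK_Orders ON dbo.Orders REBUILD;
        UPDATE STATISTICS dbo.Orders;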

    Read the article

  • SQL Performance Analyzer

    Any activity that may impact a statement's execution plan is a candidate for using SPA to investigate the possible consequences - both good and bad. Steve Callan discusses the workflow and provides a working example.

    Read the article

  • Clustering Strings on the basis of Common Substrings

    - by pk188
    I have around 10000+ strings and have to identify and group all the strings that look similar (I base the similarity on the number of common words between any two given strings). The more common words two strings share, the more similar they are. For instance:

    1. How to make another layer from an existing layer
    2. Unable to edit data on the network drive
    3. Existing layers in the desktop
    4. Assistance with network drive

    In this case, strings 1 and 3 are similar, with the common words "existing" and "layer", and strings 2 and 4 are similar, with the common words "network drive" (eliminating stop words). The steps I'm following are:

    1. Iterate through the data set
    2. Do a row by row comparison
    3. Find the common words between the strings
    4. Form a cluster where the number of common words is greater than or equal to 2 (eliminating stop words); if the number of common words is less than 2, put the string in a new cluster
    5. Assign the rows either to the existing clusters or form a new one, depending upon the common words
    6. Continue until all the strings are processed

    I am implementing the project in C#, and have got to step 3. However, I'm not sure how to proceed with the clustering. I have researched a lot about string clustering but could not find any solution that fits my problem. Your inputs would be highly appreciated.

    Read the article

  • Multiple database accesses or one massive access?

    - by DudeOnRock
    What is a better approach when it comes to performance and optimal resource utilization: accessing a database multiple times through AJAX to get only the exact information needed when it is needed, or performing one access to retrieve an object that holds all the information that might be needed, with a high probability that not all of it is actually needed? I know how to benchmark the actual queries, but I don't know how to test what is best when it comes to database performance when thousands of users are accessing the database simultaneously, and how connection pooling comes into play.

    Read the article

  • A Tale of Identifiers

    Identifiers aren't locators, and they aren't pointers or links either. They are a logical concept in a relational database, and, unlike the more traditional methods of accessing data, don't derive from the way that data gets stored. Identifiers uniquely identify members of the set, and it should be possible to validate and verify them. Celko somehow involves watches and taxi cabs to illustrate the point.

    Read the article

  • State Transition Constraints

    Data validation in a database is a lot more complex than checking whether a string parameter really is an integer. The commercial world is full of complex rules for sequences of procedures: fixed or variable lifespans, warranties, commercial offers and bids. All this requires considerable subtlety to prevent bad data getting in, and, if it does, to locate and fix the problem. Joe Celko shows how useful a state transition graph can be, and how essential it can become once the time aspect is added.
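
    A minimal sketch of the idea (tables and states are hypothetical): keep the legal transitions as rows in a lookup table, and let a compound foreign key reject any history row whose (previous, current) pair is not on the list.

        -- the legal moves of the state machine, stored as data
        CREATE TABLE dbo.StateChanges (
            previous_state varchar(15) NOT NULL,
            current_state  varchar(15) NOT NULL,
            PRIMARY KEY (previous_state, current_state)
        );

        INSERT INTO dbo.StateChanges (previous_state, current_state)
        VALUES ('Created', 'Created'),   -- self-loop so an initial row has a valid pair
               ('Created', 'Active'),
               ('Active', 'Suspended'),
               ('Active', 'Closed'),
               ('Suspended', 'Active'),
               ('Suspended', 'Closed');

        -- history rows can only record transitions that exist above
        CREATE TABLE dbo.AccountHistory (
            account_id     int  NOT NULL,
            change_date    date NOT NULL,
            previous_state varchar(15) NOT NULL,
            current_state  varchar(15) NOT NULL,
            PRIMARY KEY (account_id, change_date),
            FOREIGN KEY (previous_state, current_state)
                REFERENCES dbo.StateChanges (previous_state, current_state)
        );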

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    Background story: One of my friends recently called and asked if I had spare time to look at his database and give him some performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and [...]
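
    For the scenario in the title, a sketch (database name and path are hypothetical): if the database was shut down cleanly, SQL Server can attach the data file and rebuild the missing log.

        -- attach the mdf and let SQL Server create a fresh log file
        CREATE DATABASE MyDb
            ON (FILENAME = N'D:\Data\MyDb.mdf')
            FOR ATTACH_REBUILD_LOG;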

    Read the article

  • Software solution from the 2000's, should I attempt to patch or remake the whole thing?

    - by ShadowScripter
    I was sent out to discuss a system that a certain company is currently using and what should be done with it. The company manufactures various carton displays. The system was developed to keep track of clients, orders and prices. A lot has happened since the system was created, and the system is now, as the manager described it, "locked up" and "problematic", which I translate as "not dynamic" and "unstable".

    Some info about the system:

    - It was developed around the year 2000
    - Fairly small system: 2-5 users, 6 forms, ~8 tables with average quantities of data
    - Built on early Visual Basic; forms created with the drag and drop designer; the interface is basically just a window with a menu and some forms
    - Uses an MSSQL database (SQL 2005 server) to store data and an ODBC driver to query; data was migrated from Excel before this system, and before Excel it was handled, calculated and written by hand on paper
    - Users work in a Microsoft XP environment (and up)

    Their main problem is that they can't adjust and calculate prices, can't add new carton types etc. correctly anymore, because they can't (or rather, they don't know how to) touch the data on the server. I suggested 3 possible solutions:

    1. Attempt to patch the current system
    2. Create a fresh new interface (preferably a similar environment, VB.net or VB based)
    3. Bring it back to an Excel solution, considering it is such a small system

    There might be more options, but these are the ones I could think of. My questions are: What should I recommend and why? What are or could be the pros and cons of these alternatives? Are there other (possibly better) alternatives?

    Read the article

  • SQL Server Connectivity Portal

    - by Enrique Lima
    We all love one-stop portals :-) Browsing around MSDN, I came across this one for connectivity. Yes, a one-stop portal: find info on connecting using a variety of technologies, plus good guides. http://msdn.microsoft.com/en-us/sqlserver/connectivity.aspx

    Read the article

  • What are good/fast methods to pull data from a database using JavaScript?

    - by Yatrix
    I'm pretty new to web technologies and I am creating a filter control that will have cascading controls. We are doing a lot of this through JavaScript and are debating the best route to take to the database. HTTPHandlers, WebServices and AJAX are all being considered (or a combination of them). We want to be able to handle a million rows in theory, so it has to be scalable to that. We are going through JavaScript because our page must not do post-backs, if you're wondering. I'm asking from an architectural standpoint, but will take any useful information. Links, control suggestions - anything you have, I'll happily listen to.

    Read the article

  • Full Text Search Strategy For My Website

    - by Hosea146
    I have a website that allows users to search for items in various categories. Each category is a separate area (page) of my website. For example, some categories might be cars, bikes, books etc. At the moment a user has to search for an item by going to the page (for example, cars) and searching for the car they want. I would like to allow the user to search for anything on my site, from my main home page. At the moment, each page (category) has its own set of tables, and I don't really want to turn Full Text Search on for each table (20+ of them) and search each table individually when a search is done. This is going to be slow and tedious. What I'm thinking of doing is creating a single table that will hold all searchable information for each category of item (when an item is saved in its respective table, I would copy all searchable information over to my 'Search' table). I would then turn Full Text Search on for that table, and search that table. Does this sound reasonable? Is there a better way? I've never used Full Text Search before, so this is new to me.
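
    That consolidated design looks roughly like this (all names here are hypothetical): one denormalized search table, a single full-text index on it, and one CONTAINS query instead of twenty separate searches.

        -- one searchable row per item, regardless of category
        CREATE TABLE dbo.SearchItems (
            SearchItemId int IDENTITY    NOT NULL,
            Category     varchar(50)     NOT NULL,  -- 'Cars', 'Bikes', 'Books', ...
            SourceId     int             NOT NULL,  -- key of the row in its category table
            SearchText   nvarchar(max)   NOT NULL,  -- searchable text copied here on save
            CONSTRAINT PK_SearchItems PRIMARY KEY (SearchItemId)
        );

        CREATE FULLTEXT CATALOG SiteSearch;
        CREATE FULLTEXT INDEX ON dbo.SearchItems (SearchText)
            KEY INDEX PK_SearchItems ON SiteSearch;

        -- a single query serves the site-wide search box
        SELECT Category, SourceId
        FROM dbo.SearchItems
        WHERE CONTAINS(SearchText, N'"mountain bike"');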

    Read the article

  • Microsoft Azure Outage

    The beginning of Azure's troubles was documented last Wednesday at approximately 1:45 a.m. GMT when its service management component malfunctioned. The initial message on the outage, posted by Microsoft on its Azure service dashboard, read: We are experiencing an issue with Windows Azure service management. Customers will not be able to carry out service management operations. Microsoft continued with its updates at 5 a.m. GMT, when it assured users that fewer than 3.8 percent of hosted services had been affected by the outage. The company also said it was doing its best to stop the issue ...

    Read the article

  • Reliable Storage Systems for SQL Server

    By validating the IO path before commissioning the production database system, and performing ongoing validation through page checksums and DBCC checks, you can hopefully avoid data corruption altogether, or at least nip it in the bud. If corruption occurs, then you have to take the right decisions fast to deal with it. Rod Colledge explains how a pessimistic mindset can be an advantage.
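
    The two ongoing checks the summary mentions look roughly like this in T-SQL (the database name is hypothetical):

        -- write a checksum on every page so corruption is caught on read
        ALTER DATABASE MyDb SET PAGE_VERIFY CHECKSUM;

        -- scheduled end-to-end consistency check
        DBCC CHECKDB (MyDb) WITH NO_INFOMSGS, ALL_ERRORMSGS;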

    Read the article

  • Designing Efficient SQL: A Visual Approach

    Sometimes it is a great idea to push away the keyboard when tackling the problems of an ill-performing, complex query, and take up pencil and paper instead. By drawing a diagram to show all of the tables involved, the joins, the volume of data involved, and the indexes, you'll see more easily the relative efficiency of the possible paths that your query could take through the tables.

    Read the article

  • C# display DB table structure

    - by user3529643
    I have a question. My code is the following:

        public partial class Form1 : Form
        {
            public OleDbConnection datCon;
            public string MyDataFile;
            public ArrayList tblArray;
            public ArrayList fldArray;

            public Form1()
            {
                InitializeComponent();
                lvData.Clear();
                lvData.View = View.Details;
                lvData.LabelEdit = false;
                lvData.FullRowSelect = true;
                lvData.GridLines = true;
            }

            private void DataConnection()
            {
                MyDataFile = Application.StartupPath + @"\studenti.mdb";
                string MyCon = @"provider=microsoft.jet.oledb.4.0;data source=" + MyDataFile;
                try
                {
                    datCon = new OleDbConnection(MyCon);
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
                FillTreeView();
            }

            private void GetTables(OleDbConnection cnn)
            {
                try
                {
                    cnn.Open();
                    DataTable schTable = cnn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables,
                        new Object[] { null, null, null, "TABLE" });
                    tblArray = new ArrayList();
                    foreach (DataRow datrow in schTable.Rows)
                    {
                        tblArray.Add(datrow["TABLE_NAME"].ToString());
                    }
                    cnn.Close();
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }

            private void GetFields(OleDbConnection cnn, string tabNode)
            {
                try
                {
                    string tabName = tabNode;
                    cnn.Open();
                    DataTable schTable = cnn.GetOleDbSchemaTable(OleDbSchemaGuid.Columns,
                        new Object[] { null, null, tabName });
                    fldArray = new ArrayList();
                    foreach (DataRow datRow in schTable.Rows)
                    {
                        fldArray.Add(datRow["COLUMN_NAME"].ToString());
                    }
                    cnn.Close();
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }

            private void FillTreeView()
            {
                tvData.Nodes.Clear();
                tvData.Nodes.Add("Database");
                tvData.Nodes[0].Tag = "RootDB";
                GetTables(datCon);

                // add one node per table
                for (int i = 0; i < tblArray.Count; i++)
                {
                    tvData.Nodes[0].Nodes.Add(tblArray[i].ToString());
                    tvData.Nodes[0].Nodes[i].Tag = "Tables";
                }

                // add the field nodes under each table
                for (int i = 0; i < tblArray.Count; i++)
                {
                    GetFields(datCon, tblArray[i].ToString());
                    for (int j = 0; j < fldArray.Count; j++)
                    {
                        tvData.Nodes[0].Nodes[i].Nodes.Add(fldArray[j].ToString());
                        tvData.Nodes[0].Nodes[i].Nodes[j].Tag = "Fields";
                    }
                }

                this.tvData.ContextMenuStrip = contextMenuStrip1;
                contextMenuStrip1.ItemClicked += contextMenuStrip1_ItemClicked;
            }

            public void FillListView(OleDbConnection cnn, string tabName)
            {
                lblTableName.Text = tabName;
                string strField = "SELECT * FROM [" + tabName + "]";
                // initialise the cmdRead object
                OleDbCommand cmdRead = new OleDbCommand(strField, cnn);
                cnn.Open();
                OleDbDataReader datReader = cmdRead.ExecuteReader();
                // fill the ListView
                while (datReader.Read())
                {
                    ListViewItem objListItem = new ListViewItem(datReader.GetValue(0).ToString());
                    for (int c = 1; c < datReader.FieldCount; c++)
                    {
                        objListItem.SubItems.Add(datReader.GetValue(c).ToString());
                    }
                    lvData.Items.Add(objListItem);
                }
                datReader.Close();
                cnn.Close();
            }

            private void ViewToolStripMenuItem_Click(object sender, EventArgs e)
            {
                DataConnection();
            }

            public void tvData_AfterExpand(object sender, System.Windows.Forms.TreeViewEventArgs e)
            {
                if (e.Node.Tag.ToString() == "Tables")
                {
                    // column headers: share the ListView width across the fields
                    int fldCount = e.Node.GetNodeCount(false);
                    int n = lvData.Width;
                    double wid = n / fldCount;   // column width
                    for (int c = 0; c < fldCount; c++)
                    {
                        lvData.Columns.Add(e.Node.Nodes[c].Text, (int)wid, HorizontalAlignment.Left);
                    }
                    // get the table name and show its rows
                    string tabName = e.Node.Text;
                    FillListView(datCon, tabName);
                }
            }

            public void button1_Click(object sender, EventArgs e)
            {
                // TODO: display the structure of the table selected in the treeview
            }
        }

    I have a treeview populated with the tables (nodes) from my database, and a listview which is populated with the data from my tables when I click on a table. As you can see, I have a button1 on my form. When I click it, I want it to display the structure of the table I selected in my treeview (a treeview node). Not too many details: just the names of the columns in my table, the column types, and the primary keys. I've tried to follow many tutorials but I can't seem to manage it.

    Read the article

  • How to use a database to generate multiple folder content pages? [migrated]

    - by VenomVipes
    Scenario: I am trying to build a mobile entertainment portal. It will enable users to download music and movies to their cell phones... Problem example: Suppose I upload 100 folders of songs, each folder holding one album. I want a way to generate a page with all the folder names (album names) on it. If the user clicks an album, they should be taken to a page listing all the songs in that album, and clicking on any song name will let them download it. Can this be done, or will I have to manually design all three pages for each album? If I do that, it's time consuming and it will also be difficult to change anything like the footer, header...
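
    A sketch of the data-driven approach the question is reaching for (all names are hypothetical; shown in T-SQL, though the same idea works in MySQL or any other RDBMS): the pages become templates, and the database supplies their content.

        -- one row per album and one row per song replace the hand-made pages
        CREATE TABLE dbo.Albums (
            AlbumId   int IDENTITY NOT NULL CONSTRAINT PK_Albums PRIMARY KEY,
            AlbumName nvarchar(100) NOT NULL
        );

        CREATE TABLE dbo.Songs (
            SongId   int IDENTITY NOT NULL CONSTRAINT PK_Songs PRIMARY KEY,
            AlbumId  int NOT NULL CONSTRAINT FK_Songs_Albums REFERENCES dbo.Albums (AlbumId),
            SongName nvarchar(200) NOT NULL,
            FilePath nvarchar(260) NOT NULL   -- where the uploaded file lives
        );

        -- the album index page renders this result set
        SELECT AlbumId, AlbumName FROM dbo.Albums ORDER BY AlbumName;

        -- the song-list page renders this one for whichever album was clicked
        DECLARE @AlbumId int = 1;   -- supplied by the page the user clicked
        SELECT SongName, FilePath FROM dbo.Songs WHERE AlbumId = @AlbumId;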

    Read the article

  • How to Install XAMPP on Windows XP

    To begin, visit the XAMPP for Windows home page, located at: http://www.apachefriends.org/en/xampp-windows.html. You will have several options for which flavor of XAMPP you wish to install, including the Installer, Zip, and 7zip versions. For simplicity's sake, this tutorial will use the simplest method: the installer. Click on the Installer link and you will be redirected to the program's SourceForge page. You may get a pop-up like the one below; if so, click Run: Next, you will be prompted to choose an installation language. Choose English (or whichever language you wish) and click the quo...

    Read the article
