Search Results

Search found 33029 results on 1322 pages for 'database queries'.


  • SSMS Built in Reports for Server and Database Monitoring

    - by GrumpyOldDBA
    This is a long post which I hope will format correctly – I’ve placed a PDF version for download here: http://www.grumpyolddba.co.uk/sql2008/ssmsreports_grumpyolddba.pdf I sometimes discover that the built-in reports for SQL Server within SSMS are an unknown; sometimes this is because not all the right components were installed during the server build, other times it's because there has generally never been great reporting for the DBA from the SQL team, so no-one expects to find anything useful for...(read more)

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc.). Then I import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields.

    After the insert is done, I run a validation function on the file that checks each field in each row against a bunch of criteria (trimmed length, isnumeric, range checks, etc.). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out).

    I have a 5,000 line test file. The upload, initial check, and import take about 5-6 seconds. The detailed error check and insert into the Errors table take about 5-8 seconds (this file has about 1,200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the pages take 50 seconds to return.

    My UPDATE stored proc uses some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

        UPDATE dbo.Records
        SET dbo.Records.file_ID = CASE @field_name
                WHEN 'file_ID' THEN @field_value
                ELSE file_ID END,
        .
        . (all 50 varchar field CASE statements here)
        .
        WHERE dbo.Records.record_ID = @record_ID

    Is there any way I can help my performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs, and things are just slow (it's a quad proc server with 8GB RAM)? Any suggestions appreciated.
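
    One approach worth sketching: stage the resets and apply them set-based, one UPDATE per column instead of one stored-proc call per field, inside a single transaction. This is a hedged sketch, not the poster's code; the #FieldResets table and its columns are assumptions invented for illustration.

        -- Hypothetical staging table: one row per (record, field) that needs a reset
        CREATE TABLE #FieldResets (
            record_ID   INT          NOT NULL,
            field_name  VARCHAR(128) NOT NULL,
            field_value VARCHAR(255) NULL  -- blank or the cleaned-up value
        );

        BEGIN TRANSACTION;

        -- One set-based UPDATE per column that actually has resets,
        -- instead of 750 individual procedure calls:
        UPDATE r
        SET    r.file_ID = fr.field_value
        FROM   dbo.Records AS r
        JOIN   #FieldResets AS fr ON fr.record_ID = r.record_ID
        WHERE  fr.field_name = 'file_ID';

        -- ...repeat for the other columns present in #FieldResets...

        COMMIT TRANSACTION;

    Even keeping the per-row proc, wrapping the 750 calls in one explicit transaction avoids paying a log flush per statement, which is often the bulk of a 40-45 second run.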

  • SQL Server 2005: when copying a table structure to another database, "CONSTRAINT" keywords are lost

    - by StreamT
    Snippet of original table:

        CREATE TABLE [dbo].[Batch](
            [CustomerDepositMade] [money] NOT NULL
                CONSTRAINT [DF_Batch_CustomerDepositMade] DEFAULT (0)

    Snippet of copied table:

        CREATE TABLE [dbo].[Batch](
            [CustomerDepositMade] [money] NOT NULL,

    Code for copy database:

        Server server = new Server(SourceSQLServer);
        Database database = server.Databases[SourceDatabase];
        Transfer transfer = new Transfer(database);
        transfer.CopyAllObjects = true;
        transfer.CopySchema = true;
        transfer.CopyData = false;
        transfer.DropDestinationObjectsFirst = true;
        transfer.DestinationServer = DestinationSQLServer;
        transfer.CreateTargetDatabase = true;
        Database ddatabase = new Database(server, DestinationDatabase);
        ddatabase.Create();
        transfer.DestinationDatabase = DestinationDatabase;
        transfer.Options.IncludeIfNotExists = true;
        transfer.TransferData();

  • LinqToSql Select to a class then do more queries

    - by fyjham
    I have a LINQ query running with multiple joins and I want to pass it around as an IQueryable<T> and apply additional filters in other methods. The problem is that I can't work out how to pass around a var data type and keep it strongly typed, and if I try to put it in my own class (e.g.: .Select((a,b) => new MyClass(a,b))) I get errors when I try to add Where clauses later, because my class has no translation into SQL. Is there any way I can do one of the following:

    1. Make my class map to SQL?
    2. Make the var data-type implement an interface (so I can pass it around as though it's that)?
    3. Something I haven't thought of that'll solve my issue?

    Example:

        public void Main() {
            using (DBDataContext context = new DBDataContext()) {
                var result = context.TableAs.Join(
                    context.TableBs,
                    a => a.BID,
                    b => b.ID,
                    (a, b) => new { A = a, B = b }
                );
                result = addNeedValue(result, 4);
            }
        }

        private ???? addNeedValue(???? result, int value) {
            return result.Where(r => r.A.Value == value);
        }

    PS: I know in my example I can flatten out the function easily, but in the real thing it'd be an absolute mess if I tried.

  • How to merge data from two separate Access 2007 databases

    - by DiegoMaK
    Hi, I have two identical databases with the same structure: database A on computer A and database B on computer B. The data in database A (a.accdb) and database B (b.accdb) are different: in database A I have IDs 1, 2, 3 and in database B I have IDs 4, 5, 6. I need to merge the data of these databases into one database (A or B, it doesn't matter) so the final database looks like IDs 1, 2, 3, 4, 5, 6. I'm looking for an easy way to do this, because I have many tables and doing it with union queries is tedious. I looked, for example, for a backup option for data only, without the schema, as in PostgreSQL and many other RDBMSs, but I don't see that option in Access 2007. PS: only some tables could have duplicate values (I guess the PK doesn't allow copying a duplicate value, and all other values will be copied fine). If I'm wrong please correct me. Thanks for your help.
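
    A hedged sketch of the per-table copy in Access SQL, run from inside a.accdb (the table name and file path are placeholders): the IN clause points the SELECT at the other database file, and the append is repeated per table.

        INSERT INTO Customers
        SELECT *
        FROM Customers IN 'C:\Data\b.accdb';

    Since every table needs the same statement, a short VBA loop over the TableDefs collection that builds this string for each table would avoid writing the query by hand every time.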

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune.  That’s enough data to start looking at the results in aggregate.  The first thing I want to look at is logical reads.  This is the easiest metric to identify and fix.

    When you upload a trace, I rank each statement based on the total number of logical reads.  I also calculate each statement’s percentage of the total logical reads.  I do the same thing for CPU, duration and logical writes.  When you view a statement you can see all the details (shown in a screenshot in the original post).  One such statement consumed 61.4% of the total logical reads on the system while we were tracing it.  I also wanted to see the distribution of reads across statements; that graph is in the original post.  On average, the highest ranked statement consumed just under 50% of the reads on the system.

    When I tune a system, I’m usually starting in one of two modes: this “piece” is slow or the whole system is slow.  If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune them.  You can make that individual piece faster but you may not affect the whole system.  When you’re trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate.  Fixing those will make them faster and will leave more disk throughput for the rest of the queries.

    Here are some of the things I’ve learned querying this data:

    - The highest ranked query averages just under 50% of the total reads on the system.
    - The top 3 ranked queries average 73% of the total reads on the system.
    - The top 10 ranked queries average 91% of the total reads on the system.

    Remember these are averages across all the traces that have been uploaded.  And I’m guessing that people mainly upload traces where there are performance problems, so your mileage may vary.

    I also learned that slow queries aren’t the problem.  Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler.  That picked out individual queries, but those rarely ran often enough to put a large load on the system.  If you look at the execution count by rank you’d see that the highest ranked queries also have the highest execution counts.  The graph would look very similar to the one above but flatter.  These queries don’t look that bad individually but run so often that they hog the disk capacity.

    The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster.  Tuning individually slow queries will help those specific queries but won’t have much impact on the system as a whole.

  • Database and query to store and retrieve friend list [migrated]

    - by amr Kamboj
    I am developing a module in a website to save and retrieve a friend list. I am using Zend Framework, and for DB handling I am using Doctrine (ORM). There are two models: 1) users, which stores all the users, and 2) my_friends, which stores the friend list (a reference table with an M:M relation on user). The structure of my_friends is the following:

        id    user_id    friend_id    approved
        10    20         25           1
        10    21         25           1
        10    22         30           1
        10    25         30           1

    The Doctrine query to retrieve the friend list is the following:

        $friends = Doctrine_Query::create()
            ->from('my_friends as mf')
            ->leftJoin('mf.users as friend')
            ->where("mf.user_id = 25")
            ->andWhere("mf.approved = 1");

    Suppose I am viewing user no. 25. With this query I only get user no. 30, whereas user no. 25 is also an approved friend of users no. 20 and 21. Please guide me: what should the query be to find all friends, and is there any need to change the DB structure?
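
    The underlying problem is that a friendship row can hold user 25 in either column, so the lookup has to check both directions. A hedged sketch of the SQL the query needs to express (the same OR condition would go into the Doctrine where() call):

        SELECT CASE WHEN mf.user_id = 25 THEN mf.friend_id
                    ELSE mf.user_id
               END AS friend_id
        FROM   my_friends AS mf
        WHERE  (mf.user_id = 25 OR mf.friend_id = 25)
          AND  mf.approved = 1;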

  • Browser Game Database structure

    - by John Svensson
    users
        id
        username
        password
        email
        userlevel

    characters
        id
        userid
        level
        strength
        exp
        max_exp

    map
        id
        x
        y

    This is what I have so far. I want to be able to implement and place different NPCs at locations on my map. I am thinking of an npc_entities table; would that be a good approach? And then I would have an npc_list table with details such as how much damage, what level, etc. the NPC has. Give me some ideas on how I can structure the map, map entities, and NPCs.
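
    One hedged way to model it, following the two-table idea from the question (all column choices here are assumptions for illustration): npc_list describes each NPC type once, and npc_entities places instances of those types at map coordinates.

        CREATE TABLE npc_list (
            id     INT PRIMARY KEY,
            name   VARCHAR(64) NOT NULL,
            level  INT NOT NULL,
            damage INT NOT NULL
        );

        CREATE TABLE npc_entities (
            id     INT PRIMARY KEY,
            npc_id INT NOT NULL REFERENCES npc_list(id),
            map_id INT NOT NULL REFERENCES map(id),
            x      INT NOT NULL,
            y      INT NOT NULL
        );

    Stats that vary per placed NPC (current hit points, respawn timer) belong on npc_entities; stats shared by every copy of a type belong on npc_list.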

  • Restoring a SharePoint content database

    - by jude
    Hi, my WSS_Content database had got corrupt, and my PC was infected by a virus. I had no backup of my WSS_Content database. So I copied the corrupt database to a separate disk, formatted, and reinstalled SharePoint with SQL Server 2005 as before (I'm using SharePoint 2007). I used the Sytools Sharepoint Recovery tool that I found on the net, which helped me restore my corrupt WSS_Content database. Now I want to set this content database as "the" content database for my newly installed SharePoint. I tried the steps that I found in this link: http://www.stationcomputing.com/scblogspace/Lists/Posts/Post.aspx?ID=40 but I get stuck at step 3. Can anybody help me? I am really in a big mess and would appreciate any help. Thanks, Jude Aloysius

  • Most efficient way to update a MySQL Database on a Linux host with that of an ASP.Net Form on Window

    - by NJTechGuy
    My kind webhost (1and1) royally asked me to go elsewhere to do something like this. I have 2 sites. One of them was developed by a .Net programmer. Now I am contracted to implement a PHP site and fetch data from the .Net site. There is an ASP.Net form that a customer fills in, and when they hit submit, the data gets stored in a SQL Server DB. How do I also store the same data in MySQL in parallel? I cannot directly use a database connector with ASP.Net, since MySQL connectivity is not supported on 1and1 Windows hosting (biz account, no less!). What I thought of is to publish an RSS feed of entries on the ASP.Net site and routinely scrape that data into MySQL on the Linux host. It is overkill, I know, and not efficient. I thought I would pick the best brains on SOF to get a different, more efficient opinion. Thanks in advance guys...

  • My Lucene queries only ever find one hit

    - by Bob
    I'm getting started with Lucene.Net (stuck on version 2.3.1). I add sample documents with this:

        Dim indexWriter = New IndexWriter(indexDir, New Standard.StandardAnalyzer(), True)
        Dim doc = New Document()
        doc.Add(New Field("Title", "foo", Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.NO))
        doc.Add(New Field("Date", DateTime.UtcNow.ToString, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.NO))
        indexWriter.AddDocument(doc)
        indexWriter.Close()

    I search for documents matching "foo" with this:

        Dim searcher = New IndexSearcher(indexDir)
        Dim parser = New QueryParser("Title", New StandardAnalyzer())
        Dim Query = parser.Parse("foo")
        Dim hits = searcher.Search(Query)
        Console.WriteLine("Number of hits = " + hits.Length.ToString)

    No matter how many times I run this, I only ever get one result. Any ideas?

  • Appengine (python) returns empty for valid queries

    - by Grant
    I've got an app with around half a million 'records', each of which only stores three fields. I'd like to look up records by a string field with a query, but I'm running into problems. If I visit the console page, manually view a record and save it (without making changes), it shows up in a query:

        SELECT * FROM wordEntry WHERE wordStr = 'SomeString'

    If I don't do this, I get 'no results'. Does App Engine need time to update? If so, how much? (I was also having trouble batch deleting and modifying data, but I was able to break the problem up into smaller chunks.)

  • MySQL: insert data from multiple SELECT queries

    - by daulex
    What I've got working, and what I need to improve on:

        INSERT form_data (id, data_id, email)
        SELECT fk_form_joiner_id AS data_id,
               value AS email
        FROM wp_contactform_submit_data
        WHERE form_key = 'your-email'

    This just gets the emails. Now this is great, but not enough, as I have a good few different values of form_key that I need to import into different columns. I'm aware that I can do it via PHP using foreach loops and updates, but this needs to be done purely in MySQL. So how do I do something like:

        INSERT form_data (id, data, email, name, surname, etc)
        SELECT [..], SELECT [..] ....

    Please help.
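
    A hedged sketch of the usual pure-MySQL approach: pivot the key/value rows into columns with MAX(CASE ...) grouped by the submission id, so one INSERT ... SELECT fills all the columns at once. 'your-email' and fk_form_joiner_id come from the question; the other form_key values and target columns are placeholders.

        INSERT INTO form_data (data_id, email, name, surname)
        SELECT d.fk_form_joiner_id,
               MAX(CASE WHEN d.form_key = 'your-email'   THEN d.value END) AS email,
               MAX(CASE WHEN d.form_key = 'your-name'    THEN d.value END) AS name,
               MAX(CASE WHEN d.form_key = 'your-surname' THEN d.value END) AS surname
        FROM   wp_contactform_submit_data AS d
        GROUP BY d.fk_form_joiner_id;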

  • Generated queries contain schema and catalog name

    - by stacker
    I've the same problem as described here. In the generated SQL, Informix expects catalog:schema.table, but what's actually generated is catalog.schema.table, which leads to a syntax error. Setting:

        hibernate.default_catalog=
        hibernate.default_schema=

    had no effect. I even removed schema and catalog from the table annotation; this caused a different issue: the query looked like ..table. The same happens when setting catalog and schema to an empty string.

    Versions:
        Seam 2.1.2
        Hibernate Annotations 3.3.1.GA.CP01
        Hibernate 3.2.4.sp1.cp08
        Hibernate EntityManager 3.3.2.GA
        JBoss 4.3 (similar to 4.2.3)

  • Nested SQL queries in Rails with :has_and_belongs_to_many

    - by Godisemo
    Hello, in my application I need the next task that has not already been done by a user. I have three models: a Book that has many Tasks, and a User that has and belongs to many Tasks. The tasks_users table contains all completed tasks, so I need to write a complex query to find the next task to perform. I have come up with two solutions in pure SQL that work, but I can't translate them to Rails; that's what I need help with.

        SELECT * FROM `tasks`
        WHERE `tasks`.`book_id` = @book_id
          AND `tasks`.`id` NOT IN (
              SELECT `tasks_users`.`task_id`
              FROM `tasks_users`
              WHERE `tasks_users`.`user_id` = @user_id)
        ORDER BY `tasks`.`date` ASC
        LIMIT 1;

    and equally without a nested select:

        SELECT * FROM tasks
        LEFT JOIN tasks_users
               ON tasks_users.task_id = tasks.id
              AND tasks_users.user_id = @user_id
        WHERE tasks_users.task_id IS NULL
          AND tasks.book_id = @book_id
        LIMIT 1;

    This is what I have done in Rails with the MetaWhere plugin:

        book.tasks.joins(:users.outer).where(:users => {:id => nil})

    but I can't figure out how to get the current user in there too. Thanks for any help!

  • Database preference for a network-based C# Windows application [on hold]

    - by Sinoop Joy
    I'm planning to develop a C# Windows-based application for an academy. The academy will have different instances of the application running on different machines, and the database should have shared access: all the application instances can update, delete, or insert. I've not done any network-based application before. Can anybody give me a useful link on where to start? Which database would give the best performance with all the required features I mentioned for this scenario?

  • SQL - Outer Join 2 queries?

    - by Stuav
    I have two queries. Query 1 gives me this result:

        Day        New_Users
        01-Jan-12  45
        02-Jan-12  36

    and so on. Query 2 gives me this result:

        Day        Retained_Users
        01-Jan-12  33
        02-Jan-12  30

    and so on. I want a new query that will join these together and read:

        Day        New_Users  Retained_Users
        01-Jan-12  45         33
        02-Jan-12  36         30

    Do I use some sort of outer join?
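
    A hedged sketch of one way to stitch the two result sets together: treat each existing query as a derived table and join on Day. A FULL OUTER JOIN keeps days that appear in only one result; on MySQL, which lacks FULL OUTER JOIN, the usual substitute is a LEFT JOIN unioned with a RIGHT JOIN.

        SELECT COALESCE(n.Day, r.Day) AS Day,
               n.New_Users,
               r.Retained_Users
        FROM  ( /* query 1 here */ ) AS n
        FULL OUTER JOIN
              ( /* query 2 here */ ) AS r
            ON n.Day = r.Day;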

  • Why does SQLite not bring back any results from my database?

    - by tigermain
    This is my first SQLite-based iPhone app and I am trying to get it to read a menu hierarchy from my database. The database appears to be registered fine, as the compiled statement doesn't error (I tried putting in a valid table name to test), but for some reason sqlite3_step(compiledStmt) never equals SQLITE_ROW, as if to suggest there is no data in there (which there is).

        sqlite3 *database;
        menu = [[NSMutableArray alloc] init];
        if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
            const char *sqlStmt = "SELECT * FROM Menu";
            sqlite3_stmt *compiledStmt;
            if (sqlite3_prepare_v2(database, sqlStmt, -1, &compiledStmt, NULL) == SQLITE_OK) {
                while (sqlite3_step(compiledStmt) == SQLITE_ROW) {
                    NSString *aTitle = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStmt, 1)];
                    MenuItem *menuItem = [[MenuItem alloc] init];
                    menuItem.title = aTitle;
                    [menu addObject:menuItem];
                    [menuItem release];
                }
            } else {
                NSLog(@"There is an error with the SQL Statement");
            }
            sqlite3_finalize(compiledStmt);
        }
        sqlite3_close(database);

  • On Demand Webinar: Extreme Database Performance meets its Backup and Recovery Match

    - by Cinzia Mascanzoni
    Oracle’s Sun ZFS Backup Appliance is a tested, validated and supported backup appliance specifically tuned for Oracle engineered system backup and recovery. The Sun ZFS Backup Appliance is easily integrated with Oracle engineered systems and provides an integrated high-performance backup solution that reduces backup windows by up to 7x and recovery time by up to 4x compared to competitor engineered systems backup solutions. Invite partners to register to attend this webcast to learn how the Sun ZFS Backup Appliance can provide superior performance, cost effectiveness, simplified management and reduced risk.

  • Oracle Exadata (Japanese-language post; title lost to encoding)

    - by takashi.hitomi
    A Japanese-language post from June 2010 introducing Oracle Exadata customer stories; most of the original text was lost to character encoding. The recoverable fragments mention Oracle's Smart Grid strategy, Oracle Database, and the Oracle Cloud Computing Summit ~ Database & Exadata Day ~.

  • How do you design a database to allow fast multicolumn searching?

    - by Fletcher Moore
    I am creating a real estate search from RETS data, but this is a general question. When you have a variety of columns that you would like the user to be able to filter their search result by, how do you optimize this? For example, http://www.charlestonrealestateguide.com/listings.php has 16 or so optional filters. Granted, he only has up to 11,000 entries (I have the same data), but I don't imagine the search is performed with just a giant WHERE AND AND AND ... clause. Or is this typically accomplished with one giant multicolumn index? Newegg, Amazon, and countless others also have cool & fast filtering systems for large amounts of data. How do they do it? And is there a database optimization reason for the tendency to provide ranges instead of empty inputs, or is that merely for user convenience?
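
    For the equality filters, one composite index can serve several filter combinations at once thanks to the leftmost-prefix rule. A hedged sketch against a hypothetical listings table (all names invented for illustration): the index below helps queries filtering on city, on city plus bedrooms, and on city plus bedrooms plus a price range, but not on bedrooms alone.

        CREATE INDEX ix_listings_city_beds_price
            ON listings (city, bedrooms, price);

        -- Served by the index (a leftmost prefix, ending in a range):
        SELECT *
        FROM   listings
        WHERE  city = 'Charleston'
          AND  bedrooms = 3
          AND  price BETWEEN 200000 AND 300000;

    This is also one reason search UIs tend to offer ranges and pick-lists rather than free-form blank inputs: bounded predicates map cleanly onto index scans.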

  • SIMD Extensions for the Database Storage Engine

    - by jchang
    For the last 15 years, Intel and AMD have been progressively adding special purpose extensions to their processor architectures. The extensions mostly pertain to vector operations based on the Single Instruction, Multiple Data (SIMD) concept. The motivation was that achieving significant performance improvement over each successive generation for the general purpose elements had become extraordinarily difficult. On the other hand, SIMD performance could be significantly improved with special purpose registers...(read more)

  • PHP efficiency question: database call vs. file write vs. calling a C++ executable

    - by JP19
    Hi, what I wish to achieve is to log all information about each and every visit to every page of my website (like IP address, browser, referring page, etc.). Now this is easy to do. What I am interested in is doing this in a way that causes minimum overhead (runtime) in the PHP scripts. What is the best approach efficiency-wise:

    1) Log all information to a database table
    2) Write to a file (directly from PHP)
    3) Call a C++ executable that will write this info to a file in parallel [so the script can continue execution without waiting for the file write to occur ... is this even possible?]

    I may be trying to optimize unnecessarily/prematurely, but still, any thoughts/ideas on this would be appreciated. (I think the efficiency of file writes/logging can really be a concern if I have, say, 100 visits per minute...) Thanks & Regards, JP
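
    If option 1 wins, the insert itself can be kept cheap: a narrow, append-only table with no secondary indexes, so each request pays for one short INSERT and nothing else. A hedged sketch in MySQL; the table and every column name are invented for illustration.

        CREATE TABLE page_log (
            id         BIGINT AUTO_INCREMENT PRIMARY KEY,
            ip_address VARCHAR(45)  NOT NULL,   -- 45 chars covers IPv6
            user_agent VARCHAR(255) NULL,
            referrer   VARCHAR(255) NULL,
            page       VARCHAR(255) NOT NULL,
            visited_at TIMESTAMP    DEFAULT CURRENT_TIMESTAMP
        );

        -- One short INSERT per request; run reporting queries off-peak
        -- (or against a replica) so reads never slow the hot path.
        INSERT INTO page_log (ip_address, user_agent, referrer, page)
        VALUES ('203.0.113.7', 'Mozilla/5.0', 'http://example.com/', '/index.php');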
