Search Results

Search found 4714 results on 189 pages for 'unbuffered queries'.


  • Strange behaviour of code inside TransactionScope?

    - by Krishna
    We are facing a very complex issue in our production application. We have a WCF method which creates a complex Entity in the database with all its relations:

        public void InsertEntity(Entity entity)
        {
            using (TransactionScope scope = new TransactionScope())
            {
                EntityDao.Create(entity);
            }
        }

    The EntityDao.Create(entity) method is very complex and contains huge pieces of logic. During the entire creation process it creates several child entities and also issues several queries to the database. For the duration of the WCF request the connection is usually kept in a ThreadStatic variable and reused by the DAOs, although some of the queries in the DAO use a new connection and close it after use. Overall we have seen that the behaviour of the above process is erratic: some of the queries in the inner DAO do not even return actual data from the database, yet the same query, when run against the actual data store, gives the correct result. What can be the possible reason for this behaviour?
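
    One thing worth checking in the snippet above: scope.Complete() is never called, so the ambient transaction is rolled back every time the using block exits. A minimal sketch of the usual pattern, assuming the same (hypothetical) EntityDao:

        public void InsertEntity(Entity entity)
        {
            using (TransactionScope scope = new TransactionScope())
            {
                EntityDao.Create(entity);
                // Without this call, disposing the scope rolls the
                // ambient transaction back.
                scope.Complete();
            }
        }

    Note also that only connections opened inside the scope enlist in the ambient transaction; a ThreadStatic connection created before the scope, or the side connections opened and closed mid-way, may run outside the transaction entirely, which would explain reads that do not see the in-flight data.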

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However, from the client's point of view, there is only one database. By this I mean that when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides very nice functionality for client-specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?
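
    WCF Data Services binds each service to a single data source via its DataService<T> root, so any fan-out has to happen behind the IQueryable the service exposes. A hedged sketch of the underlying idea, assuming two hypothetical contexts db1 and db2 with identical schemas (note this falls back to in-memory concatenation, so server-side paging and sorting need extra care):

        // run the same query shape over two identically-shaped contexts
        IEnumerable<Order> QueryAll(Func<IQueryable<Order>, IQueryable<Order>> q)
        {
            return q(db1.Orders).AsEnumerable()
                   .Concat(q(db2.Orders).AsEnumerable());
        }

        // usage: QueryAll(orders => orders.Where(o => o.Total > 100))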

    Read the article

  • SQL Server 2005 Error 701 - out of memory

    - by Tufo
    I'm currently getting the following error message when executing a .sql file of about 26MB on SQL Server 2005:

        Msg 701, Level 17, State 123
        There is insufficient system memory to run this query.

    I'm working with 4GB RAM, 64-bit Windows 7 Ultimate, Core2Duo T6400 (2GHz)... Is there a way to execute it without receiving this message (maybe force SQL Server to use the swap file?) or a way to execute it in parts (like 100 queries at a time)? The file is basically a CREATE TABLE followed by thousands of INSERT queries, and I have a lot of those (converted .DBF files to SQL queries using ABC DBF Converter). Any idea will be very much appreciated!
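
    The usual workaround for scripts like this is to split them into batches, so the server never has to parse and compile the whole 26MB at once. A hedged sketch with hypothetical table names - inserting a GO separator every few hundred statements does the splitting:

        CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(100));
        GO
        INSERT INTO customers VALUES (1, 'Alice');
        INSERT INTO customers VALUES (2, 'Bob');
        -- ...thousands more INSERTs, with a GO every few hundred lines...
        GO  -- each GO ends a batch; sqlcmd/SSMS sends batches one at a time

    Running the file through the command-line client also avoids loading it all into Management Studio:

        sqlcmd -S .\SQLEXPRESS -d MyDb -i bigfile.sql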

    Read the article

  • Rails eager loading

    - by Dimitar Vouldjeff
    Hi, I have a Test model which has_many questions, and a Question which has_many answers... When I make a query for a Test with :include => [:questions, {:questions => :answers}], ActiveRecord makes two more queries to fetch the questions and then to fetch the answers - it doesn't join them!!! When I do the query with :joins, ActiveRecord makes the query, but later, when I need Test.questions or Test.questions.answers, ActiveRecord again makes those two extra queries!!! And later, when I enumerate the questions or answers, I see other queries for each object in the log, but they have a Cache tag... Is this normal?
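
    This is expected: since Rails 2.1, :include preloads each association with its own query (one per association) instead of one big join, so the two extra queries are the eager load working as designed. The nested hash alone already covers both levels. A minimal sketch, assuming the models above:

        test = Test.find(params[:id], :include => { :questions => :answers })
        test.questions.each do |question|           # no query - already preloaded
          question.answers.each { |a| puts a.id }   # no query - already preloaded
        end

    :joins, by contrast, only adds the JOIN for filtering; it does not populate the associations, which is why they get loaded again on first access.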

    Read the article

  • MDX performance vs. T-SQL

    - by SubPortal
    I have a database containing tables with more than 600 million records and a set of stored procedures that perform complex search operations on the database. The performance of the stored procedures is very slow, even with suitable indexes on the tables. The design of the database is a normal relational design. I want to change the database design to be multidimensional and use MDX queries instead of the traditional T-SQL queries, but the question is: is an MDX query better than a traditional T-SQL query with regard to performance? And if yes, to what extent will that improve the performance of the queries? Thanks for any help.
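
    Worth keeping in mind: MDX itself is not faster as a language - the speed comes from the cube pre-computing aggregations that T-SQL would otherwise compute by scanning the fact table. A hedged illustration with hypothetical names, showing the same question asked both ways:

        -- T-SQL: aggregates the 600M-row table at query time (index permitting)
        SELECT region, SUM(amount) AS total
        FROM Sales
        GROUP BY region;

        -- MDX: reads aggregations the cube has already materialized
        SELECT [Measures].[Amount] ON COLUMNS,
               [Region].[Region].Members ON ROWS
        FROM [SalesCube]

    So the gain depends almost entirely on how well the cube's aggregation design matches the search patterns, not on the query language.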

    Read the article

  • What is best practice with SQLite and Android?

    - by PHP_Jedi
    What is considered "best practice" when executing queries on a sql-lite db within an android app. Is it safe to run inserts, deletes and select queries from an AsyncTask's doInBackground ? Or should I use the UI Thread ? I suppose that db queries can be "heavy" and should not use the UI thread as it can lock up the app - resulting in an ANR. If I have several AsyncTasks, should they share a connection or should they open a connection each ? Any best practices in this area on android?

    Read the article

  • How to force oracle to use index range scan?

    - by wsb3383
    Hi, all. I have a series of extremely similar queries that I run against a table of 1.4 billion records (with indexes). The only problem is that at least 10% of those queries take 100x more time to execute than the others. I ran an explain plan and noticed that for the fast queries (roughly 90%) Oracle is using an index range scan (on the index I created), while for the slow ones it's using a full index scan. Is there a way to force Oracle to do an index range scan? Thanks!
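
    Oracle can usually be steered with an optimizer hint naming the index. A hedged sketch, with hypothetical table, index and bind names:

        SELECT /*+ INDEX(t my_range_idx) */ t.*
        FROM   big_table t
        WHERE  t.created BETWEEN :start_ts AND :end_ts;

    If the hint fixes it, the underlying cause is often stale statistics or a bind value that makes the optimizer estimate a much larger range, so refreshing statistics (DBMS_STATS.GATHER_TABLE_STATS) is worth trying before hard-coding hints everywhere.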

    Read the article

  • Problem with migrating a model in Ruby

    - by Shreyas Satish
    I run script/generate model query and edit query.rb in models:

        class Query < ActiveRecord::Base  # I even tried Migrations instead of Base
          def sef.up
            create table :queries do|t|
              t.string :name
            end
          end
          def self.down
            drop_table :queries
          end
        end

    Then I run rake db:migrate, and what I see in the db is this:

        mysql> desc queries;
        +------------+----------+------+-----+---------+----------------+
        | Field      | Type     | Null | Key | Default | Extra          |
        +------------+----------+------+-----+---------+----------------+
        | id         | int(11)  | NO   | PRI | NULL    | auto_increment |
        | created_at | datetime | YES  |     | NULL    |                |
        | updated_at | datetime | YES  |     | NULL    |                |
        +------------+----------+------+-----+---------+----------------+

    Where is the "name" field? HELP! Cheers!
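
    The create_table code never ran: it was added to the model class rather than to the migration file the generator created under db/migrate (and it has two typos - "sef.up" and "create table"). A sketch of what the generated migration would normally look like:

        # db/migrate/xxxxxxxx_create_queries.rb
        class CreateQueries < ActiveRecord::Migration
          def self.up
            create_table :queries do |t|
              t.string :name
              t.timestamps
            end
          end

          def self.down
            drop_table :queries
          end
        end

    With the t.string :name line in this file, rake db:migrate produces the expected column.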

    Read the article

  • Increasing speed of webservice - howto

    - by Koran
    Hi, Our client-server product uses XML over HTTP as the protocol between the two sides: the client sends a GET/POST query to the web server and the server responds with XML. The server is written using django. The server has to be on the web because there are many clients across the world using it. The server code uses extensive memoization, and there are very few db queries - most requests don't hit the db at all, and some make at most one query. The biggest problem is the speed: every query takes close to 5 seconds to get a reply, even though the data returned is small - in the range of 4-6 KB. What are the mechanisms to improve the speed of the web service? Is this the usual way of writing a client-server? Are there other technologies we are missing out on? Thank you K
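
    With almost no db work and tiny responses, 5 seconds nearly always hides in one identifiable place (DNS, connection setup, a blocking call, or the app server config), so profiling a single request is the natural first step. A hedged sketch, where my_view stands in for whichever django view is slow:

        import cProfile

        def profile_once(request):
            profiler = cProfile.Profile()
            result = profiler.runcall(my_view, request)   # my_view: hypothetical
            profiler.print_stats(sort='cumulative')       # slowest call paths first
            return result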

    Read the article

  • SQL Server 2005 Fail: Return Dates As Strings

    - by Abs
    Hello all, I am using the SQL Server PHP Driver. I think this question can be answered without knowing what this is. I have come across this many times - what does NAMES mean here? Column names?

        SET NAMES utf8

    Is there a query similar to the above that will get my dates returned as strings? For some reason, on my SQL Server 2008 on Vista, this works:

        $connectionInfo = array('Database' => $dbname, 'ReturnDatesAsStrings' => true)

    But 'ReturnDatesAsStrings' does not work on my SQL Server 2005 on a Windows Server machine - I can't execute any queries after setting it! Does SQL Server 2005 support ReturnDatesAsStrings? Is there some other parameter I can pass to do the same? Thanks all for any help.

    EDIT: I should have mentioned this, but if there is a solution, I am hoping for one in the form of a setting that can be set before any queries are executed, as I do not have control over what queries will be executed.
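
    For what it's worth, SET NAMES is a MySQL statement (it sets the connection character set; nothing to do with column names) - SQL Server does not have it. If the connection option isn't honoured on 2005, a hedged per-query fallback is to convert on the server side, with hypothetical table/column names:

        -- style 121 = ODBC canonical, 'yyyy-mm-dd hh:mi:ss.mmm'
        SELECT CONVERT(varchar(23), created_at, 121) AS created_at
        FROM orders;

    That does not satisfy the set-it-once requirement from the EDIT, but it works uniformly on 2005 and 2008.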

    Read the article

  • How to move from untyped DataSets to POCO\LINQ2SQL in legacy application

    - by artvolk
    Good day! I've a legacy application where the data access layer consists of classes in which queries are done using SqlConnection/SqlCommand and results are passed to the upper layers wrapped in untyped DataSets/DataTables. Now I'm working on integrating this application into a newer one written in ASP.NET MVC 2, where LINQ2SQL is used for data access. I don't want to rewrite the fancy logic that generates the complex queries passed to SqlConnection/SqlCommand in LINQ2SQL (and don't have permission to do this), but I'd like to have the results of these queries as strongly-typed object collections instead of untyped DataSets/DataTables. The basic idea is to wrap the old data access code in a nice-looking "Model" from the ASP.NET MVC point of view. What is the fast/easy way of doing this?
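
    One low-effort route is to leave the query generation untouched and only project the resulting DataTable into POCOs with LINQ to DataSet. A hedged sketch, with a hypothetical Order type standing in for a real row shape:

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        public class Order { public int Id; public string Customer; }

        public static IEnumerable<Order> ToOrders(DataTable table)
        {
            // AsEnumerable/Field<T> come from System.Data.DataSetExtensions
            return table.AsEnumerable().Select(row => new Order
            {
                Id = row.Field<int>("Id"),
                Customer = row.Field<string>("Customer"),
            });
        }

    The MVC "Model" then exposes IEnumerable<Order>, and the controllers never see a DataSet.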

    Read the article

  • Repeatedly querying xml using python

    - by Jack
    I have some xml documents I need to run queries on. I've created some python scripts (using ElementTree) to do this, since I'm vaguely familiar with using it. The way it works is I run the scripts several times with different arguments, depending on what I want to find out. These files can be relatively large (10MB+) and so it takes rather a long time to parse them. On my system, just running:

        tree = ElementTree.parse(document)

    takes around 30 seconds, with a subsequent findall query only adding around a second to that. Seeing as the way I'm doing this requires me to repeatedly parse the file, I was wondering if there was some sort of caching mechanism I can use so that the ElementTree.parse computation can be reduced on subsequent queries. I realise the smart thing to do here may be to try and batch as many queries as possible together in the python script, but I was hoping there might be another way. Thanks.
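
    Besides batching, one way to pay the parse cost only once is to keep a single process alive and feed it queries, rather than re-running the script. A minimal sketch, assuming queries arrive one per line on stdin:

        import sys
        from xml.etree import ElementTree

        tree = ElementTree.parse(sys.argv[1])   # the ~30s cost, paid once
        for line in sys.stdin:                  # e.g. "./record/name"
            for elem in tree.findall(line.strip()):
                print(elem.tag, elem.attrib)

    (On that era of Python, xml.etree.cElementTree was also a drop-in replacement that parses several times faster.)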

    Read the article

  • A question about the gain of indexes versus the cost to inserts & updates in a database

    - by Mestika
    Hi, I have a question about the fine line between the gain of an index on a table that is growing steadily in size every month and the cost that index adds to inserts and updates. The situation is that I have two tables, Table1 and Table2. Each table grows slowly but regularly each month (about 100 new rows for Table1 and a couple of rows for Table2). My concrete question is whether to keep an index or to drop it. I've measured that a covering index on Table2 improves some of my SELECT queries considerably, but again, I have to weigh the pros and cons, and I'm having a really hard time deciding. For Table1 it might not be necessary to have an index, because SELECT queries on it are not that common. I would appreciate any suggestions, tips or just good advice on what is a good solution. By the way, I'm using IBM DB2 version 9.7 as my database system. Sincerely, Mestika
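
    A rough way to frame the decision: index maintenance costs roughly one extra page write per index per inserted row, so at ~100 inserts a month the write-side cost is close to nothing, while a covering index saves work on every SELECT it serves. A hedged sketch with hypothetical column names:

        -- covering index: the SELECT is answered from the index alone,
        -- with no fetch back to the base table
        CREATE INDEX idx_t2_cover ON Table2 (status, created_month, amount);

    At these growth rates, the measured SELECT improvement is usually the deciding factor; dropping the index would mainly make sense if bulk loads or storage were a concern.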

    Read the article

  • How to extract all IDs accessed from a mysql general log using the linux commandline?

    - by shlomoid
    This should be a trivial question for anyone who's good with bash/sed/awk. Unfortunately, I'm not, yet :) I've got a general log from MySQL which contains some queries that have a common parameter - they query on a specific id field. The queries look like:

        update tbl set col='binary_values' where id=X;

    I need to process the log and extract all the IDs that these queries touched, each on its own line. The purpose of this is to figure out how many times each ID is accessed; eventually I'd group and count the values. The binary values are indeed binary junk, so they kinda messed up some things I've been trying to do. Eventually we solved the problem temporarily using a python script, but I'm sure the linux command line tool set can do it too. How would you do it?
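
    A hedged one-liner, assuming the log lines match the shape shown above (grep -a treats the file as text despite the binary junk, -o emits only the matched part):

        grep -ao 'where id=[0-9]*' general.log | sed 's/.*=//' | sort | uniq -c | sort -rn

    This prints each ID with its access count, most-accessed first; drop everything after the sed step to get just the raw list of IDs, one per line.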

    Read the article

  • NHibernate very slow during debugging

    - by HeavyWave
    Has anyone stumbled upon a problem where NHibernate is extremely slow in Visual Studio while debugging, but behaves normally when run separately? Logging is disabled, and the time seems to be lost when the actual queries are executed; NHProfiler shows that the queries themselves executed very quickly (on the SQL side, I presume), but each session with 10 queries takes about 4 seconds. I am using SQL Server Express. As I said, even with full logging turned on, running my application without Visual Studio is an order of magnitude faster. Update: after hours and hours of work on the issue, I was able to fix it by simply switching the project type from Windows Application to Console Application (although in reality it's a Windows Service, but it always worked before with the Windows Application project type). What could possibly be the difference that brings NHibernate to a halt in debugging mode?

    Read the article

  • Hibernate HQL to basic SQL

    - by CC
    Hello everybody, I'm working on a project with Hibernate, and we need to replace Hibernate with some "home made persistence" stuff. The idea is that the project is big enough, and we have many HQL queries. The problem is with queries like:

        select a, b from table1, table2 on t1.table1 = t2.table2

    Basically, joins of any kind are not supported by our "hand made persistence" stuff. What I would need is some sort of transcoder, which would take the HQL queries as input and output plain SQL - but basic SQL without joins, something like (a dumb example):

        select a from table1 where t1 IN (select b from table2)

    I hope you get the idea. My persistence layer does not support joins. Does anybody have any idea about something like that? Some framework, or something? Thanks a lot everybody. C.C.
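
    The rewrite the example hints at only works in one direction: a join can be turned into an IN subquery when the outer query projects columns from just one table and duplicate matches don't matter. A hedged illustration with hypothetical names:

        -- join form
        SELECT o.id
        FROM orders o JOIN customers c ON o.customer_id = c.id
        WHERE c.city = 'Paris';

        -- join-free equivalent
        SELECT o.id
        FROM orders o
        WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.city = 'Paris');

    Queries that select columns from both sides (like the select a, b example) have no join-free equivalent, so a general HQL-to-no-join transcoder can't exist - only a partial one for this restricted class of queries.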

    Read the article

  • Writing OLAP SQL query

    - by user1859596
    I have a project I am working on that requires the following:

    1. Create a normalized sample RDBMS (5 tables); using Java, I entered 1 million rows of data into each table.
    2. Run two OLTP and two OLAP queries on the normalized tables.
    3. Denormalize the tables.
    4. Run the same OLTP and OLAP queries on them and compare the times.

    What does an OLAP query mean? I've searched the internet, and all that I can find is that I have to make a cube and apply queries on it. How can I write an OLAP query on an RDBMS? Here is a sample of the normalized tables (orders, product, customer, branch, sales):

        sales:    order_id, product_id, quantity
        product:  product_id, name, description, price, sales_tax
        customer: customer_id, f_name, l_name, tel_no, addr, nic, city
        branch:   branch_id, name, tel_no, addr, city
        orders:   order_id, customer_id, order_date, branch_id

    I want to write an OLAP query on the above tables. I am using Oracle Express with SQL Developer.
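
    In this context an "OLAP query" just means an analytical query - one that aggregates large swathes of the data across dimensions (who/where/when), as opposed to an OLTP query that touches a few rows. No cube is needed; on a plain RDBMS it's a GROUP BY over the fact table. A hedged example against the schema above:

        -- revenue per branch city per month: scans sales, rolls up two dimensions
        SELECT b.city,
               TO_CHAR(o.order_date, 'YYYY-MM') AS month,
               SUM(s.quantity * p.price)        AS revenue
        FROM   sales s
        JOIN   orders  o ON s.order_id   = o.order_id
        JOIN   product p ON s.product_id = p.product_id
        JOIN   branch  b ON o.branch_id  = b.branch_id
        GROUP BY b.city, TO_CHAR(o.order_date, 'YYYY-MM')
        ORDER BY b.city, month;

    Oracle also supports GROUP BY ROLLUP/CUBE for subtotals across the grouping columns, which makes the OLAP flavour of the query even more explicit.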

    Read the article

  • [Symfony] Admin generator and i18n

    - by David
    I have read lots of questions about i18n, but I haven't found any about performance. I have a simple backend app listing the contents of an ads table. These ads have a category, which is translated into several languages (it's defined as i18n in the Doctrine schema). So, when I add a "table_method" in my generator.yml to include the Category table, it reduces the number of queries, but some of them still reference the i18n translation tables. If I add the category Translation table to the query as well, it reduces the queries even more, BUT it increases the processing time considerably. Why this time penalty? Just because of the translation table? And why isn't the filter using this method to avoid so many translation queries as well? I mean, if I want to filter by category, it makes one query per category to include the translation table. Why??
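
    For reference, the pattern being described looks roughly like this (a hedged sketch - module, model and method names are hypothetical; in symfony 1.x the admin generator calls table_method on the model's table class with the query it is about to run):

        # apps/backend/modules/ad/config/generator.yml (excerpt)
        config:
          list:
            table_method: retrieveWithCategoryI18n

        // lib/model/doctrine/AdTable.class.php
        public function retrieveWithCategoryI18n(Doctrine_Query $q)
        {
          $root = $q->getRootAlias();
          return $q->leftJoin($root.'.Category c')
                   ->leftJoin('c.Translation ct');  // pulls translations in the same query
        }

    The filters, however, build their own queries, so they do not go through table_method - which is why the per-category translation queries come back there.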

    Read the article

  • web page db query optimisation

    - by morpheous
    I am putting together a web page which is quite 'expensive' in terms of db hits. I don't want to start optimizing at this stage - though, with me trying to hit a deadline, I may end up not optimizing at all. Currently the page requires 18 (that's right, eighteen) hits to the db. I am already using joins, and some of the queries are UNIONed to minimize the trips to the db. My local dev machine can handle this (the page is not slow); however, I feel that if I release this into the wild, the number of queries will quickly overwhelm my database (mySQL). I could always use memcache or something similar, but I would much rather continue with my other dev work that needs to be completed before the deadline - at least retrieving the page works; it's simply a matter of optimization. My question therefore is: is 18 db queries for a single page retrieval completely outrageous (i.e. I should put everything on hold and optimize the hell out of the retrieval logic), or shall I continue as normal, meet the deadline, release on schedule and see what happens?

    Read the article

  • Is count(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a select count(*) from <table> query and display the number of rows available in each table on the tabs. As a result, each page postback causes 5 count(*) queries to be executed (4 to get the counts and 1 for pagination) plus 1 query for getting the report content. Now my question is: are count(*) queries really expensive - should I keep the row counts (at least those that are displayed on the tabs) in the page's view state instead of querying multiple times? How expensive are COUNT(*) queries?
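
    The cost depends on the engine: with no WHERE clause, COUNT(*) typically scans the narrowest index on the table, so it grows with table size. Whatever the engine, the four tab counts can at least be collapsed into a single round trip. A hedged sketch with hypothetical table names:

        SELECT (SELECT COUNT(*) FROM report_a) AS a_rows,
               (SELECT COUNT(*) FROM report_b) AS b_rows,
               (SELECT COUNT(*) FROM report_c) AS c_rows,
               (SELECT COUNT(*) FROM report_d) AS d_rows;

    Caching the counts (view state or a short-lived server-side cache) is also reasonable when slightly stale numbers on the tabs are acceptable.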

    Read the article

  • Can you use POST to run a query in Solr (/select)?

    - by RyanFetz
    I have queries that I am running against our solr index that sometimes have very long query parameters, and I get errors when I run them, which I assume are due to the length limit on GET query parameters. Here is the method I use to query (JSON); this is to show that I am using the Http Extensions (the client I use is a thin wrapper for HttpClient), not an end-to-end solution. 90% of the queries run fine; it is just when the params are large that I get the 500 error from solr. I have read somewhere that you can use POSTs when doing the select command, but have not found examples of how to do it. Any help would be fantastic!

        public string GetJson(HttpQueryString qs)
        {
            using (var client = new DAC.US.Web.XmlHttpServiceClient(this.Uri))
            {
                client.Client.DefaultHeaders.Authorization =
                    new Microsoft.Http.Headers.Credential("Basic", DAC.US.Encryption.Hash.WebServiceCredintials);
                qs.Add("wt", "json");
                if (!String.IsNullOrEmpty(this.Version))
                    qs.Add("version", this.Version);
                using (var response = client.Get(new Uri(@"select/", UriKind.Relative), qs))
                {
                    return response.Content.ReadAsString();
                }
            }
        }
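
    Solr does accept the same parameters as a form-encoded POST body on /select, which sidesteps URL length limits entirely. A hedged sketch of what that might look like here - the Post overload and the query-string-to-body conversion are assumptions about the thin wrapper, not known API:

        public string GetJsonByPost(HttpQueryString qs)
        {
            using (var client = new DAC.US.Web.XmlHttpServiceClient(this.Uri))
            {
                qs.Add("wt", "json");
                // send the parameters in the body instead of the URL
                var body = HttpContent.Create(qs.ToString().TrimStart('?'),
                                              "application/x-www-form-urlencoded");
                using (var response = client.Post(new Uri(@"select/", UriKind.Relative), body))
                {
                    return response.Content.ReadAsString();
                }
            }
        }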

    Read the article

  • How can I figure out where all these extra sqlite3 selects are being generated in my rails app?

    - by radixhound
    I'm trying to figure out where a whole pile of extra queries are being generated by my rails app, and I need some ideas on how to tackle it - if someone can give me some hints, I'd be grateful. I get these:

        SQL (1.0ms) SELECT name FROM sqlite_master WHERE type = 'table' AND NOT name = 'sqlite_sequence'
        SQL (0.8ms) SELECT name FROM sqlite_master WHERE type = 'table' AND NOT name = 'sqlite_sequence'
        SQL (0.8ms) SELECT name FROM sqlite_master WHERE type = 'table' AND NOT name = 'sqlite_sequence'

    repeated over and over on every request to the DB (as many as 70 times for a single request). I tried installing a plugin that traced the source of the queries, but it really didn't help at all. I'm using the hobofields gem; dunno if that is what's doing it, but I'm somewhat wedded to it at the moment. Any tips on hunting down the source of these extra queries?
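
    One way to catch the caller when a plugin doesn't help: wrap the adapter's execute and dump a short backtrace whenever the suspicious statement goes by. A hedged Rails 2.x-style sketch (e.g. dropped into an initializer):

        module ActiveRecord
          module ConnectionAdapters
            class SQLite3Adapter
              def execute_with_trace(sql, *args, &block)
                if sql.include?('sqlite_master')
                  Rails.logger.info caller[0, 10].join("\n")  # who asked for the table list?
                end
                execute_without_trace(sql, *args, &block)
              end
              alias_method_chain :execute, :trace
            end
          end
        end

    The sqlite_master query itself is ActiveRecord fetching the table list, so the backtrace will likely point at whatever keeps re-asking for the schema - schema-introspecting code such as hobofields is a plausible suspect.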

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object. From Book On-Line:

    LCK_M_BU    Occurs when a task is waiting to acquire a Bulk Update (BU) lock.
    LCK_M_IS    Occurs when a task is waiting to acquire an Intent Shared (IS) lock.
    LCK_M_IU    Occurs when a task is waiting to acquire an Intent Update (IU) lock.
    LCK_M_IX    Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock.
    LCK_M_S     Occurs when a task is waiting to acquire a Shared lock.
    LCK_M_SCH_M Occurs when a task is waiting to acquire a Schema Modify lock.
    LCK_M_SCH_S Occurs when a task is waiting to acquire a Schema Share lock.
    LCK_M_SIU   Occurs when a task is waiting to acquire a Shared With Intent Update lock.
    LCK_M_SIX   Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock.
    LCK_M_U     Occurs when a task is waiting to acquire an Update lock.
    LCK_M_UIX   Occurs when a task is waiting to acquire an Update With Intent Exclusive lock.
    LCK_M_X     Occurs when a task is waiting to acquire an Exclusive lock.

    LCK_M_XXX Explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting is that the resource is already locked and some other operation may be going on within it. This wait also indicates that resources are not available, or are occupied at the moment, for some reason. There is a good chance that the waiting queries start to time out if this wait type is very high; client applications may see degraded performance as well.

    You can use various methods to find blocking queries:
    - EXEC sp_who2
    - SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
    - DMV – sys.dm_tran_locks
    - DMV – sys.dm_os_waiting_tasks

    Reducing LCK_M_XXX wait:
    - Check the explicit transactions. If transactions are very long, this wait type can start building up because of other waiting transactions. Keep the transactions small.
    - Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait type may be natural. The default isolation of SQL Server is Read Committed. One of my clients changed their isolation to Read Uncommitted; I strongly discourage this because it will probably lead to lots of dirty data in the database.
    - Identify blocking queries using the various methods described above, and then optimize them.
    - Partitioning can be one of the options to consider, because it allows transactions to execute concurrently on different partitions.
    - If there are runaway queries, use a timeout. (Please discuss this solution with your database architect first, as timeouts can work against you.)
    - Check that there is no memory- or IO-related issue, using the following counters.

    Checking Memory Related Perfmon Counters:
    - SQLServer: Memory Manager\Memory Grants Pending (consistently higher than 0-2 is bad)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently high value; benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower than 300 seconds is bad)
    - Memory: Available MBytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)

    Checking Disk Related Perfmon Counters:
    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)

    Read all the posts in the Wait Types and Queue series. Note: the information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
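
    As a concrete starting point, the two DMVs listed above can be joined to show, for each blocked session, who is blocking it and what SQL it is waiting on. A hedged sketch:

        -- current lock waits with the blocking session and the blocked statement
        SELECT wt.session_id,
               wt.wait_type,
               wt.wait_duration_ms,
               wt.blocking_session_id,
               t.text AS blocked_sql
        FROM   sys.dm_os_waiting_tasks wt
        JOIN   sys.dm_exec_requests r ON r.session_id = wt.session_id
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
        WHERE  wt.wait_type LIKE 'LCK_M_%';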

    Read the article
