Search Results

Search found 22463 results on 899 pages for 'sub query'.

Page 11/899 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • mysql inserts & updates optimized

    - by user271619
    This is mostly an optimization question. I have many forms on my sites that do simple INSERTs and UPDATEs (nothing complicated), but several of the form's input fields are optional and may be left empty (again, nothing complicated). Even so, my SQL statements currently list every column. My question: is it better to build the INSERT/UPDATE statements so that only the columns that actually changed appear in the query? We all hear that we shouldn't use SELECT * unless we really need every column for display. But what about INSERTs and UPDATEs? Hope this makes sense. I'm sure any amount of optimization is acceptable, but I never really hear anyone address this specifically.
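
    For what it's worth, a minimal sketch of the idea (the table and column names are invented for illustration): listing only the columns you have values for lets MySQL fill the rest from their DEFAULTs, and updating only the changed fields avoids rewriting unchanged data.

        -- Hypothetical table: only the supplied columns are written;
        -- omitted columns fall back to their DEFAULT values.
        INSERT INTO contact_form (name, email)
        VALUES ('Jo', 'jo@example.com');

        -- Update only the fields the user actually changed.
        UPDATE contact_form
        SET email = 'new@example.com'
        WHERE id = 42;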

    Read the article

  • Should I have a separate method for Update(), Insert(), etc., or have a generic Query() that would be able to handle all of these?

    - by Prayos
    I'm currently trying to write a class library for connecting to a database. Looking it over, there are several different types of queries: SELECT ... FROM, UPDATE, INSERT, etc. My question is, what is the best practice for writing these queries in a C# application? Should I have a separate method for each of them (e.g. Update(), Insert()), or one generic Query() method that can handle all of them? Thanks for any and all help!

    Read the article

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I receive this kind of request from other clients as well, but the important thing to understand is whether the question has any merit or not. Before we continue to the resolution, let us understand what CXPACKET wait stats are. The official definition suggests that a CXPACKET wait occurs when trying to synchronize the query processor exchange iterator; you may consider lowering the degree of parallelism if a conflict concerning this wait type develops into a problem (from BOL). In simpler words, when a parallel plan is created for a SQL query, there are multiple threads for a single query, and each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait: the threads that finished first have to wait for the slower thread to finish. The wait registered by a specific completed thread is the CXPACKET wait stat. Note that CXPACKET waits are registered by the completed threads, not by the unfinished ones. “Note that not all CXPACKET wait types are bad. You might experience a case when it totally makes sense, and there might also be cases when it is unavoidable. If you remove this particular wait type for a query, that query may run slower because parallel operations are disabled for it.” Now let us see the best practices to reduce CXPACKET wait stats. The suggestions you will find if you search online mostly tell you to set ‘maximum degree of parallelism’ to 1. I agree with this suggestion up to a point; however, I do not think it is the final resolution. As soon as you force every query to run on a single CPU, you will get very bad performance from the queries that were actually performing well with parallelism. The better approach is to set ‘max degree of parallelism’ to a lower number or to 1 (be very careful with this – it can create more problems), but then tune the queries that benefit from multiple CPUs, using the query hint OPTION (MAXDOP 0) to let a specific query use parallelism. Here are two quick scripts which help to resolve these issues.

    Change MAXDOP at the server level:

        EXEC sys.sp_configure N'max degree of parallelism', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Run a query with all the CPUs (using parallelism):

        USE AdventureWorks
        GO
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY ProductID
        OPTION (MAXDOP 0)
        GO

    Below is the blog post which will help you find all the parallel queries on your server: SQL SERVER – Find Queries using Parallelism from Cached Plan. Please note that running queries on a single CPU may worsen your performance and is not recommended at all; in fact, this can be very bad advice. I strongly suggest that you identify the offending queries and tune them instead of following any other suggestion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
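
    A follow-up note to the scripts above: before changing the server-wide setting, it is worth checking what it currently is. A quick way, using a standard catalog view (not part of the original post):

        SELECT name, value_in_use
        FROM sys.configurations
        WHERE name = N'max degree of parallelism';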

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has new cardinality estimation logic. The cardinality estimator is responsible for the quality of query plans and is therefore a major factor in the performance of any query. This logic had not been updated for quite a while, but in SQL Server 2014 it has been re-designed. The new logic incorporates assumptions and algorithms for both OLTP and warehousing workloads. “Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance.” ~ Source: MSDN. Let us see a quick example of how the new cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you need your database compatibility level set to 120, which is the level for SQL Server 2014. If your server instance is SQL Server 2014 but you have set your database compatibility level to 110 or any earlier version, your queries will perform as they did on the older version of SQL Server. Now we will execute the following query under the two different compatibility levels and compare its performance. (Note that my SQL Server instance is version 2014.)

        USE AdventureWorks2014
        GO
        -- NEW Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO

    Result of STATISTICS IO

    Compatibility level 120:

        Table 'Person'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Compatibility level 110:

        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Person'. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    You will notice that at compatibility level 110 there are 137 logical reads from table Person, whereas at compatibility level 120 there are only 6 logical reads from table Person. This drastically improves the performance of the query. If we enable the execution plan, we can see the same there as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight course.
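
    The logical-read numbers above come from STATISTICS IO; if you want to reproduce them, turn it on for the session before running the two EXEC calls (this step is implied but not shown in the post):

        SET STATISTICS IO ON;
        GO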
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How to find Sub-trees in non-binary tree

    - by kenny
    I have a non-binary tree. I want to find all "sub-trees" that are connected to the root. A sub-tree here is a linked group of tree nodes; in the graph from the original question, every group is colored in its own color. What would be the best approach? Run recursion down and up for every node? The data structure of every tree node is a list of children and a list of parents (both typed as tree nodes). Clarification: a group is defined by a kind of "closure" between nodes, where the root itself is not part of the closure. As you can see from the graph, you cannot travel from the pink nodes to the other nodes (you CANNOT use the root). From the brown node you can travel to its child, so that forms another group. Finally, you can travel from any cyan node to the other cyan nodes, so they form another group.

    Read the article

  • Sub domain on root domain

    - by dror
    I have a site, actually a "portal"/"directory" for service providers. For a start, we opened a page for every service provider on our own site, but now we get a lot of requests from those providers who want sites of their own. We want to give every service provider their own site, but on a subdomain URL (they don't mind; it's OK with them). So, my site is www.example.com, and their sites will be provider.example.com. Now I have two questions: Can this harm my site's SEO? And if one of those subdomains is punished by Google because its owner does black-hat SEO, how will that affect the root domain? Can it cause the root domain to be punished as well?

    Read the article

  • Crystal Reports: 3 New Uses For Sub Reports

    I hate sub reports and always consider them the last resort in any reporting solution. The negative effect on performance and maintainability is just not worth the easy ride they give the report writer. Nine times out of ten, reporting requirements can be met using a little forethought and planning (and a solid understanding of formulas). With that said, there are a few novel ways of using sub reports which will not affect performance and actually prove a boon to the developer.

    Read the article

  • OpenGL 2D Rasterization Sub-Pixel Translations

    - by Armin Ronacher
    I have a tile-based 2D engine where the projection matrix is an orthographic view of the world without any scaling applied; thus one texture pixel is drawn on the screen at exactly one screen pixel. That all works well and looks nice, but if the camera makes a sub-pixel movement, thin lines appear between the tiles. I can tell you in advance what does not fix the problem: GL_NEAREST texture interpolation, GL_CLAMP_TO_EDGE. What does “fix” the problem is anchoring the camera to the nearest pixel instead of doing a sub-pixel translation. I can live with that, but then the camera movement becomes jerky. Any ideas how to fix the problem without resorting to the rounding trick I currently use?

    Read the article

  • Directing crawlers to content in language per language sub-domain

    - by Noam
    I have a multilingual website with many pages (40M). The site has UGC, and each translation actually applies only to the titles: each sub-domain points to the same content with different titles per language. As far as I understand, each sub-domain will be indexed by search engines separately, meaning they will actually need to crawl 40M pages x supported languages. So I thought it might be best to direct each subdomain's crawler to the pages that are fully in that language (titles + UGC). Is there a way to do this? Should search engines understand this on their own?

    Read the article

  • Alternating links and slide down content on a list of sub sections

    - by user27291
    I have a page for a doctor's practice. On the summary page for the practice there is a list of subsections such as Women's, Men's, Children, Sport, etc. Some of these subsections are very large; others can be a paragraph or more with a short unordered list. In terms of content volume, the large subsections warrant their own separate pages, the smaller ones not so much. I created a little plugin which enables me to use the list of subsections in two ways: when clicking the title of a larger section, you are sent through to its own page; for the smaller sections, a slide-down box opens with the information. Is this a good way to handle my information architecture? Should I be giving the smaller subsections their own pages for SEO purposes?

    Read the article

  • SOLVED: Breaking parent web.config dependencies in sub applications

    This article explains how to implement a sub application, such as a blog, in your website without experiencing dependency issues. A common problem developers experience is their sub applications accidentally inheriting requirements of the parent website. This is actually by design, but read on if it is causing problems in your site. Scenario: this problem has caught me out a couple of times so far, but usually with enough of a gap between occurrences that it had become just a fuzzy memory...

    Read the article

  • SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28

    - by pinaldave
    Previously, we covered the DMV sys.dm_os_wait_stats and saw how it can be useful for identifying the major resource bottleneck. At the same time, we discussed that it is only useful for an instance-level picture. Quite often we want to know about the processes running on our server at a given instant. Here is the query for that. This DMV query is written with the following in mind: we want to analyze the queries that are currently running, or that ran recently and whose plan is still in the cache.

        SELECT dm_ws.wait_duration_ms,
               dm_ws.wait_type,
               dm_es.status,
               dm_t.TEXT,
               dm_qp.query_plan,
               dm_ws.session_ID,
               dm_es.cpu_time,
               dm_es.memory_usage,
               dm_es.logical_reads,
               dm_es.total_elapsed_time,
               dm_es.program_name,
               DB_NAME(dm_r.database_id) DatabaseName,
               -- Optional columns
               dm_ws.blocking_session_id,
               dm_r.wait_resource,
               dm_es.login_name,
               dm_r.command,
               dm_r.last_wait_type
        FROM sys.dm_os_waiting_tasks dm_ws
        INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
        INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
        CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
        CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
        WHERE dm_es.is_user_process = 1
        GO

    You can change CROSS APPLY to OUTER APPLY if you also want to see the rows that are omitted because their plan has left the cache. Let us analyze the result of the above query and see how it helps identify a query and the kind of wait type it creates. The query returns various columns, several of which provide very important details, e.g.: wait_duration_ms indicates the current wait for the query executing at that point in time; wait_type indicates the current wait type for the query; text shows the query text; and query_plan, when clicked, displays the query plan. Much other important information, such as cpu_time, memory_usage, and logical_reads, can be read from the query as well. In future posts in this series, we will see how, once the wait type is identified, we can attempt to reduce it. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Discovering XML Data Type Methods

    - by pinaldave
    This blog post is inspired by SQL Interoperability Joes 2 Pros: A Guide to Integrating SQL Server with XML, C#, and PowerShell – SQL Exam Prep Series 70-433 – Volume 5. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject – SQL SERVER – Introduction to Discovering XML Data Type Methods – A Primer. In that article we discussed the basic terminology of XML, and it further covers the following important concepts: what the XML data type methods are, plus the query() method, the value() method, the exist() method, and the modify() method. These five are the most important concepts related to XML and SQL Server. There are many more things to learn, but without the beginner fundamentals one cannot learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.

    Quiz

    1.) Which method returns an XML fragment from the source XML?
        query( ) / value( ) / exist( ) / modify( ) / All of them / Only query( ) and value( )
    2.) Which XML data type method returns a “1” if found and “0” if the specified XPath is not found in the source XML?
        query( ) / value( ) / exist( ) / modify( ) / All of them / Only query( ) and value( )
    3.) Which XML data type method allows you to pick the data type of the value that is returned from the source XML?
        query( ) / value( ) / exist( ) / modify( ) / All of them / Only query( ) and value( )
    4.) Which method will not work with a SQL SELECT statement?
        query( ) / value( ) / exist( ) / modify( ) / All of them / Only query( ) and value( )

    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer, you still have a chance.

    Solution: 1) option 1 – query( )  2) option 3 – exist( )  3) option 2 – value( )  4) option 4 – modify( )

    Now compare your answers to the ones above; I am very confident you got them correct. Available at – USA: Amazon; India: Flipkart | IndiaPlaza. Volumes: 1, 2, 3, 4, 5. Please leave your feedback on the quiz and video in the comment area. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
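
    For readers who want to try the four methods before answering, here is a tiny self-contained T-SQL snippet (the XML document is invented for illustration):

        DECLARE @x XML = N'<root><item id="1">Widget</item></root>';

        SELECT @x.query('/root/item');                  -- returns the <item> XML fragment
        SELECT @x.value('(/root/item/@id)[1]', 'INT');  -- returns 1, typed as INT
        SELECT @x.exist('/root/item[@id="2"]');         -- returns 0 (XPath not found)

        -- modify() changes the XML in place, so it is used with SET, not SELECT.
        SET @x.modify('replace value of (/root/item/text())[1] with "Gadget"');
        SELECT @x;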

    Read the article

  • Will removing unused query string parameters negatively affect SEO?

    - by trm
    Will changing links to remove query string parameters that are no longer used have any negative impact on search engine rankings? Say I have a page about.php on my site, and all of my links to this page are of the form http://www.example.com/about.php?foo=bar and I've made some changes to the script such that the parameter foo is no longer used. I would like to remove the unused parameter from the links so the URL will look cleaner, but I am concerned that this could cause problems with SEO. Is it safe to remove ?foo=bar from my links?

    Read the article

  • Linq-to-sql Compiled Query returning object NOT belonging to submitted DataContext

    - by Vladimir Kojic
    Compiled query:

        public static class Machines
        {
            public static readonly Func<OperationalDataContext, short, Machine> QueryMachineById =
                CompiledQuery.Compile((OperationalDataContext db, short machineID) =>
                    db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault());

            public static Machine GetMachineById(IUnitOfWork unitOfWork, short id)
            {
                Machine machine;
                // Old code (working)
                //var machineRepository = unitOfWork.GetRepository<Machine>();
                //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault();

                // New code (making problems)
                machine = QueryMachineById(unitOfWork.DataContext, id);
                return machine;
            }
        }

    It looks like the compiled query is caching the Machine object and returning the same object even when the query is called from a new DataContext (I am disposing the DataContext in the service, but I am getting the Machine from the previous DataContext). I use POCOs and XML mapping. Revised: it looks like the compiled query returns its result from a different data context and does not use the one I passed into it. Therefore I cannot reuse the returned object and link it to another object obtained from the current data context through non-compiled queries.

        [TestMethod]
        public void GetMachinesTest()
        {
            // Test preparation (not important)
            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                var machineRepository = unitOfWork.GetRepository<Machine>();
                // GET ALL
                List<Machine> list = machineRepository.FindAll().ToList<Machine>();
                VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET", "MachineIconC.xaml",
                    false, true, LicenseType.Licensed, "10.0.97.3", "10.0.97.3", 0);
                var machine = Machines.GetMachineById(unitOfWork, 3);
                Assert.AreSame(list[2], machine); // PASS !!!!
            }
            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                var machineRepository = unitOfWork.GetRepository<Machine>();
                // GET ALL
                List<Machine> list = machineRepository.FindAll().ToList<Machine>();
                VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET", "MachineIconC.xaml",
                    false, true, LicenseType.Licensed, "10.0.97.3", "10.0.97.3", 0);
                var machine = Machines.GetMachineById(unitOfWork, 3);
                Assert.AreSame(list[2], machine); // FAIL !!!!
            }
        }

    If I run other (complex) unit tests, I get, as expected: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext."

    Read the article

  • solr JOIN query

    - by Sfairas
    I need to run a JOIN-like query on a Solr index. I have two XML files that I have indexed: person.xml and subject.xml.

    Person:

        <doc>
          <field name="id">P39126</field>
          <field name="family">Smith</field>
          <field name="given">John</field>
          <field name="subject">S1276</field>
          <field name="subject">S1312</field>
        </doc>

    Subject:

        <doc>
          <field name="id">S1276</field>
          <field name="topic">Abnormalities, Human</field>
        </doc>

    I need to display information only from the person docs, but each query should match fields in both person and subject. In the case where the query matches only a subject doc, I need to display all person docs that have a matching id. Is this possible without running two separate queries? Something like a JOIN query would do the job. Any help?
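
    As an editorial aside (not in the original question): Solr 4.0 and later ship a join query parser that covers exactly this case. Assuming both document types live in the same index and the field names above, something along these lines returns person docs whose linked subject matches a topic query:

        q={!join from=id to=subject}topic:"Abnormalities, Human"

    On older Solr versions, the usual workaround is to denormalize, i.e. copy the subject's topic text into the person documents at index time.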

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and the query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the join order using STRAIGHT_JOIN; PostgreSQL offers no such keyword. (The explain plan was attached as an image in the original post.) Here is the query:

        SELECT avg(d.amount) AS amount, y.year
        FROM station s, station_district sd, year_ref y, month_ref m, daily d
        LEFT JOIN city c ON c.id = 10663
        WHERE
          -- Find all the stations within a specific unit radius ...
          6371.009 * SQRT(
            POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
            (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
             POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))) <= 50 AND
          -- Ignore stations outside the given elevations
          s.elevation BETWEEN 0 AND 2000 AND
          sd.id = s.station_district_id AND
          -- Gather all known years for that station ...
          y.station_district_id = sd.id AND
          -- The data before 1900 is shaky; insufficient after 2009.
          y.year BETWEEN 1980 AND 2000 AND
          -- Filtered by all known months ...
          m.year_ref_id = y.id AND
          m.month = 12 AND
          -- Whittled down by category ...
          m.category_id = '001' AND
          -- Into the valid daily climate data.
          m.id = d.month_ref_id AND
          d.daily_flag_id <> 'M'
        GROUP BY y.year

    It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query, as there are nearly 300 million rows in it. How do I force PostgreSQL to start at the CITY table? Thank you!
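
    Not from the post, but standard PostgreSQL has a planner setting for exactly this: rewrite the comma-separated FROM list as explicit JOINs starting from city, then set join_collapse_limit to 1 so the planner keeps the written join order. A minimal sketch, using the tables and predicates from the query above:

        BEGIN;
        SET LOCAL join_collapse_limit = 1;  -- planner honours the textual join order
        SELECT avg(d.amount) AS amount, y.year
        FROM city c
        JOIN station s
          ON 6371.009 * sqrt(pow(radians(c.latitude_decimal - s.latitude_decimal), 2)
               + cos(radians(c.latitude_decimal + s.latitude_decimal) / 2)
               * pow(radians(c.longitude_decimal - s.longitude_decimal), 2)) <= 50
         AND s.elevation BETWEEN 0 AND 2000
        JOIN station_district sd ON sd.id = s.station_district_id
        JOIN year_ref y ON y.station_district_id = sd.id
                       AND y.year BETWEEN 1980 AND 2000
        JOIN month_ref m ON m.year_ref_id = y.id
                        AND m.month = 12
                        AND m.category_id = '001'
        JOIN daily d ON d.month_ref_id = m.id
                    AND d.daily_flag_id <> 'M'
        WHERE c.id = 10663
        GROUP BY y.year;
        COMMIT;

    Note the LEFT JOIN on city was effectively an inner join anyway, since the WHERE clause references city columns; the sketch makes that explicit.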

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, first time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and am not sure how to go about fixing it. The database itself has 44 tables, and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is built via JMDB using the flat files from IMDB, and the SQL query I am about to show is produced by that program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan:

        QUERY PLAN
        HashAggregate  (cost=46492.52..46493.50 rows=98 width=46)
          Output: public.movies.title, public.movies.movieid, public.movies.year
          ->  Append  (cost=39094.17..46491.79 rows=98 width=46)
                ->  HashAggregate  (cost=39094.17..39094.87 rows=70 width=46)
                      Output: public.movies.title, public.movies.movieid, public.movies.year
                      ->  Seq Scan on movies  (cost=0.00..39093.65 rows=70 width=46)
                            Output: public.movies.title, public.movies.movieid, public.movies.year
                            Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '"%}'::text))
                ->  Nested Loop  (cost=0.00..7395.94 rows=28 width=46)
                      Output: public.movies.title, public.movies.movieid, public.movies.year
                      ->  Seq Scan on akatitles  (cost=0.00..7159.24 rows=28 width=4)
                            Output: akatitles.movieid, akatitles.language, akatitles.title,
                            Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '"%}'::text))
                      ->  Index Scan using movies_pkey on movies  (cost=0.00..8.44 rows=1 width=46)
                            Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid
                            Index Cond: (public.movies.movieid = akatitles.movieid)

    Here is the query:

        SELECT * FROM (
          (SELECT DISTINCT title, movieid, year
           FROM movies
           WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}'))
          UNION
          (SELECT movies.title, movies.movieid, movies.year
           FROM movies
           INNER JOIN akatitles ON movies.movieid = akatitles.movieid
           WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))
        ) AS union_tmp2;

    This returns 612 rows in 9078 ms. The database backup (plain text) is 1.61 GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards, Anthoni
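
    Not from the post, but the standard cure in PostgreSQL (9.1+) for sequential scans caused by leading-wildcard ILIKE patterns is a trigram index, which both branches of the UNION above could then use:

        -- One-time setup (requires the pg_trgm contrib module).
        CREATE EXTENSION IF NOT EXISTS pg_trgm;

        -- GIN trigram indexes let the planner serve ILIKE '%Babe%'
        -- from the index instead of scanning the whole table.
        CREATE INDEX movies_title_trgm_idx
            ON movies USING gin (title gin_trgm_ops);
        CREATE INDEX akatitles_title_trgm_idx
            ON akatitles USING gin (title gin_trgm_ops);

        -- Refresh statistics so the planner considers the new indexes.
        ANALYZE movies;
        ANALYZE akatitles;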

    Read the article

  • LINQ compiled query DataBind issue

    - by Brian
    Hello all, I have a pretty extensive reporting page that uses LINQ. There is one main function that returns an IQueryable object; I then further filter/aggregate the returned query depending on the report the user needs. I changed this function to a compiled query, it worked great, and the speed increase was astonishing. The only problem comes when I want to databind the results. Binding to a standard ASP.NET GridView works fine, no problems. I am also binding to an ASP.NET chart control, and this is where my page throws an error. This works well:

        GridView gv = new GridView();
        gv.DataSource = gridSource;

    But this does not:

        Series s1 = new Series("Series One");
        s1.Points.DataBindXY(gridSource, "Month", gridSource, "Success");

    The error I receive is: System.NotSupportedException – "Specified method is not supported". When I look at my gridSource var at run time using the typical LINQ query, I see this:

        SELECT [t33].[value2] AS [Year], [t33].[value22] AS [Month], [t33].[value3] AS [Calls]......

    After I change the query to a compiled one, I see this:

        {System.Linq.OrderedEnumerable<<>f__AnonymousType15<string,int,int,int,int,int,int,int>,string>}

    This is obviously the reason why DataBindXY no longer works, but I am not sure how to get around it. Any help would be appreciated! Thanks

    Read the article

  • Entity Framework query

    - by carter-boater
    Hi all, I have a piece of code that I don't know how to improve. I have two entities: EntityP and EntityC. EntityP is the parent of EntityC; it is a 1-to-many relationship. EntityP has a property that depends on a property of all its attached EntityC. I need to load a list of EntityP with that property set correctly, so I wrote a piece of code to get the EntityP list first (called entityP_List). Then, as shown below, I loop through entityP_List, and for each item I query the database with an Any() call, which is eventually translated to a NOT EXISTS SQL query. The reason I use this is that I don't want to load all the attached EntityC rows from the database into memory, because I only need the aggregate value of their property. But the problem is that the loop queries the database once per EntityP! So I am wondering if anybody can help me improve this code to query the database only once to set IsAll_C_Complete for all the EntityP rows, without loading EntityC into memory.

        foreach (EntityP p in entityP_List)
        {
            isAnyNotComplete = entities.entityC.Any(c => c.IsComplete == false && c.parent.ID == p.ID);
            p.IsAll_C_Complete = !isAnyNotComplete;
        }

    Thank you very much!
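
    For comparison, the single round trip you would want the ORM to produce has this SQL shape (a sketch only; the EntityC foreign-key column is assumed to be named ParentID, which is not stated in the question):

        -- One set-based query computes the flag for every parent at once,
        -- replacing the per-parent NOT EXISTS round trips.
        SELECT p.ID,
               CASE WHEN EXISTS (SELECT 1
                                 FROM EntityC c
                                 WHERE c.ParentID = p.ID   -- assumed FK column name
                                   AND c.IsComplete = 0)
                    THEN 0 ELSE 1 END AS IsAll_C_Complete
        FROM EntityP p;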

    Read the article
