Search Results

Search found 7116 results on 285 pages for 'nested queries'.

  • How to use functions to connect to a database and how to work with queries?

    - by Abhilash Shukla
    I am using functions to work with the database. This is how I have defined them:

        /**
         * Database definitions
         */
        define ('db_type', 'MYSQL');
        define ('db_host', 'localhost');
        define ('db_port', '3306');
        define ('db_name', 'database');
        define ('db_user', 'root');
        define ('db_pass', 'password');
        define ('db_table_prefix', '');

        /**
         * Database Connect
         */
        function db_connect($host = db_host, $port = db_port, $username = db_user, $password = db_pass, $database = db_name)
        {
            if (!$db = @mysql_connect($host . ':' . $port, $username, $password)) {
                return FALSE;
            }
            if ((strlen($database) > 0) AND (!@mysql_select_db($database, $db))) {
                return FALSE;
            }
            // set the correct charset encoding
            mysql_query('SET NAMES \'utf8\'');
            mysql_query('SET CHARACTER_SET \'utf8\'');
            return $db;
        }

        /**
         * Database Close
         */
        function db_close($identifier)
        {
            return mysql_close($identifier);
        }

        /**
         * Database Query
         */
        function db_query($query, $identifier)
        {
            return mysql_query($query, $identifier);
        }

    Now I want to know whether this is a good way to do it. Also, in db_connect I am using $host = db_host as a default parameter value - is that OK? Secondly, how do I use these functions? All of this code (the database definitions and the connect function) sits in my FUNCTIONS.php - will that do what I need? Using these functions, how will I connect to the database and, with the query function, how will I execute a query? VERY IMPORTANT: how can I change mysql to mysqli - can it be done by just adding an 'i' to mysql, like @mysql_connect becoming @mysqli_connect?
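
    A minimal usage sketch, assuming the FUNCTIONS.php above has been included (the pages table queried here is made up); note that mysqli is not just a rename, since the procedural mysqli_* functions take the connection as their first argument:

        <?php
        // Hypothetical usage of the helpers defined in FUNCTIONS.php
        require_once 'FUNCTIONS.php';

        $db = db_connect();                       // falls back to the db_* constant defaults
        if ($db === FALSE) {
            die('Could not connect to the database');
        }

        $result = db_query('SELECT id, title FROM pages', $db);
        while ($row = mysql_fetch_assoc($result)) {
            echo $row['title'], "\n";
        }

        db_close($db);

        // mysqli is NOT just "mysql + i": the argument order differs, e.g.
        // $link = mysqli_connect(db_host, db_user, db_pass, db_name, db_port);
        // $res  = mysqli_query($link, $query);   // connection first, not last
        ?>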

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high quality code. A month ago I wrapped up my evaluation of the previous version of NDepend. The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. The version also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq
    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    This same query looks like this when implemented in CQLinq:

        warnif count > 0
        from t in Types
        where t.IsInterface == true && !t.NameLike("I")
        select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window. The Queries and Rules Edit window replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous editor. However, it has a few annoyances. The error indicator is a red block, which has the tendency of obscuring your cursor. Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly. CQLinq makes it possible to write rules that weren't possible before. Additionally, a JustMyCode domain is now possible, making it easy to eliminate generated code from the analysis.

    Should you Buy?
    I recommend NDepend overall. It has some rough points for me that I have detailed in my earlier evaluation (starting here). But it's definitely worth the money. The bigger question is: should I pay for the upgrade to 4.0? At this point I'm on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0, or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes
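
    A hedged illustration of the JustMyCode idea mentioned above, assuming CQLinq exposes the domain as JustMyCode.Methods; the rule threshold and property name are my own example, not the article's:

        // Flag long methods, but only in code I wrote (generated code is excluded)
        warnif count > 0
        from m in JustMyCode.Methods
        where m.NbLinesOfCode > 30
        select m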

  • Extending URIs with 2 queries (i.e. 'viewauthorbooks.php?authorid=4' AND 'orderby=returndate') - possible?

    - by Jess
    I have a link in my system as displayed above, 'viewauthorbooks.php?authorid=4', which works fine and generates a page displaying only the books associated with the particular author. However, I am implementing another feature where the user can sort the columns (return date, book name etc.), and I am using the ORDER BY SQL clause. I have this working as required for other pages, which do not already have another query parameter in the URI. But for this particular page there is already a parameter in the URL, and I am having difficulty extending it. When the user clicks on a table column title I get an error, and the original author ID is lost! This is the link I am trying to use:

        <th><a href="viewauthorbooks.php?authorid=<?php echo $row['authorid']?>&orderby=returndate">Return Date</a></th>

    This is so that the data can be sorted in order of Return Date. When I run this, the author ID gets lost for some reason. I also want to know whether this is the correct layout for passing 2 parameters in the address? Thanks.
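
    A minimal sketch of how the receiving page might read both parameters, assuming the sortable column names are whitelisted before being dropped into ORDER BY; the table and column names here are guesses based on the question:

        <?php
        // viewauthorbooks.php - hypothetical handling of both query-string parameters
        $authorid = (int) $_GET['authorid'];

        $allowed = array('returndate', 'bookname');   // whitelist of sortable columns
        $orderby = (isset($_GET['orderby']) && in_array($_GET['orderby'], $allowed))
                 ? $_GET['orderby']
                 : 'returndate';                      // default sort

        $sql = "SELECT * FROM books WHERE authorid = $authorid ORDER BY $orderby";
        // run $sql as before; in the HTML link, separate the parameters with &amp;
        ?>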

  • How to handle pagination queries properly with mongodb and php?

    - by luckytaxi
    Am I doing this right? I went to look at some old PHP code with MySQL and I've managed to get it to work, however I'm wondering if there's a much "cleaner" and "faster" way of accomplishing this. First I would need to get the total number of "documents":

        $total_documents = $collection->find(array("tags" => $tag, "seeking" => $this->session->userdata('gender'), "gender" => $this->session->userdata('seeking')))->count();

        $skip = (int)($docs_per_page * ($page - 1));
        $limit = $docs_per_page;
        $total_pages = ceil($total_documents / $limit);

        // Query to populate array so I can display with pagination
        $data['result'] = $collection->find(array("tags" => $tag, "seeking" => $this->session->userdata('gender'), "gender" => $this->session->userdata('seeking')))->limit($limit)->skip($skip)->sort(array("_id" => -1));

    My question is, can I run the query in one shot? I'm basically running the same query twice, except the second time I'm passing the values to skip and limit the records.
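
    A hedged sketch of one way to avoid issuing find() twice with the legacy PHP Mongo driver, assuming the same controller context as the question and that MongoCursor::count() with its default argument (which ignores skip and limit) is acceptable for the total:

        <?php
        // Build the criteria once and reuse the same cursor for the count and the page
        $criteria = array(
            "tags"    => $tag,
            "seeking" => $this->session->userdata('gender'),
            "gender"  => $this->session->userdata('seeking'),
        );

        $cursor = $collection->find($criteria)
                             ->sort(array("_id" => -1))
                             ->skip($skip)
                             ->limit($limit);

        $total_documents = $cursor->count();    // default count() ignores skip/limit
        $total_pages     = ceil($total_documents / $docs_per_page);
        $data['result']  = $cursor;             // iterate this for the current page
        ?>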

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so much benefit, why not do it for all Linq To Sql queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling hurt performance, and by how much? Would it be good to have a default at the application config level, or at the DBML level, to specify that all select queries should be compiled? And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I found an answer from the author, ricom (6 Jul 2007 3:08 AM):

        Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs.

    Also, from 10 Tips to Improve your LINQ to SQL Application Performance:

        If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query.

    However, I feel that many developers are not informed enough about the benefits of Compile. I think that tools like FxCop and Resharper should check the queries and suggest when compiling is recommended.

    Related Articles for LINQ to SQL:
    MSDN How to: Store and Reuse Queries (LINQ to SQL)
    10 Tips to Improve your LINQ to SQL Application Performance

    Related Articles for Entity Framework:
    MSDN: CompiledQuery Class
    Exploring the Performance of the ADO.NET Entity Framework - Part 1
    Exploring the Performance of the ADO.NET Entity Framework - Part 2
    ADO.NET Entity Framework 4.0: Making it fast through Compiled Query
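
    A minimal sketch of the pattern under discussion, assuming a hypothetical NorthwindDataContext with a Customers table; the compiled delegate is created once and reused, which is where the benefit comes from:

        // Hypothetical LINQ to SQL compiled query - a sketch, not the article's code
        using System;
        using System.Data.Linq;
        using System.Linq;

        static class CustomerQueries
        {
            // Compile once: the delegate caches the translated SQL and parameter handling.
            private static readonly Func<NorthwindDataContext, string, IQueryable<Customer>> ByCity =
                CompiledQuery.Compile((NorthwindDataContext db, string city) =>
                    db.Customers.Where(c => c.City == city));

            public static void Print(NorthwindDataContext db, string city)
            {
                // The compiled query only pays off when it is executed more than once.
                foreach (var customer in ByCity(db, city))
                    Console.WriteLine(customer.CompanyName);
            }
        }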

  • How do I create queries against SQL Server tables via Visual Studio with no knowledge of SQL or Linq?

    - by Kent S. Clarkson
    Let's be frank, my knowledge of the SQL language is very low. Nevertheless, my boss gave me the task of building a database application using the following tools: SQL Server, Visual Studio 2008 and C#. I use the VS DataSet as a local mirror of the SQL Server. And let's be frank again, my understanding of the VS Query Builder is also very small; I'm finding it quite confusing, actually. So no help to be found in the Query Builder. And my knowledge of Linq is even lower... Perhaps I should mention that the deadline for the project is "aggressively" set, so I have no chance to learn enough about these things during the project. And I'm a bit stupid too, which is no help when it comes to challenges like this (on other occasions it might be quite useful though). With these limitations, what should I do (except kill myself or retire) to be able to query my tables in a sufficient way?
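
    A hedged sketch of the kind of plumbing the Visual Studio DataSet designer generates for you, assuming a hypothetical Customers table and connection string; filling a DataTable like this needs only one short SQL statement:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class QuerySketch
        {
            static void Main()
            {
                // Hypothetical connection string and table name
                var connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";
                var table = new DataTable();

                using (var adapter = new SqlDataAdapter("SELECT * FROM Customers", connectionString))
                {
                    adapter.Fill(table);   // local, disconnected copy, like the designer's TableAdapter
                }

                foreach (DataRow row in table.Rows)
                    Console.WriteLine(row["Name"]);
            }
        }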

  • Should I use two queries, or is there a way to JOIN this in MySQL/PHP?

    - by Jack W-H
    Morning y'all! Basically, I'm using a table to store my main data, called 'Code'; a table called 'Tags' to store the tags for each code entry; and a table called 'code_tags' to intersect them. There's also a table called 'users' which stores information about the users who submitted each bit of code. On my homepage I want 5 results returned from the database. Each returned result needs to list the code's title, summary, and then fetch the author's first name based on the ID of the person who submitted it. I've managed to achieve this much so far (woot!). My problem lies when I try to collect all the tags as well. At the moment this is a pretty big query and it's scaring me a little. Here's my problematic query:

        SELECT code.*, code_tags.*, tags.*, users.firstname AS authorname, users.id AS authorid
        FROM code, code_tags, tags, users
        WHERE users.id = code.author
          AND code_tags.code_id = code.id
          AND tags.id = code_tags.tag_id
        ORDER BY date DESC
        LIMIT 0, 5

    What it returns is correct-looking data, but with several repeated rows for each tag. So, for example, if a Code entry has 3 tags it will return an identical row 3 times - except that in each of the three returned rows, the tag changes. Does that make sense? How would I go about changing this? Thanks! Jack
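
    One common way to collapse the repeated rows, assuming MySQL's GROUP_CONCAT is acceptable and the tag text lives in a tags.name column (column names are guessed from the question), is a hedged sketch like this:

        -- Sketch: one row per code entry, tags collapsed into a comma-separated list
        SELECT code.id,
               code.title,
               code.summary,
               users.firstname AS authorname,
               users.id        AS authorid,
               GROUP_CONCAT(tags.name ORDER BY tags.name SEPARATOR ', ') AS taglist
        FROM code
        JOIN users     ON users.id = code.author
        JOIN code_tags ON code_tags.code_id = code.id
        JOIN tags      ON tags.id = code_tags.tag_id
        GROUP BY code.id
        ORDER BY code.date DESC
        LIMIT 0, 5;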

  • CAML queries: how to filter folders from result set?

    - by drax
    Hi all, I'm using a CAML query to select all documents which were modified or added by a user. The query runs recursively on all subsites of a specified site collection. The problem is that I can't get rid of folders, which are also part of the result set. For now I'm filtering them out of the result DataTable, but I'm wondering: is it possible to filter folders out of the result set using CAML alone?
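
    A hedged sketch of the usual CAML approach, assuming the built-in FSObjType field is available to distinguish documents (0) from folders (1); this condition would be combined with the existing author/editor clause via an And element:

        <Query>
          <Where>
            <Eq>
              <FieldRef Name='FSObjType' />
              <Value Type='Integer'>0</Value>  <!-- 0 = document, 1 = folder -->
            </Eq>
          </Where>
        </Query>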

  • How to include many "sub"-queries in a SQL statement to generate file paths for images?

    - by Zachary
    Greetings, I have three fields in a legacy MySQL database/application: image_type, image_of_bush and image_prefix. I need to extract the data into full image file paths, into a .CSV file, where each combination (listed below) is a column. Can it all be done in SQL? Or can you recommend a better way? I'm currently using PHP to display the combinations on the product page. This is also part of a larger query which is extracting data from an OS Commerce MySQL database.

    CASE ONE - one horizontal image
    image_type = "Horizontal Image", image_of_bush = "No Image of Bush"
    Image name: image_prefix + _s + .jpg (example: Albertine_s.jpg)

    CASE TWO - one vertical image
    image_type = "Vertical Image", image_of_bush = "No Image of Bush"
    Image name: image_prefix + _v + .jpg (example: Albertine_v.jpg)

    CASE THREE - two horizontal images
    image_type = "Horizontal Image", image_of_bush = "Horizontal Image of Bush"
    First image name: image_prefix + _s + .jpg (example: Albertine_s.jpg)
    Second image name: image_prefix + _bs + .jpg (example: Albertine_bs.jpg)

    CASE FOUR - two vertical images
    image_type = "Vertical Image", image_of_bush = "Vertical Image of Bush"
    First image name: image_prefix + _v + .jpg (example: Albertine_v.jpg)
    Second image name: image_prefix + _bv + .jpg (example: Albertine_bv.jpg)

    CASE FIVE - one horizontal and one vertical image
    image_type = "Horizontal Image", image_of_bush = "Vertical Image of Bush"
    First image name: image_prefix + _s + .jpg (example: Albertine_s.jpg)
    Second image name: image_prefix + _bv + .jpg (example: Albertine_bv.jpg)

    CASE SIX - one vertical and one horizontal image
    image_type = "Vertical Image", image_of_bush = "Horizontal Image of Bush"
    First image name: image_prefix + _v + .jpg (example: Albertine_v.jpg)
    Second image name: image_prefix + _bs + .jpg (example: Albertine_bs.jpg)
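
    A hedged sketch showing that the file names can be derived directly in MySQL with CONCAT and CASE, assuming all three fields live on one table (the table name products is made up):

        -- Sketch: derive both image file names per row; covers all six cases above
        SELECT image_prefix,
               CONCAT(image_prefix,
                      CASE image_type WHEN 'Vertical Image' THEN '_v' ELSE '_s' END,
                      '.jpg') AS first_image,
               CASE image_of_bush
                    WHEN 'Horizontal Image of Bush' THEN CONCAT(image_prefix, '_bs.jpg')
                    WHEN 'Vertical Image of Bush'   THEN CONCAT(image_prefix, '_bv.jpg')
                    ELSE NULL
               END AS second_image
        FROM products;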

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        | Id  | Operation              | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |  108K |  939 |
        |   1 | SORT ORDER BY          |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER     |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER  |                        |  103K |  762 |
        |   4 | VIEW                   | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER  |                        | 73472 |  759 |
        |  21 | VIEW                   | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER  |                        | 39920 |  755 |
        |  35 | VIEW                   | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER     |                        | 39104 |  748 |
        |  59 | VIEW                   | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                   | ALL_PART_TABLES        |   277 |   11 |

    And the same query on 9i:

        | Id  | Operation              | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |   16P |  55G |
        |   1 | SORT ORDER BY          |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER     |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER     |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER     |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER     |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER        |                        |  398K |  302 |
        |   7 | VIEW                   | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                   | ALL_PART_TABLES        |  852K |      |

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        | Id  | Operation              | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY          |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER        |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER        |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER        |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER        |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER        |                        |  398K |  302 |
        |   7 | VIEW                   | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                   | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                   | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                   | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                   | ALL_EXTERNAL_TABLES    | 1777K |   28 |

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
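
    A hedged sketch of the client-side left-join idea described above, keyed on object owner and name; the types and column names here are illustrative, not the product's actual code:

        using System.Collections.Generic;

        static class ClientSideJoin
        {
            // Left-join base dictionary rows to a subsidiary view's rows, hashing the
            // subsidiary rows exactly once instead of once per database-side hash join.
            public static IEnumerable<KeyValuePair<Dictionary<string, object>, Dictionary<string, object>>>
                LeftHashJoin(IEnumerable<Dictionary<string, object>> baseRows,
                             IEnumerable<Dictionary<string, object>> extraRows)
            {
                var lookup = new Dictionary<string, Dictionary<string, object>>();
                foreach (var row in extraRows)
                    lookup[(string)row["OWNER"] + "." + (string)row["TABLE_NAME"]] = row;

                foreach (var row in baseRows)
                {
                    Dictionary<string, object> match;
                    lookup.TryGetValue((string)row["OWNER"] + "." + (string)row["TABLE_NAME"], out match);
                    yield return new KeyValuePair<Dictionary<string, object>, Dictionary<string, object>>(row, match);
                }
            }
        }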

  • How do I combine these similar linq queries into one?

    - by MikeD
    This is probably a basic LINQ question. I need to select one object and, if it is null, select another. I'm using Linq to Objects in the following way, which I know can be done quicker, better, cleaner...

        public Attrib DetermineAttribution(Data data)
        {
            var one = from c in data.actions
                      where c.actionType == Action.ActionTypeOne
                      select new Attrib { id = c.id, name = c.name };
            if (one.Count() > 0)
                return one.First();

            var two = from c in data.actions
                      where c.actionType == Action.ActionTypeTwo
                      select new Attrib { id = c.id, name = c.name };
            if (two.Count() > 0)
                return two.First();
        }

    The two Linq operations differ only in the where clause, and I know there is a way to combine them. Any thoughts would be appreciated.
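
    A hedged sketch of one way to fold these into a single query, assuming the same Attrib and Action types as above; ordering puts ActionTypeOne matches first so the original preference is kept:

        public Attrib DetermineAttribution(Data data)
        {
            return (from c in data.actions
                    where c.actionType == Action.ActionTypeOne || c.actionType == Action.ActionTypeTwo
                    orderby c.actionType == Action.ActionTypeOne ? 0 : 1   // prefer type one
                    select new Attrib { id = c.id, name = c.name })
                   .FirstOrDefault();
        }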

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in the case of problems with any of our BizTalk solutions. When you move to BizTalk Server 2009 it is quickly apparent that HAT is no longer with us.

    HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on.

    With BizTalk Server 2009 much of the functionality of HAT can now be found in the BizTalk Administration console. Select a BizTalk Group and you will be shown the Group Hub Overview page. This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries. These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments.

    In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:

    Messages - last 100 received
    Messages - last 100 sent
    Messages - last 50 suspended
    Service instances - last 100

    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go. Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

  • How to properly encode "[" and "]" in queries using Apache HttpClient?

    - by Jason Nichols
    I've got a GET method that looks like the following:

        GetMethod method = new GetMethod("http://host/path/?key=[\"item\",\"item\"]");

    Such a path works just fine when typed directly into a browser, but the above line, when run, causes an IllegalArgumentException: Invalid URI. I've looked at using the URIUtils class, but without success. Is there a way to automatically encode this (or to add a query string onto the URL without causing HttpClient to barf)?
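
    A hedged sketch using the Commons HttpClient 3.x API the question appears to be on, letting the library encode the query string instead of embedding it in the URI; the host and parameter values are placeholders:

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.NameValuePair;
        import org.apache.commons.httpclient.methods.GetMethod;

        public class EncodedQueryExample {
            public static void main(String[] args) throws Exception {
                GetMethod method = new GetMethod("http://host/path/");
                // setQueryString URL-encodes the value, including the [ and ] characters
                method.setQueryString(new NameValuePair[] {
                    new NameValuePair("key", "[\"item\",\"item\"]")
                });

                HttpClient client = new HttpClient();
                int status = client.executeMethod(method);
                System.out.println(status + " " + method.getURI());
                method.releaseConnection();
            }
        }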

  • How to reduce MDX code redundancy in SQL Server Analysis Services (SSAS)

    To query an Analysis Services cube, MDX is used as the query language. In most business settings, one will find a set of queries that are common across a number of user query requirements. Because of this, even with a modest-size IT team, there is a good chance that the same queries are developed redundantly, either within an SSAS MDX script or repeatedly in an ad-hoc manner in client applications. In this tip we look at how to reuse queries without redeveloping them over and over.
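
    A hedged sketch of the kind of reuse the tip is describing: defining a named set once in the cube's MDX script so client queries can reference it instead of repeating the logic (the cube, dimension and measure names here are illustrative):

        // In the SSAS MDX script: define the common set once
        CREATE SET CURRENTCUBE.[Top 10 Customers] AS
            TOPCOUNT([Customer].[Customer].[Customer].MEMBERS,
                     10,
                     [Measures].[Internet Sales Amount]);

        // Any client query can now reuse it instead of re-deriving the set
        SELECT [Measures].[Internet Sales Amount] ON COLUMNS,
               [Top 10 Customers] ON ROWS
        FROM [Adventure Works];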

  • Are ternary operators not valid for linq-to-sql queries?

    - by KallDrexx
    I am trying to display a nullable DateTime in my JSON response. In my MVC controller I am running the following query:

        var requests = (from r in _context.TestRequests
                        where r.scheduled_time == null && r.TestRequestRuns.Count > 0
                        select new
                        {
                            id = r.id,
                            name = r.name,
                            start = DateAndTimeDisplayString(r.TestRequestRuns.First().start_dt),
                            end = r.TestRequestRuns.First().end_dt.HasValue
                                      ? DateAndTimeDisplayString(r.TestRequestRuns.First().end_dt.Value)
                                      : string.Empty
                        });

    When I run requests.ToArray() I get the following exception:

        Could not translate expression 'Table(TestRequest)
            .Where(r => ((r.scheduled_time == null) AndAlso (r.TestRequestRuns.Count > 0)))
            .Select(r => new <>f__AnonymousType18`4(
                id = r.id,
                name = r.name,
                start = value(QAWebTools.Controllers.TestRequestsController).DateAndTimeDisplayString(r.TestRequestRuns.First().start_dt),
                end = IIF(r.TestRequestRuns.First().end_dt.HasValue,
                          value(QAWebTools.Controllers.TestRequestsController).DateAndTimeDisplayString(r.TestRequestRuns.First().end_dt.Value),
                          Invoke(value(System.Func`1[System.String])))))'
        into SQL and could not treat it as a local expression.

    If I comment out the end = line, everything seems to run correctly, so it doesn't seem to be the use of my local DateAndTimeDisplayString method; the only thing I can think of is that Linq to Sql doesn't like ternary operators. I think I've used ternary operators before, but I can't remember if I did it in this code base or another code base (that uses EF4 instead of L2S). Is this true, or am I missing some other issue?
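
    A hedged sketch of one common workaround, assuming the real blocker is the local DateAndTimeDisplayString call rather than the conditional operator itself: project the raw DateTime values in the database query, then format with LINQ to Objects after AsEnumerable():

        var requests = (from r in _context.TestRequests
                        where r.scheduled_time == null && r.TestRequestRuns.Count > 0
                        select new
                        {
                            id = r.id,
                            name = r.name,
                            start_dt = r.TestRequestRuns.First().start_dt,
                            end_dt = r.TestRequestRuns.First().end_dt
                        })
                       .AsEnumerable()                     // switch to LINQ to Objects here
                       .Select(r => new
                        {
                            r.id,
                            r.name,
                            start = DateAndTimeDisplayString(r.start_dt),
                            end = r.end_dt.HasValue
                                      ? DateAndTimeDisplayString(r.end_dt.Value)
                                      : string.Empty
                        })
                       .ToArray();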

  • Is there a better way to do SELECT queries in MySQL and sort them in PHP than this way?

    - by Kent
    I am just learning PHP/MySQL. One thing I am having to do a lot is display data, previously inserted into the database, in the user's browser. So I am doing this:

        $select = mysql_query('SELECT * FROM pages');
        while ($return = mysql_fetch_assoc($select)) {
            $title = $return['title'];
            $author = $return['author'];
            $content = $return['content'];
        }

    Then I can use these variables throughout the page. Now, doing it the above way isn't an issue when I only have 3 columns in a table, but what if I am dealing with a huge database with many more columns? I have a nagging feeling that the pros do it in some more efficient way, where they maybe loop through the table they are selecting from to find all the columns it has and associate them with variables automatically. Is that the case? Or is the above how you guys do it too?
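
    A hedged sketch of the usual pattern: keep each row as an associative array, or let extract() create the variables from the column names, rather than assigning every column by hand (column names are taken from the question):

        <?php
        $select = mysql_query('SELECT title, author, content FROM pages');

        while ($row = mysql_fetch_assoc($select)) {
            // Option 1: use the array directly - works for any number of columns
            echo $row['title'];

            // Option 2: create $title, $author, $content automatically from the keys
            extract($row);
            echo $title;
        }
        ?>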

  • Tracking 502 bad gateway error

    - by dasickle
    I moved my WordPress site to WP Engine and now I constantly get 502 errors. I spoke with support and they said that it's because I have a lot of DB queries. I ran some tests: my front page only has 95 queries and the page size is about 500 KB, most inner pages are around 60 queries, and all queries are very short. Some people tell me it's common with WP Engine because they run nginx. Why do I keep getting these errors, and is there a way to track how many of them happen on a daily basis? P.S. The WP Engine log is empty so I can't see the 502s there.

  • Search For a Query in RDL Files with PowerShell

    - by AllenMWhite
    In tracking down poorly performing queries for clients I often encounter the query text in a trace file I've captured, but don't know the source of the query. I've found that many of the poorest performing queries are those written into the reports the business users need to make their decisions. If I can't figure out where they came from, usually years after the queries were written, I can't fix them. The first thing I did was find a great utility called RSScripter, which opens up a Windows dialog...
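
    A hedged sketch of the kind of search the title describes, using Select-String over extracted .rdl files; the folder path and query fragment are placeholders:

        # Search every report definition under a folder for a fragment of the offending query
        $pattern = 'FROM dbo\.Orders'    # placeholder: text captured from the trace file
        Get-ChildItem -Path 'C:\Reports' -Filter *.rdl -Recurse |
            Select-String -Pattern $pattern |
            Select-Object Path, LineNumber, Line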

  • Is it OK to try to use Plinq in all Linq queries?

    - by Tony_Henrich
    I read that PLinq will automatically fall back to non-parallel Linq if it finds PLinq to be more expensive. So I figured, why not use PLinq for everything (when possible) and let the runtime decide which one to use? The apps will be deployed to multicore servers, and I am OK with writing a little more code to deal with parallelism. What are the pitfalls of using PLinq as a default?
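
    A hedged sketch of what "PLinq by default" looks like in code; AsParallel() opts a query in, and the comments note the usual trade-offs (the data here is made up):

        using System;
        using System.Linq;

        class PlinqSketch
        {
            static void Main()
            {
                var numbers = Enumerable.Range(1, 1000000);

                // Ordinary LINQ to Objects
                var sequential = numbers.Where(n => n % 7 == 0).Sum(n => (long)n);

                // The PLINQ version has the same shape but is partitioned across cores.
                // Pitfalls: per-element work must be thread-safe, result ordering is not
                // preserved unless AsOrdered() is used, and tiny workloads can get slower.
                var parallel = numbers.AsParallel()
                                      .Where(n => n % 7 == 0)
                                      .Sum(n => (long)n);

                Console.WriteLine(sequential == parallel);
            }
        }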
